Controlling AI's Growing Energy Needs
The huge amount of energy required to train artificial intelligence (AI) is becoming a concern. Training GPT-3, the large language model (LLM) behind ChatGPT, consumed almost 1,300 megawatt-hours of electricity, according to an estimate by researchers from Google and the University of California, Berkeley, roughly the amount 130 American homes use in one year. Furthermore, an analysis by OpenAI suggests that the computing power needed to train AI models has been growing exponentially since 2012, doubling roughly every 3.4 months as the models become bigger and more sophisticated. However, our energy production capacity is not increasing nearly as steeply, and expanding it is likely to further contribute to global warming: electricity generation is the single biggest contributor to climate change, since coal, oil, and gas are still widely used to generate electricity rather than cleaner energy sources. "At this rate, we are running into a brick wall in terms of the ability to scale up machine learning networks," said Menachem Stern, a theoretical physicist at the AMOLF research institute in the Netherlands.
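The cited 3.4-month doubling period implies a striking annual growth rate. A minimal sketch of the arithmetic (Python; the figures are purely illustrative of the compounding, not new data):

```python
# If training compute doubles every 3.4 months (per the OpenAI analysis
# cited above), how much does it grow in a year?
DOUBLING_PERIOD_MONTHS = 3.4

doublings_per_year = 12 / DOUBLING_PERIOD_MONTHS        # ~3.5 doublings
growth_factor_per_year = 2 ** doublings_per_year        # ~11-12x per year

print(f"doublings per year: {doublings_per_year:.2f}")
print(f"compute growth per year: ~{growth_factor_per_year:.1f}x")
```

At that pace, demand grows by more than an order of magnitude every year, which is why energy supply cannot keep up.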
Neuromorphic Programming: Emerging Directions for Brain-Inspired Hardware
Steven Abreu and Jens E. Pedersen
The value of brain-inspired neuromorphic computers critically depends on our ability to program them for relevant tasks. Currently, neuromorphic hardware often relies on machine learning methods adapted from deep learning. However, neuromorphic computers have potential far beyond deep learning, if only we can harness their energy efficiency and full computational power. Neuromorphic programming will necessarily differ from conventional programming, requiring a paradigm shift in how we think about programming. This paper presents a conceptual analysis of programming in the context of neuromorphic computing, challenging conventional paradigms and proposing a framework that aligns more closely with the physical intricacies of these systems. Our analysis revolves around five characteristics that are fundamental to neuromorphic programming and provides a basis for comparison with contemporary programming methods and languages. By studying past approaches, we contribute a framework that advocates for underutilized techniques and calls for richer abstractions to effectively instrument this new hardware class.
Intel reveals world's biggest 'brain-inspired' neuromorphic computer
Intel has created the world's largest neuromorphic computer, a device intended to mimic the operation of the human brain. The firm hopes that it will be able to run more sophisticated AI models than is possible on conventional computers, but experts say there are engineering hurdles to overcome before the device can compete with the state of the art, let alone exceed it. Expectations for neuromorphic computers are high because they are inherently different from traditional machines. While a regular computer uses its processor to carry out operations and stores data in separate memory, a neuromorphic device uses artificial neurons to both store and compute, just as our brains do. This removes the need to shuttle data back and forth between components, which can be a bottleneck for current computers.
Opportunities for neuromorphic computing algorithms and applications - Nature Computational Science
With the end of Moore's law approaching and Dennard scaling ending, the computing community is increasingly looking at new technologies to enable continued performance improvements. Neuromorphic computers are one such new computing technology. The term neuromorphic was coined by Carver Mead in the late 1980s [1,2], and at that time primarily referred to mixed analogue–digital implementations of brain-inspired computing; however, as the field has continued to evolve and with the advent of large-scale funding opportunities for brain-inspired computing systems such as the DARPA Synapse project and the European Union's Human Brain Project, the term neuromorphic has come to encompass a wider variety of hardware implementations. We define neuromorphic computers as non-von Neumann computers whose structure and function are inspired by brains and that are composed of neurons and synapses. Von Neumann computers are composed of separate CPUs and memory units, where data and instructions are stored in the latter.
2D Materials could be used to simulate brain synapses in computers
Researchers from KTH Royal Institute of Technology and Stanford University have now fabricated a material for computer components that enables the commercial viability of computers that mimic the human brain. Electrochemical random access memory (ECRAM) components made with 2D titanium carbide showed outstanding potential for complementing classical transistor technology and for contributing toward the commercialization of powerful computers modeled after the brain's neural network. Such neuromorphic computers can be thousands of times more energy efficient than today's computers. These advances in computing are possible because of some fundamental differences from the classical computing architecture in use today, enabled by the ECRAM, a component that acts as a sort of synaptic cell in an artificial neural network, says KTH Associate Professor Max Hamedi. "Instead of transistors that are either on or off, and the need for information to be carried back and forth between the processor and memory -- these new computers rely on components that can have multiple states, and perform in-memory computation," Hamedi says.
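The contrast Hamedi draws can be made concrete with a toy model. The sketch below is an illustrative assumption, not KTH's actual device physics: an ECRAM-like cell holds one of several conductance states rather than a binary on/off, so an array of such cells can compute a weighted sum "in memory" as currents summing on a shared wire.

```python
N_STATES = 16  # number of distinct conductance levels per cell (illustrative)

def quantize(weight, n_states=N_STATES):
    """Snap an ideal weight in [0, 1] to the nearest programmable state."""
    level = round(weight * (n_states - 1))
    return level / (n_states - 1)

def in_memory_dot(voltages, conductances):
    """Weighted sum computed by the array itself: I = sum(V_i * G_i).

    Physically, each cell contributes a current V_i * G_i (Ohm's law) and
    the currents add on a shared wire (Kirchhoff's law), so no data is
    shuttled between a separate processor and memory.
    """
    return sum(v * quantize(g) for v, g in zip(voltages, conductances))

inputs = [0.2, 0.9, 0.5]    # input voltages
weights = [0.33, 0.5, 1.0]  # target synaptic weights, stored as conductances
print(in_memory_dot(inputs, weights))
```

The quantization step is the key difference from a binary transistor: a cell with 16 states stores 4 bits of weight in place, and the multiply-accumulate happens where the data lives.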
Japanese robot uses neurons grown in the lab to avoid obstacles
Japanese researchers have built a robot with brain-like neurons that were grown in the lab, in order to teach it to 'think like us'. In experiments at the University of Tokyo, the compact robotic vehicle on wheels, small enough to fit in a person's palm, was placed in a simple maze. The robot was connected to a culture of brain neurons, also known as nerve cells, that were grown from living cells. When these lab-grown neurons were electrically stimulated, the machine successfully reached its goal – a black circular box. A neuron is an electrically excitable cell that takes up, processes and transmits information through electrical and chemical signals.
Breakthrough optical sensor mimics human eye, a key step toward better artificial intelligence
Researchers at Oregon State University are making key advances with a new type of optical sensor that more closely mimics the human eye's ability to perceive changes in its visual field. The sensor is a major breakthrough for fields such as image recognition, robotics and artificial intelligence. Findings by OSU College of Engineering researcher John Labram and graduate student Cinthya Trujillo Herrera were published today in Applied Physics Letters. Previous attempts to build a human-eye type of device, called a retinomorphic sensor, have relied on software or complex hardware, said Labram, assistant professor of electrical engineering and computer science. But the new sensor's operation is part of its fundamental design, using ultrathin layers of perovskite semiconductors--widely studied in recent years for their solar energy potential--that change from strong electrical insulators to strong conductors when placed in light.
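The defining behavior of a retinomorphic sensor, responding to changes in light rather than absolute brightness, can be sketched as a simple adaptive model. This is a hypothetical toy illustration of the principle, not the OSU perovskite device itself:

```python
def retinomorphic_response(frames, decay=0.8):
    """Emit a signal proportional to intensity change; static scenes fade to 0.

    Each pixel keeps an adaptive baseline and reports the deviation from it,
    so a sudden change produces a strong transient that decays as the pixel
    adapts -- similar to how the retina emphasizes motion and change.
    """
    baseline = frames[0]
    out = []
    for f in frames:
        out.append(f - baseline)                        # respond to change
        baseline = decay * baseline + (1 - decay) * f   # adapt to new level
    return out

# A step from dark (0) to bright (1) gives a transient spike, then adaptation:
print(retinomorphic_response([0, 0, 1, 1, 1, 1]))
```

In the OSU design this change-sensitivity is part of the device physics itself, rather than being computed in software as above, which is what distinguishes it from earlier retinomorphic attempts.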
Brain-inspired computing boosted by new concept of completeness
The next generation of high-performance, low-power computer systems might be inspired by the brain. However, as designers move away from conventional computer technology towards brain-inspired (neuromorphic) systems, they must also move away from the established formal hierarchy that underpins conventional machines -- that is, the abstract framework that broadly defines how software is processed by a digital computer and converted into operations that run on the machine's hardware. This hierarchy has helped enable the rapid growth in computer performance. Writing in Nature, Zhang et al. [1] define a new hierarchy that formalizes the requirements of algorithms and their implementation on a range of neuromorphic systems, thereby laying the foundations for a structured approach to research in which algorithms and hardware for brain-inspired computers can be designed separately. The performance of conventional digital computers has improved over the past 50 years in accordance with Moore's law, which states that technical advances will enable integrated circuits (microchips) to double their resources approximately every 18–24 months.
What Is Neuromorphic Computing & How Is It Transforming AI Research
Recently, Intel Corp. delivered fifty million artificial neurons to Sandia National Laboratories, roughly equivalent to the brain of a small mammal. The shipment is the first in a three-year series, by the end of which the number of experimental neurons in the final model is expected to reach 1 billion or more. This collaboration aims to push neuromorphic computing solutions to new heights while prototyping the software, algorithms, and architectures. "With a neuromorphic computer of this scale, we have a new tool to understand how brain-based computers can do impressive feats that we cannot currently do with ordinary computers," said Craig Vineyard, project leader at Sandia. Researchers believe that improved algorithms and computer circuitry can create broader applications for neuromorphic computers.
50 million artificial neurons to facilitate machine-learning research
Fifty million artificial neurons--a number roughly equivalent to the brain of a small mammal--were delivered from Portland, Oregon-based Intel Corp. to Sandia National Laboratories last month, said Sandia project leader Craig Vineyard. The neurons will be assembled to advance a relatively new kind of computing, called neuromorphic, based on the principles of the human brain. Its artificial components pass information in a manner similar to the action of living neurons, electrically pulsing only when a synapse in a complex circuit has absorbed enough charge to produce an electrical spike. "With a neuromorphic computer of this scale," Vineyard said, "we have a new tool to understand how brain-based computers are able to do impressive feats that we cannot currently do with ordinary computers." Improved algorithms and computer circuitry can create wider applications for neuromorphic computers, said Vineyard. Sandia manager of cognitive and emerging computing John Wagner said, "This very large neural computer will let us test how brain-inspired processors use information at increasingly realistic scales as they come to actually approximate the processing power of brains."
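The spiking behavior described above, pulsing only once enough charge has accumulated, is commonly modeled as a leaky integrate-and-fire (LIF) neuron. The sketch below is a generic LIF model under assumed parameters, not Intel's actual neuron circuit:

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: spike only when accumulated charge
    crosses a threshold, then reset.

    inputs: sequence of input currents, one per timestep
    returns: list of 0/1 spike events, one per timestep
    """
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # leaky integration of charge
        if potential >= threshold:
            spikes.append(1)    # emit a spike
            potential = 0.0     # reset after firing
        else:
            spikes.append(0)    # stay silent
    return spikes

# Weak repeated inputs accumulate until a spike; a pause lets charge leak away:
print(lif_neuron([0.3, 0.3, 0.3, 0.3, 0.0, 0.6, 0.6]))
```

Because such a neuron only consumes energy when it actually spikes, networks built from them are event-driven, which is the source of the efficiency claims made for neuromorphic hardware.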