Light forms the global backbone of information transmission yet is rarely used for information transformation. Digital optical logic faces fundamental physical challenges [1]. Many analog approaches have been researched [2,3,4], but analog optical co-processors have faced major economic challenges. Optical systems have never achieved competitive manufacturability, nor have they satisfied a sufficiently general processing demand better than digital electronic contemporaries. Incipient changes in the supply and demand for photonics have the potential to spark a resurgence in optical information processing.
Deep learning is having a serious moment right now in the world of AI. Loosely based on the brain's computing architecture, artificial neural networks have vastly outperformed their predecessors in a variety of tasks that had previously stumped our silicon-minded comrades. But as these algorithms continuously break new ground in machine intelligence, we're coming to an uncomfortable realization: transistor-based computers have hard limits, and those limits are approaching rapidly. Now, thanks to a new system developed by Princeton engineers, we may have one way to smash the speed barrier of our current processors: neuromorphic computing running on photons, not electrons, with silicon chips that work at the speed of light. Published this week on arXiv, the new photonic neural network is so blazingly fast that when pitted against a conventional CPU in solving differential equations, it performed roughly 2,000 times faster.
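The "solving differential equations" benchmark above refers to emulating a recurrent neural network whose node dynamics are themselves described by ordinary differential equations. As a rough illustration of the kind of computation being accelerated, here is a minimal continuous-time recurrent network integrated with forward Euler; the network size, weights, time constant, and step size are illustrative assumptions, not the Princeton design:

```python
import numpy as np

def ctrnn_step(y, W, tau, dt, I):
    """One forward-Euler step of a continuous-time recurrent
    neural network obeying  tau * dy/dt = -y + W @ tanh(y) + I."""
    dydt = (-y + W @ np.tanh(y) + I) / tau
    return y + dt * dydt

# Illustrative 3-node network with arbitrary random weights.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(3, 3))
y = np.zeros(3)            # node states
I = np.ones(3)             # constant external input
for _ in range(1000):      # integrate to t = 10 with dt = 0.01
    y = ctrnn_step(y, W, tau=1.0, dt=0.01, I=I)
```

A conventional CPU must execute each of these steps serially; the reported speedup comes from evaluating the weighted interconnections physically, in parallel, with light.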
Researchers at the National Institute of Standards and Technology (NIST) have made a silicon chip that distributes optical signals precisely across a miniature brain-like grid, showcasing a potential new design for neural networks. The human brain has billions of neurons (nerve cells), each with thousands of connections to other neurons. Many computing research projects aim to emulate the brain by creating circuits of artificial neural networks. But conventional electronics, including the electrical wiring of semiconductor circuits, often impedes the extremely complex routing required for useful neural networks. The NIST team proposes to use light instead of electricity as a signaling medium.
As developments are made in neural computing, we can continue to push artificial intelligence further. A fairly recent technology, neural networks have been taking over the world of data processing, giving machines advanced capabilities such as object recognition, face recognition, natural language processing, and machine translation. These sound like simple things, but they were way out of reach for processors until scientists began to find ways to make machines behave more like human brains in the way they learned and handled data. To do this, scientists have been focusing on building neuromorphic chips, circuits that operate in a similar fashion to neurons. Now, a team at Princeton University has found a way to build a neuromorphic chip that uses light to mimic neurons in the brain, and their study has been detailed in a paper posted to arXiv, hosted by Cornell University Library.
While there are lots of things that artificial intelligence can't do yet (science being one of them), neural networks are proving themselves increasingly adept at a huge variety of pattern recognition tasks. These tasks can range anywhere from recognizing specific faces in photos to identifying specific patterns of particle decays in physics. Right now, neural networks are typically run on regular computers. Unfortunately, those computers are a poor architectural match for neural networks: neurons combine both memory and calculation in a single unit, while our computers keep those functions separate. For this reason, some companies are exploring dedicated neural network chips.
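The memory/computation point above can be made concrete with a toy artificial neuron (the class name and sizes here are illustrative assumptions, not any particular chip's design): the weights are simultaneously the neuron's stored state and the parameters of its computation, whereas a von Neumann machine must shuttle those weights from a separate memory bank to the processor on every evaluation.

```python
import numpy as np

class ToyNeuron:
    """Toy artificial neuron: its weights serve as both its 'memory'
    and the coefficients of its computation, with no separate memory
    bank to fetch from (unlike a conventional CPU architecture)."""

    def __init__(self, n_inputs, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=n_inputs)  # stored state (memory)
        self.b = 0.0                        # bias, also stored locally

    def fire(self, x):
        # The computation happens where the memory lives:
        # a weighted sum of inputs passed through a nonlinearity.
        return np.tanh(self.w @ x + self.b)

neuron = ToyNeuron(4)
out = neuron.fire(np.ones(4))  # bounded activation in (-1, 1)
```

Dedicated neural-network hardware, photonic or electronic, tries to preserve exactly this co-location of weights and arithmetic instead of paying the cost of constant memory traffic.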