If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Yes, once upon a time, it was a Steven Spielberg movie (and an underrated masterpiece on top of that). But today "AI" is a buzz phrase that pops up with more and more frequency in all kinds of social, political and especially technological contexts. AI stands, of course, for "artificial intelligence," a term often conflated with "machine learning," which is really just one approach to it. Ask most folks to define AI, however, and it's much like asking people how a toilet works: You think you know, until you have to explain it to someone else. Artificial intelligence is a moving target, which makes it a challenge for even professionals in tech to fully understand it.
The differences between 5G and 6G are not just about what collection of bandwidths will make up 6G in the future and how users will connect to the network, but also about the intelligence built into the network and devices. "The collection of networks that will create the fabric of 6G must work differently for an augmented reality (AR) headset than for an e-mail client on a mobile device," says Shahriar Shahramian, a research lead with Nokia Bell Laboratories. "Communications providers need to solve a plethora of technical challenges to make a variety of networks based on different technologies work seamlessly," he says. Devices will have to jump between different frequencies, adjust data rates, and adapt to the needs of the specific application, which could be running locally, on the edge of the cloud, or on a public service. "One of the complexities of 6G will be: how do we bring the different wireless technologies together so they can hand off to each other and work together really well, without the end user even knowing about it," Shahramian says.
Machine vision is increasingly important for many applications, such as object classification. However, conventional RGB imaging is sometimes insufficient – the input images are simply too similar, regardless of algorithmic sophistication. Hyperspectral imaging adds the extra dimension of wavelength to conventional images, providing a much richer data set. Rather than expressing an image as red, green, and blue (RGB) values at each pixel location, hyperspectral cameras record a complete spectrum at each point, creating a 3D data set sometimes referred to as a hyperspectral data cube. The additional spectral dimension enables supervised learning algorithms to characterize visually indistinguishable objects – a capability that is highly desirable across multiple application sectors.
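To make the data-cube idea concrete, here is a minimal sketch in Python with NumPy. The dimensions (64x64 pixels, 100 spectral bands) and the random data are illustrative assumptions, not values from any particular camera; the point is the shape of the structure and how per-pixel spectra become feature vectors for a supervised learner.

```python
import numpy as np

# Illustrative hyperspectral "data cube": height x width x spectral bands.
# An RGB image would store just 3 values per pixel; the cube stores a
# full spectrum (here, 100 samples) per pixel. All values are random
# placeholders for this sketch.
H, W, BANDS = 64, 64, 100
cube = np.random.rand(H, W, BANDS)

print(cube.shape)          # (64, 64, 100) - the 3D data cube
print(cube[10, 20].shape)  # (100,) - the complete spectrum at pixel (10, 20)

# For supervised learning, each pixel becomes one sample and each
# spectral band becomes one feature:
X = cube.reshape(-1, BANDS)
print(X.shape)             # (4096, 100) - samples x features
```

A classifier trained on `X` can separate materials whose RGB values are identical but whose spectra differ, which is the advantage the extra wavelength dimension provides.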
A team of scientists has used GPU-accelerated deep learning to show how color can be brought to night-vision systems. In a paper published this week in the journal PLOS One, a team of researchers at the University of California, Irvine led by Professor Pierre Baldi and Dr. Andrew Browne describes how they reconstructed color images of faces from photos taken with an infrared camera. The study is a step toward predicting and reconstructing what humans would see using cameras that collect light from scenes lit only by imperceptible near-infrared illumination. The study's authors explain that humans see light in the so-called "visible spectrum," or light with wavelengths between 400 and 700 nanometers. Typical night vision systems rely on cameras that collect infrared light outside this spectrum, which we can't see.
The monochromatic black-and-green that defined night vision for decades is quickly receding into the past. The U.S. military already issues night-vision goggles that outline people and other objects in bright white, and researchers across the world are racing to develop even more advanced ways of seeing in the dark. A new proof-of-principle study offers intriguing hints about how the next generation of such technology might work. In a paper published Wednesday in the academic journal PLOS ONE, researchers demonstrate that a deep learning algorithm can build a full-color reconstruction of a scene using only infrared images the human eye can't see. These findings suggest an exciting new future for night-vision technology.
Night vision is typically monotone--everything the wearer can see is colored in the same hue, mostly shades of green. But by using varying wavelengths of infrared light and a relatively simple AI algorithm, scientists from the University of California, Irvine have been able to bring some color back into these desaturated images. Their findings are published in the journal PLOS ONE this week. Light in the visible spectrum, like an FM radio signal, consists of many different frequencies; both light and radio are part of the electromagnetic spectrum.
Scientists from the University of California, Irvine have developed a camera system that combines artificial intelligence (AI) with an infrared camera to capture full-color photos even in complete darkness. Human vision perceives light in what is known as the "visible spectrum," wavelengths between about 400 and 700 nanometers. Infrared light lies beyond 700 nanometers and is invisible to humans without the help of special technology; many night vision systems detect infrared light and transpose it into a digital display that gives humans a monochromatic view. The scientists took that process one step further, combining the infrared data with an AI algorithm that predicts color, rendering images as they would appear if the light were in the visible spectrum. Typical night vision systems render scenes as a monochromatic green display, while newer systems use ultrasensitive cameras to detect and amplify visible light. The scientists note that low-illuminance computer vision tasks have employed image enhancement and deep learning to aid object detection and characterization in the infrared spectrum, but not to accurately interpret the same scene in the visible spectrum.
Night-vision cameras convert infrared light – outside the spectrum visible to humans – into visible light so we can "see in the dark". But this infrared information only allows a black-and-white image to be constructed. Now, AI can colourise these images for a more natural feel. Andrew Browne at the University of California, Irvine, and his colleagues used a camera that can detect both visible light and part of the infrared spectrum to take 140 images of different faces. The team then trained a neural network to spot correlations between the way objects appeared in infrared and their colour in the visible spectrum. Once trained, this AI could predict the visible colouring from pure infrared images, even those originally taken in total darkness.
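The training setup described above can be sketched in a few lines of Python. This is a deliberately simplified stand-in: where the study trains a deep neural network on 140 face images, the sketch below fits a per-pixel least-squares mapping on synthetic data, just to show the core idea of learning correlations between infrared intensities and visible color from paired examples. The three infrared channels, the sample counts, and the hidden linear relationship are all assumptions of this toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired training data: each "pixel" has intensities at 3 infrared
# wavelengths (an assumption for this sketch) plus a ground-truth RGB
# colour generated from a hidden linear IR -> colour relationship.
n_pixels = 5000
ir = rng.random((n_pixels, 3))     # infrared intensities per pixel
true_map = rng.random((3, 3))      # hidden IR -> RGB relationship
rgb = ir @ true_map                # "visible" colours the model must learn

# Training: fit the mapping from the paired (infrared, colour) examples.
# The paper uses a deep network; least squares plays that role here.
learned_map, *_ = np.linalg.lstsq(ir, rgb, rcond=None)

# Inference: predict colour for new pixels seen only in infrared,
# as if the scene had been captured in total darkness.
ir_test = rng.random((10, 3))
rgb_pred = ir_test @ learned_map
```

Because the toy data are noise-free and exactly linear, the fitted mapping recovers the hidden one almost perfectly; the real problem is far harder, which is why the researchers needed a neural network and real paired images.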
It has been suggested that in a rapid enough takeoff scenario, governance would not be useful, because the transition to superintelligence would be too rapid for human actors - whether governments, corporations, or individuals - to respond to. This seems to imply that takeoff speed is the only thing that matters, and that the case for governance applies only if you believe slow takeoff is likely. Of course, it also matters how long we have until takeoff - but even so, I think this framing leaves a fair amount on the table in terms of what governance could do, and I want to make the case that even in a fast-takeoff world, governance (still defined broadly) is important - though in different ways. To make the argument, I will lay out three possibilities about AI alignment that are orthogonal to takeoff speed and timing: alignment-by-default, prosaic alignment, and provable alignment.
The world has always been a lopsided, unfair mess--a statement that holds true regardless of which business sector you talk about or which country you visit. The rich, despite constituting less than 5% of the global population, always seem to wield an outsized influence over the rest - in relative terms, the have-nots. Giant corporations trample local businesses when they set up shop in a new country. Issues such as racism, sexism and unfair economic divides have been prevalent for what feels like an eternity. Technologies such as AI, computer vision and NLP were supposed to bridge this gap.