The end is nigh! A killer robot has been taught how to hunt prey

AITopics Original Links

Scientists have taught a robot how to hunt and destroy prey in a chilling new experiment. The test comes as experts warn AI could wipe out a tenth of the global population in five years. The ability to identify and home in on a specific target will be crucial for any useful robotic technology like driverless cars, the researchers at the University of Zurich in Switzerland believe. And despite the chilling prospect of allowing a robot to mark up a target, they believe their research will prove more useful than deadly.

Scientists Taught a Robot to Hunt Prey


Google's autonomous cars may look cute, like a yuppie cross between a Little Tikes Cozy Coupe and a sheet of flypaper, but to make it in the real world they're going to have to act like calculating predators. At least, that's what a handful of scientists at the Institute of Neuroinformatics at the University of Zurich in Switzerland believe. They recently taught a robot to act like a predator and hunt its prey, a human-controlled robot, using a specialized camera and software that allowed the robot to essentially teach itself how to find its mark. The end goal of the work is arguably more beneficial to humanity than creating a future robot bloodsport, however. The researchers aim to design software that would allow a robot to assess its environment and find a target in real time.

A Contrast Sensitive Silicon Retina with Reciprocal Synapses

Neural Information Processing Systems

The goal of perception is to extract invariant properties of the underlying world. By computing contrast at edges, the retina reduces incident light intensities spanning twelve decades to a twentyfold variation. In one stroke, it solves the dynamic range problem and extracts relative reflectivity, bringing us a step closer to the goal. We have built a contrast-sensitive silicon retina that models all major synaptic interactions in the outer plexiform layer of the vertebrate retina using current-mode CMOS circuits: namely, reciprocal synapses between cones and horizontal cells, which produce the antagonistic center/surround receptive field, and cone and horizontal cell gap junctions, which determine its size. The chip has 90 x 92 pixels on a 6.8 x 6.9 mm die in 2 μm n-well technology and is fully functional.

1 INTRODUCTION

Retinal cones use both intracellular and extracellular mechanisms to adapt their gain to the input intensity level and hence remain sensitive over a large dynamic range. For example, photochemical processes within the cone modulate the photocurrents, while shunting inhibitory feedback from the network adjusts its membrane conductance. Adaptation makes the light sensitivity inversely proportional to the recent input level and the membrane conductance proportional to the background intensity. As a result, the cone's membrane potential is proportional to the ratio between the input and its spatial or temporal average, i.e. contrast.
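The abstract's key idea, that the cone's output is the ratio of the input to its local spatial average, can be sketched in a few lines. This is a minimal illustrative model, not the chip's actual current-mode circuit; the neighbourhood size is an assumption.

```python
import numpy as np

def local_contrast(image, size=3):
    """Toy model of the center/surround contrast computation: each
    pixel ("cone") is divided by the mean of its neighbourhood
    ("horizontal cell" surround). `size` is a hypothetical choice
    standing in for the gap-junction-determined surround extent."""
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    h, w = image.shape
    surround = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            # local spatial average, including the center pixel
            surround[i, j] = padded[i:i + size, j:j + size].mean()
    return image / np.maximum(surround, 1e-12)
```

On a uniformly lit region the ratio is 1 regardless of absolute intensity, which is how this scheme collapses a huge dynamic range; only at edges does the output deviate from 1.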

Direction Selective Silicon Retina that uses Null Inhibition

Neural Information Processing Systems

Biological retinas extract spatial and temporal features in an attempt to reduce the complexity of performing visual tasks. We have built and tested a silicon retina which encodes several useful temporal features found in vertebrate retinas. The cells in our silicon retina are selective to direction, highly sensitive to positive contrast changes around an ambient light level, and tuned to a particular velocity. Inhibitory connections in the null direction perform the direction selectivity we desire. This silicon retina is on a 4.6 x 6.8 mm die and consists of a 47 x 41 array of photoreceptors.
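The null-inhibition mechanism the abstract describes can be sketched with a toy 1-D model: each cell is excited by its own photoreceptor and inhibited by a one-step-delayed signal from its neighbour on the null side, so motion in the null direction arrives together with the inhibition and is cancelled. This is an illustration of the principle only, not the chip's circuit.

```python
def ds_response(events, pref_right=True):
    """Toy 1-D direction-selective cell array using null-direction
    inhibition. `events[t][i]` is 1 when photoreceptor i sees a
    contrast change at time step t. Returns the summed response;
    motion in the preferred direction escapes the delayed inhibition,
    motion in the null direction is suppressed."""
    n = len(events[0])
    total = 0
    prev = [0] * n
    for frame in events:
        for i in range(n):
            # inhibition arrives from the null-side neighbour,
            # delayed by one time step
            null_nbr = i + 1 if pref_right else i - 1
            inhib = prev[null_nbr] if 0 <= null_nbr < n else 0
            total += max(frame[i] - inhib, 0)
        prev = frame
    return total
```

A stimulus sweeping right excites each cell before its null-side neighbour's delayed inhibition can arrive; sweeping left, the inhibition and excitation coincide and the response collapses.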

Now Scientists Are Teaching a Robot to Hunt Prey


Some scientists are hard at work making a "kill switch" to overpower a too-strong AI and protect us, if needed. Others are specifically teaching robots how to hunt prey, also to help us. Researchers at the University of Zurich's Institute of Neuroinformatics are teaching a small, truck-shaped robot to see, track, and hunt its prey (another small, truck-shaped robot). The predator robot uses an advanced "silicon retina" to see instead of a traditional camera. This "silicon retina," which is modeled after animals' eyes, has pixels that report brightness changes continuously, in real time, instead of slowly processing frame-by-frame images.
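The change-reporting behaviour described above can be sketched as a simple pixel model: an event is emitted whenever the log intensity at a pixel moves past a threshold since the last event, so a static scene produces no output at all. The threshold value here is an assumption for illustration, not a parameter of the Zurich chip.

```python
import math

def events_from_intensity(samples, theta=0.5):
    """Sketch of a change-detecting ("silicon retina") pixel: emit an
    ON (+1) or OFF (-1) event each time log intensity crosses a step
    of size `theta` relative to the last event, instead of sending
    full frames. `samples` is one pixel's intensity over time."""
    events = []
    ref = math.log(samples[0])
    for t, s in enumerate(samples[1:], start=1):
        # brightness rose past the threshold: emit ON events
        while math.log(s) - ref >= theta:
            ref += theta
            events.append((t, +1))
        # brightness fell past the threshold: emit OFF events
        while ref - math.log(s) >= theta:
            ref -= theta
            events.append((t, -1))
    return events
```

An unchanging input yields an empty event stream, which is why this style of sensor wastes no bandwidth on static background, while a moving target produces a dense trail of events along its path.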