When artificial intelligence technology intersects with abundant oil and gas seismic data, the outcome could yield a more accurate depiction of what lies beneath the surface, enabling cash-strapped drillers to better target sweet spots and maximize returns. It's all based on algorithms that essentially teach computers how to solve complex problems--in this instance, how to quickly and accurately find subsurface faults that lead to lucrative hydrocarbon discoveries. Naveen Rao, the CEO of two-year-old startup Nervana Systems, compared the concept to the brain and its network of neurons. "Each neuron does a little bit of information processing. It combines that with the output of many other neurons, and the whole stack basically processes information that comes in through our sensors," Rao told Hart Energy.
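Rao's analogy can be sketched in a few lines of code: each artificial "neuron" combines weighted inputs and squashes the result, and a later neuron combines the outputs of earlier ones. This is a minimal illustration, not Nervana's system; all weights and inputs here are made-up numbers chosen for the example.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, then a squashing function."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Two "sensor" readings feed three first-layer neurons (made-up weights)...
sensors = [0.5, 0.9]
layer1 = [neuron(sensors, [0.4, -0.6], 0.1),
          neuron(sensors, [0.7, 0.2], -0.3),
          neuron(sensors, [-0.5, 0.8], 0.0)]

# ...and a second-layer neuron combines their outputs, as Rao describes.
output = neuron(layer1, [0.3, 0.3, 0.3], -0.2)
print(output)
```

In a real network the weights are learned from data (for example, labeled seismic sections) rather than hand-picked, but the information flow is the same: each unit does a little processing and passes its result up the stack.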
A team of computer scientists may have developed a surprising way to curb wildlife poaching. Funded by the National Science Foundation (NSF), the team, based at the University of Southern California (USC), has developed a model for "green security games" that uses game theory to protect wildlife from poachers. Game theory uses mathematical equations "to predict the behavior of adversaries and plan optimal approaches for containment," explains NSF, which would allow park rangers to patrol parks and wildlife sanctuaries more effectively. "In most parks, ranger patrols are poorly planned, reactive rather than pro-active and habitual," Fei Fang, a Ph.D. candidate in the computer science department at USC and a researcher involved with the project, tells NSF. "We need to provide actual patrol routes that can be practically followed."
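The core idea behind a security game can be sketched with a toy example: a defender (the rangers) splits patrol effort between targets, and a poacher best-responds by attacking whichever target offers the highest expected payoff. The defender then picks the split that minimizes that best response. This is a simplified, zero-sum stand-in for the USC model, and all payoff numbers are hypothetical.

```python
def attacker_value(p_cover, reward, penalty):
    """Poacher's expected payoff at a target patrolled with probability p_cover."""
    return (1 - p_cover) * reward + p_cover * penalty

def best_coverage(rewards, penalties, steps=1000):
    """Brute-force the defender's patrol split over two targets, minimizing
    the best-responding poacher's expected payoff (zero-sum simplification)."""
    best_p, best_val = 0.0, float("inf")
    for i in range(steps + 1):
        p = i / steps  # probability of patrolling target 0 (target 1 gets 1 - p)
        val = max(attacker_value(p, rewards[0], penalties[0]),
                  attacker_value(1 - p, rewards[1], penalties[1]))
        if val < best_val:
            best_p, best_val = p, val
    return best_p, best_val

# Hypothetical payoffs: target 0 is more valuable to the poacher than target 1.
p, v = best_coverage(rewards=[10, 6], penalties=[-4, -2])
print(p, v)
```

Note how the optimal split is not uniform: the defender patrols the higher-value target more often, exactly the kind of non-habitual, adversary-aware routing Fang describes. Real green security games add patrol routes, terrain, and boundedly rational poacher models on top of this skeleton.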
A century ago, more than 60,000 tigers roamed the wild. Today, that number has dwindled to around 3,200. Poaching is one of the main drivers of this steep decline: humans have pushed tigers to near-extinction, whether for their skins, for medicine, or for trophy hunting. The same applies to other large animal species, like elephants and rhinoceroses, that play unique and crucial roles in the ecosystems where they live.
In the late '00s, some clever academics rebranded a subset of neural network techniques as 'Deep Learning', which just means a stack of different nets on top of one another, forming a sort of computationally brilliant lasagne. (When I say 'machine learning' in this blogpost, I'm referring to some kind of neural network technique.) Robotics has only just started to adopt neural networks, and this has already sped up development. This year, Google demonstrated a system that lets robotic arms learn to pick up objects of any size and shape.
Among all of the self-driving startups working toward Level 4 autonomy (a self-driving system that doesn't require human intervention in most scenarios), Mountain View, Calif.-based Drive.ai sees deep learning as the only viable way to make a truly useful autonomous car in the near term, says Sameep Tandon, cofounder and CEO. "If you look at the long-term possibilities of these algorithms and how people are going to build [self-driving cars] in the future, having a learning system just makes the most sense. There's so much complication in driving, there are so many things that are nuanced and hard, that if you have to do this in ways that aren't learned, then you're never going to get these cars out there." It's only been about a year since Drive went public, but already, the company has a fleet of four vehicles navigating (mostly) autonomously around the San Francisco Bay Area--even in situations (such as darkness, rain, or hail) that are notoriously difficult for self-driving cars. Last month, we went out to California to take a ride in one of Drive's cars, and to find out how it's using deep learning to master autonomous driving.