In 1851, a Florida doctor named John Gorrie received a patent for the first ice machine. He'd been trying to alleviate high fevers in malaria patients with cooled air. To this end, he designed an engine that could pull in air, compress it, then run it through pipes, allowing the air to cool as it expanded. It wasn't until the pipes on Gorrie's machine unexpectedly froze and began producing ice that he saw a new opportunity.
As Uber battles taxis and other ride-hailing apps in cities across the world, the company is moving quickly into a much larger transportation market: trucking. This spring, Uber unveiled Uber Freight, a brokerage service connecting shippers and truckers through a new app. Since then, the operation has split into two teams: self-driving research and development, managed by Alden Woodrow, formerly of Google X, and the Uber Freight team itself. Even in trucking, Uber faces legal trouble: its acquisition of the self-driving startup Otto prompted a lawsuit from Waymo, Alphabet's self-driving car division, over the alleged theft of sensor technology.
Elon Musk and Mark Zuckerberg are having a spat about whether artificial intelligence is going to kill us all. "But until people see robots going down the street killing people, they don't know how to react," Musk has warned. In a Facebook Live broadcast, Zuckerberg, Facebook's CEO, offered a riposte. Seeing the CEOs of publicly traded tech companies go at it like Tay and Kanye is unfamiliar territory.
Obama was a natural subject for this kind of experiment because there are so many readily available, high-quality video clips of him speaking. To make a photo-realistic mouth texture, the researchers had to input many, many examples of Obama speaking, layering that data atop a more basic mouth shape. The researchers used what's called a recurrent neural network to synthesize the mouth shape from the audio. (Recurrent neural networks are also used for facial recognition and speech recognition.)
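The idea of a recurrent network turning audio into mouth shapes can be sketched in a few lines. This is a toy illustration, not the researchers' actual model: the feature dimensions, landmark count, and randomly initialized weights are all assumptions standing in for a trained system.

```python
import numpy as np

rng = np.random.default_rng(0)

AUDIO_DIM = 13    # e.g. audio features per frame (illustrative assumption)
HIDDEN_DIM = 32   # recurrent state size (illustrative assumption)
MOUTH_DIM = 18    # e.g. 2D coordinates for 9 lip landmarks (assumption)

# Random weights stand in for parameters a real system would learn
# from many hours of video of the speaker.
W_in = rng.normal(0, 0.1, (HIDDEN_DIM, AUDIO_DIM))
W_rec = rng.normal(0, 0.1, (HIDDEN_DIM, HIDDEN_DIM))
W_out = rng.normal(0, 0.1, (MOUTH_DIM, HIDDEN_DIM))

def synthesize_mouth_shapes(audio_frames):
    """Map a (T, AUDIO_DIM) audio sequence to (T, MOUTH_DIM) mouth shapes."""
    h = np.zeros(HIDDEN_DIM)
    outputs = []
    for x in audio_frames:
        # The hidden state carries context forward from earlier frames,
        # which is what lets a recurrent net produce smooth mouth motion
        # rather than treating each audio frame in isolation.
        h = np.tanh(W_in @ x + W_rec @ h)
        outputs.append(W_out @ h)
    return np.array(outputs)

frames = rng.normal(size=(50, AUDIO_DIM))  # 50 frames of fake audio features
shapes = synthesize_mouth_shapes(frames)
print(shapes.shape)  # one mouth-shape vector per audio frame: (50, 18)
```

In the actual research pipeline, the predicted mouth shapes would then be rendered into a photo-realistic texture and composited onto target video, which is the step this sketch leaves out entirely.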
As part of that, they've promised to "bring highly automated driving functions to market as a core competency from 2021." They announced they're rolling out "Level 3" automation, meaning a car that can drive itself some of the time, in the A8 model this year, with promises to bring fully autonomous vehicles to market in 2020. On the electric side, the company has promised a sporty little electric vehicle called the I.D. Instead, the company's engineers had built the diesel engines to run artificially well under testing conditions (and only under testing conditions).
"Planes fly roughly 99 percent of the miles that they fly by computer. It's now to the place that it is not safe for humans to fly in a lot of conditions. If you could have a robotic surgeon that makes one mistake in 10,000, or a human that made one mistake in 1,000, are you really going to go under the knife with the human?" As a counterpoint, however, there are lots of Americans who choose to drive rather than fly, fearing the latter more despite knowing that it is statistically much safer.
In the report, researchers at the Facebook Artificial Intelligence Research lab describe using machine learning to train their "dialog agents" to negotiate. At one point, the researchers write, they had to tweak one of their models because otherwise the bot-to-bot conversation "led to divergence from human language as the agents developed their own language for negotiating." In other words, the model that allowed two bots to have a conversation, and to use machine learning to constantly iterate strategies for that conversation along the way, led to those bots communicating in their own non-human language. There is already a good deal of guesswork in machine learning research, which often means feeding a neural net a huge pile of data and then examining the output to try to understand how the machine thinks.
The world we experience is not the real world. Which raises the question: How would our world change if we had new and different senses? More recently, researchers in the emerging field of "sensory enhancement" have begun developing tools to give people additional senses--ones that imitate those of other animals, or that add capabilities nature never imagined. Researchers are working on other technologies that could restore sight or touch to those who lack it.
Despite the recent emergence of browser-based transcription aids, transcription has long been an area of drudgery in the modern Western economy where machines couldn't quite squeeze human beings out of the equation. That changed last year, when Microsoft built a system that could. Automatic speech recognition, or ASR, is an area that has gripped the firm's chief speech scientist, Xuedong Huang, since he entered a doctoral program at Scotland's Edinburgh University. Huang and his colleagues used their software to transcribe the NIST 2000 CTS test set, a bundle of recorded conversations that has served as the benchmark for speech recognition work for more than 20 years.
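Benchmarks like the NIST 2000 CTS test set are typically scored by word error rate (WER): the word-level edit distance between the recognizer's transcript and a human reference, divided by the reference length. A minimal sketch of that metric, using invented sentences rather than any real test data:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words, counting
    # substitutions, insertions, and deletions.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word ("on" -> "in") out of six reference words.
wer = word_error_rate("the cat sat on the mat", "the cat sat in the mat")
print(round(wer, 3))  # 0.167
```

A WER of zero means a perfect transcript; the Microsoft result was notable because the system's error rate on this benchmark reached the level measured for professional human transcribers.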
Imagine someone told you to draw a pig and a truck. Now imagine you were asked to draw a pig truck. Whatever you drew, I, a fellow human, would subjectively rate it a creative interpretation of the prompt "pig truck." One such pig truck is actually the output of a fascinating artificial intelligence system called SketchRNN, part of a new effort at Google to see if AI can make art.