This AI Could Go From 'Art' to Steering a Self-Driving Car

WIRED

You've probably never wondered what a knight made of spaghetti would look like, but here's the answer anyway--courtesy of a clever new artificial intelligence program from OpenAI, a company in San Francisco. The program, DALL-E, released earlier this month, can concoct images of all sorts of weird things that don't exist, like avocado armchairs, robot giraffes, or radishes wearing tutus. OpenAI generated several images, including the spaghetti knight, at WIRED's request. DALL-E is a version of GPT-3, an AI model trained on text scraped from the web that's capable of producing surprisingly coherent text. DALL-E was trained on images paired with text descriptions; given a new caption, it can generate a decent mashup image.
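
For readers curious about the mechanics, here is a minimal, illustrative Python sketch of the autoregressive recipe the article alludes to: caption tokens and image tokens are concatenated into one sequence, and the model produces image tokens one at a time conditioned on the caption. The vocabulary sizes, token counts, and the toy next-token function are assumptions made for illustration, not OpenAI's implementation.

# Toy sketch (not OpenAI's code) of text-conditioned autoregressive image
# generation: caption tokens and image tokens share one sequence, and each
# image token is predicted from everything that came before it.
import random

TEXT_VOCAB = 256        # assumed size of the caption token vocabulary
IMAGE_VOCAB = 8192      # assumed size of the discrete image-patch codebook
IMAGE_TOKENS = 32 * 32  # assumed number of image tokens per picture

def toy_next_token(sequence):
    """Stand-in for a trained transformer: a real model would sample the next
    image token from learned logits; this one returns a pseudo-random id."""
    random.seed(sum(sequence))
    return random.randrange(IMAGE_VOCAB)

def generate_image_tokens(caption_tokens):
    """Autoregressively extend the caption with image tokens."""
    sequence = list(caption_tokens)
    image_tokens = []
    for _ in range(IMAGE_TOKENS):
        token = toy_next_token(sequence)
        image_tokens.append(token)
        sequence.append(TEXT_VOCAB + token)  # image ids are offset past text ids
    return image_tokens

if __name__ == "__main__":
    caption = [ord(c) % TEXT_VOCAB for c in "a knight made of spaghetti"]
    tokens = generate_image_tokens(caption)
    print(f"generated {len(tokens)} image tokens, first five: {tokens[:5]}")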


Designing customized 'brains' for robots

ScienceDaily > Artificial Intelligence

"The hang up is what's going on in the robot's head," she adds. Perceiving stimuli and calculating a response takes a "boatload of computation," which limits reaction time, says Neuman, who recently graduated with a PhD from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Neuman has found a way to fight this mismatch between a robot's "mind" and body. The method, called robomorphic computing, uses a robot's physical layout and intended applications to generate a customized computer chip that minimizes the robot's response time. The advance could fuel a variety of robotics applications, including, potentially, frontline medical care of contagious patients.

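As a rough illustration of the idea described above (and not MIT's actual tool), the Python sketch below shows one way a robot's physical layout yields design-time information a customized chip could exploit: the kinematic tree, given as a parent index per joint, fixes which entries of the joint-space mass matrix can ever be nonzero, so hardware tailored to that layout never spends time on the rest. The five-joint arm is a made-up example.

# Illustrative sketch only: derive, from a robot's joint layout, the sparsity
# pattern of its joint-space mass matrix. Entry (i, j) can be nonzero only
# when one joint is an ancestor of the other (or i == j).
def mass_matrix_sparsity(parent):
    """parent[k] is the parent joint of joint k, with -1 marking the base."""
    n = len(parent)

    def is_ancestor(a, b):  # True if joint a lies on the path from b to the base
        while b != -1:
            if b == a:
                return True
            b = parent[b]
        return False

    return [[is_ancestor(i, j) or is_ancestor(j, i) for j in range(n)]
            for i in range(n)]

if __name__ == "__main__":
    # Hypothetical arm: joints 0-1-2 form a chain; joints 3 and 4 branch off joint 1.
    parent = [-1, 0, 1, 1, 3]
    pattern = mass_matrix_sparsity(parent)
    nonzero = sum(sum(row) for row in pattern)
    print(f"{nonzero} of {len(parent) ** 2} entries can ever be nonzero; "
          f"a chip built around this layout skips the rest.")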

When to expect the real self-driving revolution

CNN Top Stories

This year, new technologies will enable more drivers to take their hands off the wheel while on the road. But that doesn't mean their cars will be fully self-driving -- that day remains far in the future. Automakers like General Motors (GM), Ford (F) and Stellantis (the company formed in the recent merger of Fiat Chrysler and Groupe PSA) are introducing -- or upgrading existing -- technologies that allow drivers to take their hands off the steering wheel and their feet off the pedals for long stretches of time. But these systems will still be limited in their capabilities. Drivers will still be required to pay constant attention to the road, for instance.


Behind those dancing robots, scientists had to bust a move

Boston Herald

The man who designed some of the world's most advanced dynamic robots was on a daunting mission: programming his creations to dance to the beat with a mix of fluid, explosive and expressive motions that are almost human. It took almost a year and a half of choreography, simulation, programming and upgrades, capped by two days of filming, to produce a video that runs less than three minutes. The clip, showing robots dancing to the 1962 hit "Do You Love Me?" by The Contours, was an instant hit on social media, attracting more than 23 million views during the first week. It shows two of Boston Dynamics' humanoid Atlas research robots doing the twist, the mashed potato and other classic moves, joined by Spot, a doglike robot, and Handle, a wheeled robot designed for lifting and moving boxes in a warehouse or truck. Boston Dynamics founder and chairperson Marc Raibert says what the robot maker learned in the process was far more valuable.


Ex-Google engineer among those pardoned by Donald Trump

BBC News - Technology

As a Google employee, he downloaded more than 14,000 files containing the intellectual property of Google's former self-driving car division, Waymo, before leaving to found Otto, which was soon acquired by Uber.


How to train a robot (using AI and supercomputers)

ScienceDaily > Artificial Intelligence

To navigate built environments, robots must be able to sense their surroundings and decide how to interact with their locale. Researchers at the company were interested in using machine and deep learning to train their robots to learn about objects, but doing so requires a large dataset of images. While there are millions of photos and videos of rooms, none were shot from the vantage point of a robotic vacuum. Efforts to train using images with human-centric perspectives failed. Beksi's research focuses on robotics, computer vision, and cyber-physical systems.
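
As a generic illustration of the training step the article describes (not the researchers' code), the Python sketch below fits a tiny convolutional network to a stand-in batch of labeled "camera frames." The network, the random images, and the two object classes are all assumptions; the real obstacle the article points to is collecting enough labeled images from the robot's own low, floor-level vantage point in the first place.

# Generic sketch of supervised object recognition; the data here is random
# noise standing in for labeled frames from a robot's camera.
import torch
import torch.nn as nn

images = torch.randn(64, 3, 64, 64)      # 64 fake 3x64x64 camera frames
labels = torch.randint(0, 2, (64,))      # fake labels for two object classes

model = nn.Sequential(                   # deliberately small illustrative CNN
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                   # a few passes over the toy batch
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")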