You wake up on a bus, surrounded by all your remaining possessions. A few fellow passengers slump on pale blue seats around you, their heads resting against the windows. You turn and see a father holding his son. But one man, with a salt-and-pepper beard and khaki vest, stands near the back of the bus, staring at you. You feel uneasy and glance at the driver, wondering if he would help you if you needed it. When you turn back around, the bearded man has moved toward you and is now just a few feet away.
Creating driverless cars capable of humanlike reasoning is a long-standing pursuit of companies like Waymo, GM's Cruise, Uber, and others. Intel's Mobileye proposes a mathematical model -- Responsibility-Sensitive Safety (RSS) -- that it describes as a "common sense" approach to on-the-road decision-making, one that codifies good habits like giving other cars the right of way. For its part, Nvidia is actively developing Safety Force Field, a decision-making policy in a motion-planning stack that monitors unsafe actions by analyzing real-time sensor data. Now, a team of MIT scientists is investigating an approach that leverages GPS-like maps and visual data to enable autonomous cars to learn human steering patterns, and to apply the learned knowledge to complex planned routes in previously unseen environments. Their work -- which will be presented at the International Conference on Robotics and Automation in Long Beach, California next month -- builds on end-to-end navigation systems architected by Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL).
With the aim of bringing more humanlike reasoning to autonomous vehicles, MIT researchers have created a system that uses only simple maps and visual data to enable driverless cars to navigate routes in new, complex environments. Human drivers are exceptionally good at navigating roads they haven't driven on before, using observation and simple tools: we simply match what we see around us to what we see on our GPS devices to determine where we are and where we need to go. Today's autonomous control systems, by contrast, must first map and analyze all the roads in every new area, which is very time-consuming. The systems also rely on complex maps -- usually generated by 3-D scans -- which are computationally intensive to generate and process on the fly.
Self-driving delivery vehicles may be getting closer to becoming a reality, but Ford believes there's one leg of the process that could be further solved by robots. The auto giant has partnered with startup Agility Robotics to create a two-legged robot called 'Digit' that can ferry packages to your doorstep. It addresses a gap created by self-driving delivery vehicles: with no human in the driver's seat to drop off a package, a walking robot can pick up the slack. 'It's not always convenient for people to leave their homes to retrieve deliveries or for businesses to run their own delivery services,' Ken Washington, chief technology officer at Ford, wrote in a blog post. 'If we can free people up to focus less on the logistics of making deliveries, they can turn their time and effort to things that really need their attention.'
In this Oct. 31, 2018, file photo, a man, who declined to be identified, has his face painted to represent efforts to defeat facial recognition during a protest at Amazon headquarters over the company's facial recognition system, "Rekognition," in Seattle. San Francisco is on track to become the first U.S. city to ban the use of facial recognition by police and other city agencies. These days, with facial recognition technology, you've got a face that can launch a thousand applications, so to speak. Sure, you may love the ease of opening your phone just by facing it instead of tapping in a code. But how do you feel about having your mug scanned to identify you as you drive across a bridge, board an airplane, or enter a Taylor Swift concert to confirm you're not a stalker?
A fleet of miniature autonomous cars has shown that driverless cars can improve traffic flow by at least 35% when programmed to work together. Researchers at the University of Cambridge, UK, tested how 16 miniature robotic cars driving around a two-lane track reacted when one of the cars on the inner lane stopped. When the cars were in cooperative mode, the stopped car alerted the rest, and cars in the outer lane slowed down as they neared it, allowing the inner-lane cars to quickly pass the obstruction. However, when they were not driving cooperatively, traffic built up as cars had to stop and wait for a safe moment to overtake the stopped car. "Autonomous cars could fix a lot of different problems associated with driving in cities, but there needs to be a way for them to work together," said co-author of the study Michael He, an undergraduate student at St John's College who designed the lane-changing algorithms for the experiment.
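The cooperative behavior described above -- a stopped car broadcasting its position so that outer-lane cars slow down and open a gap for inner-lane traffic -- can be sketched in a few lines. This is a minimal illustration of the idea, not the researchers' actual lane-changing algorithm; the `Car` fields and the `slow_zone` parameter are assumptions for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Car:
    lane: int            # 0 = inner lane, 1 = outer lane
    position: float      # position along the track (metres)
    speed: float         # current speed (m/s)
    stopped: bool = False

def cooperative_step(cars, slow_zone=10.0, slow_speed=2.0):
    """One control tick: every stopped car broadcasts its position, and
    outer-lane cars approaching within slow_zone reduce speed so that
    inner-lane cars can merge around the obstruction."""
    stopped_positions = [c.position for c in cars if c.stopped]
    for car in cars:
        if car.stopped:
            continue
        near_obstruction = any(
            0.0 <= (p - car.position) <= slow_zone for p in stopped_positions
        )
        if car.lane == 1 and near_obstruction:
            # Yield a merging gap instead of forcing traffic to halt.
            car.speed = min(car.speed, slow_speed)
```

In the non-cooperative case there is no broadcast, so following cars only react once they are directly behind the obstruction -- which is why traffic backed up in the experiment's second condition.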
If you board a flight out of the United States four years from now, chances are the government is going to scan your face -- an ambitious timeline that has privacy experts reeling. That's according to a recent Department of Homeland Security report, which says that U.S. Customs and Border Protection (CBP) plans to dramatically expand its Biometric Exit program to cover 97 percent of outbound air passengers within four years. Through this program, which was already in place in 15 U.S. airports at the end of 2018, passengers have their faces scanned by cameras before boarding flights out of the nation. If the AI-powered system determines that the photo doesn't match one on file, CBP officials can look into it. The goal of these airport face scans is purportedly to catch people who have overstayed their visas, but civil liberties expert Edward Hasbrouck sees them as potentially giving the government increased control over American citizens.
AMAZON: Has opened an AI-powered convenience store in Seattle. The premise of Amazon Go is simple: to eliminate everyone's least-favorite part of the shopping experience, checking out. With ceiling-mounted sensors and cameras backed by artificial intelligence, Amazon is able to track every interaction a customer has with a product. It knows exactly when a product is picked up or put back. Go works like a physical manifestation of Amazon's 1-Click checkout, where you "click" by taking an item off a shelf.
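The "1-Click in physical form" idea above amounts to an event stream: each sensor-detected shelf interaction becomes a pick-up or put-back event, and the shopper's cart is just the running tally. The following toy model illustrates that mechanism; the class and event names are illustrative assumptions, not Amazon's actual system.

```python
from collections import Counter

class VirtualCart:
    """Toy model of 'just walk out' shopping: sensors emit pick-up and
    put-back events per item (SKU), and checkout charges whatever
    remains in the tally when the shopper leaves."""

    def __init__(self):
        self.items = Counter()

    def on_event(self, kind, sku):
        # Each camera/sensor detection updates the running cart.
        if kind == "pick_up":
            self.items[sku] += 1
        elif kind == "put_back" and self.items[sku] > 0:
            self.items[sku] -= 1

    def checkout(self, prices):
        # The "1-Click" moment: no register, just a charge on exit.
        return sum(prices[sku] * n for sku, n in self.items.items())
```

Tracking put-backs as first-class events is what lets the store know "exactly when a product is picked up or put back," rather than only what crosses the exit.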
Decades after Isaac Asimov first wrote his laws for robots, their ever-expanding role in our lives requires a radical new set of rules, legal and AI expert Frank Pasquale warned on Thursday. The world has changed since sci-fi author Asimov wrote his three rules for robots in 1942, including that they should never harm humans, and today's omnipresent computers and algorithms demand up-to-date measures. According to Pasquale, author of "The Black Box Society: The Secret Algorithms Behind Money and Information", four new legally inspired rules should be applied to robots and AI in our daily lives. "The first is that robots should complement rather than substitute for professionals," Pasquale told AFP on the sidelines of a robotics conference at the Vatican's Pontifical Academy of Sciences. "Rather than having a robot doctor, you should hope that you have a doctor who really understands how AI works and gets really good advice from AI, but ultimately it's a doctor's decision to decide what to do and what not to do." "The second is that we need to stop robotic arms races. There's a lot of people right now who are investing in war robots, military robots, policing robots."