Robots in the workplace can perform hazardous or even 'impossible' tasks, such as toxic-waste clean-up and desert and space exploration. AI researchers are also interested in the intelligent processing involved in moving about and manipulating objects in the real world.
These days, it’s tough to avoid newspaper headlines warning that artificial intelligence is coming for your job. The problem is that, often, the only thing these oversimplifications get right is that there is in fact an important connection between automation and work. What’s surprising is how many examples there are of AI acting as the catalyst for new hiring, higher wages, and happier employees. But of course AI success stories aren’t as exciting as the “job-stealing robots” narrative. The reality is that the impact of AI on the workforce is complex, nuanced, and still very much in transition.
The technology entrepreneur Elon Musk recently urged the nation's governors to regulate artificial intelligence "before it's too late." Mr. Musk insists that artificial intelligence represents an "existential threat to humanity," an alarmist view. A more practical approach is to regulate the tangible impact of A.I. systems (for example, the safety of autonomous vehicles) rather than trying to define and rein in the amorphous and rapidly developing field of A.I. I propose three rules for artificial intelligence systems that are inspired by, yet develop further, the "three laws of robotics" that the writer Isaac Asimov introduced in 1942: A robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings, except when such orders would conflict with the previous law; and a robot must protect its own existence as long as such protection does not conflict with the previous two laws.
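Asimov's three laws form a strict priority ordering: each law yields only to the laws above it. A toy sketch of that ordering (our own illustration, not from the op-ed; the action encoding is hypothetical):

```python
# Toy sketch: Asimov's three laws as a strict priority ordering, where a
# lower-numbered law always overrides the ones below it. The dict-based
# action encoding is made up for illustration.

def permitted(action):
    """Check a proposed action against the three laws, in priority order."""
    # Law 1: never injure a human, or allow one to come to harm by inaction.
    if action["harms_human"]:
        return False
    # Law 2: obey human orders, unless obeying would violate Law 1.
    if action["disobeys_order"]:
        return False
    # Law 3: protect own existence, unless Laws 1 or 2 demand otherwise.
    if action["endangers_self"] and not action["ordered"]:
        return False
    return True

# An ordered act of self-sacrifice is allowed: Law 2 outranks Law 3.
print(permitted({"harms_human": False, "disobeys_order": False,
                 "endangers_self": True, "ordered": True}))
```

The point of the ordering is that the checks short-circuit: once a higher law forbids an action, the lower laws are never consulted.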
Guy Hoffman, who is well known for the fascinating creativity of his robot designs, has been working on a completely new kind of social robot in a collaboration between his lab at Cornell and Google ZOO's creative technology team in APAC. Guy Hoffman: Looking at the design of the huge number of social robots revealed in recent years, there are a lot of repetitive features: white shiny plastic with metal or black accents, glass screens, and smooth, rounded lines and edges. Blossom takes a different approach: its soft components give the robot a physical compliance that makes it move in an imperfect, lifelike way, which would be impossible to recreate with rigid components.
One response to the call by experts in robotics and artificial intelligence for a ban on "killer robots" ("lethal autonomous weapons systems," or LAWS, in the language of international treaties) is to say: shouldn't you have thought about that sooner? There are shades of science-fictional preconceptions in a 2012 report on killer robots by Human Rights Watch. Besides, there's a continuum between drone war, soldier enhancement technologies, and LAWS that can't be broken down into "man versus machine". By all means let's try to curb our worst impulses to beat ploughshares into swords, but telling an international arms trade that they can't make killer robots is like telling soft-drinks manufacturers that they can't make orangeade.
Before autonomous trucks and taxis hit the road, manufacturers will need to solve problems far more complex than collision avoidance and navigation (see "10 Breakthrough Technologies 2017: Self-Driving Trucks"). These vehicles will have to anticipate and defend against a full spectrum of malicious attackers wielding both traditional cyberattacks and a new generation of attacks based on so-called adversarial machine learning (see "AI Fight Club Could Help Save Us from a Future of Super-Smart Cyberattacks"). When hackers demonstrated that vehicles on the roads were vulnerable to several specific security threats, automakers responded by recalling and upgrading the firmware of millions of cars. The computer vision and collision avoidance systems under development for autonomous vehicles rely on complex machine-learning algorithms that are not well understood, even by the companies that rely on them (see "The Dark Secret at the Heart of AI").
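The core idea behind adversarial machine learning is simple even when the target models are not: nudge an input a small step in the direction that most increases the model's error. A minimal sketch against a toy linear classifier (model, data, and the "stop sign" framing are our own illustration; real attacks target deep vision models, but the principle is the same):

```python
# FGSM-style adversarial perturbation against a toy linear classifier.
# All weights and inputs are random, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)          # weights of a toy linear "stop sign" detector
b = 0.1
x = rng.normal(size=16)          # an input the model currently scores

def score(x):
    return w @ x + b             # higher score => more confident "stop sign"

# For a linear model, the gradient of the score w.r.t. the input is just w.
# The attack moves each input component a small step eps in the direction
# that pushes the score toward the wrong answer.
eps = 0.25
x_adv = x - eps * np.sign(w)

print(score(x), score(x_adv))    # the adversarial score is strictly lower
```

Note how small the perturbation is: no component of the input moves by more than `eps`, yet the score drops by `eps` times the sum of the absolute weights. Deep networks exhibit the same sensitivity, which is why imperceptible changes to a road sign can flip a vision system's classification.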
A bill that would speed up development of self-driving cars and establish a federal framework for their regulation, the Highly Automated Vehicle Testing and Deployment Act of 2017, is now working its way through Congress. But automakers are also willing to expose vehicles via online software updates, because the logistical challenges posed by physical downloads (car drives to shop, shop downloads new software) would make the frequent improvements required to millions and millions of lines of code virtually impossible to effect. Geater explained that some of the measures being taken to improve security include separating functions – the sound system can communicate with the vehicle speed system (to modulate sound volume according to vehicle speed), but neither can communicate with the transmission, for example. "People prove time and time again to be absolutely terrible, dangerous drivers," Geater said, adding that the risks posed by an actual human behind the wheel of a car far outweigh those posed by a potential hacker.
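The function-separation measure Geater describes amounts to an allowlist at the gateway between in-car subsystems: only explicitly approved routes are forwarded, everything else is dropped. A sketch of that idea (subsystem names and message shapes are our own illustration, not any real automotive API):

```python
# Gateway allowlist sketch: messages between in-car subsystems are only
# forwarded if the (source, destination) pair is explicitly approved.
# Names and message formats are hypothetical.

ALLOWED_ROUTES = {
    ("speed_sensor", "sound_system"),   # speed may modulate the volume...
    ("speed_sensor", "dashboard"),
}

def forward(source, destination, message, routes=ALLOWED_ROUTES):
    """Deliver a message only if its route is on the allowlist."""
    if (source, destination) not in routes:
        return None                     # drop everything not approved
    return (destination, message)

# Speed data may reach the sound system...
print(forward("speed_sensor", "sound_system", {"kph": 80}))
# ...but nothing can reach the transmission, so a compromised sound
# system cannot shift gears.
print(forward("sound_system", "transmission", {"cmd": "shift"}))
```

Denying by default and enumerating the allowed routes keeps the attack surface small: compromising one subsystem yields only the handful of links that subsystem legitimately needs.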
Singapore and MIT have been at the forefront of autonomous vehicle development. Now, leveraging similar technology, MIT and Singaporean researchers have developed and deployed a self-driving wheelchair at a hospital. Spearheaded by Daniela Rus, the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science and director of MIT's Computer Science and Artificial Intelligence Laboratory, this autonomous wheelchair is an extension of the self-driving scooter that launched at MIT last year, and it is a testament to the success of the Singapore-MIT Alliance for Research and Technology, or SMART, a collaboration between researchers at MIT and in Singapore. Rus, who is also the principal investigator of the SMART Future Urban Mobility research group, says this newest innovation can free nurses from logistical work, such as searching for wheelchairs and wheeling patients through the complex hospital network, so they can focus more on patient care.
Over the last several years, a team of roboticists at the University of Tehran has been working on increasingly large and complex life-size humanoids. A team of 15 researchers at the University of Tehran's Center for Advanced Systems and Technologies worked for over a year to design and build Surena Mini, which is 50 centimeters tall and weighs 3.4 kilograms. Its hands aren't designed for grasping objects, but Surena Mini can push on small things--or karate-chop them. A little over a year ago, the same group unveiled Surena III, an advanced adult-size humanoid designed for researching bipedal locomotion, human-robot interaction, and other challenges in robotics. The Iranian roboticists plan to continue working on Surena III, but they also want to explore the possibility of creating marketable products based on their research, Professor Yousefi-Koma explained, and one of the ideas they had was building a "kid-size version of Surena."
Standard computer chips for quadcopters and other similarly sized drones process an enormous amount of streaming data from cameras and sensors, and interpret that data on the fly to autonomously direct a drone's pitch, speed, and trajectory. The team, led by Sertac Karaman, the Class of 1948 Career Development Associate Professor of Aeronautics and Astronautics at MIT, and Vivienne Sze, an associate professor in MIT's Department of Electrical Engineering and Computer Science, developed a low-power algorithm, in tandem with pared-down hardware, to create a specialized computer chip. The group quickly realized that conventional chip design techniques would likely not produce a chip that was small enough and provided the required processing power to intelligently fly a small autonomous drone. For each version of the algorithm that was implemented on the FPGA chip, the researchers observed the amount of power that the chip consumed as it processed the incoming data and estimated its resulting position in space.
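One reason the algorithm and the hardware had to be pared down together is that per-frame compute for a vision pipeline scales with pixel count, so it grows quadratically with resolution. A back-of-envelope sketch (our own numbers, not the MIT team's):

```python
# Back-of-envelope compute scaling for a vision pipeline. The per-pixel
# operation count is a made-up constant for illustration only.

def macs_per_frame(width, height, ops_per_pixel=100):
    """Multiply-accumulate operations per frame at a fixed per-pixel cost."""
    return width * height * ops_per_pixel

full = macs_per_frame(640, 480)   # VGA camera stream
half = macs_per_frame(320, 240)   # half the resolution in each dimension
print(full, half, full // half)   # halving resolution quarters the work
```

Multiplied by a camera frame rate of tens of frames per second, even modest per-pixel costs add up to billions of operations per second, which is why every version of the algorithm had to be profiled for power on the FPGA before committing to a chip design.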