Robots in the workplace can perform hazardous or even 'impossible' tasks, such as toxic-waste clean-up and desert and space exploration. AI researchers are also interested in the intelligent processing involved in moving about and manipulating objects in the real world.
A version of this article originally appeared in Issues in Science and Technology. When Americans talk about automation, they tend to ask first how many jobs are at risk--or, more broadly, how many jobs there will be, who will do them, and where they will be located. These are the wrong questions. They suggest a policy discussion that starts at the end, focused on mitigating negative impacts. This approach perpetuates a flawed view of how technology develops--one that plagues contemporary debates about the future of work--because it presents technological progress as a process of scientists and engineers applying knowledge and technique to the material world to find a single best way to perform some task. In short, this view of automation sees the consequences of technology for workers (job loss, lower wages, the need for retraining, and the like) as largely inevitable.
To butcher a quote from the great science fiction writer William Gibson, "Autonomous vehicles are already here – they're just not very evenly distributed." In other words, while autonomous vehicles may not be in your city, they might be in the city next door. Autonomous vehicles are primed for exponential growth, and all indicators point to that growth beginning to happen sooner rather than later. By 2040, we can expect our highways to be bumper to bumper with over 33 million self-driving vehicles.
Robots are one step closer to behaving like living beings thanks to a new development in the field. Scientists from Nanyang Technological University, Singapore (NTU Singapore) have created an AI system that allows robots to recognize pain and self-repair. The newly developed system relies on AI-enabled sensor nodes, which process 'pain' signals and then respond to them. Pain is registered when pressure from an outside physical force is detected. The other major part of the system is self-repair.
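The pressure-triggered pain response described above can be sketched in a few lines. This is only an illustrative toy, not NTU's actual system: the `SensorNode` class, the threshold value, and the repair routine are all invented here to show the detect-then-repair loop.

```python
# Hedged sketch of the idea above: a sensor node flags "pain" when
# external pressure crosses a threshold, then a self-repair routine
# clears the damage. All names and values are illustrative inventions.

class SensorNode:
    PAIN_THRESHOLD = 50.0  # arbitrary pressure units, for illustration only

    def __init__(self):
        self.damaged = False

    def sense(self, pressure):
        """Return True (a 'pain' signal) if pressure exceeds the threshold."""
        if pressure > self.PAIN_THRESHOLD:
            self.damaged = True
            return True
        return False

    def self_repair(self):
        """Simulated self-repair: clear the damage flag if set."""
        if self.damaged:
            self.damaged = False
            return True
        return False

node = SensorNode()
if node.sense(72.0):    # a hard knock registers as pain...
    node.self_repair()  # ...and triggers the repair routine
print(node.damaged)     # → False
```

In the real system the sensing and response run on AI-enabled hardware nodes rather than a single threshold check; the sketch only captures the control flow.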
On Friday, we finally got a glimpse at a long-awaited feature that Elon Musk rolled out this week for a select group of Tesla owners: Full Self-Driving mode. Only a small group of Tesla-chosen drivers with safe driving records were given the software update to test the new autonomous feature, known as FSD, which goes beyond what's currently available on Tesla's advanced driving system, Autopilot. As longtime Tesla vlogger Tesla Raj showed in a video, FSD now lets his electric Model X navigate itself on city roads. Previously Autopilot, which featured abilities like autosteering, braking, and lane changing, only worked on highways and major thoroughfares with clear lane markings. Now, with the beta update, the car can maintain the speed limit and its position in the lane, stop at stop signs, make turns, and more on its own.
Microsoft and non-profit research organization MITRE have joined forces to accelerate the development of cyber-security's next chapter: protecting applications that are based on machine learning and are at risk from new adversarial threats. The two organizations, in collaboration with academic institutions and other big tech players such as IBM and Nvidia, have released a new open-source tool called the Adversarial Machine Learning Threat Matrix. The framework is designed to organize and catalogue known techniques for attacks against machine learning systems, to inform security analysts and provide them with strategies to detect, respond to, and remediate threats. The matrix classifies attacks based on criteria related to various aspects of the threat, such as execution and exfiltration, but also initial access and impact. To curate the framework, Microsoft and MITRE's teams analyzed real-world attacks carried out on existing applications, which they vetted to be effective against AI systems.
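To make the kind of attack the matrix catalogues concrete, here is a minimal sketch of an evasion attack: a fast-gradient-sign perturbation against a toy linear classifier. The model, weights, and data are invented for illustration and are not drawn from the Microsoft/MITRE framework itself.

```python
import numpy as np

def predict(w, b, x):
    """Toy logistic classifier: probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_perturb(w, b, x, y, eps):
    """Fast-gradient-sign step: shift x by eps in the direction that
    increases the logistic loss. For this model, d(loss)/dx = (p - y) * w."""
    p = predict(w, b, x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

w = np.array([1.0, -2.0, 0.5])    # invented model weights
b = 0.0
x = np.array([0.2, -0.4, 0.1])    # a benign input classified as class 1
y = 1.0                           # its true label

x_adv = fgsm_perturb(w, b, x, y, eps=0.5)
print(predict(w, b, x))      # high confidence on the clean input
print(predict(w, b, x_adv))  # confidence collapses on the perturbed input
```

Even this three-weight model is fooled by a small, targeted nudge to its input; at scale, attacks like this against deployed vision or language models are exactly what the threat matrix aims to help analysts recognize.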
At some point, construction sites will be run by purpose-built autonomous machines -- robots -- that bring heretofore unseen levels of efficiency to what is decidedly an inefficient building industry. But though that future is still decades away, that doesn't mean automation on the job site is far off. In fact, the same heavy equipment you see on sites today is being retrofitted for autonomous use. Globally, construction is a $10 trillion industry. But it's also one that's uniquely vulnerable to outside pressures like labor shortages, rising materials costs, and market shifts.
This Sunday sees the start of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). This year the event is online and free for anyone to attend. Content will be available from the platform on demand, with access available from 25 October to 25 November 2020. IROS conferences have traditionally had a theme and this year is no different with the emphasis being on "consumer robotics and our future". You can sign up here.
This talk addresses some key decisional issues for a cognitive, collaborative robot that shares space and tasks with a human. One main challenge, inspired by the Joint Action framework, is to endow the robot with the capacity to build and maintain, co-constructively with the human and for as long as necessary, the collaborative process and relationship that come with the task, thus allowing its joint execution. We adopt a constructive approach based on the identification and effective implementation of individual and collaborative skills. Key design issues are linked to the legibility, acceptability and pertinence of robot decisions and behaviours. I will provide illustrative examples from several collaborative research projects.
Human interaction with machines has experienced a great leap forward in recent years, largely driven by artificial intelligence (AI). From smart homes to self-driving cars, AI has become a seamless part of our daily lives. Voice interactions play a key role in many of these technological advances, most notably in language translation. Here, AI enables instant translation across a number of mediums: text, voice, images and even street signs. The technology works by recognizing individual words, then leveraging similarities in how various languages express the relationships between those words.