It began with the "heartless" Tin Man from The Wizard of Oz and continued with the humanoid robot that impersonated Maria in Metropolis. By the 1950s, a generation of scientists, mathematicians, and philosophers had assimilated the concept of artificial intelligence (AI) into their thinking. One such person was Alan Turing, a young British polymath who explored the mathematical possibility of artificial intelligence. Turing suggested that humans use available information as well as reason to solve problems and make decisions, so why couldn't machines do the same? This was the logical framework of his 1950 paper, "Computing Machinery and Intelligence," in which he discussed how to build intelligent machines and how to test their intelligence.
Engineers at Rice University have developed a method that allows humans to help robots "see" their environments and carry out tasks. The strategy, called Bayesian Learning in the Dark -- BLIND, for short -- is a novel solution to the long-standing problem of motion planning for robots that work in environments where not everything is clearly visible all the time. The peer-reviewed study, led by computer scientists Lydia Kavraki and Vaibhav Unhelkar with co-lead authors Carlos Quintero-Peña and Constantinos Chamzas of Rice's George R. Brown School of Engineering, was presented at the Institute of Electrical and Electronics Engineers' International Conference on Robotics and Automation in late May. The algorithm, developed primarily by Quintero-Peña and Chamzas, both graduate students working with Kavraki, keeps a human in the loop to "augment robot perception and, importantly, prevent the execution of unsafe motion," according to the study. To do so, they combined Bayesian inverse reinforcement learning (by which a system learns from continually updated information and experience) with established motion planning techniques to assist robots that have "high degrees of freedom" -- that is, a lot of moving parts.
A team of engineers at MIT has developed an optimization code for improving any autonomous robotic system. The code automatically identifies how and where to alter a system to improve a robot’s performance. The engineers’ findings are set to be presented at the annual Robotics: Science and Systems conference in New York. The team included […]
Fears have been raised about the future of artificial intelligence after a robot was found to have learned 'toxic stereotypes' from the internet. The machine showed significant gender and racial biases, including gravitating toward men over women and white people over people of colour during tests by scientists. It also jumped to conclusions about people's jobs after a glance at their faces. 'The robot has learned toxic stereotypes through these flawed neural network models,' said author Andrew Hundt, a postdoctoral fellow at Georgia Tech who co-conducted the work as a PhD student in Johns Hopkins' Computational Interaction and Robotics Laboratory in Baltimore, Maryland. 'We're at risk of creating a generation of racist and sexist robots, but people and organisations have decided it's OK to create these products without addressing the issues.'
Koomey's law This law posits that the energy efficiency of computation doubles roughly every one-and-a-half years (see Figure 1–7). In other words, the energy necessary for the same amount of computation halves in that time span. To visualize the exponential impact this has, consider the fact that a fully charged MacBook Air, when operating at the computational energy efficiency of 1992, would completely drain its battery in a mere 1.5 seconds. According to Koomey's law, the energy requirements for computation in embedded devices are shrinking to the point that harvesting the required energy from ambient sources like solar power and thermal energy should suffice to power the computation necessary in many applications. Metcalfe's law This law has nothing to do with chips, but everything to do with connectivity. Formulated by Robert Metcalfe as he invented Ethernet, the law essentially states that the value of a network grows in proportion to the square of the number of its nodes (see Figure 1–8).
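The arithmetic behind both laws is easy to check. The sketch below assumes the 1.5-year doubling period quoted above; the 12-hour modern battery life and the roughly 22-year window (1992 to the mid-2010s) are illustrative assumptions, not figures from the text.

```python
# Back-of-envelope arithmetic for Koomey's and Metcalfe's laws.
# Assumptions (not from the text): a ~12 h modern battery life and a
# 1992 -> 2014 comparison window.

def koomey_factor(years: float, doubling_period: float = 1.5) -> float:
    """Factor by which energy per unit of computation shrinks over `years`."""
    return 2 ** (years / doubling_period)

# Over the 22 years from 1992 to roughly 2014, efficiency improves by
# 2^(22/1.5), i.e. about 26,000x.
factor = koomey_factor(2014 - 1992)

# A laptop that runs ~12 hours today would, at 1992 efficiency, drain in:
seconds_at_1992_efficiency = 12 * 3600 / factor  # close to the quoted ~1.5 s

def metcalfe_value(n: int) -> int:
    """Number of possible pairwise connections, n*(n-1)/2 -- the quadratic
    growth at the heart of Metcalfe's law."""
    return n * (n - 1) // 2

print(f"{seconds_at_1992_efficiency:.1f} s")  # 1.7 s
print(metcalfe_value(5), metcalfe_value(10))  # 10 45
```

Note how doubling a network from 5 to 10 nodes more than quadruples its pairwise-connection count, which is why each added node is worth more than the last.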
Advances in computer vision and machine learning have made it possible for a wide range of technologies to perform sophisticated tasks with little or no human supervision. From autonomous drones and self-driving cars to medical imaging and product manufacturing, many computer applications and robots use visual information to make critical decisions. Cities increasingly rely on these automated technologies for public safety and infrastructure maintenance. However, compared to humans, computers see with a kind of tunnel vision that leaves them vulnerable to attacks with potentially catastrophic results. For example, a human driver, seeing graffiti covering a stop sign, will still recognize it and stop the car at an intersection.
The startup focuses on developing self-driving technology for unstructured environment conditions, and India's road network is full of such environments. In the thick of it is founder and CEO Sanjeev Sharma, whose interest in the field of robotics was born way back in 2009, when he watched the videos of Team MIT at the 2007 DARPA Urban Challenge. With time, he knew that he wanted to home in on research to enable autonomous driving in the most difficult traffic scenarios, but it wasn't until 2014, when Sharma deferred his PhD at the University of Massachusetts for a year, that he established Swaayatt Robots. Fast forward eight years and, despite knowing much more about autonomous mobility than in 2014, safety continues to be a huge challenge. Even before we think of the purchasing and operational cost, we're quite some time away from solving for driver safety in an uncontrolled and unstructured environment -- but Swaayatt Robots is trying to fix that.
ENGLAND: Two autonomous delivery robots pass on the pavement as they make home deliveries of groceries. Created in 2014 by two of the co-founders of Skype, Starship has developed the self-driving pods for various logistical tasks. Autonomous driving has proven much more complicated than originally thought. Several manufacturers expected the first self-driving cars to hit the market 3-4 years ago. In fact, Johann Jungwirth of Volkswagen met with Focus Magazine in April 2016, amongst beanbags, blue suede shoes and skateboards, to report that the first autonomous vehicles (AVs) would be on the market by 2019.
Mars used to be a wildly different land. Though the red planet is bone dry today, NASA's Curiosity rover recently rumbled by poignant evidence of an ancient watery world. The car-sized robot snapped an image of a unique rock that looks like it's composed of stacked layers. Such a rock likely formed "in an ancient streambed or small pond," the space agency wrote. Curiosity is winding up through the foothills of the three-mile-tall Mount Sharp, where it's encountering a place where these streams and ponds once carried red sediments through the landscape.
GM's autonomous driving division, Cruise, has begun its paid driverless taxi service in San Francisco and officially took its first fares last night. Cruise has been operating a free driverless taxi service in the area since earlier this year (and got pulled over once), but last night it began charging for this service. Both Cruise and its rival Waymo, a division of Google's parent company Alphabet, have been hoping for some time to start charging for autonomous taxi rides in California. Waymo got permission in February but has not yet started charging fares. Cruise's program is still quite limited, only covering about a third of San Francisco with 30 cars.