To be more useful, robots need to become lazier

MIT Technology Review

That's the principle underpinning "lazy robotics," a field of study championed by René van de Molengraft, a professor at Eindhoven University of Technology in the Netherlands. He believes that teaching all kinds of robots to be "lazier" with their data could help pave the way for machines that are better at interacting with things in their real-world environments, including humans. Essentially, the more efficient a robot can be with information, the better. Van de Molengraft's lazy robotics is just one approach researchers and robotics companies are now taking as they train their robots to complete actions successfully, flexibly, and in the most efficient manner possible. Teaching them to be smarter when they sift through the data they gather and then de-prioritize anything that's safe to overlook will help make them safer and more reliable--a long-standing goal of the robotics community.


Do humans get lazier when robots help with tasks?

Robohub

'Social loafing' is a phenomenon that occurs when members of a team start to put in less effort because they know others will cover for them. Scientists investigating whether this happens in teams that combine work by robots and humans found that humans carrying out quality assurance tasks spotted fewer errors when they had been told that robots had already checked a piece, suggesting they relied on the robots and paid less attention to the work. Now that improvements in technology mean that some robots work alongside humans, there is evidence that those humans have learned to see them as team-mates -- and teamwork can have negative as well as positive effects on people's performance. People sometimes relax, letting their colleagues do the work instead. This is called 'social loafing', and it's common where people know their contribution won't be noticed or they've acclimatized to another team member's high performance.


We're approaching the limits of computer power – we need new programmers now John Naughton

The Guardian

Way back in the 1960s, Gordon Moore, the co-founder of Intel, observed that the number of transistors that could be fitted on a silicon chip was doubling every two years. Since the transistor count is related to processing power, that meant that computing power was effectively doubling every two years. Thus was born Moore's law, which for most people working in the computer industry – or at any rate those younger than 40 – has provided the kind of bedrock certainty that Newton's laws of motion did for mechanical engineers. There is, however, one difference. Moore's law is just a statement of an empirical correlation observed over a particular period in history, and we are reaching the limits of its application.
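The doubling rule the article describes is easy to make concrete. A minimal sketch, assuming an illustrative 1971 baseline of roughly 2,300 transistors (the Intel 4004, a figure not taken from the article):

```python
# Moore's law as stated in the article: transistor counts double every two years.
# The baseline values below are illustrative assumptions, not from the article.

def moores_law_count(year, base_year=1971, base_count=2_300):
    """Projected transistor count, assuming a doubling every two years."""
    doublings = (year - base_year) / 2
    return base_count * 2 ** doublings

# Twenty years of doubling multiplies the count by 2**10 = 1,024.
for year in (1971, 1991, 2011):
    print(year, f"{moores_law_count(year):,.0f}")
```

The steep exponent is exactly why the trend cannot continue indefinitely: each step requires halving feature sizes, which eventually runs into physical limits.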