In the operating theater of the future, computer-based assistance systems will make work processes simpler and safer and thereby play a much greater role than today. "However, such support features are only possible if computers are able to anticipate important events in the operating room and provide the right information at the right time," explains Prof. Stefanie Speidel. She is head of the Department of Translational Surgical Oncology at the National Center for Tumor Diseases Dresden (NCT/UCC) in Germany. Together with the Centre for Tactile Internet with Human-in-the-loop (CeTI) at TU Dresden, she has developed a method that uses artificial intelligence (AI) to enable computers to anticipate which surgical instruments will be needed next. Among other things, such a system provides an important basis for the use of autonomous robotic systems that could take over simple minor tasks in the operating theater, such as blood aspiration.
Without much prior experience, children can recognize other people's intentions and come up with plans to help them achieve their goals, even in novel scenarios. Inspired by this ability, researchers at MIT, Nvidia, and ETH Zurich developed Watch-And-Help (WAH), a challenge in which embodied AI agents need to infer goals by watching a demonstration of a human performing a task and then coordinate with the human to complete the task as quickly as possible. The concept of embodied AI draws on embodied cognition, the theory that many features of psychology -- human or otherwise -- are shaped by aspects of the entire body of an organism. By applying this logic to AI, researchers hope to improve the performance of AI systems like chatbots, robots, autonomous vehicles, and even smart speakers that interact with their environments, people, and other AI. A truly embodied robot could check to see whether a door is locked, for instance, or retrieve a smartphone that's ringing in an upstairs bedroom.
From self-driving cars to digital assistants, artificial intelligence (AI) is fast becoming an integral technology in our lives today. But this same technology that can help to make our day-to-day life easier is also being incorporated into weapons for use in combat situations. Some existing weapons systems already include autonomous capabilities based on AI; developing weaponised AI further means machines could potentially make decisions to harm and kill people based on their programming, without human intervention. Countries that back the use of AI weapons claim it allows them to respond to emerging threats at greater than human speed. They also say it reduces the risk to military personnel and increases the ability to hit targets with greater precision.
How will robots change the world? A frequently asked and as yet unanswered question. After all, we do not have a crystal ball. What we do know is that digitalization and automation have changed the world enormously in recent decades. At Eindhoven University of Technology (TU/e) in the Netherlands, the potential of smart machines in industry and daily life is being researched each and every day.
This talk addresses some key decisional issues that are necessary for a cognitive and collaborative robot which shares space and tasks with a human. One main challenge, inspired by the Joint Action framework, is to endow the robot with the capacity to build and maintain, co-constructively with the human and for as long as necessary, the collaborative process and relationship that accompany the task, thus allowing its joint execution. We adopt a constructive approach based on the identification and effective implementation of individual and collaborative skills. Key design issues are linked to the legibility, acceptability and pertinence of robot decisions and behaviours. I will provide some illustrative examples from several collaborative research projects.
Artificial intelligence has arrived in our everyday lives--from search engines to self-driving cars. This is largely thanks to the enormous computing power that has become available in recent years. But new results from AI research now show that simpler, smaller neural networks can be used to solve certain tasks even better, more efficiently, and more reliably than ever before. An international research team from TU Wien (Vienna), IST Austria and MIT (USA) has developed a new artificial intelligence system based on the brains of tiny animals, such as threadworms. This novel AI system can control a vehicle with just a few artificial neurons.
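To make the "just a few artificial neurons" idea concrete, here is a minimal, purely illustrative sketch of a lane-keeping controller built from four neurons. This is not the actual TU Wien/IST/MIT architecture (which is based on biologically inspired, continuous-time neurons); all weights, signs and variable names below are hand-picked assumptions for the example.

```python
import math

# Illustrative sketch only, not the published architecture: two "sensor"
# inputs (lateral offset from the lane centre and heading error) feed two
# hidden neurons, whose outputs combine into one steering command.
# Convention assumed here: positive offset = car is right of centre;
# positive output = steer right, negative output = steer left.

def tanh_neuron(inputs, weights, bias):
    # A single artificial neuron: weighted sum plus bias, squashed by tanh.
    return math.tanh(sum(i * w for i, w in zip(inputs, weights)) + bias)

def steer(offset, heading_error):
    h1 = tanh_neuron([offset, heading_error], [-1.5, -0.5], 0.0)
    h2 = tanh_neuron([offset, heading_error], [-0.2, -2.0], 0.0)
    # Output neuron: combines the two hidden activations into a command.
    return math.tanh(0.8 * h1 + 0.8 * h2)

# A car drifting right of centre gets a leftward (negative) correction,
# and vice versa; a centred, well-aligned car gets no correction.
```

Even this toy version shows why tiny networks are attractive: every weight can be inspected and the control behaviour reasoned about directly, which is much harder with networks of millions of parameters.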
Despite a lot of hype and many promises, putting driverless cars on the roads, as it turns out, is a difficult undertaking. Self-driving hub organization Zenzic, which is a joint effort between government and industry, has taken an in-depth look into the challenges that need to be tackled in the UK to make sure that the next ten years see drivers safely removing their hands from the steering wheel, for good. The process, according to the organization's analysis, will require no fewer than 492 milestones to be achieved in the coming decade. Once achieved, though, driverless cars will enable smoother journeys, reducing pollution and saving time to boost overall productivity. Zenzic estimates that the technology has the potential to save up to 225 hours a year per driver.
A fleet of six self-driving Ford Mondeos will be navigating the streets of Oxford at all hours and in all weathers to test the abilities of driverless cars as part of a new trial. Technology firm Oxbotica, spun out of an Oxford University project, has retrofitted the vehicles, which are following a nine-mile round trip within the city. A dozen cameras, three lidar sensors and two radar sensors put the car at 'level 4 autonomy', meaning it can handle almost all situations itself. A person needs to be in the driving seat by law, but they won't be touching the steering wheel or pedals; the driverless car will be 'taking them for a ride'. The Oxford trial is part of the UK government-backed £12.3 million Endeavour project, set up to try deploying a fleet of self-driving cars in several cities.
A workforce whose numbers equal the population of France (66 million) suddenly finds itself working from home in the U.S. as a result of the COVID-19 pandemic. For comparison, just 4.7 million workers telecommuted to work one year ago. By almost any measure, the U.S. workplace is undergoing its largest and most sudden transformation since the start of World War II. The same is true in Europe, Asia and beyond. Almost overnight, the world of work has been turned inside out.
Training a robot in a simulation that allows it to remember how to get out of sticky situations lets it traverse difficult terrain more smoothly in real life. Joonho Lee at ETH Zurich in Switzerland and his colleagues trained a neural network algorithm, designed to control a four-legged robot, in a simulated environment similar to a video game, full of hills, steps and stairs. The researchers told the algorithm which direction it should be trying to move in, as well as limiting how quickly it could turn, reflecting the capabilities of the real robot. They then let the algorithm make random movements in the simulation, rewarding it for moving in the right way and penalising it otherwise. By accumulating rewards, the neural network learned how to move over a variety of terrain.
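The reward-and-penalty loop described above is the core of reinforcement learning. The following is a heavily simplified, hypothetical sketch, not the ETH Zurich setup: instead of a legged robot in 3-D terrain, an agent on a one-dimensional "track" is told to move right and learns, via tabular Q-learning, that rightward steps earn reward and leftward steps earn penalty. All names and numbers are illustrative.

```python
import random

def train(episodes=200, length=10, seed=0):
    """Toy reward-driven training: reward +1 for a step toward the
    target direction (right), -1 otherwise, as in the article's
    description of rewarding correct movement and penalising the rest."""
    rng = random.Random(seed)
    # Q-values for each (position, action) pair; actions: 0 = left, 1 = right.
    q = {(s, a): 0.0 for s in range(length) for a in (0, 1)}
    alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration
    for _ in range(episodes):
        s = rng.randrange(length - 1)   # start at a random position
        for _ in range(20):
            # Epsilon-greedy: mostly exploit current knowledge, sometimes explore.
            if rng.random() < eps:
                a = rng.choice((0, 1))
            else:
                a = max((0, 1), key=lambda x: q[(s, x)])
            s2 = min(max(s + (1 if a == 1 else -1), 0), length - 1)
            r = 1.0 if s2 > s else -1.0  # reward progress, penalise the rest
            # Standard Q-learning update toward the bootstrapped target.
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
            s = s2
            if s == length - 1:          # reached the end of the track
                break
    return q

q = train()
# After training, the greedy action at every interior position is "right" (1).
policy = [max((0, 1), key=lambda a: q[(s, a)]) for s in range(9)]
```

The real system replaces this lookup table with a deep neural network and the one-dimensional track with simulated hills, steps and stairs, but the principle of accumulating rewards to shape behaviour is the same.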