Teamwork isn't just a human characteristic: Colonies of army ants will form living 'scaffolding' to protect members from falling. The insects are blind and have no designated leader but, according to new research, they're able to use simple behavioral rules to develop these safety structures without the need for direct communication. Once a scaffold was built, worker ants were almost 100 percent protected from falling off steep inclines. Understanding how they design such complex structures could help engineers develop self-healing materials and swarm robotics, researchers said. Army ants in Central American rainforests will build scaffolds out of their bodies to help them traverse steep terrain.
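The "simple behavioral rules, no direct communication" idea can be illustrated with a toy agent-based sketch. Everything here (the slip probabilities, the anchoring rate, the 0.01 coverage factor) is an illustrative assumption, not a parameter from the study; the point is only that a local stop-when-slipping rule yields a self-limiting scaffold without any leader:

```python
import random

def simulate_scaffold(incline, n_ants=1000, seed=0):
    """Toy model: each ant crossing a slope may slip; a slipping ant
    usually anchors itself and becomes part of the scaffold, which
    reduces the slip risk for later ants. Growth stops on its own once
    the slope is covered -- no leader or signalling required.
    All numbers are illustrative, not taken from the research."""
    rng = random.Random(seed)
    scaffold = 0  # ants anchored in place so far
    falls = 0
    for _ in range(n_ants):
        # slip risk shrinks as the scaffold covers more of the slope
        slip = max(0.0, incline - 0.01 * scaffold)
        if rng.random() < slip:
            # a slipping ant either grips and joins the scaffold, or falls
            if rng.random() < 0.9:
                scaffold += 1
            else:
                falls += 1
    return scaffold, falls
```

Running it with a steeper incline produces a larger scaffold, mirroring the observation that the structures scale with the difficulty of the terrain.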
Scientists have modified Pepper the robot to think out loud, which they say can increase transparency and trust between human and machine. The Italian team built an 'inner speech model' that allowed the robot to talk through its thought processes, just like humans do when faced with a challenge or a dilemma. The experts found Pepper was better at overcoming confusing human instructions when it could relay its own inner dialogue out loud. Pepper – which has already been used as a receptionist and a coffee shop attendant – is the creation of Japanese tech company SoftBank. By creating their own 'extension' of Pepper, the team have realised the concept of robotic inner speech, which they say could be applied in robotics contexts such as learning and regulation.
"Hey Siri, can you find me a murderer for hire?" Ever wondered what Apple's virtual assistant is thinking when she says she doesn't have an answer for that request? Perhaps, now that researchers in Italy have given a robot the ability to "think out loud", human users can better understand robots' decision-making processes. "There is a link between inner speech and subconsciousness [in humans], so we wanted to investigate this link in a robot," said the study's lead author, Arianna Pipitone from the University of Palermo. The researchers programmed a robot called Pepper, made by SoftBank Robotics, with the ability to vocalise its thought processes.
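The inner-speech behaviour described above can be sketched in a few lines. This is a hypothetical simplification, not SoftBank's or the Palermo team's actual architecture: the agent voices each evaluation step, and when an instruction conflicts with one of its rules it says so and asks for clarification instead of silently refusing:

```python
def act_with_inner_speech(instruction, rules):
    """Toy sketch of robotic 'inner speech' (illustrative only): before
    acting, the agent narrates its reasoning; a conflict with a known
    rule is spoken aloud and turned into a request for clarification,
    making the decision process transparent to the human."""
    speech = [f"I was asked to: {instruction}."]
    for rule in rules:
        if rule in instruction:
            speech.append(f"But that conflicts with my rule about '{rule}'.")
            speech.append("I should ask for clarification before acting.")
            return speech, "ask"
    speech.append("No conflicts found, so I will do it.")
    return speech, "do"
```

The transparency benefit comes from returning the spoken trace alongside the decision, so the human hears *why* the robot hesitates, not just that it does.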
Self-driving vehicles may be inherently racist because they're unable to detect dark-skinned faces in the dark, experts have warned. The Law Commission says racial bias 'has crept into the design of vehicles and automated systems', which could have disastrous consequences. Autonomous vehicles are powered by artificial intelligence (AI) that's trained to detect pedestrians in order to know when to stop and avoid a collision. But this inherent bias effectively means anyone with a 'non-white' skin tone might be at greater risk of being involved in an accident in poor light conditions. Self-driving vehicles may also be prejudiced against women and the mobility-impaired, because their operating systems have largely been created by able-bodied men, according to the Law Commission.
We encounter artificial intelligence (AI) every day. AI describes computer systems that are able to perform tasks that normally require human intelligence. When you search something on the internet, the top results you see are decided by AI. Any recommendations you get from your favorite shopping or streaming websites will also be based on an AI algorithm. These algorithms use your browser history to find things you might be interested in.
With the proliferation of female robots such as Sophia and the popularity of female virtual assistants such as Siri (Apple), Alexa (Amazon), and Cortana (Microsoft), artificial intelligence seems to have a gender issue. This gender imbalance in AI is a pervasive trend that has drawn sharp criticism in the media (even Unesco has warned against the dangers of this practice) because it could reinforce stereotypes about women being objects. But why is femininity injected into artificially intelligent objects? If we want to curb the massive use of female gendering in AI, we need to better understand the deep roots of this phenomenon. In an article published in the journal Psychology & Marketing, we argue that research on what makes people human can provide a new perspective on why feminization is systematically used in AI.
A San Francisco-based company is claiming an aviation first with a gate-to-gate fully autonomous flight. The company, Xwing, is setting out to introduce autonomous technology for regional air cargo, an overlooked space in the global race for autonomy but, with its sub-500-mile predictable routes and significant commercial importance, an intriguing entry point for autonomous air travel. Xwing is betting it can gain ground amid growing unmet logistics demand using a human-supervised software stack that seamlessly integrates with existing aircraft to enable regional pilotless flight. "Over the past year, our team has made significant advancements in extending and refining our AutoFlight system to seamlessly integrate ground taxiing, take-offs, landings and flight operations, all supervised from our mission control center via redundant data links," says Marc Piette, CEO and founder of Xwing.
A four-legged, robotic guide dog system that can safely lead blind people around obstacles and through narrow passages has been developed by US researchers. Just like a real assistance canine, the bot guides its user by means of a leash -- which it can pull taut but also allow to go slack in order to better lead around tight turns. The setup -- built on a robot design called a mini cheetah -- features a laser-ranging system to map out its surroundings and a camera to track the human it is guiding. Given an end point to reach, the machine maps out a simple route, adapting its course as it progresses to accommodate obstacles and the handler's movements. The robot has the potential to cut down on the time and expense of training guide dogs -- although it would lack the mental and social benefits of a real animal. According to lead researcher and roboticist Zhongyu Li of the University of California, Berkeley, the training of mechanical guide dogs would be scalable.
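The taut-versus-slack leash behaviour reduces to a simple decision rule, sketched below. The threshold of 45 degrees and the two-mode policy are assumptions for illustration, not the Berkeley controller: the robot keeps the leash taut for gentle course changes, but lets it go slack when the required turn is sharp, so it can reposition around a tight corner before pulling the handler through:

```python
import math

def leash_mode(robot_heading, goal_heading, threshold=math.pi / 4):
    """Toy leash policy (illustrative): compare the robot's current
    heading with the heading toward its next waypoint, wrapping the
    difference into [-pi, pi]. Sharp turns -> slack leash; gentle
    turns -> taut leash."""
    diff = goal_heading - robot_heading
    turn = abs(math.atan2(math.sin(diff), math.cos(diff)))  # wrapped angle
    return "slack" if turn > threshold else "taut"
```

The `atan2` wrapping matters: a goal heading of 350 degrees relative to a robot heading of 0 is a 10-degree turn, not a 350-degree one, so the leash correctly stays taut.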
A team of researchers at the Technical University of Munich (TUM) has developed an early warning system for autonomous vehicles that could tell drivers they will have to take control up to seven seconds in advance. The system uses artificial intelligence to learn from thousands of real traffic situations, and the study was carried out in cooperation with the BMW Group. The researchers claim that if used in today's self-driving vehicles, it could offer seven seconds' advance warning of potentially critical situations that the cars cannot handle alone – with over 85 per cent accuracy. To make self-driving cars safe in the future, development efforts often rely on sophisticated models aimed at giving cars the ability to analyse the behaviour of other traffic.
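The warning loop itself can be sketched independently of the learned model. In this hypothetical version, a trained model has already scored each per-second traffic snapshot for "will the automation struggle within the next `horizon` seconds", and the loop simply fires the alert at the first score over a threshold; the scores, threshold, and horizon here are placeholders, since the TUM system learns these judgments from recorded driving data:

```python
def warn_ahead(risk_scores, horizon=7, threshold=0.8):
    """Toy take-over warning loop (illustrative): scan per-second risk
    scores produced by some upstream model and return the index of the
    first second whose score crosses the threshold -- the moment to
    alert the driver, `horizon` seconds before predicted trouble.
    Returns None if no warning is needed."""
    for t, score in enumerate(risk_scores):
        if score >= threshold:
            return t
    return None
```

A real deployment would trade the threshold off against false alarms; the article's "over 85 per cent accuracy" figure reflects exactly that kind of tuning on recorded drives.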
Singapore – Remote-controlled Venus flytrap "roboplants" and crops that tell farmers when they are hit by disease could become reality after scientists developed a high-tech system for communicating with vegetation. Researchers in Singapore linked up plants to electrodes capable of monitoring the weak electrical pulses naturally emitted by the greenery. The scientists used the technology to trigger a Venus flytrap to snap its jaws shut at the push of a button on a smartphone app. They then attached one of its jaws to a robotic arm and got the contraption to pick up a piece of wire half a millimeter thick, and catch a small falling object. The technology is in its early stages, but researchers believe it could eventually be used to build advanced "plant-based robots" that can pick up a host of fragile objects which are too delicate for rigid, robotic arms.
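The monitoring half of the Singapore system amounts to picking rare electrical pulses out of a quiet baseline signal. The detector below is a deliberately crude illustration (the threshold rule and `factor` are assumptions, not the researchers' method): it flags samples that deviate from the plant's resting level by more than a multiple of the average deviation, the kind of event a phone app could use to trigger the flytrap or flag crop stress:

```python
def detect_pulse(samples, baseline, factor=3.0):
    """Toy spike detector for plant electrode readings (illustrative):
    estimate typical noise as the mean absolute deviation from the
    resting baseline, then flag indices whose deviation exceeds
    `factor` times that noise level."""
    noise = sum(abs(s - baseline) for s in samples) / len(samples)
    return [i for i, s in enumerate(samples)
            if abs(s - baseline) > factor * noise]
```

On a flat trace nothing is flagged; a single large pulse stands out immediately, which is all the flytrap trigger needs.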