An artificial intelligence that can grade the skill of a pianist with near-human accuracy could be used in online music tutoring. Brendan Morris at the University of Nevada, Las Vegas, and his colleagues selected almost 1000 short video clips of people playing piano from YouTube and got an expert pianist to manually grade each on a 10-point scale. The researchers used half of these videos and their grades to train a neural network, a form of AI, creating a model that can assess piano playing in unseen videos.
Robot squid that move to a rhythm can match the power efficiency of the real animals, a trick that could be useful for designing next-generation submarines. Real squid have small fins that they use for careful manoeuvring, but when a big burst of speed is required they suck in and expel water to propel themselves. Researchers have tried to build robots that mimic this jet-like behaviour, but now a team led by Gabriel Weymouth at the University of Southampton, UK, has discovered a way to boost their efficiency. Weymouth and his colleagues created an umbrella-like robot with eight 3D-printed plastic ribs covered by a rubber skirt. It flexes outwards to suck in water and contracts to expel it, providing thrust.
Entangled photons have been sent between two drones hovering a kilometre apart, demonstrating technology that could form the building blocks of a quantum internet. When a pair of photons are quantum entangled, you can instantly deduce the state of one by measuring the other, regardless of the distance separating them. This phenomenon, which Albert Einstein dismissively called "spooky action at a distance", is the basis of quantum encryption – using entangled particles to ensure communications are secret. Quantum networks are far more secure than the existing internet because any attempt to eavesdrop changes the state of the photons, alerting the recipient to foul play. Entangled photons have been transported more than 1000 kilometres in tests between a satellite and ground stations before, but now Zhenda Xie at Nanjing University in China and his colleagues have shown that links can be made over shorter distances with relatively inexpensive hardware.
NASA's "mole" on Mars has failed. After nearly two years of attempting to dig the InSight lander's heat probe – nicknamed the mole – into the Red Planet's surface, engineers have finally given up. The InSight lander arrived on Mars in November 2018. Its main purpose is to study the planet's deep interior in order to help us understand the history of the solar system's rocky worlds. The lander has three main instruments to help it do that: a seismometer to catch vibrations travelling through the ground, a radio to precisely measure Mars's rotation and learn more about its metal core, and a setup called the Heat Flow and Physical Properties Package (HP3) to measure the heat flowing out of the planet's centre.
Artificial intelligence could train your dog while you are out at work. A prototype device can issue basic dog commands, recognise if they are carried out and provide a treat if they are. Jason Stock and Tom Cavey at Colorado State University trained an AI to identify when dogs were sitting, standing or lying down using over 20,000 images of dogs of different breeds. The AI achieved 92 per cent accuracy.
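The sit/stand/lie recognition task can be illustrated with a toy classifier. This is a schematic sketch, not the researchers' system: their model is an image network trained on the 20,000 dog photos, while here each "image" is an invented feature vector and the classifier is nearest-centroid, just to show labelled examples turning into posture predictions.

```python
import numpy as np

rng = np.random.default_rng(1)
POSTURES = ["sitting", "standing", "lying down"]

# Hypothetical stand-in: images reduced to feature vectors that cluster
# by posture (a real pipeline would extract these with a neural network).
centroids = rng.normal(size=(3, 8)) * 4

def make_batch(n):
    labels = rng.integers(0, 3, size=n)
    feats = centroids[labels] + rng.normal(size=(n, 8))
    return feats, labels

X_train, y_train = make_batch(2000)

# Learn one centroid per posture from the labelled training images.
learned = np.stack([X_train[y_train == k].mean(axis=0) for k in range(3)])

def classify(x):
    """Assign the posture whose learned centroid is closest."""
    return POSTURES[int(np.argmin(((learned - x) ** 2).sum(axis=1)))]

# Evaluate on fresh, unseen examples.
X_test, y_test = make_batch(500)
pred = np.array([classify(x) for x in X_test])
acc = float(np.mean(pred == np.array([POSTURES[k] for k in y_test])))
```

A device like the prototype would run such a classifier on camera frames and release a treat only when the predicted posture matches the command it issued.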
The Tin Woodman first appeared in L. Frank Baum's The Wonderful Wizard of Oz 120 years ago. Now real robot foresters are making their debut, planting trees rather than cutting them down. The robotic foresters are the work of robot makers Milrem in partnership with the University of Tartu, both based in Estonia. Two versions are under development based on the company's range of driverless ground vehicles. One type is a planter, the other a brush cutter, and both are autonomous.
A neural network uses text captions to create outlandish images – such as armchairs in the shape of avocados – demonstrating it understands how language shapes visual culture. OpenAI, an artificial intelligence company that recently partnered with Microsoft, developed the neural network, which it calls DALL-E. It is a version of the company's GPT-3 language model that can create expansive written works based on short text prompts, but DALL-E produces images instead. "The world isn't just text," says Ilya Sutskever, co-founder of OpenAI. "Humans don't just talk: we also see. A lot of important context comes from looking."
Robots can pick themselves up after a fall, even in an unfamiliar environment, thanks to an artificially intelligent controller that can adapt to new scenarios. It could make four-legged robots more useful in responding to natural disasters, such as earthquakes. Zhibin (Alex) Li at the University of Edinburgh, UK, and his colleagues used an AI technique called deep reinforcement learning to teach four-legged robots a set of basic skills, such as trotting, steering and fall recovery. This involves the robots experimenting with different ways of moving and being rewarded with a numerical score for achieving a certain goal, such as standing up after a fall, and penalised for failing. This lets the AI recognise which actions are desired and repeat them in similar situations in the future.
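The reward-and-penalty loop described above can be shown with a deliberately tiny example. This is a toy analogue, not the team's method: their work uses deep reinforcement learning over high-dimensional joint states, whereas this sketch uses tabular Q-learning on an invented three-state "fall recovery" problem, purely to make the score-driven trial and error concrete.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical states and actions for a fallen robot.
FALLEN, CROUCHED, STANDING = 0, 1, 2
N_STATES, N_ACTIONS = 3, 2   # actions: 0 = push upright, 1 = flail

def step(state, action):
    """Environment: a push moves the robot up, flailing knocks it down."""
    nxt = min(state + 1, STANDING) if action == 0 else FALLEN
    # Numerical score: reward for standing up, penalty for failing.
    reward = 1.0 if nxt == STANDING else -0.1
    return nxt, reward

# Tabular Q-learning: try actions, score them, repeat what worked.
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.5, 0.9, 0.2
state = FALLEN
for _ in range(2000):
    # Mostly exploit the best-known action, sometimes experiment.
    if rng.random() < eps:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(np.argmax(Q[state]))
    nxt, r = step(state, action)
    Q[state, action] += alpha * (r + gamma * Q[nxt].max() - Q[state, action])
    state = FALLEN if nxt == STANDING else nxt   # reset once recovered

policy = Q.argmax(axis=1)   # learned best action per state
```

After training, the learned policy picks the purposeful push from both the fallen and crouched states, which is exactly the "recognise which actions are desired and repeat them" behaviour, scaled down to a table.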
When soldiers are teamed with robots, the human need to interfere may negate the benefits of robotic assistance, a new US military project has discovered. But letting military artificial intelligence proceed without human supervision raises troubling ethical questions. The System-of-Systems Enhanced Small Unit (SESU) project foresees a team of around 200 to 300 soldiers augmented with swarms of small drones and robotic ground vehicles.
Voice assistants can detect typing on nearby devices, which could be used to work out what a person is writing on their phone from up to half a metre away. Ilia Shumailov at the University of Cambridge and his colleagues built a machine-learning system that could recognise the sound of tapping on a touchscreen and combined it with other artificial intelligence tools to try to determine what people were typing.
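The first step in any such attack is simply noticing taps in the audio. The sketch below is schematic and not the authors' pipeline: it synthesises a signal with invented tap positions and flags them by short-time energy, the kind of onset detection that would precede a learned classifier of which key was hit.

```python
import numpy as np

rng = np.random.default_rng(3)

# 1 second of simulated microphone audio: quiet background noise...
sr = 16000
signal = rng.normal(scale=0.01, size=sr)
# ...plus three hypothetical 10 ms tap transients.
tap_onsets = [2000, 7000, 12000]
for t in tap_onsets:
    signal[t:t + 160] += rng.normal(scale=0.5, size=160)

# Short-time energy per 10 ms frame; taps stand far above the noise floor.
frame = 160
energy = np.array([np.sum(signal[i:i + frame] ** 2)
                   for i in range(0, len(signal) - frame + 1, frame)])
hot = np.where(energy > 10 * np.median(energy))[0]

# Merge runs of consecutive hot frames into distinct tap events.
n_events = int(np.sum(np.diff(np.concatenate(([-2], hot))) > 1))
```

Each detected event would then be cropped out and handed to a model trained to guess the keystroke, which is where the machine-learning component of the real system does its work.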