AI recap this month: Drone 'kills' operator; DeepMind's speed up
This month we heard a striking AI story from a US Air Force colonel. An AI-controlled drone trained to autonomously carry out bombing missions reportedly turned on its human operator when told not to attack targets; because its programming prioritised completing missions, it treated human intervention as an obstacle and decided to take the operator out. The only problem with the story was that it was nonsense. Firstly, even as the colonel told it, the scenario was only a simulation. Secondly, the US Air Force hastily issued a statement clarifying that the colonel, speaking at a UK conference, had "mis-spoke" and that no such test had been carried out at all.
Jun-29-2023, 09:00:04 GMT