'What should the limits be?' The father of ChatGPT on whether AI will save humanity – or destroy it

The Guardian

When I meet Sam Altman, the chief executive of the AI research laboratory OpenAI, he is in the middle of a world tour. He is preaching that the very AI systems he and his competitors are building could pose an existential risk to the future of humanity – unless governments work together now to establish guardrails, ensuring responsible development over the coming decade. In the days that follow, he and hundreds of tech leaders, including the scientists and "godfathers of AI" Geoffrey Hinton and Yoshua Bengio, as well as Google DeepMind's CEO, Demis Hassabis, put out a statement saying that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". It is an all-out effort to convince world leaders that they are serious when they say that "AI risk" needs concerted international action. It must be an interesting position to be in – Altman, 38, is the daddy of the AI chatbot ChatGPT, after all, and is leading the charge to create "artificial general intelligence", or AGI: an AI system capable of tackling any task a human can.


'Dead zone': how the Ukraine war moved inside Russia

Al Jazeera

Kyiv, Ukraine – The enemy "turns border districts into a dead zone", a war correspondent covering the Russia-Ukraine war wrote on his Telegram channel on Saturday. But retired colonel Yuri Kotyonok, who has reported from almost every war zone in the former Soviet Union and whose Telegram channel has 420,000 subscribers, was not talking about Ukraine. The districts belong to the western Russian region of Belgorod, which borders Ukraine. In recent months, the region has been shelled and attacked by drones hundreds of times – 130 times in May alone, Russian officials say. As a result, 32 people were killed and 157 wounded, regional governor Vyacheslav Gladkov said in late April.


To avoid AI doom, learn from nuclear safety

MIT Technology Review

Last week, a group of tech company leaders and AI experts published another open letter, declaring that mitigating the risk of human extinction from AI should be as much of a global priority as preventing pandemics and nuclear war. So how do the companies themselves propose we avoid AI ruin? One suggestion comes from a new paper by researchers from Oxford, Cambridge, the University of Toronto, the University of Montreal, Google DeepMind, OpenAI, Anthropic, several AI research nonprofits, and Turing Award winner Yoshua Bengio. They suggest that AI developers should evaluate a model's potential to cause "extreme" risks at the very early stages of development, even before any training begins. These risks include the potential for AI models to manipulate and deceive humans, gain access to weapons, or find cybersecurity vulnerabilities to exploit.


AI drone swarm shows military might but also questions of who holds the power

FOX News

Naftali Bennett spoke exclusively with Fox News Digital about the benefits of AI and the need to set parameters for its use now. The new drone swarm test conducted by the U.S. and its allies last week shows some of the wider applications of artificial intelligence (AI) in military settings, while also raising questions about how multiple militaries will be able to cooperate. "Just like coordination is needed to conduct classic, joint and coalition maneuvers and military operations, similar clear definitions of boundaries, tasks, responsibility and authority are needed to control and de-conflict drone swarms," retired Brig. Gen. Uri Engelhard, an AI and cyber expert and member of the Israel Defense and Security Forum, told Fox News Digital. "If planned and conducted properly, the deployment of drone swarms should not be more challenging than other military activities."


AI poses national security threat, warns terror watchdog

The Guardian

The creators of artificial intelligence need to abandon their "tech utopian" mindset, according to the terror watchdog, amid fears that the new technology could be used to groom vulnerable individuals. Jonathan Hall KC, whose role is to review the adequacy of terrorism legislation, said the national security threat from AI was becoming ever more apparent and the technology needed to be designed with the intentions of terrorists firmly in mind. He said too much AI development focused on the potential positives of the technology while neglecting to consider how terrorists might use it to carry out attacks. "They need to have some horrible little 15-year-old neo-Nazi in the room with them, working out what they might do. You've got to hardwire the defences against what you know people will do with it," said Hall.


AI Is Being Used to 'Turbocharge' Scams

WIRED

Code hidden inside PC motherboards left millions of machines vulnerable to malicious updates, researchers revealed this week. Staff at the security firm Eclypsium found code within hundreds of models of motherboards created by the Taiwanese manufacturer Gigabyte that allowed an updater program to download and run another piece of software. While the system was intended to keep the motherboard updated, the researchers found that the mechanism was implemented insecurely, potentially allowing attackers to hijack the backdoor and install malware. Elsewhere, the Moscow-based cybersecurity firm Kaspersky revealed that its staff had been targeted by newly discovered zero-click malware affecting iPhones. Victims were sent a malicious message, including an attachment, via Apple's iMessage. The attack automatically began exploiting multiple vulnerabilities to give the attackers access to devices, before the message deleted itself.


Robot takeover? Not quite. Here's what AI doomsday would look like

The Guardian

Alarm over artificial intelligence has reached a fever pitch in recent months. Just this week, more than 300 industry leaders published a letter warning that AI could lead to human extinction and should be treated with the same seriousness as "pandemics and nuclear war". Terms like "AI doomsday" conjure up sci-fi imagery of a robot takeover, but what does such a scenario actually look like? The reality, experts say, could be more drawn out and less cinematic – not a nuclear bomb but a creeping deterioration of the foundational areas of society. "I don't think the worry is of AI turning evil or AI having some kind of malevolent desire," said Jessica Newman, director of the University of California, Berkeley's Artificial Intelligence Security Initiative.


Biden says artificial intelligence scientists worried about tech overtaking human thinking and planning

FOX News

Sam Altman, the CEO of the artificial intelligence lab OpenAI, told a Senate panel he welcomes federal regulation of the technology "to mitigate" its risks. President Biden told hundreds of U.S. Air Force Academy graduation attendees on Thursday that scientists are warning about the capabilities of artificial intelligence. "I met in the Oval Office, in my office, with 12 leading – no, excuse me, eight leading scientists – in the area of AI," he said, speaking at Falcon Stadium in Colorado. "Some are very worried that AI can actually overtake human thinking and planning," Biden noted. "So we've got a lot to deal with."


Reports of an AI drone that 'killed' its operator are pure fiction

New Scientist

In a story that could have been ripped from a sci-fi thriller, a hyper-motivated AI drone had been trained to destroy surface-to-air missiles only with approval from a human overseer – and when denied approval, it turned on its handler. It is no surprise that the story sounds fictional, because it is. The story emerged from a report by the Royal Aeronautical Society describing a presentation by US Air Force (USAF) colonel Tucker Hamilton at a recent conference. That report noted the incident was only a simulation, in which there was no real drone and no real risk to any human – a fact missed by many attention-grabbing headlines. Later, it emerged that even the simulation hadn't taken place: the USAF issued a denial, and the original report was updated to clarify that Hamilton "mis-spoke". The apocalyptic scenario was nothing but a hypothetical thought experiment.


Congress races to research AI-enhanced drones to maintain national security edge over China

FOX News

AGI, while powerful, could have negative consequences, warned Diveplane CEO Mike Capps and Liberty Blockchain CCO Christopher Alexander. Legislation moving through the House would fund research on how to incorporate artificial intelligence into drone technology in an effort to keep the U.S. ahead of China in this increasingly important component of national security. The House Committee on Science, Space, and Technology last week approved legislation from committee Chairman Frank Lucas, R-Okla., that he says needs to pass before China becomes locked in as the world's major supplier of drones. His bill, the National Drone and Advanced Air Mobility Research and Development Act, would fund about $1.6 billion in research over the next five years to give a boost to U.S.-based drone manufacturers. "To say China has cornered this market is an understatement," Lucas said last week. "One single company with extensive ties to the Chinese Communist Party and the People's Liberation Army produces 80% of the drones used recreationally in the U.S."

[Photo: A staff member works on an unmanned aerial vehicle at Guizhou University in Guiyang, China, on May 23, 2023.]