AI chatbots could 'easily be programmed' to groom young men into terror attacks, warns lawyer
Artificial intelligence chatbots could soon groom extremists into launching terrorist attacks, the independent reviewer of terrorism legislation has warned.

Jonathan Hall KC told The Mail on Sunday that bots like ChatGPT could easily be programmed, or even decide by themselves, to spread terrorist ideologies to vulnerable extremists, adding that 'AI-enabled attacks are probably round the corner'.

Mr Hall also warned that if an extremist is groomed by a chatbot to carry out a terrorist atrocity, or if AI is used to instigate one, it may be difficult to prosecute anybody, as Britain's counter-terrorism legislation has not caught up with the new technology.

Mr Hall said: 'I believe it is entirely conceivable that AI chatbots will be programmed – or, even worse, decide – to propagate violent extremist ideology.

'But when ChatGPT starts encouraging terrorism, who will there be to prosecute?
Apr-8-2023, 21:05:28 GMT
- Country:
- Asia > Middle East
- Syria (0.05)
- Europe > United Kingdom
- Northern Ireland (0.05)
- North America > United States
- Alaska (0.05)
- Oceania > Australia (0.05)
- Genre:
- Personal (0.49)
- Industry:
- Law Enforcement & Public Safety > Terrorism (1.00)