Darktrace warns of rise in AI-enhanced scams since ChatGPT release

The Guardian 

The cybersecurity firm Darktrace has warned that since the release of ChatGPT it has seen an increase in criminals using artificial intelligence to create more sophisticated scams to con employees and hack into businesses.

The Cambridge-based company, which reported a 92% drop in operating profits in the half year to the end of December, said AI was further enabling "hacktivist" cyber-attacks using ransomware to extort money from businesses. It said it had seen the emergence of more convincing and complex scams by hackers since the launch of the hugely popular Microsoft-backed AI tool ChatGPT last November.

"Darktrace has found that while the number of email attacks across its own customer base remained steady since ChatGPT's release, those that rely on tricking victims into clicking malicious links have declined while linguistic complexity, including text volume, punctuation and sentence length among others, have increased," the company said. "This indicates that cybercriminals may be redirecting their focus to crafting more sophisticated social engineering scams that exploit user trust."

However, Darktrace said the phenomenon had not yet produced a new wave of cybercriminals; rather, the existing cohort was changing its tactics.
