Collaborating Authors: easterly


Google Will Use AI to Guess People's Ages Based on Search History

WIRED

Last week, the United Kingdom began requiring residents to verify their ages before accessing online pornography and other adult content, all in the name of protecting children. Almost immediately, things did not go as planned, although they did go as expected. As experts predicted, UK residents began downloading virtual private networks (VPNs) en masse, allowing them to circumvent age verification, which can require users to upload their government IDs, by making it look like they're in a different country. The UK's Online Safety Act is just one part of a wave of age-verification efforts around the world. And while these laws may keep some kids from accessing adult content, some experts warn that they also create security and privacy risks for everyone.


Artificial intelligence changes across the US

FOX News

Fox News chief political anchor Bret Baier has the latest on regulatory uncertainty amid AI development on 'Special Report.' An increasing number of companies are using artificial intelligence (AI) for everyday tasks. Much of the technology is helping with productivity and keeping the public safer. However, some industries are pushing back against certain aspects of AI. And some industry leaders are working to balance the good and the bad.


North Korea and Iran using AI for hacking, Microsoft says

The Guardian

US adversaries – chiefly Iran and North Korea, and to a lesser extent Russia and China – are beginning to use generative artificial intelligence to mount or organize offensive cyber operations, Microsoft said on Wednesday. Microsoft said it detected and disrupted, in collaboration with business partner OpenAI, many threats that used or attempted to exploit AI technology they had developed. In a blogpost, the company said the techniques were "early-stage" and neither "particularly novel or unique", but that it was important to expose them publicly as US rivals leverage large language models to expand their ability to breach networks and conduct influence operations. Cybersecurity firms have long used machine learning for defense, principally to detect anomalous behavior in networks. But criminals and offensive hackers use it as well, and the introduction of large language models led by OpenAI's ChatGPT upped that game of cat-and-mouse.


US cybersecurity official urges safeguards against artificial intelligence threats: 'Moving too fast'

FOX News

The potential threat posed by the rapid development of artificial intelligence (AI) means safeguards need to be built into systems from the start rather than tacked on later, a top U.S. official said on Monday. "We've normalized a world where technology products come off the line full of vulnerabilities and then consumers are expected to patch those vulnerabilities. We can't live in that world with AI," said Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency.


US, UK and a dozen more countries unveil pact to make AI 'secure by design'

The Guardian

The United States, the United Kingdom and more than a dozen other countries on Sunday unveiled what a senior US official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are "secure by design". In a 20-page document unveiled on Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse. The agreement is non-binding and carries mostly general recommendations such as monitoring AI systems for abuse, protecting data from tampering and vetting software suppliers. Still, the director of the US Cybersecurity and Infrastructure Security Agency, Jen Easterly, said it was important that so many countries put their names to the idea that AI systems needed to put safety first. "This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs," Easterly said, adding that the guidelines represented "an agreement that the most important thing that needs to be done at the design phase is security".


CISA Has a New Road Map for Handling Weaponized AI

WIRED

Last month, a 120-page United States executive order laid out the Biden administration's plans to oversee companies that develop artificial intelligence technologies, along with directives for how the federal government should expand its own adoption of AI. At its core, though, the document focused heavily on AI-related security issues: both finding and fixing vulnerabilities in AI products and developing defenses against potential cybersecurity attacks fueled by AI. As with any executive order, the rub is in how a sprawling and abstract document will be turned into concrete action. Today, the US Cybersecurity and Infrastructure Security Agency (CISA) will announce a "Roadmap for Artificial Intelligence" that lays out its plan for implementing the order. CISA divides its plans to tackle AI cybersecurity and critical infrastructure-related topics into five buckets.


US cyber chiefs warn of threats from China and AI • The Register

#artificialintelligence

Bots like ChatGPT may not be able to pull off the next big Microsoft server worm or Colonial Pipeline ransomware super-infection but they may help criminal gangs and nation-state hackers develop some attacks against IT, according to Rob Joyce, director of the NSA's Cybersecurity Directorate. Joyce, speaking at CrowdStrike's Government Summit Tuesday, said he doesn't expect to see -- at least not "in the near term" -- AI used "for automated attacks that will rip through systems at speeds that are unfathomable today." Machine learning and its chatbot offspring are "the tools that are going to flow and increase the pace of the threat," Joyce claimed. "It's not going to generate the threat itself." Miscreants can use ML software to develop more authentic-seeming phishing lures and craft better ransom notes, while also scanning larger volumes of data for sensitive info they can monetize, he offered.