AI-Alerts


Governments race to regulate artificial intelligence tools

Al Jazeera

Rapid advances in artificial intelligence (AI), such as Microsoft-backed OpenAI's ChatGPT, are complicating governments' efforts to agree on laws governing the use of the technology. Australia's government is consulting the country's main science advisory body and considering next steps, a spokesperson for the industry and science minister said in April. Britain's Financial Conduct Authority, one of several state regulators tasked with drawing up new guidelines covering AI, is consulting with the Alan Turing Institute and other legal and academic institutions to improve its understanding of the technology, a spokesperson said. Britain's competition regulator said on May 4 it would start examining the effect of AI on consumers, businesses and the economy, and whether new controls were needed. Britain said in March it planned to split responsibility for governing AI among its regulators for human rights, health and safety, and competition, rather than creating a new body. China's cyberspace regulator in April unveiled draft measures to manage generative AI services, saying it wanted firms to submit security assessments to authorities before launching offerings to the public.


Google Flooded the Internet With AI News. Where's Apple?

CNET - News

Unless you've been living under a rock, you've probably heard the term "generative AI" at least a handful of times by now, perhaps thanks to the wildly popular ChatGPT service. The AI-powered chatbot's success didn't just shine a spotlight on OpenAI, the creator behind it; it also catalyzed an AI arms race in the tech industry--a race from which Apple has been noticeably absent. Earlier this month, Google made a flurry of AI-related announcements at its annual developer conference, including a new AI-infused version of search and Bard, its AI-powered chatbot, which is being rolled out across the world. Before that, Microsoft built generative AI into its suite of long-established productivity apps like Word, PowerPoint and Outlook in a move that's changing how more than a billion people work. In February, Meta released its own sophisticated AI model, which has many of the same capabilities as ChatGPT and Bard, as open-source software for public use.


Russia Targets Kyiv for 10th Time This Month

NYT > Europe

Russia unleashed another widespread missile and drone attack overnight on cities across Ukraine, including the capital, Kyiv, which was targeted for the 10th time this month, Ukrainian officials said on Friday. At least three cruise missiles and six attack drones managed to evade air defenses, according to Ukraine's Air Force. There was no immediate information on casualties or on what was hit. The air force said in a statement that three cruise missiles and 16 Iranian-made Shahed-136 drones had been intercepted, and local officials in Lviv, in western Ukraine, said that five of those were intercepted over their region. The attack drones came in "several waves" with short intervals in between, according to Kyiv's military administration, which said all had been shot down.


ChatGPT Is Already Obsolete

The Atlantic - Technology

Last week, at Google's annual conference dedicated to new products and technologies, the company announced a change to its premier AI product: The Bard chatbot, like OpenAI's GPT-4, will soon be able to describe images. Although it may seem like a minor update, the enhancement is part of a quiet revolution in how companies, researchers, and consumers develop and use AI--pushing the technology not only beyond remixing written language and into different media, but toward the loftier goal of a rich and thorough comprehension of the world. ChatGPT is six months old, and it's already starting to look outdated. That program and its cousins, known as large language models, mime intelligence by predicting what words are statistically likely to follow one another in a sentence. Researchers have trained these models on ever more text--at this point, every book ever and then some--with the premise that force-feeding machines more words in different configurations will yield better predictions and smarter programs.
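
That passage describes the core mechanism of large language models: predicting which word is statistically likely to come next, given patterns learned from training text. As a rough, self-contained sketch of that idea (the corpus and function names below are invented for illustration, and real models use neural networks trained on vastly more data), a toy bigram predictor in Python might look like this:

```python
from collections import Counter, defaultdict

# Tiny invented corpus; real language models train on vastly more text.
corpus = (
    "the cat sat on the mat . "
    "the cat sat by the window . "
    "the dog sat on the rug ."
)

# Count how often each word follows each other word (bigram statistics).
counts = defaultdict(Counter)
tokens = corpus.split()
for prev, nxt in zip(tokens, tokens[1:]):
    counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word after `word`."""
    followers = counts[word]
    return followers.most_common(1)[0][0] if followers else "."

# Generate text by repeatedly choosing the likeliest continuation --
# the same prediction objective that LLMs scale up with neural networks.
word = "the"
output = [word]
for _ in range(4):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))  # prints: the cat sat on the
```

Training on more text refines those statistics, which is the premise behind feeding these models ever larger corpora.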


It's a Weird Time to Be a Doomsday Prepper

The Atlantic - Technology

If you're looking for a reason the world will suddenly end, it's not hard to find one--especially if your job is to convince people they need to buy things to prepare for the apocalypse. "World War III, China, Russia, Iran, North Korea, Joe Biden--you know, everything that's messed up in the world," Ron Hubbard, the CEO of Atlas Survival Shelters, told me. His Texas-based company sells bunkers with bulletproof doors and concrete walls to people willing to shell out several thousand--and up to millions--of dollars for peace of mind about potential catastrophic events. Lately, interest in his underground bunkers has been booming. "When the war broke out in Ukraine, my phone was ringing every 45 seconds for about two weeks," he said.


AI Chatbots Are Doing Something a Lot Like Improv

TIME - Tech

For weeks after his bizarre conversation with Bing's new chatbot went viral, New York Times columnist Kevin Roose wasn't sure what had happened. "The explanations you get for how these language models work, they're not that satisfying," Roose said at one point. "No one can tell me why this chatbot tried to break up my marriage." He's not alone in feeling confused. Powered by a relatively new form of AI called large language models, this new generation of chatbots defies our intuitions about how to interact with computers.


Spooked by ChatGPT, US Lawmakers Want to Create an AI Regulator

WIRED

Since the tech industry began its love affair with machine learning about a decade ago, US lawmakers have chattered about the potential need for regulation to rein in the technology. No proposal to regulate corporate AI projects has come close to becoming law--but OpenAI's release of ChatGPT in November has convinced some senators there is now an urgent need to do something to protect people's rights against the potential harms of AI technology. At a hearing held by a Senate Judiciary subcommittee yesterday, attendees heard a terrifying laundry list of ways artificial intelligence can harm people and democracy. Senators from both parties spoke in support of the idea of creating a new arm of the US government dedicated to regulating AI. The idea even got the backing of Sam Altman, CEO of OpenAI.


ChatGPT Scams Are Infiltrating Apple's App Store and Google Play

WIRED

Any major trend or world event, from the coronavirus pandemic to the cryptocurrency frenzy, will quickly be used as fodder in digital phishing attacks and other online scams. In recent months, it has become clear that the same is happening with large language models and generative AI. Today, researchers from the security firm Sophos are warning that the latest incarnation of this is showing up in Google Play and Apple's App Store, where scammy apps pretend to offer access to OpenAI's chatbot service ChatGPT through free trials that eventually start charging subscription fees. There are paid versions of OpenAI's GPT and ChatGPT for regular users and developers, but anyone can try the AI chatbot for free on the company's website. The scam apps take advantage of people who have heard about this new technology--and perhaps the frenzy of people clamoring to use it--but don't have much additional context for how to try it themselves.


CEO behind ChatGPT warns Congress AI could cause 'harm to the world'

Washington Post - Technology News

OpenAI chief executive Sam Altman advocated for a number of regulations, including a new government agency charged with setting standards for the field, to address mounting concerns that generative AI could distort reality and create unprecedented safety risks. The CEO tallied a litany of "risky" behaviors posed by technology like ChatGPT, including spreading "one-on-one interactive disinformation" and emotional manipulation. At one point he acknowledged AI could be used to target drone strikes.


Ministers looking at body-worn facial recognition technology for police

The Guardian

Ministers are calling for facial recognition technology to be "embedded" in everyday policing, including potentially linking it to the body-worn cameras officers use as they patrol streets. Until now, police use of live facial recognition in England and Wales has been limited to special operations such as football matches or the coronation. Prof Fraser Sampson, the biometrics and surveillance camera commissioner, said the potential expansion was "very significant" and that "the Orwellian concerns of people, the ability of the state to watch every move, is very real". The government's intentions were revealed in a document produced for the surveillance camera commissioner, discussing changes to the oversight of technology and surveillance. It said: "This issue is made more pressing given the policing minister [Chris Philp] expressed his desire to embed facial recognition technology in policing and is considering what more the government can do to support the police on this. Such embedding is extremely likely to include exploring integration of this technology with police body-worn video."