2023-05
Governments race to regulate artificial intelligence tools
Rapid advances in artificial intelligence (AI) such as Microsoft-backed OpenAI's ChatGPT are complicating governments' efforts to agree on laws governing the use of the technology. In Australia, the government is consulting the country's main science advisory body and considering next steps, a spokesperson for the industry and science minister said in April. In Britain, the Financial Conduct Authority, one of several state regulators tasked with drawing up new guidelines covering AI, is consulting with the Alan Turing Institute and other legal and academic institutions to improve its understanding of the technology, a spokesperson said. Britain's competition regulator said on May 4 it would start examining the effect of AI on consumers, businesses and the economy, and whether new controls were needed. Britain said in March it planned to split responsibility for governing AI among its regulators for human rights, health and safety, and competition, rather than creating a new body. China's cyberspace regulator in April unveiled draft measures to manage generative AI services, saying it wanted firms to submit security assessments to authorities before launching offerings to the public.
Google Flooded the Internet With AI News. Where's Apple?
Unless you've been living under a rock, you've probably heard the term "generative AI" at least a handful of times by now, perhaps thanks to the wildly popular ChatGPT service. The AI-powered chatbot's success didn't just shine a spotlight on OpenAI, its creator; it also catalyzed an AI arms race in the tech industry, a race from which Apple has been noticeably absent. Earlier this month, Google made a flurry of AI-related announcements at its annual developer conference, including a new AI-infused version of search and Bard, its AI-powered chatbot, which is being rolled out across the world. Before that, Microsoft built generative AI into its suite of long-established productivity apps like Word, PowerPoint and Outlook, a move that's changing how more than a billion people work. In February, Meta released its own sophisticated AI model, which has many of the same capabilities as ChatGPT and Bard, as open-source software for public use.
Russia Targets Kyiv for 10th Time This Month
Russia unleashed another widespread missile and drone attack overnight on cities across Ukraine, including the capital, Kyiv, which was targeted for the 10th time this month, Ukrainian officials said on Friday. At least three cruise missiles and six attack drones managed to evade air defenses, according to Ukraine's Air Force. There was no immediate information on casualties or on what was hit. The air force said in a statement that three cruise missiles and 16 Iranian-made Shahed-136 drones had been intercepted, and local officials in Lviv said that five of those drones were downed over their region, in western Ukraine. The attack drones came in "several waves" with short intervals in between, according to Kyiv's military administration, which said all of the drones over the capital had been shot down.
It's a Weird Time to Be a Doomsday Prepper
If you're looking for a reason the world will suddenly end, it's not hard to find one--especially if your job is to convince people they need to buy things to prepare for the apocalypse. "World War III, China, Russia, Iran, North Korea, Joe Biden--you know, everything that's messed up in the world," Ron Hubbard, the CEO of Atlas Survival Shelters, told me. His Texas-based company sells bunkers with bulletproof doors and concrete walls to people willing to shell out several thousand dollars--and up to millions--for peace of mind about potential catastrophic events. Lately, interest in his underground bunkers has been booming. "When the war broke out in Ukraine, my phone was ringing every 45 seconds for about two weeks," he said.
AI Chatbots Are Doing Something a Lot Like Improv
For weeks after his bizarre conversation with Bing's new chatbot went viral, New York Times columnist Kevin Roose wasn't sure what had happened. "The explanations you get for how these language models work, they're not that satisfying," Roose said at one point. "No one can tell me why this chatbot tried to break up my marriage." He's not alone in feeling confused. Powered by a relatively new form of AI called large language models, this new generation of chatbots defies our intuitions about how to interact with computers.
Spooked by ChatGPT, US Lawmakers Want to Create an AI Regulator
Since the tech industry began its love affair with machine learning about a decade ago, US lawmakers have chattered about the potential need for regulation to rein in the technology. No proposal to regulate corporate AI projects has come close to becoming law--but OpenAI's release of ChatGPT in November has convinced some senators that there is now an urgent need to do something to protect people's rights against the potential harms of AI technology. At a hearing held by a Senate Judiciary subcommittee yesterday, attendees heard a terrifying laundry list of ways artificial intelligence can harm people and democracy. Senators from both parties spoke in support of creating a new arm of the US government dedicated to regulating AI. The idea even got the backing of Sam Altman, CEO of OpenAI.
ChatGPT Scams Are Infiltrating Apple's App Store and Google Play
Any major trend or world event, from the coronavirus pandemic to the cryptocurrency frenzy, is quickly used as fodder in digital phishing attacks and other online scams. In recent months, it has become clear that the same is true for large language models and generative AI. Today, researchers from the security firm Sophos are warning that the latest incarnation of this is showing up in Google Play and Apple's App Store, where scammy apps pretend to offer access to OpenAI's chatbot service ChatGPT through free trials that eventually start charging subscription fees. There are paid versions of OpenAI's GPT and ChatGPT for regular users and developers, but anyone can try the AI chatbot for free on the company's website. The scam apps take advantage of people who have heard about this new technology--and perhaps the frenzy of people clamoring to use it--but don't have much additional context for how to try it themselves.
CEO behind ChatGPT warns Congress AI could cause 'harm to the world'
Altman advocated for a number of regulations, including a new government agency charged with setting standards for the field, to address mounting concerns that generative AI could distort reality and create unprecedented safety risks. The CEO recited a litany of "risky" behaviors posed by technology like ChatGPT, including spreading "one-on-one interactive disinformation" and emotional manipulation. At one point, he acknowledged AI could be used to target drone strikes.
Ministers looking at body-worn facial recognition technology for police
Ministers are calling for facial recognition technology to be "embedded" in everyday policing, including potentially linking it to the body-worn cameras officers use as they patrol streets. Until now, police use of live facial recognition in England and Wales has been limited to special operations such as football matches or the coronation. Prof Fraser Sampson, the biometrics and surveillance camera commissioner, said the potential expansion was "very significant" and that "the Orwellian concerns of people, the ability of the state to watch every move, is very real". The government's intentions were revealed in a document produced for the surveillance camera commissioner, discussing changes to the oversight of technology and surveillance. It said: "This issue is made more pressing given the policing minister [Chris Philp] expressed his desire to embed facial recognition technology in policing and is considering what more the government can do to support the police on this. Such embedding is extremely likely to include exploring integration of this technology with police body-worn video."
Are killer robots the future of war?
Humanity stands on the brink of a new era of warfare. Driven by rapid developments in artificial intelligence, weapons platforms that can identify, target and decide to kill human beings on their own -- without an officer directing an attack or a soldier pulling the trigger -- are fast transforming the future of conflict. Officially, they are called lethal autonomous weapons systems (LAWS), but critics call them killer robots. Many countries, including the United States, China, the United Kingdom, India, Iran, Israel, South Korea, Russia and Turkey, have invested heavily in developing such weapons in recent years. A United Nations report suggests that Turkish-made Kargu-2 drones operating in fully automatic mode marked the dawn of this new age when they attacked combatants in Libya in 2020 amid that country's ongoing conflict. Autonomous drones have also played a crucial role in the war in Ukraine, where both Moscow and Kyiv have deployed these uncrewed weapons to target enemy soldiers and infrastructure.