ChatGPT exploded into public life a year ago. Now we know what went on behind the scenes | John Naughton

The Guardian

If a week is a long time in politics, a year is an eternity in tech. Just over 12 months ago, the industry was humming along in its usual way. The big platforms were deep into what Cory Doctorow calls "enshittification" – the process in which platforms go from being initially good to their users, to abusing them to make things better for their business customers and finally to abusing those customers in order to claw back all the value for themselves. Elon Musk was ramping up his efforts to alienate advertisers on Twitter/X and accelerate the death spiral of his expensive toy. TikTok was monopolising every waking hour of teenagers.


European Union reaches agreement on landmark legislation to regulate AI

Al Jazeera

European Union policymakers have agreed on landmark legislation to regulate artificial intelligence (AI), paving the way for the most ambitious set of standards yet to control the use of the game-changing technology. The agreement to support the "AI Act" on Friday came after nearly 38 hours of negotiations between lawmakers and policymakers. "The AI Act is a global first. A unique legal framework for the development of AI you can trust," EU chief Ursula von der Leyen said. "A commitment we took in our political guidelines – and we delivered."


E.U. reaches deal on landmark AI bill, racing ahead of U.S.

Washington Post - Technology News

Amid years of inaction in the U.S. Congress, E.U. tech laws have had wide-ranging implications for Silicon Valley companies. Europe's digital privacy law, the General Data Protection Regulation, has prompted some companies, such as Microsoft, to overhaul how they handle users' data even beyond Europe's borders. Meta, Google and other companies have faced fines under the law, and Google had to delay the launch of its generative AI chatbot Bard in the region due to a review under the law. However, there are concerns that the law created costly compliance measures that have hampered small businesses, and that lengthy investigations and relatively small fines have blunted its efficacy among the world's largest companies.


AI at the edge: 5G and the Internet of Things see fast times ahead

ZDNet

Connected devices linked to the Internet of Things (IoT) – in association with 5G network technology – are now everywhere. But just wait until next-generation applications, such as artificial intelligence (AI), start running within these edge devices. Meanwhile, the low latency and higher data speeds of 5G will add a new real-time dimension to AI in the IoT. Consider an extended reality (XR) headset that not only provides a 3D view of the inside of an aircraft engine, but also has on-board intelligence to point you to problem areas or to information on anomalies in that engine, which are immediately and automatically recognized and adjusted. Chipmakers are already developing powerful yet energy-efficient processors – or "systems on a chip" – that can deliver AI processing within a small-footprint device.


Cheap drones can take out expensive military systems, warns former Air Force pilot pushing AI-enabled force

FOX News

AI-enabled military systems have been effective in battle, but some reliability issues still concern troops and their commanders, according to a former Air Force test pilot. Cheap drones equipped with AI can destroy expensive military equipment, and the Pentagon will need to incorporate autonomous technology into its strategy to advance into the next generation of warfare, the former test pilot, now a military tech company executive, told Fox News. "What we've seen in Europe and other theaters is that they've democratized warfare," said EpiSci Vice President of Tactical Autonomous Systems Chris Gentile. "A $1,000 drone can take out a multimillion-dollar asset." The Pentagon has a portfolio of over 800 contracts for AI-enabled projects.


The FTC is reportedly looking into Microsoft's $13 billion OpenAI investment

Engadget

OpenAI's recent drama hasn't only caught UK regulators' attention. Bloomberg reported Friday that the Federal Trade Commission (FTC) is looking into Microsoft's investment in the Sam Altman-led company and whether it violates US antitrust laws. FTC Chair Lina Khan wrote in a New York Times op-ed earlier this year that "the expanding adoption of AI risks further locking in the market dominance of large incumbent technology firms." Bloomberg's report stresses that the FTC inquiry is preliminary, and the agency hasn't opened a formal investigation. But Khan and company are reportedly "analyzing the situation and assessing what its options are." One complicating factor for regulation is that OpenAI is a non-profit, and transactions involving non-corporate entities aren't required by law to be reported.


AI-powered 'Nudify' apps that digitally undress fully-clothed teenage girls are soaring in popularity

Daily Mail - Science & tech

Tens of millions of people are using AI-powered 'nudify' apps, according to a new analysis that shows the dark side of the technology. More than 24 million people visited nudify AI websites in September; the sites use deep-learning algorithms to digitally alter images, primarily of women, to make the subject appear naked. These algorithms are trained on existing images of women, which allows them to overlay realistic images of nude body parts regardless of whether the photographed person is clothed. Spam ads across major platforms directing people to the sites and apps have also increased by more than 2,000 percent since the beginning of 2023. The advertising is particularly prevalent on social media, including Google's YouTube, Reddit, and X, and 52 Telegram groups were also found to be used to access non-consensual intimate imagery (NCII) services.


AI's 'Fog of War'

The Atlantic - Technology

This is Atlantic Intelligence, an eight-week series in which The Atlantic's leading thinkers on AI will help you understand the complexity and opportunities of this groundbreaking technology. Earlier this year, The Atlantic published a story by Gary Marcus, a well-known AI expert who has agitated for the technology to be regulated, both in his Substack newsletter and before the Senate. Marcus argued that "this is a moment of immense peril," and that we are teetering toward an "information-sphere disaster, in which bad actors weaponize large language models, distributing their ill-gotten gains through armies of ever more sophisticated bots." I was interested in following up with Marcus given recent events. In the past six weeks, we've seen an executive order from the Biden administration focused on AI oversight; chaos at the influential company OpenAI; and this Wednesday, the release of Gemini, a GPT competitor from Google.


Be glad UK's watchdog has its eyes on what just happened at OpenAI | Nils Pratley

The Guardian

Why is the little ol' Competition & Markets Authority, a UK regulator, inserting itself into the entertaining and important – but distant – drama at San Francisco-based OpenAI? Even if the CMA eventually finds that Microsoft, another US company, is pulling the strings at Sam Altman's show, what could it actually do? Doesn't it all paint the UK as an unfriendly place for tech investment, notwithstanding Rishi Sunak's eagerness to host AI summits and conduct cosy chats with Elon Musk? All fair questions, and the CMA should brace for more in that vein. It is indeed slightly odd that the UK regulator is the first out of the traps in wondering, albeit in a preliminary manner, if Microsoft has gained effective control over OpenAI and, if it has, whether that amounts to a problem. But there is another way to look at developments: thank goodness a regulator somewhere is seeking clarity about what just occurred at OpenAI.


Assured and Trustworthy Human-centered AI – an AAAI Fall symposium

AIHub

The Assured and Trustworthy Human-centered AI (ATHAI) symposium was held as part of the AAAI Fall Symposium Series in Arlington, VA, from October 25-27, 2023. The symposium brought together stakeholders from industry, academia, and government to discuss issues related to AI assurance in domains ranging from healthcare to defense. It drew over 50 participants and consisted of a combination of invited keynote speakers, spotlight talks, and interactive panel discussions. Day 1 kicked off with a keynote by Professor Missy Cummings (George Mason University) titled "Developing Trustworthy AI: Lessons Learned from Self-driving Cars", in which she shared important lessons from her time at the National Highway Traffic Safety Administration (NHTSA) and from her interactions with the autonomous vehicle industry.