Responses to Jack Clark's AI Policy Tweetstorm

#artificialintelligence

Artificial intelligence guru Jack Clark has written the longest, most interesting Twitter thread on AI policy that I've ever read. After a brief introductory tweet on August 6, Clark went on to post an additional 79 tweets in the thread. It was a real tour de force. Because I'm currently finishing up a new book on AI governance, I decided to respond to some of his thoughts on the future of governance for artificial intelligence (AI) and machine learning (ML). Clark is a leading figure in the field of AI science and AI policy today. He is the co-founder of Anthropic, an AI safety and research company, and he previously served as the Policy Director of OpenAI. So I take seriously what he has to say on AI governance matters, and I learned a lot from his tweetstorm. But I also want to push back on a few things. Specifically, several of the issues that Clark raises about AI governance are not unique to AI per se; they apply broadly to many other emerging technology sectors, and even to some traditional ones. Below, I will refer to this as my "general critique" of Clark's tweetstorm. On the other hand, Clark correctly points to some issues that are unique to AI/ML and which really do complicate the governance of computational systems.


How do you teach younger students computer science?

ZDNet

To teach language to the youngest students, you first introduce them to the letters of the alphabet and the sounds that the letters make. Next, you move on to combining letters into simple words, then sentences. To teach math, you start with numbers, then counting, then basic addition and subtraction. So, where do you start when it comes to teaching kindergarten through middle school students the basics of computer science? ZDNet asked, and here's what three education experts said.


Top challenge to internet health is AI power disparity and harm, Mozilla says

#artificialintelligence

The top challenge for the health of the internet is the power disparity between who benefits from AI and who is harmed by AI, Mozilla's new 2022 Internet Health Report reveals. Once again, this new report puts AI under the spotlight for how companies and governments use the technology. Mozilla's report scrutinizes the nature of the AI-driven world, citing real examples from different countries. TechRepublic spoke to Solana Larsen, Mozilla's Internet Health Report editor, to shed light on the concept of "Responsible AI from the Start," black box AI, the future of regulations, and how some AI projects lead by example. Larsen explains that AI systems should be built with ethics and responsibility in mind from the start, not tacked on at a later date when the harms begin to emerge.


Artificial Intelligence Act: will the EU's AI regulation set an example?

#artificialintelligence

When Microsoft unleashed Tay, its AI-powered chatbot, on Twitter on 23 March 2016, the software giant's hope was that it would "engage and entertain people… through casual and playful conversation". An acronym for 'thinking about you', Tay was designed to mimic the language patterns of a 19-year-old American girl and learn by interacting with human users on the social network. Within hours, things had gone badly wrong. Trolls tweeted politically incorrect phrases at the bot in a bid to manipulate its behaviour. Sure enough, Tay started spewing out racist, sexist and other inflammatory messages to its following of more than 100,000 users. Microsoft was forced to lock the @TayandYou account indefinitely less than a day later, but not before its creation had tweeted more than 96,000 times.


How to shrink AI's ballooning carbon footprint

#artificialintelligence

The carbon footprints of data centres, which provide cloud-computing services, can range widely. As machine-learning experiments get more sophisticated, their carbon footprints are ballooning. Now, researchers have calculated the carbon cost of training a range of models at cloud-computing data centres in various locations. Their findings could help researchers to reduce the emissions created by work that relies on artificial intelligence (AI). The team found marked differences in emissions between geographical locations. For the same AI experiment, "the most efficient regions produced about a third of the emissions of the least efficient", says Jesse Dodge, a researcher in machine learning at the Allen Institute for AI in Seattle, Washington, who co-led the study.
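
For intuition, here is a minimal Python sketch of the kind of accounting the study describes: the same training run emits very different amounts of CO2 depending on the carbon intensity of the regional grid powering the data centre. The energy figure and intensity values below are invented placeholders, not numbers from the paper.

    # Hypothetical sketch: emissions = energy consumed (kWh) x regional
    # grid carbon intensity (kgCO2e/kWh). All numbers are illustrative.
    ENERGY_KWH = 10_000  # assumed energy draw of one training run

    REGION_INTENSITY = {            # kgCO2e per kWh (made-up values)
        "low_carbon_region": 0.10,
        "average_region": 0.30,
        "high_carbon_region": 0.50,
    }

    for region, intensity in REGION_INTENSITY.items():
        emissions = ENERGY_KWH * intensity
        print(f"{region}: {emissions:,.0f} kgCO2e")

With these placeholder values, the cleanest grid produces a fifth of the emissions of the dirtiest for identical work; the study's reported gap, roughly a factor of three, has the same shape.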


The Coming AI Hackers

#artificialintelligence

Artificial intelligence--AI--is an information technology. And it is already deeply embedded into our social fabric, both in ways we understand and in ways we don't. It will hack our society to a degree and effect unlike anything that's come before. I mean this in two very different ways. One, AI systems will be used to hack us. And two, AI systems will themselves become hackers: finding vulnerabilities in all sorts of social, economic, and political systems, and then exploiting them at an unprecedented speed, scale, and scope. We risk a future of AI systems hacking other AI systems, with humans being little more than collateral damage. Okay, maybe it's a bit of hyperbole, but none of this requires far-future science-fiction technology. I'm not postulating any "singularity," where the AI-learning feedback loop becomes so fast that it outstrips human understanding. My scenarios don't require evil intent on the part of anyone. We don't need malicious AI systems like Skynet (Terminator) or the Agents (Matrix). Some of the hacks I will discuss don't even require major research breakthroughs. They'll improve as AI techniques get more sophisticated, but we can see hints of them in operation today. This hacking will come naturally, as AIs become more advanced at learning, understanding, and problem-solving. In this essay, I will talk about the implications of AI hackers. First, I will generalize "hacking" to include economic, social, and political systems--and also our brains. Next, I will describe how AI systems will be used to hack us. Then, I will explain how AIs will hack the economic, social, and political systems that comprise society. Finally, I will discuss the implications of a world of AI hackers, and point towards possible defenses. It's not all as bleak as it might sound. Caper movies are filled with hacks. Hacks are clever, but not the same as innovations. Systems tend to be optimized for specific outcomes. Hacking is the pursuit of another outcome, often at the expense of the original optimization. Systems tend to be rigid. Systems limit what we can do and, invariably, some of us want to do something else; not everyone, but enough of us. Hacking is normally thought of as something you can do to computers, but hacks can be perpetrated on any system of rules, including the tax code. The tax code isn't software, but you can still think of it as "code" in the computer sense of the term. It's a series of algorithms that takes an input--financial information for the year--and produces an output: the amount of tax owed. It's deterministic, or at least it's supposed to be.
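
The tax-code analogy is concrete enough to sketch. The fragment below is a hypothetical illustration, with made-up brackets and rates rather than any real tax schedule; it shows what is meant by a deterministic series of algorithms, where the same financial input always yields the same tax owed, and a hack is anything that bends that mapping toward another outcome.

    # "Tax code as code": a deterministic function from a year's financial
    # information to the amount of tax owed. Brackets and rates are invented.
    BRACKETS = [                     # (upper bound, marginal rate), hypothetical
        (10_000, 0.10),
        (40_000, 0.20),
        (float("inf"), 0.30),
    ]

    def tax_owed(taxable_income: float) -> float:
        """Same input, same output: deterministic, or at least it's supposed to be."""
        owed, lower = 0.0, 0.0
        for upper, rate in BRACKETS:
            if taxable_income > lower:
                owed += (min(taxable_income, upper) - lower) * rate
            lower = upper
        return round(owed, 2)

    print(tax_owed(55_000))  # 1000 + 6000 + 4500 = 11500.0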


The Existential Threat of AI-Enhanced Disinformation Operations

#artificialintelligence

A recent Washington Post article about artificial intelligence (AI) briefly caught the public's attention. A former engineer working for Google's Responsible AI organization went public with his belief that the company's chatbot was sentient. It should be stated bluntly: this AI is not a conscious entity. It is a large language model trained indiscriminately on Internet text that uses statistical patterns to predict the most probable sequence of words. While the tone of the Washington Post piece conjured all the usual Hollywood tropes related to humanity's fear of sentient technology (e.g., storylines from Ex Machina, Terminator, or 2001: A Space Odyssey), it also inadvertently highlighted an uncomfortable truth: as AI capabilities continue to improve, they will become increasingly effective tools for manipulating and fooling humans.
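
To ground the "statistical patterns" description, here is a toy Python sketch of next-token prediction, the core operation of such a model: scores over a vocabulary are turned into probabilities and the most likely continuation is selected. The vocabulary and scores are made up for illustration; a real large language model does this over tens of thousands of tokens with scores produced by a neural network.

    import math

    # Hypothetical model output: a raw score (logit) for each candidate next token.
    vocab  = ["cat", "dog", "sat", "mat"]
    logits = [1.2, 0.3, 2.5, 0.1]

    # Softmax turns the raw scores into a probability distribution.
    exps  = [math.exp(x) for x in logits]
    probs = [e / sum(exps) for e in exps]

    # Greedy decoding: pick the most probable next token, one step at a time.
    best = max(range(len(vocab)), key=lambda i: probs[i])
    print(vocab[best], round(probs[best], 3))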


Why business is booming for military AI startups

MIT Technology Review

Militaries are responding to the call. NATO announced on June 30 that it is creating a $1 billion innovation fund that will invest in early-stage startups and venture capital funds developing "priority" technologies such as artificial intelligence, big-data processing, and automation. Since the war started, the UK has launched a new AI strategy specifically for defense, and the Germans have earmarked just under half a billion for research and artificial intelligence within a $100 billion cash injection to the military. "War is a catalyst for change," says Kenneth Payne, who leads defense studies research at King's College London and is the author of the book I, Warbot: The Dawn of Artificially Intelligent Conflict. The war in Ukraine has added urgency to the drive to push more AI tools onto the battlefield.


Intellectual property and investment in Artificial Intelligence

#artificialintelligence

Patents provide third-party opinions on the uniqueness of the technology, and 'saleable asset insurance' in the event that the company ceases trading.