Collaborating Authors: schneier


From retail to the military, 'intelligent connectivity' raises ethical dilemmas

Christian Science Monitor | Science

Artificial intelligence gets tons of press – and for good reason. But AI's fast-rising expertise lies not just within the matrix of its own nifty algorithms, but also in its wider connections. It's about "intelligent connectivity" that relies on raw data – lots and lots of it – and on the communication networks that carry it. This blend of technologies may be surrounding you at a large store like Walmart. Retailers fight for their target audience using sensors galore, stationed in their aisles and checkout lines.


Fake or fact? 2024 is shaping up to be the first AI election. Should voters worry?

USATODAY - Tech Top Stories

The Republican National Committee fired off an attack ad as soon as President Joe Biden announced his reelection campaign last week. The 30-second spot, which used fake visuals of China invading Taiwan, financial markets crashing and immigrants overrunning the border, sported a disclaimer: "Built entirely with AI imagery." The ad – which the GOP called "an AI-generated look into the country's possible future if Joe Biden is re-elected in 2024" – is a sign of what's to come in the 2024 presidential election, experts say. Even as the technology grows more sophisticated and powerful, spreading into all aspects of American life, there are still very few rules governing its use.


Attacking Machine Learning Systems - Schneier on Security

#artificialintelligence

The field of machine learning (ML) security--and corresponding adversarial ML--is rapidly advancing as researchers develop sophisticated techniques to perturb, disrupt, or steal the ML model or data. It's a heady time; because we know so little about the security of these systems, there are many opportunities for new researchers to publish in this field. In many ways, this circumstance reminds me of the cryptanalysis field in the 1990s. And there is a lesson in that similarity: the complex mathematical attacks make for good academic papers, but we mustn't lose sight of the fact that insecure software will be the likely attack vector for most ML systems. We are amazed by real-world demonstrations of adversarial attacks on ML systems, such as a 3D-printed object that looks like a turtle but is recognized (from any orientation) by the ML system as a rifle.
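To make the adversarial-attack idea concrete, here is a toy numeric sketch of an evasion attack on a linear (logistic-regression) classifier. The weights, input, and step size are all invented for illustration; the step is deliberately large for a 3-feature toy, whereas attacks on image models use many tiny per-pixel perturbations to stay imperceptible.

```python
import numpy as np

# Toy logistic-regression "model": weights, bias, and an input it
# classifies confidently. All values are made up for this sketch.
w = np.array([2.0, -1.0, 0.5])
b = 0.1
x = np.array([1.0, -1.0, 1.0])

def predict(x):
    """Probability of class 1 under the logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Fast-gradient-sign-style step: for a linear model the loss gradient
# with respect to the input is proportional to w, so stepping against
# sign(w) pushes the score toward class 0.
eps = 1.5
x_adv = x - eps * np.sign(w)

print(predict(x))      # ~0.97: confidently class 1
print(predict(x_adv))  # ~0.16: decision flipped to class 0
```

The same structured-noise principle, scaled up to high-dimensional image inputs, is what makes a 3D-printed turtle readable as a rifle.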


Inserting a Backdoor into a Machine-Learning System - Schneier on Security

#artificialintelligence

Nice to hear from you; I hope you are well and life is not too hectic. "For myself, it is the front door into ML that is more worrying." What actually worries me is not "the method" of perversion, of which ML appears to have endless varieties at every point (and thus is not fit for honest purpose). As I've pointed out before, in "The King Game" there is the notion of "The Godhead," where the King is a direct conduit to God's words and thus wishes.


Real risks behind artificial intelligence go way beyond fear of sentience, AI experts warn

#artificialintelligence

A former Google engineer made waves this past month with claims that the tech company's new chatbot feature had gained sentience, but technology experts say there are other, more concerning risks artificial intelligence poses to society. Blake Lemoine received pushback when he argued the bot, known as LaMDA, or Language Model for Dialogue Applications, is now capable of feeling. He was placed on leave in June after giving documents to a Senate committee, claiming the bot discriminated against people on the basis of religion, among other biases. Lemoine was fired this past month for what the company says were violations of its data security policies, but he still believes LaMDA poses a problem as AI becomes more ingrained in society. "These are just engineers, building bigger and better systems for increasing the revenue into Google with no mindset towards ethics," Lemoine told Insider earlier this week, referring to what he believes is Google's lack of preparation for a technology that gains personhood.


The Coming AI Hackers

#artificialintelligence

Artificial intelligence--AI--is an information technology. And it is already deeply embedded into our social fabric, both in ways we understand and in ways we don't. It will hack our society to a degree and effect unlike anything that's come before. I mean this in two very different ways. One, AI systems will be used to hack us. And two, AI systems will themselves become hackers: finding vulnerabilities in all sorts of social, economic, and political systems, and then exploiting them at an unprecedented speed, scale, and scope. We risk a future of AI systems hacking other AI systems, with humans being little more than collateral damage. Okay, maybe it's a bit of hyperbole, but none of this requires far-future science-fiction technology. I'm not postulating any "singularity," where the AI-learning feedback loop becomes so fast that it outstrips human understanding. My scenarios don't require evil intent on the part of anyone. We don't need malicious AI systems like Skynet (Terminator) or the Agents (Matrix). Some of the hacks I will discuss don't even require major research breakthroughs. They'll improve as AI techniques get more sophisticated, but we can see hints of them in operation today. This hacking will come naturally, as AIs become more advanced at learning, understanding, and problem-solving. In this essay, I will talk about the implications of AI hackers. First, I will generalize "hacking" to include economic, social, and political systems--and also our brains. Next, I will describe how AI systems will be used to hack us. Then, I will explain how AIs will hack the economic, social, and political systems that comprise society. Finally, I will discuss the implications of a world of AI hackers, and point towards possible defenses. It's not all as bleak as it might sound. Caper movies are filled with hacks. Hacks are clever, but not the same as innovations. Systems tend to be optimized for specific outcomes. 
Hacking is the pursuit of another outcome, often at the expense of the original optimization. Systems tend to be rigid. Systems limit what we can do and, invariably, some of us want to do something else -- so some of us hack. Hacking is normally thought of as something you can do to computers. But hacks can be perpetrated on any system of rules -- including the tax code. The tax code isn't computer code, but you can still think of it as "code" in the computer sense of the term. It's a series of algorithms that takes an input -- financial information for the year -- and produces an output: the amount of tax owed. It's deterministic, or at least it's supposed to be.
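The "tax code as code" framing can be sketched as an actual function: a deterministic algorithm from financial inputs to tax owed, plus a "hack" that pursues a different outcome within the rules. The brackets and rates below are invented for illustration, not any real tax schedule.

```python
# Hypothetical progressive tax schedule, expressed as an algorithm:
# input = income for the year, output = tax owed. Deterministic, as
# the essay says it's supposed to be.
def tax_owed(income: float) -> float:
    # (lower bound, upper bound, marginal rate in percent) -- invented
    brackets = [(0, 10_000, 10),
                (10_000, 40_000, 20),
                (40_000, float("inf"), 30)]
    owed = 0.0
    for lo, hi, pct in brackets:
        if income > lo:
            owed += (min(income, hi) - lo) * pct / 100
    return owed

# The "hack": same total income, different outcome. Splitting 60,000
# across two entities keeps more of it in the lower brackets --
# exploiting the rules rather than breaking them.
print(tax_owed(60_000))        # 13000.0
print(2 * tax_owed(30_000))    # 10000.0
```

The split saves 3,000 without violating a single rule, which is exactly the sense in which hacking is the pursuit of another outcome at the expense of the original optimization.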


Attacking the Performance of Machine Learning Systems - Schneier on Security

#artificialintelligence

Abstract: The high energy costs of neural network training and inference led to the use of acceleration hardware such as GPUs and TPUs. While such devices enable us to train large-scale neural networks in datacenters and deploy them on edge devices, their designers' focus so far is on average-case performance. In this work, we introduce a novel threat vector against neural networks whose energy consumption or decision latency are critical. We show how adversaries can exploit carefully-crafted sponge examples, which are inputs designed to maximise energy consumption and latency, to drive machine learning (ML) systems towards their worst-case performance. Sponge examples are, to our knowledge, the first denial-of-service attack against the ML components of such systems.
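A back-of-the-envelope sketch of why sponge-style inputs can matter: in a transformer, self-attention work grows quadratically with input length, so an attacker who can submit longer, crafted inputs drives up compute (and so energy and latency) per query. The cost model below is a generic multiply-accumulate estimate, not taken from the paper.

```python
# Rough MAC (multiply-accumulate) count for one self-attention layer,
# ignoring projections and softmax: Q @ K^T costs seq_len^2 * d_model,
# and scores @ V costs the same again. d_model = 512 is an assumption.
def attention_macs(seq_len: int, d_model: int = 512) -> int:
    return 2 * seq_len * seq_len * d_model

benign = attention_macs(32)    # a short, typical query
sponge = attention_macs(512)   # a maximally long, crafted input
print(sponge // benign)        # 256 -- 256x the work per request
```

A 16x longer input costs 256x the attention work, which is the worst-case-versus-average-case gap the abstract's threat model exploits.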


Will Artificial Intelligence Help or Hurt Cyber Defense?

#artificialintelligence

A few months ago, I asked the question: "Are Bots and Robots the Answer to Worker Shortages?" Here are a few more from recent months: PYMNTS.com: "Record Number of Robots Sold to Help Fill Jobs" "The labor shortage triggered by COVID-19 has been a boon to robot sales as businesses scramble to fill jobs amid increasing consumer demands for goods and services post-pandemic. "Orders for robotics January through October reached 29,000 units for a record $1.48 billion compared to last year's $1.09 billion, topping 2017's record for the same time period of $1.47 billion, the Association for Advancing Automation (A3) said in a press release." International Business Times: "Robots Filling Vacant Jobs Amid Ongoing 'Great Resignation'" "The U.S. is struggling with a labor shortage that is hobbling its economic recovery, but companies are not sitting still as they work to keep production up and running.


Using Machine Learning to Guess PINs from Video - Schneier on Security

#artificialintelligence

I'm guessing that if you put fingers on the whole row of numbers before pressing, you can defeat this particular AI. What you need is a little sleight of hand. Or press the buttons with your thumb under your hand. Step 1) Hold your "dominant" hand out flat in front of you with the back of the hand uppermost. Step 2) Bring your thumb under your hand so the thumbnail is at the base of the little finger. Seen from the top, your whole thumb down to the wrist should be "under your hand" and out of sight.


How to Create Unbiased Machine Learning Models - KDnuggets

#artificialintelligence

AI systems are becoming increasingly popular and central in many industries. They decide who might get a loan from the bank, whether an individual should be convicted, and we may even entrust them with our lives when using systems such as autonomous vehicles in the near future. Thus, there is a growing need for mechanisms to harness and control these systems so that we can ensure they behave as desired. One important issue that has been gaining attention in the last few years is fairness. While ML models are usually evaluated on metrics such as accuracy, the idea of fairness is that we must ensure our models are unbiased with regard to attributes such as gender, race, and other selected attributes.
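One common way to make the fairness idea concrete is demographic parity: compare positive-prediction rates across groups defined by a sensitive attribute. A minimal sketch, with made-up predictions and a hypothetical protected attribute:

```python
import numpy as np

# Hypothetical model decisions (e.g. 1 = loan approved) and a
# protected-attribute label per individual. Both arrays are invented.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

rate_g0 = y_pred[group == 0].mean()   # approval rate for group 0
rate_g1 = y_pred[group == 1].mean()   # approval rate for group 1
dp_gap  = abs(rate_g0 - rate_g1)      # demographic-parity gap

print(rate_g0, rate_g1, dp_gap)  # 0.75 0.25 0.5
```

A gap of 0.5 means one group is approved three times as often as the other; a model satisfying demographic parity would drive this gap toward zero, which is a different objective from maximizing accuracy.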