Why outrage is erupting over Trump plan to exclude nursing from 'professional' designation

Los Angeles Times

The Trump administration has proposed excluding nursing and other fields from the "professional" designation, capping graduate student loans for those programs. Nursing leaders warn the policy will worsen California's severe nurse shortage by discouraging the graduate degrees required for teaching and specialized patient care.


We owe the Trump admin a debt of gratitude for the Signal group chat leak

Al Jazeera

Sometimes journalists befuddle me, and I'm a journalist – although my touchy detractors would dispute that. Perhaps like you, I have been watching – with a healthy dose of bemusement and amusement – the outrage-du-jour dominate the latest 24-hour "news cycle" in North America and beyond. Such is the squirrel-like attention span of many of my perpetually outraged colleagues, that today's outrage usually has a short life expectancy since another outrage inevitably comes along tomorrow. But the outrage seizing Washington, DC – the capital of outrage – appears poised to consume the Beltway press corps for more than a day or two. When that happens, the outrage tends to evolve into a four-alarm scandal which journalists crave because it often translates into a big, ego-boosting award for the lucky scribe who triggered the original outrage.


Outrage as Google scraps its promise not to use AI for weapons or surveillance

Daily Mail - Science & tech

Google has updated its AI ethical guidelines and removed a key pledge not to use the technology in dangerous ways. On Tuesday the company erased the 2018 pledge, which stated that the tech giant 'would not use AI for weapons or surveillance'. The revised policy now says only that Google will develop AI 'responsibly' and in line with 'widely accepted principles of international law and human rights.' The change has sparked internal backlash, with employees calling the move 'deeply concerning' and saying the company should not be involved in 'the business of war.' Matt Mahmoudi, Amnesty adviser on AI and human rights, shamed Google for the move, saying the tech giant set a 'dangerous precedent.' 'AI-powered technologies could fuel surveillance and lethal killing systems at a vast scale, potentially leading to mass violations and infringing on the fundamental right to privacy,' he added.


Signature moves: are we losing the ability to write by hand?

The Guardian

Humming away in offices on Capitol Hill, in the Pentagon and in the White House is a technology that represents the pragmatism, efficiency and unsentimental nature of American bureaucracy: the autopen. It is a device that stores a person's signature, replicating it as needed using a mechanical arm that holds a real pen. Like many technologies, this rudimentary robotic signature-maker has always provoked ambivalence. We invest signatures with meaning, particularly when the signer is well known. During the George W Bush administration, the secretary of defence, Donald Rumsfeld, generated a small wave of outrage when reporters revealed that he had been using an autopen for his signature on the condolence letters that he sent to the families of fallen soldiers. Fans of singer Bob Dylan expressed ire when they discovered that the limited edition of his book The Philosophy of Modern Song, which cost nearly $600 and came with an official certificate "attesting to its having been individually signed by Dylan", in fact had made unlimited use of an autopen. Dylan took the unusual step of issuing a statement on his Facebook page: "With contractual deadlines looming," Dylan wrote, "the idea of using an autopen was suggested to me, along with the assurance that this kind of thing is done 'all the time' in the art and literary worlds."


How AI Can Guide Us on the Path to Becoming the Best Versions of Ourselves

TIME - Tech

The Age of AI has also ushered in the Age of Debates About AI. And Yuval Noah Harari, author of Sapiens and Homo Deus, and one of our foremost big-picture thinkers about the grand sweep of humanity, history and the future, is now out with Nexus: A Brief History of Information Networks from the Stone Age to AI. Harari generally falls into the AI alarmist category, but his thinking pushes the conversation beyond the usual arguments. The book is a look at human history through the lens of how we gather and marshal information. For Harari, this is essential, because how we use--and misuse--information is central to how our history has unfolded and to our future with AI. In what Harari calls the "naïve view of information," humans have assumed that more information will necessarily lead to greater understanding and even wisdom about the world.


What Is Privacy For?

The New Yorker

I belong to the last generation of Americans who grew up without the Internet in our pocket. We went online, but also, miraculously, we went offline. The clunky things we called computers didn't come with us. There were disadvantages, to be sure. Factual disputes could not be resolved by consulting Wikipedia on our phones; people remained wrong for hours, even days.


The Technology of Outrage: Bias in Artificial Intelligence

Bridewell, Will, Bello, Paul F., Bringsjord, Selmer

arXiv.org Artificial Intelligence

Artificial intelligence and machine learning are increasingly used to offload decision making from people. In the past, one of the rationales for this replacement was that machines, unlike people, can be fair and unbiased. Evidence suggests otherwise. We begin by entertaining the ideas that algorithms can replace people and that algorithms cannot be biased. Taken as axioms, these statements quickly lead to absurdity. Spurred on by this result, we investigate the slogans more closely and identify equivocation surrounding the word 'bias.' We diagnose three forms of outrage (intellectual, moral, and political) that are at play when people react emotionally to algorithmic bias. Then we suggest three practical approaches to addressing bias that the AI community could take, which include clarifying the language around bias, developing new auditing methods for intelligent systems, and building certain capabilities into these systems. We conclude by offering a moral regarding the conversations about algorithmic bias that may transfer to other areas of artificial intelligence.


Fake nudes of Taylor Swift spread across social media, sparking outrage

Washington Post - Technology News

The images, likely created by AI, spread rapidly across X and other social media platforms this week, with one image amassing over 45 million views. When X said it was working to take down the images, Swift's fan base took matters into their own hands, flooding the site with real images of the pop star along with the phrase "Protect Taylor Swift" to drown out the explicit content.


Netflix's 'Dog and Boy' anime causes outrage for incorporating AI-generated art

Engadget

In 2016, Studio Ghibli co-founder and director Hayao Miyazaki, responsible for beloved anime classics like Princess Mononoke and Kiki's Delivery Service, made headlines around the world for his reaction to an AI animation program. "I would never wish to incorporate this technology into my work at all," Miyazaki told the software engineers who came to show their creation to him. "I strongly feel that this is an insult to life itself." A half-decade later, artificial intelligence and the potential role it could play in anime productions is once again in the spotlight. This week, Netflix shared Dog and Boy, an animated short the streaming giant described as an "experimental effort" to address the anime industry's ongoing labor shortage.


Hollingshead

AAAI Conferences

We demonstrate that it is possible to leverage big data in the form of tweets and linked webpages to find expressions of sentiment that signal "bad behavior" such as cyber attacks. We hypothesize that expressions of "outrage" (high intensity, negative affect sentiment) against an organization in public data may be predictive of cyber attacks for two reasons: 1) threat actors may be motivated to launch an attack based on anger/discontent, and 2) outrage associated with an organization or industry may increase the likelihood of that organization or industry being victimized by threat actors (i.e., as a form of "vigilante justice"). We measure sentiment in online content and determine trends in public emotion and their correlation to trends in cyber attacks, as reported in Hackmageddon. We demonstrate that dimensions of sentiment, as afforded by our use of the Circumplex model of emotion, do yield correlations to reported cyber attacks, but these correlations differ depending on the domain of the data. Thus the use of this technique requires careful analysis for optimal application.
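The approach described above can be sketched in a few lines: score "outrage" from circumplex-style affect coordinates (negative valence combined with high arousal), then correlate that series against reported attack counts. This is a minimal illustration, not the paper's implementation; the affect values, attack counts, and the `outrage_score` mapping are all hypothetical assumptions for the sketch.

```python
def outrage_score(valence: float, arousal: float) -> float:
    """Map circumplex coordinates (valence, arousal in [-1, 1]) to an
    outrage intensity: nonzero only when affect is negative AND intense.
    This particular product form is an illustrative assumption."""
    return max(0.0, -valence) * max(0.0, arousal)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical daily (valence, arousal) aggregates from posts mentioning
# one organization, and reported attack counts for the same days.
daily_affect = [(-0.8, 0.9), (-0.2, 0.3), (-0.6, 0.7), (0.4, 0.5), (-0.9, 0.8)]
attacks = [5, 1, 3, 0, 6]

outrage = [outrage_score(v, a) for v, a in daily_affect]
r = pearson(outrage, attacks)
print(f"correlation between outrage and attacks: r = {r:.2f}")
```

On this toy data the correlation comes out strongly positive, which is the pattern the abstract reports in some domains; as the authors note, the strength of such correlations varies by domain, so a real analysis would repeat this per data source.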