Slippery Slope


'Slippery slope': How will Pakistan strike India as tensions soar?

Al Jazeera

Islamabad, Pakistan – On Wednesday evening, as Pakistan grappled with the aftermath of a wave of missile strikes from India that hit at least six cities, killing 31 people, the country's military spokesperson took to a microphone with a chilling warning. "When Pakistan strikes India, it will come at a time and place of its own choosing," Lieutenant General Ahmed Sharif Chaudhry said in a media briefing. "The whole world will come to know, and its reverberation will be heard everywhere." Two days later, India and Pakistan have moved even closer to the brink of war. On Thursday, May 8, Pakistan accused India of flooding its airspace with kamikaze drones that were brought down over major cities, including Lahore and Karachi.


'Battlestar Galactica' star says show's AI warnings more timely as sci-fi fantasies come to life

FOX News

Tricia Helfer, who played a humanoid Cylon robot on "Battlestar Galactica," says the show's look at the conflict between humans and AI still resonates today. "We did warn against AI while we were shooting it," Helfer told Fox News Digital at the Beverly Hills Film Festival this week. She continued, "It was 20 years ago, and I've recently re-watched it and went, 'Oh my gosh, it's even more relevant now.' So I think we just really need to be careful. It's a slippery slope between using it to our advantage and having it maybe be able to control us a little bit." "I think we're a little bit far off from the humanoid Cylons yet and humanoid robots, but I don't know, they're coming," Helfer added.


From Flexibility to Manipulation: The Slippery Slope of XAI Evaluation

Wickstrøm, Kristoffer, Höhne, Marina Marie-Claire, Hedström, Anna

arXiv.org Artificial Intelligence

The lack of ground truth explanation labels is a fundamental challenge for quantitative evaluation in explainable artificial intelligence (XAI). This challenge becomes especially problematic when evaluation methods have numerous hyperparameters that must be specified by the user, as there is no ground truth to determine an optimal hyperparameter selection. Since an exhaustive search of hyperparameters is typically not feasible, researchers instead make a normative choice based on similar studies in the literature, which leaves great flexibility to the user. In this work, we illustrate how this flexibility can be exploited to manipulate the evaluation outcome. We frame this manipulation as an adversarial attack on the evaluation, where seemingly innocent changes in hyperparameter settings significantly influence the evaluation outcome. We demonstrate the effectiveness of our manipulation across several datasets, with large changes in evaluation outcomes across several explanation methods and models. Lastly, we propose a mitigation strategy based on ranking across hyperparameters that aims to provide robustness against such manipulation. This work highlights the difficulty of conducting reliable XAI evaluation and emphasizes the importance of a holistic and transparent approach to evaluation in XAI.
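The manipulation the abstract describes can be illustrated with a toy sketch (this is an illustrative reconstruction, not the paper's actual code or metric): a simple faithfulness-style score, where masking the top-k attributed features with a baseline value should drop a model's output, and where the choice of k and baseline can flip which explanation method "wins". The names, model, and hyperparameter settings below are all hypothetical. The final step mirrors the paper's mitigation idea of aggregating ranks across many hyperparameter settings rather than reporting a single cherry-picked score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model and a single input sample (purely illustrative).
w = rng.normal(size=20)
x = rng.normal(size=20)

def faithfulness(expl, k, baseline):
    """Score an explanation by the drop in model output when its k
    most-attributed features are replaced with `baseline`."""
    top = np.argsort(-np.abs(expl))[:k]
    x_masked = x.copy()
    x_masked[top] = baseline
    return float(w @ x - w @ x_masked)

# Two hypothetical explanation methods for the same prediction:
# a gradient*input-style attribution and a noisier variant of it.
expl_a = w * x
expl_b = w * x + rng.normal(scale=0.5, size=20)

# Seemingly innocent hyperparameter choices (k, baseline value):
# the same pair of explanations can be ranked differently under each.
settings = [(3, 0.0), (5, 0.0), (3, float(x.mean())), (10, -1.0)]
ranks = {"A": [], "B": []}
for k, b in settings:
    score_a = faithfulness(expl_a, k, b)
    score_b = faithfulness(expl_b, k, b)
    ranks["A"].append(1 if score_a >= score_b else 2)
    ranks["B"].append(1 if score_b > score_a else 2)

# Mitigation in the spirit of the paper: aggregate ranks across the
# hyperparameter settings instead of reporting one chosen score.
robust_rank = {method: float(np.median(r)) for method, r in ranks.items()}
print(robust_rank)
```

Reporting the median rank over many settings makes it harder for an adversarial (or merely convenient) single hyperparameter choice to dominate the reported outcome.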


Police using AI could lead to 'predictive' crime prevention 'slippery slope,' experts argue

FOX News

A pilot program in the U.K. to enhance police capabilities via artificial intelligence has proven successful but could pave the way for a slide into a future of "predictive policing," experts told Fox News Digital. "Artificial intelligence is a tool, like a firearm is a tool, and it can be useful, it can be deadly," Christopher Alexander, CCO of Liberty Blockchain, told Fox News Digital. "In terms of the Holy Grail here, I really think it is the predictive analytics capability that if they get better at that, you have some very frightening capabilities." British police in different communities have experimented with an artificial intelligence (AI)-powered system to help catch drivers committing violations, such as using their phones while driving or driving without a seat belt.


Prompt attacks: are LLM jailbreaks inevitable?

#artificialintelligence

Out of all the Large Language Models (LLMs) currently out in the open, I've found Claude to be by far the safest and most harmless one. The team at Anthropic, a cutting-edge AI startup valued at $4B, has done an absolutely brilliant job taking AI Safety to the next level with Claude, using a slew of ingenious techniques like RLAIF and a proprietary approach called "Constitutional AI" to turn their models into "helpful, honest, and harmless" AI systems. Through hundreds of experiments covering all the typical attempts to circumvent an LLM's safety restrictions, I can confidently confirm that Claude blows the competition out of the water on AI safety -- yes, that includes GPT-4 (and Bard, in case anyone still cares about that guy). But as can be seen from the snippet of my chat with Claude above (and as we will see in much more detail below), the road to a fully safe AI system is still long and arduous. The problem for LLMs is compounded by the fact that much of their impressive capabilities are emergent at scale, and that AI Interpretability Research is still pretty much an open field when it comes to the "black box" problem.


The slippery slope of using AI and deepfakes to bring history to life

#artificialintelligence

To mark Israel's Memorial Day in 2021, the Israel Defense Forces musical ensembles collaborated with a company that specializes in synthetic videos, also known as "deepfake" technology, to bring photos from the 1948 Israeli-Arab war to life. They produced a video in which young singers clad in period uniforms and carrying period weapons sang "Hareut," an iconic song commemorating soldiers killed in combat. As they sing, the musicians stare at faded black-and-white photographs they hold. The past comes to life, Harry Potter style. For the past few years, my colleagues and I at UMass Boston's Applied Ethics Center have been studying how everyday engagement with AI challenges the way people think about themselves and politics. We've found that AI has the potential to weaken people's capacity to make ordinary judgments.


Patents and Artificial Intelligence: An 'Obvious' Slippery Slope

#artificialintelligence

Stephen Thaler and Ryan Abbott plan to bring a light beacon, a beverage container, and a machine called Dabus into court, along with a simple question: Does an inventor need to be human? Depending on how they answer, a panel of judges on the U.S. Court of Appeals for the Federal Circuit could open the door to another significant question: What is "obvious" to a machine? A basic tenet of U.S. law is that patents aren't awarded for inventions that are obvious. The standard of obviousness in patent law is measured against a hypothetical person of ordinary skill in the art. Putting artificial intelligence, with its potential for near-omnipotent capabilities, on an equal footing with human inventors could have a significant impact on patent law's obviousness standard, attorneys and patent professionals say.


Is Apple's image-scan plan a wise move or the start of a slippery slope? John Naughton

The Guardian

Once upon a time, updates of computer operating systems were of interest only to geeks. You may recall how Version 14.5 of iOS, which required users to opt in to tracking, had the online advertising racketeers in a tizzy while their stout ally, Facebook, stood up for them. Now, the forthcoming version of iOS has libertarians, privacy campaigners and "thin-end-of-the-wedge" worriers in a spin. It also has busy mainstream journalists struggling to find headline-friendly summaries of what Apple has in store for us. "Apple is prying into iPhones to find sexual predators, but privacy activists worry governments could weaponise the feature" was how the venerable Washington Post initially reported it.


Anthony Bourdain's voice-cloning for new doc called into question: It's 'a slippery slope'

FOX News

Fox News Flash top entertainment and celebrity headlines are here. Check out what's clicking today in entertainment. The revelation that a documentary filmmaker used voice-cloning software to make the late chef Anthony Bourdain say words he never spoke has drawn criticism amid ethical concerns about use of the powerful technology. The movie "Roadrunner: A Film About Anthony Bourdain" appeared in cinemas Friday and mostly features real footage of the beloved celebrity chef and globe-trotting television host before he died in 2018. But its director, Morgan Neville, told The New Yorker that a snippet of dialogue was created using artificial intelligence technology.


Ethics in the Drone Industry & AI's Slippery Slope

#artificialintelligence

Several companies are pushing the boundaries of what is possible. Hardware is becoming ever more sophisticated, reducing weight, improving flight times and bringing down prices. Last month DJI launched the Mavic Mini, a tiny 249-gram drone with a range of 4km that can shoot 2.7K video and fly for 30 minutes on a single battery. A feat of engineering and a measure of how far things have come in the last decade. October also saw the launch of another industry benchmark: Skydio's new drone, the Skydio 2. It's lighter, cheaper and more sophisticated than the original R1 – which is saying something.