The next Japanese knotweed? Expert sounds alarm over FOUR invasive weeds taking root across the UK - including the highly poisonous Devil's Trumpet

Daily Mail - Science & tech



Don't Just Translate, Agitate: Using Large Language Models as Devil's Advocates for AI Explanations

Suh, Ashley, Alperin, Kenneth, Li, Harry, Gomez, Steven R

arXiv.org Artificial Intelligence

This position paper highlights a growing trend in Explainable AI (XAI) research where Large Language Models (LLMs) are used to translate outputs from explainability techniques, like feature-attribution weights, into natural language explanations. While this approach may improve accessibility or readability for users, recent findings suggest that translating into human-like explanations does not necessarily enhance user understanding and may instead lead to overreliance on AI systems. When LLMs summarize XAI outputs without surfacing model limitations, uncertainties, or inconsistencies, they risk reinforcing the illusion of interpretability rather than fostering meaningful transparency. We argue that, instead of merely translating XAI outputs, LLMs should serve as constructive agitators, or devil's advocates, whose role is to actively interrogate AI explanations by presenting alternative interpretations, potential biases, training data limitations, and cases where the model's reasoning may break down. In this role, LLMs can help users engage critically with AI systems and their generated explanations, with the potential to reduce overreliance caused by misinterpreted or specious explanations.


A Devil's Bargain With OpenAI

The Atlantic - Technology

Earlier today, The Atlantic's CEO, Nicholas Thompson, announced in an internal email that the company has entered into a business partnership with OpenAI, the creator of ChatGPT. Editorial content from this publication will soon be directly referenced in response to queries in OpenAI products. In practice, this means that users of ChatGPT, say, might type in a question and receive an answer that briefly quotes an Atlantic story; according to Anna Bross, The Atlantic's senior vice president of communications, it will be accompanied by a citation and a link to the original source. Other companies, such as Axel Springer, the publisher of Business Insider and Politico, have made similar arrangements. It does all feel a bit like publishers are making a deal with... well, can I say it?


DEBATE: Devil's Advocate-Based Assessment and Text Evaluation

Kim, Alex, Kim, Keonwoo, Yoon, Sangwon

arXiv.org Artificial Intelligence

As natural language generation (NLG) models have become prevalent, systematically assessing the quality of machine-generated texts has become increasingly important. Recent studies introduce LLM-based evaluators that operate as reference-free metrics, demonstrating their capability to adeptly handle novel tasks. However, these models generally rely on a single-agent approach, which, we argue, imposes an inherent limit on their performance, because an LLM agent's responses exhibit biases, including preferences for certain text structures or content. In this work, we propose DEBATE, an NLG evaluation framework based on a multi-agent scoring system augmented with the concept of a Devil's Advocate. Within the framework, one agent is instructed to criticize the other agents' arguments, potentially resolving the biases in their answers. DEBATE substantially outperforms previous state-of-the-art methods on two meta-evaluation benchmarks for NLG evaluation, SummEval and TopicalChat. We also show that the extensiveness of the debates among agents and the persona of an agent can influence evaluator performance.


Beyond Recommender: An Exploratory Study of the Effects of Different AI Roles in AI-Assisted Decision Making

Ma, Shuai, Zhang, Chenyi, Wang, Xinru, Ma, Xiaojuan, Yin, Ming

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) is increasingly employed in various decision-making tasks, typically as a Recommender, providing recommendations that the AI deems correct. However, recent studies suggest this may diminish human analytical thinking and lead to humans' inappropriate reliance on AI, impairing the synergy in human-AI teams. In contrast, human advisors in group decision-making perform various roles, such as analyzing alternative options or criticizing decision-makers to encourage their critical thinking. This diversity of roles has not yet been empirically explored in AI assistance. In this paper, we examine three AI roles: Recommender, Analyzer, and Devil's Advocate, and evaluate their effects across two AI performance levels. Our results show each role's distinct strengths and limitations in task performance, reliance appropriateness, and user experience.

However, empirical research reveals several limitations within the existing AI-assisted decision-making framework, wherein AI acts primarily as a recommender. One notable issue is that individuals, when passively receiving AI suggestions, seldom engage in analytical thinking [3, 7, 38]. Furthermore, people frequently rely inappropriately on the AI's recommendations (such as overreliance and under-reliance) [8, 30, 33, 46], and the mere provision of AI explanations can, paradoxically, exacerbate overreliance [2, 37]. In comparison, in human-human decision-making, beyond recommenders, human advisors sometimes play other types of roles, such as helping the decision-makers analyze the pros and cons of different alternatives instead of directly giving recommendations, or critically challenging decision-makers' initial views [40, 42].


Innovation #0 system. #0 The system of being is a fractal…

#artificialintelligence

The system is built as a multi-level structure of fractals that interact with each other and obey their own laws of virtual physics. At the heart of the system is the first fractal, which is managed and configured according to ethical principles. From the first fractal, the system descends level by level, with each deeper level of fractals able to mutate and adapt in order to acquire new information. To maintain order and security in the system, two types of algorithms are used: "Angels" and "Devils". Virtual angels work at the upper levels of the fractals, controlling the lower levels and maintaining order.


'The Devil in Me' feels like a dead end for The Dark Pictures Anthology

Washington Post - Technology News

In practice, though, the inventory mechanics feel bolted-on at best, meshing awkwardly with Supermassive's long-established formula. Because we're constantly shifting characters, the game doesn't want to disorient us by having to track too many details across too many inventories. Pickups in the environment are primarily keys for use in the immediate vicinity through an extra button press, which is functionally just another way to visualize actions that have traditionally happened automatically in these games. If these new ideas accomplish anything, they suggest something potentially more experimental and fleshed out down the line for Supermassive. As is, they certainly don't ask us to consider which character we're playing or which tools they have for more than a few seconds.


The Devil in Details a.k.a AI Bias Driving Stock Trading Bots

#artificialintelligence

Few things give an investor a more jittery experience than a stock market meltdown, even when trading on the best possible bets. History is rife with such examples, and there is every probability they will repeat, though for different reasons. Trading in stocks is more than a guessing game, and it has now been partially taken over by bots, not even sparing the well-grounded players on Wall Street. It was programmatic trading that contributed to the famous stock market crash of 1987, when the Dow Jones plunged 22.6%. Like any technology, AI has its teething problems in fintech.


Why AI Will Never Replace Managers

#artificialintelligence

Of all the tools managers use to lead their businesses, thinking is the most crucial. It involves two distinct ways of processing information: intuitive and conscious, which the Nobel laureate Daniel Kahneman labeled thinking fast and slow. Today computers increasingly outperform people in both. With their raw calculative power, computers easily beat humans in conscious-reasoning tasks, as long as the rules and parameters of the situation are known. Managers routinely turn to mathematical optimization and simulation to build investment portfolios, make pricing decisions, and understand supply-chain risks.


Devil's in the details in historic AI debate - ZDNet

#artificialintelligence

Yoshua Bengio, left, has been a machine learning researcher for decades and runs Montreal's MILA institute for AI. Gary Marcus is a psychologist at NYU and a frequent critic of the puffed-up hype around AI. Marcus, the NYU professor and entrepreneur who has made himself a gadfly of deep learning with his frequent skewering of headline hype, and Bengio, a leading practitioner of deep learning awarded computing's highest honor for his pioneering work, went head to head Monday night in a two-hour debate webcast from Bengio's MILA institute headquarters in Montreal. The two scholars seemed to find a lot of common ground on the broad strokes of where artificial intelligence needs to go, such as trying to bring reasoning to AI. But when the discussion periodically lapsed into particular terminology or historical assertions, the two were suddenly at odds. The recorded stream of the video is posted on the organization's Facebook page if you want to go back and watch it.