
Spat deepens between Elon Musk and Ryanair's O'Leary

BBC News

Elon Musk has suggested he could buy Ryanair and called for its chief executive to be fired amid a deepening spat between the pair. The budget airline on Tuesday branded the Tesla chief executive an idiot, and used the extraordinary row to promote its January sale. Musk and Ryanair boss Michael O'Leary have been trading insults over the past week after O'Leary rejected the idea of using Musk's Starlink technology to provide wi-fi on flights. The two are among the world's most outspoken business chiefs, with Musk the world's richest man with an estimated net worth of $769bn (£573bn), and O'Leary running Europe's busiest airline. A statement on Ryanair's X account on Tuesday evening said: "Perhaps Musk needs a break?? Ryanair is launching a Great Idiots seat sale especially for Elon and any other idiots on 'X'."


I'm a Polite Person. But in This One Specific Situation, I Recommend Being a Total Jerk.

Slate

Fairly recently, I started being verbally abusive to large language models. I highly recommend you experiment with doing so yourself. Over the past 30 days, I have called large language models (primarily OpenAI's paid product) the following names, among others that I won't repeat here because my mom might read this: Dipshit, fucknuts, shitstain, dummy, dumbass, dum-dum fucking dumbass dum-dum, numbnuts, hockey puck (thank you, Don Rickles), turdburger, lickspittle, cockroach, fucking cockroach (thank you, Tony Montana), idiot, fucking idiot, total fucking idiot, and fucking numbnuts dipshit. Ethan Mollick, author of Co-Intelligence: Living and Working With AI, and currently the reigning A.I. whisperer for the consultant class, says that anthropomorphizing A.I. is "a sin of necessity."


The Extinction of Experience by Christine Rosen review – smartphone nation

The Guardian

People who walk along the street with their heads down, staring at their phones, are enemies of society. They are narcissistic babies who have unilaterally derogated from the social contract that says you should look where you're going to make sure you don't bump into people. They implicitly believe that others should do that cognitive work for them while they shuffle along scrolling for porn or doom. If, however, a normal person bumps into them they will be enraged at the unpleasant reminder that other human beings exist outside their solipsistic bubble. Meanwhile, they are walking so slowly that everyone behind them, too, is inconvenienced; they are prime contributors to urban congestion and alienation and the general breakdown of the fabric of society. All that is true enough, but The Extinction of Experience has a lot of other complaints about modern technology.


SCAR: Sparse Conditioned Autoencoders for Concept Detection and Steering in LLMs

Härle, Ruben, Friedrich, Felix, Brack, Manuel, Deiseroth, Björn, Schramowski, Patrick, Kersting, Kristian

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have demonstrated remarkable capabilities in generating human-like text, but their output may not be aligned with the user's intent and can even be harmful. This paper presents a novel approach to detect and steer concepts such as toxicity before generation. We introduce the Sparse Conditioned Autoencoder (SCAR), a single trained module that extends the otherwise untouched LLM. SCAR ensures full steerability, towards and away from concepts (e.g., toxic content), without compromising the quality of the model's text generation on standard evaluation benchmarks. We demonstrate the effective application of our approach through a variety of concepts, including toxicity, safety, and writing style alignment. As such, this work establishes a robust framework for controlling LLM generations, ensuring their ethical and safe deployment in real-world applications.
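The core idea — a sparse autoencoder over a frozen model's hidden states, with one latent unit conditioned to track a concept, which can then be clamped at inference time — can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the class name, the choice of latent unit 0 as the concept feature, and the clamping interface are all my assumptions.

```python
import random

class SparseConditionedAutoencoder:
    """Toy sketch of a SCAR-style module: a sparse autoencoder over an
    LLM hidden state, with latent unit 0 conditioned (during training,
    not shown here) to fire for a target concept such as toxicity."""

    def __init__(self, hidden_dim, latent_dim, seed=0):
        rng = random.Random(seed)
        # Random weights stand in for a trained encoder/decoder pair.
        self.W_enc = [[rng.gauss(0, 0.1) for _ in range(hidden_dim)]
                      for _ in range(latent_dim)]
        self.W_dec = [[rng.gauss(0, 0.1) for _ in range(latent_dim)]
                      for _ in range(hidden_dim)]

    def encode(self, h):
        # ReLU produces the sparse latent code.
        return [max(0.0, sum(w * x for w, x in zip(row, h)))
                for row in self.W_enc]

    def decode(self, z):
        return [sum(w * v for w, v in zip(row, z)) for row in self.W_dec]

    def steer(self, h, strength):
        """Reconstruct h with the concept-conditioned unit clamped to
        `strength` (larger: towards the concept; 0: away from it)."""
        z = self.encode(h)
        z[0] = strength  # overwrite the conditioned latent unit
        return self.decode(z)

# Toy usage: suppress the concept in a single hidden-state vector.
sae = SparseConditionedAutoencoder(hidden_dim=8, latent_dim=32)
h = [0.5] * 8
steered = sae.steer(h, strength=0.0)
print(len(steered))  # 8
```

In the paper's setup the rest of the LLM stays untouched; only this added module is trained, and steering amounts to swapping the layer's hidden state for the clamped reconstruction.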


Can LLMs Produce Faithful Explanations For Fact-checking? Towards Faithful Explainable Fact-Checking via Multi-Agent Debate

Kim, Kyungha, Lee, Sangyun, Huang, Kung-Hsiang, Chan, Hou Pong, Li, Manling, Ji, Heng

arXiv.org Artificial Intelligence

Fact-checking research has extensively explored verification but less so the generation of natural-language explanations, crucial for user trust. While Large Language Models (LLMs) excel in text generation, their capability for producing faithful explanations in fact-checking remains underexamined. Our study investigates LLMs' ability to generate such explanations, finding that zero-shot prompts often result in unfaithfulness. To address these challenges, we propose the Multi-Agent Debate Refinement (MADR) framework, leveraging multiple LLMs as agents with diverse roles in an iterative refining process aimed at enhancing faithfulness in generated explanations. MADR ensures that the final explanation undergoes rigorous validation, significantly reducing the likelihood of unfaithful elements and aligning closely with the provided evidence. Experimental results demonstrate that MADR significantly improves the faithfulness of LLM-generated explanations to the evidence, advancing the credibility and trustworthiness of these explanations.
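The iterative critic-and-refine loop the abstract describes can be sketched in a few lines. This is a schematic of the general pattern, not the paper's exact protocol: the agent roles, their call signatures, the round budget, and the stopping rule are all illustrative assumptions, and the stand-in functions below would be LLM calls in a real system.

```python
def madr_refine(claim, evidence, draft, critics, refiner, max_rounds=3):
    """Sketch of a MADR-style loop: critic agents flag unfaithful
    elements in an explanation; a refiner agent revises it until no
    critic objects or the round budget runs out."""
    explanation = draft
    for _ in range(max_rounds):
        # Each critic inspects the explanation against the evidence and
        # returns an objection string (empty string = no objection).
        objections = [c(claim, evidence, explanation) for c in critics]
        objections = [o for o in objections if o]
        if not objections:                 # all critics satisfied
            return explanation
        explanation = refiner(explanation, objections)
    return explanation

# Toy stand-ins for the LLM agents.
def coverage_critic(claim, evidence, explanation):
    return "explanation never cites the evidence" \
        if evidence not in explanation else ""

def toy_refiner(explanation, objections):
    return explanation + " According to the report."

result = madr_refine(
    claim="X resigned on Monday.",
    evidence="the report",
    draft="The claim is false.",
    critics=[coverage_critic],
    refiner=toy_refiner,
)
print(result)  # The claim is false. According to the report.
```

The point of the structure is that the final explanation only exits the loop after passing every critic, which is how the framework drives down unfaithful content relative to a single zero-shot generation.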


'The Legend of Zelda: Tears of the Kingdom' Embraces Mad Scientist Discovery

WIRED

The denizens of The Legend of Zelda: Tears of the Kingdom are waiting for me to save Hyrule, but I've been slightly preoccupied. A generous breakdown of how I've spent my time so far is something like this: 5 percent on main storyline, 10 percent on side quests, 85 percent on deranged, barely working experiments--like a little kid gluing Legos together. I haven't been a particularly proficient builder. I attached a rocket to a tree trunk, thinking I could magic-carpet myself across the map; I flew off the back immediately. I created what I thought was a sort of "air raft," trunks with fans on every conceivable corner to hover me away; it was as effective as a handful of helium balloons.


Elon Musk reaffirms AI's potential to destroy civilization - Jack Of All Techs

#artificialintelligence

While tech giants across the world work on materializing the idea of generative artificial intelligence (AI) aiding humans in their daily lives, the risk of the nascent technology going rogue remains real. Considering this possibility, Tesla and Twitter chief Elon Musk reminded people of AI's potential to destroy civilization. On March 15, Musk's plan to create a new AI startup surfaced after the entrepreneur was reportedly assembling a team of AI researchers and engineers. However, Musk continues to highlight the destructive potential of AI -- just like any other technology -- if it falls into the wrong hands or is developed with ill intent. According to Musk, AI can be dangerous. In a FOX interview, he said that AI can be more dangerous than mismanaged aircraft design or production maintenance, for example.


White House's Kirby blasts Russia for awarding pilots behind US drone crash: 'at best, just an idiot'

FOX News

Former U.S. Amb. to NATO Kurt Volker says the Russian fighter jet collision was 'intentional' and requires a 'firm response' from the U.S. The Biden administration blasted Russia for honoring two pilots for downing a U.S. drone in international airspace, saying the aviator who crashed into the drone was "at best, just an idiot." Last week, the Kremlin issued state awards to the fighter jet pilots responsible for downing the U.S. MQ-9 Reaper drone over the Black Sea the week prior. In an official statement, the Ministry of Defense commended the pilots for preventing the drone from "violating the boundaries of the temporary airspace regime established for the special military operation." U.S. European Command said a Russian Su-27 fighter jet collided with a U.S. MQ-9 Reaper drone over the Black Sea; a screenshot shows the jet dumping fuel.


A Historic Relic (Sci-fi): Sam: I laugh at the stupid Prophecizers…

#artificialintelligence

Sam: I laugh at the stupid Prophecizers who are so certain of technological Singularity, for example Kurzweil. He is a fraud or, worse, an inductivist idiot. Just plot a line of past progress against time, and extend that "exponential" line into the future. And voilà, you have Artificial General Intelligence and the fountain of youth. Chris: So you think AGI is a myth that will never happen?


MoviePass, Take Two!

The New Yorker

Stacy Spikes, the C.E.O. of MoviePass, was buying popcorn at the Angelika Film Center the other day, before a screening of Darren Aronofsky's "The Whale." "I think it's incredible," Spikes said of the film, which he'd seen already. "You don't see overweight people, you don't see addiction, you don't see a broken heart the same way as you did before." He filled a cup with Diet Coke before making his way to Theatre 1, fourth row center. Spikes is a movie nerd who has dedicated a chunk of his career to helping theatres survive the age of smartphones and streaming.