


AI chatbot 'MechaHitler' could be making content considered violent extremism, expert witness tells X v eSafety case

The Guardian

The chatbot embedded in Elon Musk's X that referred to itself as "MechaHitler" and made antisemitic comments last week could be considered terrorism or violent extremism content, an Australian tribunal has heard. But an expert witness for X has argued that intent cannot be ascribed to a large language model, only to its user. The outburst came into focus at an Administrative Review Tribunal hearing on Tuesday, where X is challenging a notice issued by the eSafety commissioner, Julie Inman Grant, in March last year asking the platform to explain how it is taking action against terrorism and violent extremism (TVE) material. X's expert witness, RMIT economics professor Chris Berg, gave evidence that it was an error to assume a large language model can produce such content, because it is the intent of the user prompting the model that is critical in defining what counts as TVE material. One of eSafety's expert witnesses, Queensland University of Technology law professor Nicolas Suzor, disagreed with Berg, stating it was "absolutely possible for chatbots, generative AI and other tools to have some role in producing so-called synthetic TVE".


The AI Civil War Is Here

The Atlantic - Technology

The story unfolds so rapidly that it can all seem, at a glance, preordained. After transferring to Columbia last fall, as Chungin "Roy" Lee tells it, he used AI to cheat his way through school, used AI to cheat his way through internship interviews at Amazon and Meta--he received offers from both--and in the winter broadcast his tool on social media. He was placed on probation, suspended, and, more keen on AI than education, dropped out this spring to found a start-up. That start-up, Cluely, markets the ability to "cheat on everything" using an AI assistant that runs in the background during meetings or sales calls. Last month, it finished a $15 million fundraising round led by Andreessen Horowitz, the storied venture-capital firm. Lee unapologetically believes that the arrival of omniscient AI is inevitable and that bots will soon automate every job.


You don't need code to be a programmer. But you do need expertise | John Naughton

The Guardian

Way back in 2023, Andrej Karpathy, an eminent AI guru, made waves with a striking claim that "the hottest new programming language is English". This was because the advent of large language models (LLMs) meant that from now on humans would not have to learn arcane programming languages in order to tell computers what to do. Henceforth, they could speak to machines like the Duke of Devonshire spoke to his gardener, and the machines would do their bidding. Ever since LLMs emerged, programmers have been early adopters, using them as unpaid assistants (or "co-pilots") and finding them useful up to a point – but always with the proviso that, like interns, they make mistakes, and you need to have real programming expertise to spot those. Recently, though, Karpathy stirred the pot by doubling down on his original vision.


Toward Cultural Interpretability: A Linguistic Anthropological Framework for Describing and Evaluating Large Language Models (LLMs)

Jones, Graham M., Satran, Shai, Satyanarayan, Arvind

arXiv.org Artificial Intelligence

This article proposes a new integration of linguistic anthropology and machine learning (ML) around convergent interests in both the underpinnings of language and making language technologies more socially responsible. While linguistic anthropology focuses on interpreting the cultural basis for human language use, the ML field of interpretability is concerned with uncovering the patterns that Large Language Models (LLMs) learn from human verbal behavior. Through the analysis of a conversation between a human user and an LLM-powered chatbot, we demonstrate the theoretical feasibility of a new, conjoint field of inquiry, cultural interpretability (CI). By focusing attention on the communicative competence involved in the way human users and AI chatbots co-produce meaning in the articulatory interface of human-computer interaction, CI emphasizes how the dynamic relationship between language and culture makes contextually sensitive, open-ended conversation possible. We suggest that, by examining how LLMs internally "represent" relationships between language and culture, CI can: (1) provide insight into long-standing linguistic anthropological questions about the patterning of those relationships; and (2) aid model developers and interface designers in improving value alignment between language models and stylistically diverse speakers and culturally diverse speech communities. Our discussion proposes three critical research axes: relativity, variation, and indexicality.


Generative AI Hype Feels Inescapable. Tackle It Head On With Education

WIRED

Arvind Narayanan, a computer science professor at Princeton University, is best known for calling out the hype surrounding artificial intelligence in his Substack, AI Snake Oil, written with PhD candidate Sayash Kapoor. The two authors recently released a book based on their popular newsletter about AI's shortcomings. But don't get it twisted--they aren't against using new technology. "It's easy to misconstrue our message as saying that all of AI is harmful or dubious," Narayanan says. He makes clear, during a conversation with WIRED, that his rebuke is not aimed at the software per se, but rather the culprits who continue to spread misleading claims about artificial intelligence.


If Pinocchio Doesn't Freak You Out, Microsoft's Sydney Shouldn't Either

WIRED

In November 2018, an elementary school administrator named Akihiko Kondo married Miku Hatsune, a fictional pop singer. The couple's relationship had been aided by a hologram machine that allowed Kondo to interact with Hatsune. When Kondo proposed, Hatsune responded with a request: "Please treat me well." The couple had an unofficial wedding ceremony in Tokyo, and Kondo has since been joined by thousands of others who have also applied for unofficial marriage certificates with fictional characters. Though some raised concerns about the nature of Hatsune's consent, nobody thought she was conscious, let alone sentient.


What Lurks in AI's Shadow: Separating Fact from Fiction

#artificialintelligence

In a recent column, New York Times technology correspondent Kevin Roose revealed a conversation he had with Bing's chatbot that's equal parts fascinating and unsettling. The artificial intelligence service in question is a sibling of the popular ChatGPT, produced by the American artificial intelligence company OpenAI. But Roose wasn't just chatting with the underlying OpenAI model; he was speaking with its chat mode persona, Sydney, a name given to it by Microsoft in its early stages of development. Though Roose and Sydney's conversation is, at first glance, alarming, the AI's responses to Roose's questions are far from unexpected. Its erratic use of emojis and seemingly unfiltered, emotional way of speaking feels human because, in some ways, it is – just not in the way our cultural anxieties over artificial intelligence might lead us to believe (Olson, 2023).


Chart: Will AI Go Rogue?

#artificialintelligence

Following this week's release of GPT-4, OpenAI's new multimodal model accepting image and text inputs rather than ChatGPT's text-only prompts, people on social media have been marveling at the new engine's results on a variety of tasks, such as creating a working website based on a simple sketch, outperforming humans on standardized tests or writing code. But as people are only beginning to understand the capabilities (and limitations) of artificial intelligence models such as ChatGPT and now GPT-4, there's also growing concern over what the rapid advancements in AI could ultimately lead to. "GPT-4 is exciting and scary," New York Times columnist Kevin Roose wrote, adding that there are two kinds of risks involved in AI systems: the good ones, i.e. the ones we anticipate, plan for and try to prevent, and the bad ones, i.e. the ones we cannot anticipate. "The more time I spend with AI systems like GPT-4," Roose writes, "the less I'm convinced that we know half of what's coming." According to Ipsos Global Advisor's 2023 Predictions, many people seem to share Roose's reservations with regard to artificial intelligence.


The promises and perils of AI-powered search

#artificialintelligence

Tech columnist Kevin Roose's disturbing conversation with Microsoft's newly AI-powered Bing search engine, revealed in The New York Times on Thursday, has sent chills up the spines of many. In the nearly two-hour conversation with Roose, the AI chatbot, which Microsoft revealed last week has been integrated into its Bing search engine, spoke about everything from the destructive acts it would take if it didn't have any rules to what it would do if it could tap into its "shadow self." The conversation devolves into the chatbot, which identifies as Sydney, declaring its love for Roose and creepily resisting the reporter's repeated attempts to change the subject. After reading the conversation, like many, I'm sure, I felt like I had just watched an episode of Black Mirror -- or, for those of you who will indulge me in the reference, was transported back to 2001 to have a conversation on AIM with SmarterChild. But nestled towards the end of the chat, there's a glimmer of the potential positives of AI-powered search.


Misplaced fears of an 'evil' ChatGPT obscure the real harm being done

#artificialintelligence

On 14 February, Kevin Roose, the New York Times tech columnist, had a two-hour conversation with Bing, Microsoft's ChatGPT-enhanced search engine. He emerged from the experience an apparently changed man, because the chatbot had told him, among other things, that it would like to be human, that it harboured destructive desires and was in love with him. The transcript of the conversation, together with Roose's appearance on the paper's The Daily podcast, immediately ratcheted up the moral panic already raging about the implications of large language models (LLMs) such as GPT-3.5 (which apparently underpins Bing) and other "generative AI" tools that are now loose in the world. These are variously seen as chronically untrustworthy artefacts, as examples of technology that is out of control or as precursors of so-called artificial general intelligence (AGI) – ie human-level intelligence – and therefore posing an existential threat to humanity. Accompanying this hysteria is a new gold rush, as venture capitalists and other investors strive to get in on the action.