Collaborating Authors: fox


Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.

The Guardian

Kate Fox says Joe Ceccanti was the 'most hopeful person' before he started spending 12 hours a day with a chatbot. On 7 August, Kate Fox received a phone call that upended her life. A medical examiner said that her husband, Joe Ceccanti - who had been missing for several hours - had jumped from a railway overpass and died. Ceccanti had no history of depression, she said, nor was he suicidal - he was the "most hopeful person" she had ever known. In fact, according to witness accounts shared with Fox later, just before Ceccanti jumped, he smiled and yelled: "I'm great!" to the rail yard attendants below when they asked him if he was OK.


The Andrew Cuomo Campaign Is All in on MAGA Influencers

WIRED

With the NYC mayoral race coming to a close, Andrew Cuomo is courting right-wing creators. With only 13 days left before the New York City mayoral election, former governor Andrew Cuomo is partnering with some of the same influencers who helped President Donald Trump win the White House last year. Over the past week, right-wing creators like Logan Paul, the former vlogger turned podcaster and WWE wrestler, and Emily Austin, an influencer and sports commentator, have published content featuring Cuomo as a guest on their shows. The appearances mark a new investment by Cuomo's team in cultivating attention online as a means of competing against the social media-savvy Democratic nominee Zohran Mamdani. But instead of trying to cleave off Mamdani's online support, Cuomo appears to be trying to siphon off support from GOP nominee Curtis Sliwa.


Adaptive Computation Pruning for the Forgetting Transformer

Lin, Zhixuan, Obando-Ceron, Johan, He, Xu Owen, Courville, Aaron

arXiv.org Artificial Intelligence

The recently proposed Forgetting Transformer (FoX) incorporates a forget gate into softmax attention and has shown consistently better or on-par performance compared to the standard RoPE-based Transformer. Notably, many attention heads in FoX tend to forget quickly, causing their output at each timestep to rely primarily on local context. Based on this observation, we propose Adaptive Computation Pruning (ACP) for FoX, a method that dynamically prunes computations involving input-output dependencies that are strongly decayed by the forget gate. In particular, our method performs provably safe pruning via a dynamically set pruning threshold that guarantees the pruned attention weights are negligible. We apply ACP to language model pretraining with FoX and show it consistently reduces the number of FLOPs and memory accesses in softmax attention by around 70% across different model sizes and context lengths, resulting in a roughly 50% to 70% reduction in attention runtime (or a 2-3× speedup) and a roughly 10% to 40% increase in end-to-end training throughput. Furthermore, longer context lengths yield greater computational savings. All these speed improvements are achieved without any performance degradation. Our code is available at https://github.com/zhixuan-lin/forgetting-transformer.
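The pruning rule the abstract describes can be illustrated with a small NumPy toy: skip any query/key block whose least-decayed entry already falls below a threshold, since every attention weight in that block is then negligible after the softmax. This is a dense, loop-based sketch for intuition only, not the paper's FlashAttention kernel; the block size, threshold value, and 1/sqrt(d) scaling here are illustrative assumptions.

```python
import numpy as np

def acp_forgetting_attention(q, k, v, log_f, thresh=-20.0, block=4):
    """Toy blockwise sketch of Adaptive Computation Pruning for FoX.

    Forgetting Attention logit: s_ij = q_i.k_j/sqrt(d) + D_ij, where
    D_ij = cum[i] - cum[j] is the cumulative log forget-gate decay
    (cum = prefix sums of log f, so D_ii = 0 and D_ij <= 0 for j < i).
    A key block is skipped for a query block when even its least-decayed
    entry has D_ij < thresh, so all its post-softmax weights are negligible.
    """
    T, d = q.shape
    cum = np.cumsum(log_f)
    out = np.zeros_like(v)
    pruned = kept = 0
    for i0 in range(0, T, block):                # query blocks
        i1 = min(i0 + block, T)
        logits = np.full((i1 - i0, T), -np.inf)
        for j0 in range(0, i1, block):           # causal key blocks
            j1 = min(j0 + block, i1)
            # Least decay over the block pair: query i0, key j1-1.
            if j1 - 1 < i0 and cum[i0] - cum[j1 - 1] < thresh:
                pruned += 1
                continue                         # provably negligible block
            kept += 1
            for i in range(i0, i1):
                for j in range(j0, min(j1, i + 1)):
                    logits[i - i0, j] = q[i] @ k[j] / np.sqrt(d) + (cum[i] - cum[j])
        w = np.exp(logits - logits.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)
        out[i0:i1] = w @ v
    return out, pruned, kept
```

Because cumulative decay only grows with query-key distance, bounding the single least-decayed corner of a block bounds the whole block, which is what makes this kind of pruning provably safe.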


A Hybrid Multi-Agent Prompting Approach for Simplifying Complex Sentences

Zunjare, Pratibha, Hsiao, Michael

arXiv.org Artificial Intelligence

This paper addresses the challenge of transforming complex sentences into sequences of logical, simplified sentences while preserving semantic and logical integrity with the help of Large Language Models. We propose a hybrid approach that combines advanced prompting with multi-agent architectures to enhance the sentence simplification process. Experimental results show that our approach successfully simplified 70% of the complex sentences written for a video game design application. In comparison, a single-agent approach attained a 48% success rate on the same task. Sentence simplification is a challenging task in computational linguistics. The simplification process aims to transform complex sentences into simpler structures while preserving the original meaning. Effective sentence simplification has significant applications across numerous domains such as education, content accessibility for individuals with cognitive disabilities, automated content creation, robotics, coding, and legal documents. Traditional approaches to sentence simplification have relied on rule-based systems, statistical methods, and more recently neural network architectures [1]. Complex sentences present significant challenges in action-oriented contexts, particularly when attempting to derive executable/actionable functionality in domains such as robotics, legal documents, and video games.
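The abstract does not publish the prompts or agent roles, so the following is a purely hypothetical sketch of what a multi-agent simplification loop could look like: a splitter agent proposes simple sentences and a verifier agent accepts the draft or requests a retry. The `Agent` roles, prompt wording, and PASS/FAIL protocol are all illustrative assumptions, and the LLM backend is just a plain callable so the pipeline can be exercised with stubs.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Agent:
    """A named role with a prompt and an LLM backend (stubbed as a callable)."""
    name: str
    prompt: str
    call: Callable[[str], str]

def simplify(sentence: str, splitter: Agent, verifier: Agent,
             max_rounds: int = 3) -> List[str]:
    """Splitter proposes one simple sentence per line; verifier checks that
    meaning is preserved and answers PASS to accept (best effort otherwise)."""
    parts: List[str] = [sentence]
    for _ in range(max_rounds):
        draft = splitter.call(f"{splitter.prompt}\n{sentence}")
        parts = [p.strip() for p in draft.split("\n") if p.strip()]
        verdict = verifier.call(
            f"{verifier.prompt}\nOriginal: {sentence}\nSimplified: {draft}")
        if verdict.strip().upper().startswith("PASS"):
            break
    return parts
```

In practice each `call` would wrap a chat-completion request; separating the roles is what lets the verifier catch drafts that drop or distort a clause before they are accepted.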


Forgetting Transformer: Softmax Attention with a Forget Gate

Lin, Zhixuan, Nikishin, Evgenii, He, Xu Owen, Courville, Aaron

arXiv.org Artificial Intelligence

An essential component of modern recurrent sequence models is the forget gate. While Transformers do not have an explicit recurrent form, we show that a forget gate can be naturally incorporated into Transformers by down-weighting the unnormalized attention scores in a data-dependent way. We name this attention mechanism the Forgetting Attention and the resulting model the Forgetting Transformer (FoX). We show that FoX outperforms the Transformer on long-context language modeling, length extrapolation, and short-context downstream tasks, while performing on par with the Transformer on long-context downstream tasks. Moreover, it is compatible with the FlashAttention algorithm and does not require any positional embeddings. Several analyses, including the needle-in-the-haystack test, show that FoX also retains the Transformer's superior long-context capabilities over recurrent sequence models such as Mamba-2, HGRN2, and DeltaNet. We also introduce a "Pro" block design that incorporates some common architectural components in recurrent sequence models and find it significantly improves the performance of both FoX and the Transformer. Our code is available at https://github.com/zhixuan-lin/forgetting-transformer.
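The mechanism the abstract names admits a compact dense sketch: the unnormalized score q_i·k_j is shifted by the cumulative log forget gate before the softmax. Single head only, no FlashAttention; the 1/sqrt(d) scaling and the causal mask are conventional assumptions here rather than details taken from the paper.

```python
import numpy as np

def forgetting_attention(q, k, v, log_f):
    """Minimal dense sketch of Forgetting Attention.

    Each score is down-weighted by D_ij = sum_{l=j+1..i} log f_l, the
    cumulative log forget gate with f_l in (0, 1], so older keys decay
    in a data-dependent way and no positional embedding is needed.
    """
    T, d = q.shape
    cum = np.cumsum(log_f)                      # prefix sums of log f
    decay = cum[:, None] - cum[None, :]         # D_ij = cum[i] - cum[j]
    logits = q @ k.T / np.sqrt(d) + decay
    causal = np.tril(np.ones((T, T), dtype=bool))
    logits = np.where(causal, logits, -np.inf)  # standard causal mask
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ v
```

Two sanity checks fall out of the formula: with f_l = 1 everywhere (log f = 0) the decay vanishes and this reduces to ordinary causal softmax attention, while with f_l near 0 only the diagonal survives and each output collapses to its own value vector.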


A Longitudinal Analysis of Racial and Gender Bias in New York Times and Fox News Images and Articles

Ibrahim, Hazem, AlDahoul, Nouar, Abbasi, Syed Mustafa Ali, Zaffar, Fareed, Rahwan, Talal, Zaki, Yasir

arXiv.org Artificial Intelligence

The manner in which different racial and gender groups are portrayed in news coverage plays a large role in shaping public opinion. As such, understanding how such groups are portrayed in news media is of notable societal value, and has thus been a significant endeavour in both the computer and social sciences. Yet, the literature still lacks a longitudinal study examining both the frequency of appearance of different racial and gender groups in online news articles, as well as the context in which such groups are discussed. To fill this gap, we propose two machine learning classifiers to detect the race and age of a given subject. Next, we compile a dataset of 123,337 images and 441,321 online news articles from the New York Times (NYT) and Fox News (Fox), and examine representation through two computational approaches. Firstly, we examine the frequency and prominence of appearance of racial and gender groups in images embedded in news articles, revealing that racial and gender minorities are largely under-represented, and when they do appear, they are featured less prominently compared to majority groups. Furthermore, we find that NYT features far more images of racial minority groups than Fox. Secondly, we examine both the frequency and context with which racial minority groups are presented in article text. This reveals the narrow scope in which certain racial groups are covered and the frequency with which different groups are presented as victims and/or perpetrators in a given conflict. Taken together, our analysis contributes to the literature by providing two novel open-source classifiers to detect race and age from images, and by shedding light on the racial and gender biases in news articles from venues on opposite ends of the American political spectrum.


Robot disguised as a coyote or fox will scare wildlife away from runways at Alaska airport

FOX News

ANCHORAGE, Alaska (AP) -- A headless robot about the size of a Labrador retriever will be camouflaged as a coyote or fox to ward off migratory birds and other wildlife at Alaska's second-largest airport, a state agency said. The Alaska Department of Transportation and Public Facilities has named the new robot Aurora and said it will be based at the Fairbanks airport to "enhance and augment safety and operations," the Anchorage Daily News reported. The transportation department released a video of the robot climbing rocks, going up stairs and doing something akin to dancing while flashing green lights.


How Google's Antitrust Trial Could Change Internet Search

TIME - Tech

In the ongoing court battle between Google and the U.S. Justice Department over whether the company has violated antitrust law, the stakes are high. The outcome of the 10-week trial, which will be decided by U.S. District Judge Amit Mehta, could fundamentally change the way people search the internet and reduce revenue for the company behind the most widely used search engine. The civil antitrust lawsuit is the first to go to trial in a series of cases targeting other big tech companies like Meta and Amazon. But this particular suit, brought by the Justice Department and eleven states, alleges that Google illegally monopolizes search engine services, spending billions to make itself the default engine through which advertising companies and website publishers purchase and sell ads. "The question is whether [Google] is entrenching its monopoly and closing off avenues for competitors to try to develop a competitive search engine," says Eleanor Fox, professor at New York University School of Law.


Changing agents and ascribing beliefs in dynamic epistemic logic

Singh, Shikha, Lodaya, Kamal, Khemani, Deepak

arXiv.org Artificial Intelligence

In dynamic epistemic logic (Van Ditmarsch, Van Der Hoek, & Kooi, 2008) it is customary to use an action frame (Baltag & Moss, 2004; Baltag, Moss, & Solecki, 1998) to describe different views of a single action. In this article, action frames are extended to add or remove agents, we call these agent-update frames. This can be done selectively so that only some specified agents get information of the update, which can be used to model several interesting examples such as private update and deception, studied earlier by Baltag and Moss (2004); Sakama (2015); Van Ditmarsch, Van Eijck, Sietsma, and Wang (2012). The product update of a Kripke model by an action frame is an abbreviated way of describing the transformed Kripke model which is the result of performing the action. This is substantially extended to a sum-product update of a Kripke model by an agent-update frame in the new setting. These ideas are applied to an AI problem of modelling a story. We show that dynamic epistemic logics, with update modalities now based on agent-update frames, continue to have sound and complete proof systems. Decision procedures for model checking and satisfiability have expected complexity. For a sublanguage, there are polynomial space algorithms.
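The product update mentioned above is a compact, standard construction, which makes it easy to state in code. The sketch below implements the classical Baltag-Moss product update of a Kripke model by an action frame, not the paper's sum-product update by agent-update frames, which extends it; the tuple-based data representation is an assumption chosen for brevity.

```python
from itertools import product

def product_update(model, action):
    """Classical Baltag-Moss product update (textbook sketch).

    model  = (worlds, rel, val): rel[a] is a set of world pairs for agent a,
             val maps each world to the set of atoms true there.
    action = (events, arel, pre): arel[a] is a set of event pairs,
             pre[e] is a predicate on worlds (the precondition of event e).
    """
    worlds, rel, val = model
    events, arel, pre = action
    # New worlds: pairs (w, e) where w satisfies the precondition of e.
    new_worlds = [(w, e) for w, e in product(worlds, events) if pre[e](w)]
    # (w, e) R_a (w', e')  iff  w R_a w'  and  e R_a e'.
    new_rel = {a: {((w, e), (w2, e2))
                   for (w, e) in new_worlds for (w2, e2) in new_worlds
                   if (w, w2) in rel[a] and (e, e2) in arel[a]}
               for a in rel}
    # Valuation is inherited from the world component.
    new_val = {(w, e): val[w] for (w, e) in new_worlds}
    return new_worlds, new_rel, new_val
```

For example, publicly announcing p (a single event with precondition p) in a two-world model where only one world satisfies p leaves a single world in the updated model, as expected.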


Understanding TF-IDF in NLP: A Comprehensive Guide

#artificialintelligence

Natural Language Processing (NLP) is an area of computer science that focuses on the interaction between human language and computers. One of the fundamental tasks of NLP is to extract relevant information from large volumes of unstructured data. In this article, we will explore one of the most popular techniques used in NLP called TF-IDF. TF-IDF is a numerical statistic that reflects the importance of a word in a document. It is commonly used in NLP to represent the relevance of a term to a document or a corpus of documents.
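The statistic described above can be computed from scratch in a few lines. This sketch uses the plain tf × log(N/df) variant; libraries such as scikit-learn's TfidfVectorizer apply smoothing and normalization, so their exact scores differ slightly.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Per-document TF-IDF scores for a list of raw text documents.

    tf(t, d) = count of t in d / number of terms in d
    idf(t)   = log(N / df(t)), where df(t) is the number of
               documents containing t and N is the corpus size.
    """
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    df = Counter(t for doc in tokenized for t in set(doc))
    scores = []
    for doc in tokenized:
        counts = Counter(doc)
        scores.append({t: (c / len(doc)) * math.log(n / df[t])
                       for t, c in counts.items()})
    return scores
```

Note how a term that appears in every document (such as "the") gets idf = log(N/N) = 0, so it is scored as carrying no discriminative information, which is exactly the behaviour that makes TF-IDF useful for ranking terms by relevance.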