
Collaborating Authors

Henderson


Ben & Jerry's row deepens as three board members removed

BBC News

Three members of Ben & Jerry's independent board will no longer be eligible to serve in their roles after the ice cream company introduced a new set of governance practices, including a nine-year limit on board members' terms. Chair Anuradha Mittal, who had earlier said she had no plans to resign under pressure, is among those affected. The move was criticised by the company's co-founder Ben Cohen, who called it a blatant power grab designed to strip the board of legal authority and independence. His remarks are the latest in a long-running row between Ben & Jerry's and its owner over the Cherry Garcia maker's social activism and the continued independence of its board.


A Physics-Informed Fixed Skyroad Model for Continuous UAS Traffic Management (C-UTM)

Zahed, Muhammad Junayed Hasan, Rastgoftar, Hossein

arXiv.org Artificial Intelligence

Abstract: Unlike traditional multi-agent coordination frameworks, which assume a fixed number of agents, UAS traffic management (UTM) requires a platform that enables Uncrewed Aerial Systems (UAS) to freely enter or exit constrained low-altitude airspace. Consequently, the number of UAS operating in a given region is time-varying, with vehicles dynamically joining or leaving even in dense, obstacle-laden environments. The primary goal of this paper is to develop a computationally efficient management system that maximizes airspace usability while ensuring safety and efficiency. To achieve this, we first introduce physics-informed methods to structure fixed skyroads across multiple altitude layers of urban airspace, with the directionality of each skyroad designed to guarantee full reachability. We then present a novel Continuous UTM (C-UTM) framework that optimally allocates skyroads to UAS requests while accounting for the time-varying capacity of the airspace. Collectively, the proposed model addresses the key challenges of low-altitude UTM by providing a scalable, safe, and efficient solution for urban airspace usability.
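The capacity-aware allocation step can be illustrated with a toy sketch. This is a minimal greedy stand-in, not the paper's actual optimization: the function name `allocate_skyroad`, the per-layer slot counts, and the request format are all illustrative assumptions.

```python
def allocate_skyroad(requests, capacity):
    """Toy capacity-aware skyroad allocation.

    Each request is (uas_id, preferred_layers): a UAS asks for altitude
    layers in order of preference. 'capacity' maps layer -> remaining
    slot count, which is time-varying as vehicles join and leave.
    A request with no feasible layer is rejected (assigned None).
    """
    assignments = {}
    for uas_id, preferred in requests:
        for layer in preferred:
            if capacity.get(layer, 0) > 0:
                capacity[layer] -= 1  # consume one slot on that skyroad layer
                assignments[uas_id] = layer
                break
        else:  # no preferred layer had remaining capacity
            assignments[uas_id] = None
    return assignments
```

Freed capacity would be returned to the pool when a UAS exits the airspace, which is what makes the management problem continuous rather than one-shot.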


The Viral 'DoorDash Girl' Saga Unearthed a Nightmare for Black Creators

WIRED

A delivery driver posted a TikTok alleging she had been sexually assaulted by a customer. The deepfakes that followed reveal a growing digital blackface problem. When DoorDash delivery driver Livie Rose Henderson posted a video alleging that one of her customers sexually assaulted her in October, it set off a firestorm of reactions. Henderson's TikTok claimed that when she was dropping off a delivery in Oswego, New York, she found a customer's front door wide open and, inside, a man on the couch with his pants and underwear pulled down to his ankles. Henderson was dubbed the "DoorDash Girl," and her video accrued tens of millions of views, including some supportive and consoling responses to what she said she had endured on the job as a young woman.


OpenAI has finally released open-weight language models

MIT Technology Review

"The vast majority of our [enterprise and startup] customers are already using a lot of open models," said Casey Dvorak, a research program manager at OpenAI, in a media briefing about the model release. "Because there is no [competitive] open model from OpenAI, we wanted to plug that gap and actually allow them to use our technology across the board." The new models come in two different sizes, the smaller of which can theoretically run on 16 GB of RAM, the minimum amount that Apple currently offers on its computers. The larger model requires a high-end laptop or specialized hardware. Open models have a few key use cases.


Stress-testing Machine Generated Text Detection: Shifting Language Models Writing Style to Fool Detectors

Pedrotti, Andrea, Papucci, Michele, Ciaccio, Cristiano, Miaschi, Alessio, Puccetti, Giovanni, Dell'Orletta, Felice, Esuli, Andrea

arXiv.org Artificial Intelligence

Recent advancements in Generative AI and Large Language Models (LLMs) have enabled the creation of highly realistic synthetic content, raising concerns about the potential for malicious use, such as misinformation and manipulation. Moreover, detecting Machine-Generated Text (MGT) remains challenging due to the lack of robust benchmarks that assess generalization to real-world scenarios. In this work, we present a pipeline to test the resilience of state-of-the-art MGT detectors (e.g., Mage, Radar, LLM-DetectAIve) to linguistically informed adversarial attacks. To challenge the detectors, we fine-tune language models using Direct Preference Optimization (DPO) to shift the MGT style toward human-written text (HWT). This exploits the detectors' reliance on stylistic cues, making new generations more challenging to detect. Additionally, we analyze the linguistic shifts induced by the alignment and which features detectors rely on to identify MGT. Our results show that detectors can be easily fooled with relatively few examples, resulting in a significant drop in detection performance. This highlights the importance of improving detection methods and making them robust to unseen in-domain texts.
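The DPO objective at the heart of this attack can be written down compactly. The sketch below assumes the standard single-pair DPO loss, with human-written text as the "chosen" completion and machine-generated text as the "rejected" one; the function name `dpo_loss` and the log-probability inputs are illustrative, not from the paper.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Here 'chosen' = human-written text (HWT) and 'rejected' = the model's
    own machine-generated text (MGT), so minimizing this loss pushes the
    policy's writing style toward HWT relative to the frozen reference.
    """
    policy_margin = policy_chosen_logp - policy_rejected_logp
    ref_margin = ref_chosen_logp - ref_rejected_logp
    logits = beta * (policy_margin - ref_margin)
    # -log(sigmoid(x)), written stably as log(1 + exp(-x))
    return math.log1p(math.exp(-logits))
```

The loss shrinks as the policy assigns the human-written completion a larger log-probability margin than the reference model does, which is exactly the stylistic shift the detectors are then stress-tested against.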


Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

The paper tackles (constituent) syntactic parsing by mapping this prediction problem to a sequence-to-sequence alignment problem, and then essentially applying a method recently developed in the context of neural machine translation (LSTM encoder-decoder with an attention mechanism). The resulting parsing model achieves state-of-the-art results when used in the standard supervised set-up (PTB WSJ) and improves further when estimated in a semi-supervised / co-training regime. What I find especially interesting in this paper is that the attention mechanism is crucial for attaining good generalization properties: without the attention mechanism, the LSTM achieves very poor results in the supervised setting. This is an interesting observation which may in principle motivate future work on refining the attention model (e.g., moving further in the direction of the Neural Turing Machines of Graves et al.). It is also somewhat surprising that such a simple linearization strategy led to state-of-the-art performance.


Patient with paralysis uses mind to pilot virtual quadcopter

Popular Science

Multiple brain-computer interface (BCI) projects are currently underway, but BrainGate is one of the first aimed at motor restoration in users affected by neurodegenerative disorders and spinal cord injuries. Researchers have spent years working through the device's clinical trial phases, but their most recent breakthrough isn't focused on physical accomplishments. Instead, the latest achievements could pave the way for people with disabilities to more easily utilize complex computer software, communicate with loved ones, work remotely, and even make music. According to a study published by BrainGate engineers on January 20 in the journal Nature Medicine, a volunteer with quadriplegia can now maintain unprecedented control over a virtual object using their surgically implanted BrainGate BCI device. To demonstrate the ability, the patient guided a virtual quadcopter through hoops in a digital obstacle course by simply thinking about moving the fingers on one of their hands.


CSSL: Contrastive Self-Supervised Learning for Dependency Parsing on Relatively Free Word Ordered and Morphologically Rich Low Resource Languages

Ray, Pretam, Sandhan, Jivnesh, Krishna, Amrith, Goyal, Pawan

arXiv.org Artificial Intelligence

Neural dependency parsing has achieved remarkable performance for low-resource, morphologically rich languages. It is also well established that morphologically rich languages exhibit relatively free word order. This prompts a fundamental question: is there a way to enhance dependency parsing performance, making the model robust to word order variations, by exploiting the relatively free word order of morphologically rich languages? In this work, we examine the robustness of graph-based parsing architectures on 7 relatively free word order languages. We focus on scrutinizing essential modifications, such as data augmentation and the removal of position encoding, required to adapt these architectures accordingly. To this end, we propose a contrastive self-supervised learning method to make the model robust to word order variations. Our proposed modification demonstrates a substantial average gain of 3.03/2.95 points on the UAS/LAS metrics across the 7 relatively free word order languages when compared to the best-performing baseline.
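The core idea, treating a word-order permutation of a sentence as a positive view of itself, can be sketched with a standard InfoNCE-style contrastive loss. The toy vectors below stand in for sentence encodings from the parser's encoder; the function names and the specific loss form are illustrative assumptions, not the paper's exact formulation.

```python
import math
import random

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def permute_words(tokens, rng):
    """Word-order augmentation: shuffle the tokens of one sentence.

    For a relatively free word order language, the permuted sentence is
    still grammatical, so it serves as a positive view of the original.
    """
    out = tokens[:]
    rng.shuffle(out)
    return out

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Pull the permuted-order view toward the original sentence's
    embedding; push embeddings of other sentences away."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_z)
```

Training with this loss encourages the encoder to map a sentence and its reordered variants to nearby points, which is what makes the downstream graph-based parser insensitive to word-order variation.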


Shh, ChatGPT. That's a Secret.

The Atlantic - Technology

This past spring, a man in Washington State worried that his marriage was on the verge of collapse. "I am depressed and going a little crazy, still love her and want to win her back," he typed into ChatGPT. With the chatbot's help, he wanted to write a letter protesting her decision to file for divorce and post it to their bedroom door. "Emphasize my deep guilt, shame, and remorse for not nurturing and being a better husband, father, and provider," he wrote. In another message, he asked ChatGPT to write his wife a poem "so epic that it could make her change her mind but not cheesy or over the top." The man's chat history was included in the WildChat data set, a collection of 1 million ChatGPT conversations gathered consensually by researchers to document how people are interacting with the popular chatbot.


GADePo: Graph-Assisted Declarative Pooling Transformers for Document-Level Relation Extraction

Coman, Andrei C., Theodoropoulos, Christos, Moens, Marie-Francine, Henderson, James

arXiv.org Artificial Intelligence

Document-level relation extraction typically relies on text-based encoders and hand-coded pooling heuristics to aggregate information learned by the encoder. In this paper, we leverage the intrinsic graph processing capabilities of the Transformer model and propose replacing hand-coded pooling methods with new tokens in the input, which are designed to aggregate information via explicit graph relations in the computation of attention weights. We introduce a joint text-graph Transformer model and a graph-assisted declarative pooling (GADePo) specification of the input, which provides explicit and high-level instructions for information aggregation. GADePo allows the pooling process to be guided by domain-specific knowledge or desired outcomes but still learned by the Transformer, leading to more flexible and customisable pooling strategies. We evaluate our method across diverse datasets and models and show that our approach yields promising results that are consistently better than those achieved by the hand-coded pooling functions.
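The mechanism of replacing a hand-coded pooling heuristic with an aggregation token whose attention is restricted by explicit graph relations can be sketched as masked attention pooling. This is a simplified single-head stand-in, assuming at least one allowed position; the function name `masked_attention_pool` and the raw-score inputs are illustrative, not the GADePo implementation.

```python
import math

def masked_attention_pool(token_vecs, scores, allowed):
    """Aggregate token vectors into one pooled vector.

    A special aggregation token attends over the sequence, but an
    explicit graph relation ('allowed') masks out every position it is
    not connected to, e.g. all tokens except an entity's mentions.
    Assumes at least one position is allowed.
    """
    masked = [s if ok else float('-inf') for s, ok in zip(scores, allowed)]
    m = max(s for s in masked if s != float('-inf'))  # stable softmax
    exps = [math.exp(s - m) if s != float('-inf') else 0.0 for s in masked]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(token_vecs[0])
    return [sum(w * v[d] for w, v in zip(weights, token_vecs))
            for d in range(dim)]
```

Because the attention weights over the allowed positions remain learned, the pooling is guided by the declared graph but still trained end-to-end with the Transformer, rather than being fixed like mean- or max-pooling heuristics.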