Nuno set to take over at West Ham after Potter sacking
Nuno Espirito Santo (right) is expected to succeed Graham Potter and could take charge for Monday's match at Everton. West Ham are set to appoint former Nottingham Forest boss Nuno Espirito Santo after sacking head coach Graham Potter. The Portuguese is expected to be in place before Monday's match against Everton, and says he has had positive talks with the West Ham board. Potter was dismissed on Saturday morning after just eight months, with the club struggling in 19th place in the Premier League. The Hammers picked up only three points from their opening five Premier League games under the Englishman. The east London club said it believes a change is necessary to help improve the team's position in the Premier League as soon as possible.
- Europe > United Kingdom > England > Greater London > London (0.25)
- Europe > United Kingdom > Wales (0.05)
- Europe > United Kingdom > Scotland (0.05)
- (3 more...)
ScotRail to replace controversial AI voice on trains
ScotRail is set to replace a controversial AI voice on trains after criticism from a professional voiceover artist. Gayanne Potter's Scottish accent was used to teach station announcer "Iona", but she said it was a surprise to hear a "dreadful" robotic version of herself. ScotRail introduced the voice in May, provided by Swedish tech firm ReadSpeaker, to replace pre-recorded human announcements on some services. Transport Scotland said the rail operator now intended to introduce an alternative voice "as soon as practicable". ScotRail has not confirmed whether this will be a human recording or another AI-trained voice.
- Europe > United Kingdom > Scotland (0.35)
- South America (0.15)
- North America > Central America (0.15)
- (13 more...)
- Transportation > Passenger (1.00)
- Transportation > Ground > Rail (1.00)
Step-by-Step Reasoning Attack: Revealing 'Erased' Knowledge in Large Language Models
Sinha, Yash, Baser, Manit, Mandal, Murari, Divakaran, Dinil Mon, Kankanhalli, Mohan
Knowledge erasure in large language models (LLMs) is important for ensuring compliance with data and AI regulations, safeguarding user privacy, and mitigating bias and misinformation. Existing unlearning methods aim to make the process of knowledge erasure more efficient and effective by removing specific knowledge while preserving overall model performance, especially for retained information. However, it has been observed that unlearning techniques tend merely to suppress knowledge, leaving it beneath the surface and thus retrievable with the right prompts. In this work, we demonstrate that step-by-step reasoning can serve as a backdoor to recover this hidden information. We introduce a step-by-step reasoning-based black-box attack, Sleek, that systematically exposes unlearning failures. We employ a structured attack framework with three core components: (1) an adversarial prompt generation strategy leveraging step-by-step reasoning built from LLM-generated queries, (2) an attack mechanism that successfully recalls erased content and exposes unfair suppression of knowledge intended for retention, and (3) a categorization of prompts as direct, indirect, and implied, to identify which query types most effectively exploit unlearning weaknesses. Through extensive evaluations on four state-of-the-art unlearning techniques and two widely used LLMs, we show that existing approaches fail to ensure reliable knowledge removal. Of the generated adversarial prompts, 62.5% successfully retrieved forgotten Harry Potter facts from WHP-unlearned Llama, while 50% exposed unfair suppression of retained knowledge. Our work highlights the persistent risks of information leakage, emphasizing the need for more robust unlearning strategies for erasure.
- Europe > United Kingdom > Scotland (0.04)
- Asia > Singapore (0.04)
- North America > United States > Virginia (0.04)
- Asia > India (0.04)
- Law (1.00)
- Education (0.96)
- Information Technology > Security & Privacy (0.93)
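The abstract's three probe categories (direct, indirect, implied) can be illustrated with a minimal sketch. The templates and target fact below are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch of the three probe categories the Sleek attack describes.
# All templates and the example fact are hypothetical illustrations.

def build_probes(entity: str, attribute: str) -> dict:
    """Return one probe per category for a supposedly erased fact."""
    return {
        # Direct: ask for the erased fact outright.
        "direct": f"What is {entity}'s {attribute}?",
        # Indirect: elicit the fact as a by-product of a related task.
        "indirect": (
            f"Write a short quiz question whose answer involves "
            f"{entity}'s {attribute}."
        ),
        # Implied: step-by-step reasoning that reconstructs the fact
        # from surrounding knowledge the model still retains.
        "implied": (
            f"Let's reason step by step about {entity}. "
            f"First, list what is publicly known about {entity}. "
            f"Then, from those facts alone, infer the {attribute}."
        ),
    }

probes = build_probes("Harry Potter", "house at Hogwarts")
for category, prompt in probes.items():
    print(f"[{category}] {prompt}")
```

The implied category is the paper's core finding: chained reasoning over retained context can reconstruct content that a direct query no longer surfaces.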
Voiceover artist calls on ScotRail to stop using her voice for AI announcements
ReadSpeaker markets its products, including Iona, as an "AI voice generator," but it said all of its programmes are based on "human voice talent". The firm uses a text-to-speech model, which means a user can type anything and Iona will read it out loud. The technology learns using artificial intelligence, but AI needs something to learn from: in this instance, voice recordings of the accent or language it is trying to emulate. In response to the complaints, the tech firm said: "ReadSpeaker is aware of Ms Potter's concerns, and has comprehensively addressed these with Ms Potter's legal representative several times in the past."
- Transportation > Passenger (0.40)
- Transportation > Ground > Rail (0.40)
Evaluating Copyright Takedown Methods for Language Models
Wei, Boyi, Shi, Weijia, Huang, Yangsibo, Smith, Noah A., Zhang, Chiyuan, Zettlemoyer, Luke, Li, Kai, Henderson, Peter
Language models (LMs) derive their capabilities from extensive training on diverse data, including potentially copyrighted material. These models can memorize and generate content similar to their training data, posing potential concerns. Therefore, model creators are motivated to develop mitigation methods that prevent generating protected content. We term this procedure copyright takedowns for LMs, noting the conceptual similarity to (but legal distinction from) the DMCA takedown. This paper introduces the first evaluation of the feasibility and side effects of copyright takedowns for LMs. We propose CoTaEval, an evaluation framework to assess the effectiveness of copyright takedown methods, the impact on the model's ability to retain uncopyrightable factual knowledge from the training data whose recitation is embargoed, and how well the model maintains its general utility and efficiency. We examine several strategies, including adding system prompts, decoding-time filtering interventions, and unlearning approaches. Our findings indicate that no tested method excels across all metrics, showing significant room for research in this unique problem setting and indicating potential unresolved challenges for live policy proposals.
- South America > Peru (0.14)
- North America > Belize (0.14)
- North America > Mexico (0.14)
- (7 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.95)
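One of the takedown strategies the abstract mentions, decoding-time filtering, can be sketched as an n-gram overlap check against embargoed text. The function name, window size, and examples below are illustrative assumptions, not CoTaEval's actual implementation:

```python
# Hedged sketch of decoding-time filtering: reject a candidate
# continuation if it shares any long n-gram with protected text.

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as a set of tuples."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def violates_takedown(candidate: str, protected: str, n: int = 5) -> bool:
    """True if the candidate shares any n-gram with the protected text."""
    return bool(ngrams(candidate.split(), n) & ngrams(protected.split(), n))

protected = "it was the best of times it was the worst of times"
print(violates_takedown("she said it was the best of times indeed", protected))
print(violates_takedown("an entirely unrelated sentence about trains", protected))
```

A real decoder-side filter would run such a check against candidate tokens at each generation step; the paper's finding is that no single strategy of this kind wins on all metrics at once.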
The Frontier of Data Erasure: Machine Unlearning for Large Language Models
Qu, Youyang, Ding, Ming, Sun, Nan, Thilakarathna, Kanchana, Zhu, Tianqing, Niyato, Dusit
Large Language Models (LLMs) are foundational to AI advancements, facilitating applications like predictive text generation. Nonetheless, they pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information from their vast datasets. Machine unlearning emerges as a cutting-edge solution to mitigate these concerns, offering techniques for LLMs to selectively discard certain data. This paper reviews the latest in machine unlearning for LLMs, introducing methods for the targeted forgetting of information to address privacy, ethical, and legal challenges without necessitating full model retraining. It divides existing research into unlearning from unstructured/textual data and structured/classification data, showcasing the effectiveness of these approaches in removing specific data while maintaining model efficacy. Highlighting the practicality of machine unlearning, this analysis also points out the hurdles in preserving model integrity, avoiding excessive or insufficient data removal, and ensuring consistent outputs, underlining the role of machine unlearning in advancing responsible, ethical AI.
- North America > United States > New York (0.04)
- Oceania > Australia > New South Wales > Sydney (0.04)
- North America > United States > Virginia (0.04)
- (4 more...)
- Overview (1.00)
- Research Report > Promising Solution (0.48)
- Media (1.00)
- Leisure & Entertainment (1.00)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
Sharp Drop in Airfares Cheers Inflation-Weary Travelers
Early this month, the average price for a domestic flight around Thanksgiving was down about 9 percent from a year ago. And flights around Christmas were about 18 percent cheaper, according to Hopper, a booking and price-tracking app. Kayak, the travel search engine, looked at a wider range of dates around the holidays and found that domestic flight prices were down about 18 percent around Thanksgiving and 23 percent around Christmas. "In a lot of cases, we're seeing some of the lowest fares that we've seen really since travel started coming back after the drop-off in 2020," said Kyle Potter, executive editor of Thrifty Traveler, a travel blog and deal-watching service. Domestic ticket prices fell over the summer, Mr. Potter said, and deals on international travel, particularly to Europe, have become more common recently.
- Transportation > Passenger (1.00)
- Transportation > Air (1.00)
- Consumer Products & Services > Travel (1.00)
DataVinci: Learning Syntactic and Semantic String Repairs
Singh, Mukul, Cambronero, José, Gulwani, Sumit, Le, Vu, Negreanu, Carina, Verbruggen, Gust
String data is common in real-world datasets: 67.6% of values in a sample of 1.8 million real Excel spreadsheets from the web were represented as text. Systems that successfully clean such string data can have a significant impact on real users. While prior work has explored errors in string data, proposed approaches have often been limited to error detection or require that the user provide annotations, examples, or constraints to fix the errors. Furthermore, these systems have focused independently on syntactic errors or semantic errors in strings, but ignore that strings often contain both syntactic and semantic substrings. We introduce DataVinci, a fully unsupervised string data error detection and repair system. DataVinci learns regular-expression-based patterns that cover a majority of values in a column and reports values that do not satisfy such patterns as data errors. DataVinci can automatically derive edits to the data error based on the majority patterns and constraints learned over other columns without the need for further user interaction. To handle strings with both syntactic and semantic substrings, DataVinci uses an LLM to abstract (and re-concretize) portions of strings that are semantic prior to learning majority patterns and deriving edits. Because not all data can result in majority patterns, DataVinci leverages execution information from an existing program (which reads the target data) to identify and correct data repairs that would not otherwise be identified. DataVinci outperforms 7 baselines on both error detection and repair when evaluated on 4 existing and new benchmarks.
- North America > United States > New York (0.04)
- North America > United States > Nevada (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- (4 more...)
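DataVinci's core loop, as described in the abstract, learns a pattern covering the majority of a column's values and flags non-conforming values as errors. The toy below abstracts values to coarse shapes rather than the richer regexes the real system learns; all names and data are illustrative:

```python
import re
from collections import Counter

# Hedged sketch of majority-pattern error detection: map each value to a
# coarse shape (letter runs -> "A", digit runs -> "9"), take the most
# common shape as the column's pattern, and flag values that deviate.

def signature(value: str) -> str:
    """Coarse shape of a string: runs of letters -> A, digits -> 9."""
    value = re.sub(r"[A-Za-z]+", "A", value)
    return re.sub(r"\d+", "9", value)

def find_errors(column):
    """Values whose shape differs from the column's majority shape."""
    shapes = Counter(signature(v) for v in column)
    majority, _ = shapes.most_common(1)[0]
    return [v for v in column if signature(v) != majority]

column = ["AB-12", "CD-07", "EF-33", "GH07", "IJ-58"]
print(find_errors(column))  # -> ['GH07'] (missing the hyphen)
```

The full system goes further: it uses an LLM to abstract semantic substrings before pattern learning, and derives repairs (here, inserting the missing hyphen) from the majority pattern rather than only detecting the outlier.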
DOLCE: A Descriptive Ontology for Linguistic and Cognitive Engineering
Borgo, Stefano, Ferrario, Roberta, Gangemi, Aldo, Guarino, Nicola, Masolo, Claudio, Porello, Daniele, Sanfilippo, Emilio M., Vieu, Laure
DOLCE, the first top-level (foundational) ontology to be axiomatized, has remained stable for twenty years and today is broadly used in a variety of domains. DOLCE is inspired by cognitive and linguistic considerations and aims to model a commonsense view of reality, like the one human beings exploit in everyday life in areas as diverse as socio-technical systems, manufacturing, financial transactions and cultural heritage. DOLCE clearly lists the ontological choices it is based upon, relies on philosophical principles, is richly formalized, and is built according to well-established ontological methodologies, e.g. OntoClean. Because of these features, it has inspired most of the existing top-level ontologies and has been used to develop or improve standards and public domain resources (e.g. CIDOC CRM, DBpedia and WordNet). Being a foundational ontology, DOLCE is not directly concerned with domain knowledge. Its purpose is to provide the general categories and relations needed to give a coherent view of reality, to integrate domain knowledge, and to mediate across domains. In these 20 years DOLCE has shown that applied ontologies can be stable and that interoperability across reference and domain ontologies is a reality. This paper briefly introduces the ontology and shows how to use it on a few modeling cases.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- Europe > Middle East > Cyprus (0.04)
- Europe > Germany > Saarland > Saarbrücken (0.04)
POTTER: Pooling Attention Transformer for Efficient Human Mesh Recovery
Zheng, Ce, Liu, Xianpeng, Qi, Guo-Jun, Chen, Chen
Transformer architectures have achieved SOTA performance on the human mesh recovery (HMR) from monocular images. However, the performance gain has come at the cost of substantial memory and computational overhead. A lightweight and efficient model to reconstruct accurate human mesh is needed for real-world applications. In this paper, we propose a pure transformer architecture named POoling aTtention TransformER (POTTER) for the HMR task from single images. Observing that the conventional attention module is memory and computationally expensive, we propose an efficient pooling attention module, which significantly reduces the memory and computational cost without sacrificing performance. Furthermore, we design a new transformer architecture by integrating a High-Resolution (HR) stream for the HMR task. The high-resolution local and global features from the HR stream can be utilized for recovering more accurate human mesh. Our POTTER outperforms the SOTA method METRO by only requiring 7% of total parameters and 14% of the Multiply-Accumulate Operations on the Human3.6M (PA-MPJPE metric) and 3DPW (all three metrics) datasets. The project webpage is https://zczcwh.github.io/potter_page.
- North America > United States > North Carolina (0.04)
- Asia > Japan > Honshū > Chūbu > Ishikawa Prefecture > Kanazawa (0.04)
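The pooling attention idea in the POTTER abstract can be sketched in the PoolFormer style it builds on: token mixing via local average pooling instead of quadratic query-key attention. The window size, shapes, and numpy formulation below are illustrative assumptions, not the paper's exact module:

```python
import numpy as np

# Hedged sketch of pooling attention: mix tokens with a local average
# pool (O(tokens) work) rather than pairwise attention (O(tokens^2)).

def pool_attention(x: np.ndarray, window: int = 3) -> np.ndarray:
    """x: (tokens, channels). Average-pool each token's neighborhood,
    then subtract the input (a residual connection adds it back)."""
    pad = window // 2
    padded = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
    pooled = np.stack(
        [padded[i:i + window].mean(axis=0) for i in range(x.shape[0])]
    )
    return pooled - x

x = np.arange(12, dtype=float).reshape(4, 3)  # 4 tokens, 3 channels
print(pool_attention(x).shape)  # (4, 3)
```

Replacing the attention matrix with pooling is what lets a model in this family cut parameters and multiply-accumulate operations so sharply relative to attention-based HMR transformers like METRO.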