
LLaMAs Have Feelings Too: Unveiling Sentiment and Emotion Representations in LLaMA Models Through Probing

Di Palma, Dario, De Bellis, Alessandro, Servedio, Giovanni, Anelli, Vito Walter, Narducci, Fedelucio, Di Noia, Tommaso

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have rapidly become central to NLP, demonstrating their ability to adapt to various tasks through prompting techniques, including sentiment analysis. However, we still have a limited understanding of how these models capture sentiment-related information. This study probes the hidden layers of Llama models to pinpoint where sentiment features are most represented and to assess how this affects sentiment analysis. Using probe classifiers, we analyze sentiment encoding across layers and scales, identifying the layers and pooling methods that best capture sentiment signals. Our results show that sentiment information is most concentrated in mid-layers for binary polarity tasks, with detection accuracy up to 14% higher than prompting techniques. Additionally, we find that in decoder-only models, the last token is not consistently the most informative for sentiment encoding. Finally, this approach enables sentiment tasks to be performed with memory requirements reduced by an average of 57%. These insights contribute to a broader understanding of sentiment in LLMs, suggesting layer-specific probing as an effective approach for sentiment tasks beyond prompting, with potential to enhance model utility and reduce memory requirements.
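The core idea of probe classifiers is to train a small linear model directly on a layer's hidden states and read off how well the layer separates the classes. A minimal sketch of that setup, using synthetic Gaussian vectors as a stand-in for real Llama activations (the dimensions, data, and training loop here are illustrative assumptions, not the paper's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for pooled hidden states of one layer: positive/negative
# examples drawn from shifted Gaussians of dimension d.
d, n = 32, 200
pos = rng.normal(0.5, 1.0, (n, d))
neg = rng.normal(-0.5, 1.0, (n, d))
X = np.vstack([pos, neg])
y = np.hstack([np.ones(n), np.zeros(n)])

def train_probe(X, y, lr=0.1, steps=500):
    """Logistic-regression probe trained with plain gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
        w -= lr * (X.T @ (p - y)) / len(y)       # gradient of log-loss
        b -= lr * np.mean(p - y)
    return w, b

w, b = train_probe(X, y)
probe_acc = float(np.mean(((X @ w + b) > 0) == y))
```

In the paper's setting, this probe would be fit once per layer (and per pooling method, e.g. last-token vs. mean pooling), and the layer with the highest probe accuracy is taken as the one where sentiment is most linearly represented.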


The Series' Second Movie Beat Citizen Kane on Rotten Tomatoes. The New One Is a Whole Different Animal.

Slate

The past decade has brought the world a lot of political and economic chaos, but in its defense, that same span of time has also given us the Paddington Bear movies. With those two London-set adventures, a mix of animation (Paddington) and live action (everyone else), director Paul King created a loopy world all his own, as cozy and visually pleasing as a dollhouse. The Paddington films were also refreshingly gentle, with moral messages that emerged not from preachy dialogue but from their ursine protagonist's unassuming goodness. And Ben Whishaw's voice performance as the unfailingly polite, naively bumbling bear is one of the all-time great matches between actor and animated character, up there with Tom Hanks' Woody in the Toy Story films: Whishaw quite simply is Paddington, and the completeness and believability of his characterization would have set the films apart even without their droll scripts and all-in supporting casts. The third film in the series, Paddington in Peru, ran a high risk of becoming a shark-jumping sequel, with King and his co-writers now replaced by first-time feature director Dougal Wilson and a new writing team consisting of Mark Burton, Jon Foster, and James Lamont.


Is Virginia Tracy the First Great American Film Critic?

The New Yorker

Indeed, many of Tracy's pieces of film criticism aren't reviews--they're movie-centered essays, in which she develops in detail her probingly comprehensive view of the art form over all. She may even be the cinema's first major theoretician. Her body of work cries out for a complete reissue in book form. Tracy, born in 1874, was the daughter of actors, and she began her career on the stage, in the eighteen-nineties. In 1909, she published a book of short stories about the lives of theatre people, "Merely Players." In her love of movies, she was fighting an uphill battle against the intellectual orthodoxies of the time, which revered theatre as a serious art form and disparaged movies as merely popular entertainment.


Rotten Tomatoes further dilutes its utility with 'Verified Hot' badge

Engadget

Rotten Tomatoes just added a new "Verified Hot" badge, which indicates an overall positive user score and will sit alongside the "Certified Fresh" badge for critic scores. To qualify for this designation, a movie or show needs to have a Verified Audience Score of 90 percent or higher. At the other end, the dregs will be slapped with a "Stale" badge, reserved for any show or movie that falls beneath 60 percent. Rotten Tomatoes is trying to get around review bombing here by mandating that user reviews come from people who actually saw the movie in question. There are a couple of little problems with this. It verifies that a consumer saw the movie via the ticketing firm Fandango, and there are plenty of other ticketing firms out there, including, you know, the theater cashier.


Seeing the Forest through the Trees: Data Leakage from Partial Transformer Gradients

Li, Weijun, Xu, Qiongkai, Dras, Mark

arXiv.org Artificial Intelligence

Recent studies have shown that distributed machine learning is vulnerable to gradient inversion attacks, where private training data can be reconstructed by analyzing the gradients of the models shared in training. Previous attacks established that such reconstructions are possible using gradients from all parameters of the entire model. However, we hypothesize that most of the involved modules, or even their sub-modules, are at risk of training data leakage, and we validate such vulnerabilities in various intermediate layers of language models. Our extensive experiments reveal that gradients from a single Transformer layer, or even a single linear component with only 0.54% of the parameters, are susceptible to training data leakage. Additionally, we show that applying differential privacy on gradients during training offers limited protection against the novel vulnerability of data disclosure.
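To see why even one linear component can leak data, note that for a layer y = Wx + b with a batch of size one, the weight gradient is a rank-1 outer product of the output error and the input, while the bias gradient equals the output error alone. A toy sketch of this classic closed-form recovery (illustrative only; the paper's attacks on real Transformer layers use iterative gradient matching, not this exact formula):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy single linear layer y = W x + b under squared-error loss.
# With batch size 1: grad_W = delta outer x and grad_b = delta,
# where delta = y - t, so the private input x falls out of any
# single row of the shared gradients.
d_in, d_out = 8, 4
W = rng.normal(size=(d_out, d_in))
b = rng.normal(size=d_out)
x = rng.normal(size=d_in)        # "private" training example
t = rng.normal(size=d_out)       # its target

delta = (W @ x + b) - t          # gradient of 0.5 * ||y - t||^2 w.r.t. y
grad_W = np.outer(delta, x)      # what the server/attacker observes
grad_b = delta

# Attack: divide a row of grad_W by the matching bias-gradient entry.
row = int(np.argmax(np.abs(grad_b)))
x_recovered = grad_W[row] / grad_b[row]
```

The recovery here is exact, which is the intuition behind the paper's finding that gradients of a small sub-module already suffice for leakage.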


DETAIL: Task DEmonsTration Attribution for Interpretable In-context Learning

Zhou, Zijian, Lin, Xiaoqiang, Xu, Xinyi, Prakash, Alok, Rus, Daniela, Low, Bryan Kian Hsiang

arXiv.org Artificial Intelligence

In-context learning (ICL) allows transformer-based language models that are pre-trained on general text to quickly learn a specific task from a few "task demonstrations" without updating their parameters, significantly boosting their flexibility and generality. ICL possesses many characteristics distinct from conventional machine learning, thereby requiring new approaches to interpret this learning paradigm. Taking the viewpoint of recent works showing that transformers learn in context by formulating an internal optimizer, we propose an influence function-based attribution technique, DETAIL, that addresses the specific characteristics of ICL. We empirically verify the effectiveness of our approach for demonstration attribution while being computationally efficient. Leveraging the results, we then show how DETAIL can help improve model performance in real-world scenarios through demonstration reordering and curation. Finally, we experimentally demonstrate the wide applicability of DETAIL by showing that attribution scores obtained on white-box models are transferable to black-box models in improving model performance.
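The "internal optimizer" viewpoint treats the demonstrations as training points of an implicit regression, which makes influence-style attribution natural: score each demonstration by how held-out error changes when it is removed. A toy leave-one-out sketch of that idea with a ridge regressor standing in for the transformer's internal optimizer (this is an illustrative analogy, not DETAIL's actual influence-function computation):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: n demonstrations as (x, y) pairs; one is corrupted.
n, d = 20, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)
y[0] += 5.0                              # corrupt demonstration 0

X_val = rng.normal(size=(50, d))         # held-out queries
y_val = X_val @ w_true

def ridge(X, y, lam=1e-2):
    """Closed-form ridge solution, the stand-in 'internal optimizer'."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def val_err(w):
    return float(np.mean((X_val @ w - y_val) ** 2))

base = val_err(ridge(X, y))

# Leave-one-out attribution: positive score => removing demo i helps.
scores = []
for i in range(n):
    keep = np.arange(n) != i
    scores.append(base - val_err(ridge(X[keep], y[keep])))

worst = int(np.argmax(scores))           # flags the corrupted demo
```

Influence functions approximate exactly this leave-one-out quantity without retraining per point, which is what makes attribution cheap enough to drive demonstration reordering and curation.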