Elon Musk Said Grok's Roasts Would Be 'Epic' at Parties--So I Tried It on My Coworkers

WIRED

It went about as well as you'd expect. We can debate the worthiness of Elon Musk's accomplishments--building up Tesla, hollowing out the government, shooting for Mars--but we can all agree that his insistence on being seen as funny is his most grating quality. From the constant 4:20 references to his quote-tweet "dunks" to awarding "Certified Bangers" badges to silly X posts, Musk's desperation for validation knows no bounds. It can get pretty annoying when the richest guy on earth makes a joke and then awkwardly eyes the room, waiting for everyone to laugh. But over the weekend, I was intrigued when a clip emerged of Musk telling Joe Rogan that using Grok's Unhinged Mode to deliver an "epic vulgar roast" is a surefire way to "make people really laugh at a party."


Coffee's delicious journey from tiny bean to tasty brew

Popular Science

Whether you're an early bird or a night owl, coffee is probably part of your daily routine. Since 2004, the number of American adults who enjoy a daily cup of joe has jumped 37 percent, reaching the highest level in more than 20 years, according to the National Coffee Association. But coffee is hardly a new invention.


A Book App Used AI to 'Roast' Its Users. It Went Anti-Woke Instead

WIRED

Fable, a popular social media app that describes itself as a haven for "bookworms and bingewatchers," created an AI-powered end-of-year summary feature recapping what books users read in 2024. It was meant to be playful and fun, but some of the recaps took on an oddly combative tone. Writer Danny Groves's summary, for example, asked if he's "ever in the mood for a straight, cis white man's perspective" after labeling him a "diversity devotee." Books influencer Tiana Trammell's summary, meanwhile, ended with the following advice: "Don't forget to surface for the occasional white author, OK?" Trammell was flabbergasted, and she soon realized she wasn't alone after sharing her experience with Fable's summaries on Threads.

[Image: a reader summary as shown on the 2024 stats page of the Fable app.]


Instagram users are asking ChatGPT to 'roast' their profiles with hilarious results - here's how you can try it

Daily Mail - Science & tech

If you're feeling smug about your perfectly curated Instagram profile, it's time to get humbled. The latest social media trend sees Instagram users asking ChatGPT to 'roast' their profiles. The AI chatbot doesn't hold back with its critiques, with one user claiming they 'just got dragged to hell'. So, are you brave enough to have your Instagram profile picked apart by a robot? Here's how you can try the hilarious new trend.


ROAST: Review-level Opinion Aspect Sentiment Target Joint Detection

Chebolu, Siva Uday Sampreeth, Dernoncourt, Franck, Lipka, Nedim, Solorio, Thamar

arXiv.org Artificial Intelligence

Aspect-Based Sentiment Analysis (ABSA) has experienced tremendous expansion and diversification thanks to shared tasks spanning several languages and domains, organized via the SemEval and GermEval workshops. Nonetheless, a few shortcomings still need to be addressed, such as the lack of low-resource-language evaluations and the emphasis on sentence-level analysis. To thoroughly assess ABSA techniques in the context of complete reviews, this research presents a novel task, Review-Level Opinion Aspect Sentiment Target (ROAST) joint detection. ROAST seeks to close the gap between sentence-level and text-level ABSA by identifying every ABSA constituent at the review level. We extend the available datasets to enable ROAST, addressing the drawbacks noted in previous research by incorporating multiple languages, including low-resource ones, and a variety of domains. Through this effort, ABSA research will be able to cover more ground and gain a deeper understanding of the task and its practical application across languages and domains (https://github.com/RiTUAL-UH/ROAST-ABSA).
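To make the task concrete, here is a minimal Python sketch of what a review-level annotation might look like under a ROAST-style formulation. The tuple fields (target span, aspect category, polarity) and the example review are illustrative assumptions, not the dataset's actual schema, which is documented in the linked repository.

```python
# Minimal sketch of review-level ABSA joint detection in the spirit of
# ROAST. The tuple structure below is an assumption for illustration;
# see https://github.com/RiTUAL-UH/ROAST-ABSA for the real schema.
from dataclasses import dataclass


@dataclass
class OpinionTuple:
    target: str     # opinion target span as it appears in the review
    aspect: str     # aspect category, e.g. "FOOD#QUALITY" (assumed label set)
    sentiment: str  # polarity: "positive", "negative", or "neutral"


review = (
    "The espresso was rich and balanced, but we waited forty minutes "
    "for a table. Still, the staff apologized and comped dessert."
)

# Review-level joint detection asks a model to emit every tuple for the
# whole review at once, rather than labeling one sentence at a time.
expected = [
    OpinionTuple("espresso", "FOOD#QUALITY", "positive"),
    OpinionTuple("table", "SERVICE#WAIT_TIME", "negative"),
    OpinionTuple("staff", "SERVICE#GENERAL", "positive"),
]

for t in expected:
    print(f"{t.target!r:12} {t.aspect:20} {t.sentiment}")
```

Note how the second tuple depends on context from a different clause than the third; capturing such cross-sentence structure is precisely what sentence-level ABSA misses.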


In defense of parameter sharing for model-compression

Desai, Aditya, Shrivastava, Anshumali

arXiv.org Artificial Intelligence

When considering a model architecture, there are several ways to reduce its memory footprint. Historically, popular approaches included selecting smaller architectures and creating sparse networks through pruning. More recently, randomized parameter-sharing (RPS) methods have gained traction for model compression at the start of training. In this paper, we comprehensively assess the trade-off between memory and accuracy across RPS, pruning techniques, and building smaller models. Our findings demonstrate that RPS, which is both data- and model-agnostic, consistently outperforms or matches smaller models and all moderately informed pruning strategies, such as MAG, SNIP, SYNFLOW, and GRASP, across the entire compression range. This advantage becomes particularly pronounced in higher compression scenarios. Notably, even when compared to highly informed pruning techniques like Lottery Ticket Rewinding (LTR), RPS exhibits superior performance in high-compression settings. This points to an inherent capacity advantage that RPS enjoys over sparse models. Theoretically, we establish RPS as a superior technique in terms of memory-efficient representation when compared to pruning for linear models. This paper argues in favor of a paradigm shift towards RPS-based models. During our rigorous evaluation of RPS, we identified issues in the state-of-the-art RPS technique ROAST, specifically regarding stability (ROAST's sensitivity to initialization hyperparameters, often leading to divergence) and Pareto-continuity (ROAST's inability to recover the accuracy of the original model at zero compression). We provably address both of these issues. We refer to the modified RPS, which incorporates our improvements, as STABLE-RPS.
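For readers unfamiliar with RPS, the following minimal NumPy sketch illustrates the underlying idea in its HashedNet-style form: a large "virtual" weight matrix is materialized on the fly from a much smaller shared parameter bank via a fixed random mapping. The hash construction and sign trick below are illustrative assumptions; STABLE-RPS refines this scheme rather than matching it exactly.

```python
# A minimal sketch of randomized parameter sharing (RPS), HashedNet-style:
# each virtual weight coordinate hashes to a slot in a small shared bank.
# The specific hash below is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)


class HashedLinear:
    def __init__(self, in_dim, out_dim, bank_size, layer_id=0):
        # Shared bank: the only trainable parameters for this layer.
        self.bank = rng.standard_normal(bank_size) * 0.01
        # Fixed random mapping from each flat coordinate to a bank slot,
        # plus a random +/-1 sign to decorrelate shared entries.
        coords = np.arange(in_dim * out_dim) + layer_id * 1_000_003
        self.idx = (coords * 2654435761) % bank_size   # hash to a slot
        self.sign = np.where((coords * 40503) % 2 == 0, 1.0, -1.0)
        self.shape = (in_dim, out_dim)

    def weight(self):
        # Materialize the virtual weight matrix from the shared bank.
        return (self.bank[self.idx] * self.sign).reshape(self.shape)

    def __call__(self, x):
        return x @ self.weight()


# A 64x32 virtual weight matrix (2048 values) backed by only 256 real
# parameters, i.e. an 8x compression of this layer:
layer = HashedLinear(64, 32, bank_size=256)
y = layer(rng.standard_normal((4, 64)))
print(y.shape)  # (4, 32)
```

Because the mapping is fixed and pseudorandom, the only trainable state is the bank itself, which is what makes the approach data- and model-agnostic: the compression ratio is set purely by the bank size, independent of the architecture.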


Efficient model compression with Random Operation Access Specific Tile (ROAST) hashing

Desai, Aditya, Zhou, Keren, Shrivastava, Anshumali

arXiv.org Artificial Intelligence

Advancements in deep learning are often associated with increasing model sizes. Model size dramatically affects the deployment cost and latency of deep models. For instance, models like BERT cannot be deployed on edge devices and mobiles due to their sheer size. As a result, most advances in deep learning are yet to reach the edge. Model compression has received much-deserved attention in the literature across the natural language processing, vision, and recommendation domains. This paper proposes a model-agnostic, cache-friendly model compression approach: Random Operation Access Specific Tile (ROAST) hashing. ROAST collapses the parameters by clubbing them together through a lightweight mapping. Notably, while clubbing these parameters, ROAST exploits cache hierarchies by aligning the memory access pattern with the parameter access pattern. ROAST is up to $\sim 25\times$ faster to train and $\sim 50\times$ faster to infer than the popular parameter-sharing method HashedNet. Additionally, ROAST introduces global weight sharing, which is empirically and theoretically superior to the local weight sharing in HashedNet, and may be of independent interest in itself. With ROAST, we present the first compressed BERT, which is $100\times$-$1000\times$ smaller but does not suffer quality degradation. These compression levels on a universal architecture like the transformer are promising for the future of SOTA model deployment on resource-constrained devices such as mobile and edge devices.
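The cache-friendliness claim is easiest to see in a sketch. The following minimal NumPy example illustrates the tile-based lookup idea: rather than hashing every parameter independently as HashedNet does, whole contiguous tiles are fetched from one global shared array, so each lookup reads consecutive memory, and all layers draw from the same bank (global weight sharing). The tile size and hash function here are illustrative assumptions, not the exact ROAST construction.

```python
# A minimal sketch of tile-based hashed parameter lookup in the spirit
# of ROAST: tiles of contiguous elements are read from one global bank,
# so memory accesses are sequential within each tile. Tile size and the
# hash are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

BANK = rng.standard_normal(4096) * 0.01  # one global shared parameter array
TILE = 64                                # elements fetched per lookup


def roast_weights(n_params, seed):
    """Materialize n_params virtual weights as hashed contiguous tiles."""
    n_tiles = -(-n_params // TILE)       # ceiling division
    # Each tile hashes to a start offset in the bank; the TILE elements
    # after that offset are read as one contiguous block.
    starts = ((np.arange(n_tiles) + seed) * 2654435761) % (len(BANK) - TILE)
    tiles = np.stack([BANK[s:s + TILE] for s in starts])
    return tiles.reshape(-1)[:n_params]


# Two layers of different shapes drawing on the same global bank
# (global weight sharing across the whole model):
w1 = roast_weights(128 * 64, seed=1).reshape(128, 64)
w2 = roast_weights(64 * 10, seed=2).reshape(64, 10)
x = rng.standard_normal((4, 128))
print((x @ w1 @ w2).shape)  # (4, 10)
```

Contrast this with the per-element hashing in the earlier RPS sketch, where consecutive virtual weights land at scattered bank slots: fetching tiles instead aligns the memory access pattern with the parameter access pattern, which is where the reported training and inference speedups over HashedNet come from.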