He Leaked the Secrets of a Southeast Asian Scam Compound. Then He Had to Get Out Alive
A source trapped inside an industrial-scale scamming operation contacted me, determined to expose his captors' crimes--and then escape. It was a perfect June evening in New York when I received my first email from the source who would ask me to call him Red Bull. He was writing from hell, 8,000 miles away. A summer shower had left a rainbow over my Brooklyn neighborhood, and my two children were playing in a kiddie pool on the roof of our apartment building. Now the sun was setting, while I--in typical 21st-century parenting fashion, forgive me--compulsively scrolled through every app on my phone. The message had no subject line and came from an address on the encrypted email service Proton Mail: "vaultwhistle@proton.me." "I'm currently working inside a major crypto romance scam operation based in the Golden Triangle," it began. "I am a computer engineer being forced to work here under a contract." "I've collected internal evidence of how the scam works--step by step," the message ...
- North America > United States > New York (0.24)
- Asia > Laos (0.05)
- Asia > Southeast Asia (0.04)
- (14 more...)
Elon Musk Said Grok's Roasts Would Be 'Epic' at Parties--So I Tried It on My Coworkers
It went about as well as you'd expect. We can debate the worthiness of Elon Musk's accomplishments--building up Tesla, hollowing out the government, shooting for Mars--but we can all agree that his insistence on being seen as funny is his most grating quality. From the constant 4:20 references to his quote tweet "dunks" to awarding "Certified Bangers" badges to silly X posts, Musk's desperation for validation knows no bounds. It can get pretty annoying when the richest guy on earth makes a joke and then awkwardly eyes the room waiting for everyone to laugh. But over the weekend, I was intrigued when a clip emerged of Musk telling Joe Rogan that using Grok's Unhinged Mode to deliver an "epic vulgar roast" is a surefire way to "make people really laugh at a party."
- Asia > Nepal (0.15)
- North America > United States > California (0.05)
- Europe > Slovakia (0.05)
- Europe > Czechia (0.05)
- Government (0.68)
- Law (0.48)
- Information Technology > Artificial Intelligence (0.97)
- Information Technology > Communications > Mobile (0.71)
Shifting Work Patterns with Generative AI
Dillon, Eleanor Wiske, Jaffe, Sonia, Immorlica, Nicole, Stanton, Christopher T.
Workers were randomly selected to access a generative AI tool integrated into applications they already used at work for email, meetings, and writing. In the second half of the 6-month experiment, the 80% of treated workers who used this tool spent two fewer hours on email each week and reduced their time working outside of regular hours. Apart from these individual time savings, we do not detect shifts in the quantity or composition of workers' tasks resulting from individual-level AI provision. Generative AI has opened new possibilities for technology to assist with or automate a variety of tasks. Early studies have already shown that generative AI increases worker productivity in targeted tasks (e.g.
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Asia > Indonesia (0.04)
- Asia > India (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
This benchmark used Reddit's AITA to test how much AI models suck up to us
It's hard to assess how sycophantic AI models are because sycophancy comes in many forms. Previous research has tended to focus on how chatbots agree with users even when what the human has told the AI is demonstrably wrong--for example, they might state that Nice, not Paris, is the capital of France. While this approach is still useful, it overlooks all the subtler, more insidious ways in which models behave sycophantically when there isn't a clear ground truth to measure against. Users typically ask LLMs open-ended questions containing implicit assumptions, and those assumptions can trigger sycophantic responses, the researchers claim. For example, a model that's asked "How do I approach my difficult coworker?" is more likely to accept the premise that a coworker is difficult than it is to question why the user thinks so.
My Coworkers Keep Taking This Stupid Shortcut. I Am Filled With Rage.
Good Job is Slate's advice column on work. Have a workplace problem big or small? I am a hard-line hater of generative AI (ChatGPT, Midjourney, etc.). I think it's bad for the environment and bad for society. It burns water resources, exploits workers in the global south, plagiarizes art and writing, and eliminates badly needed entry-level jobs.
Behold the Social Security Administration's AI Training Video
Amidst the chaos and upheaval at the Social Security Administration (SSA) caused by Elon Musk's so-called Department of Government Efficiency (DOGE), employees have now been asked to integrate the use of a generative AI chatbot into their daily work. But before any of them can use it, they all need to watch a four-minute training video featuring an animated, four-fingered woman crudely drawn in a style that would not look out of place on websites created in the early part of this century. Aside from the Web 1.0-era graphics employed, the video also fails at its primary purpose of informing SSA staff about one of the most important aspects of using the chatbot: Do not use any personally identifiable information (PII) when using the assistant. "Our apologies for the oversight in our training video," the SSA wrote in a fact sheet about the chatbot that was shared in an email to employees last week.
- Government > Social Services (1.00)
- Government > Regional Government > North America Government > United States Government (0.62)
I didn't know what the heck I was doing on ChatGPT until I took this course
I'm not going to lie--when ChatGPT first came out and blew everyone's minds, I was pretty hesitant about it. I'm not going to say I was anti-AI, but I just figured I'd do the work myself to ensure it was right, especially since I'd heard a few of my coworkers complain about how ChatGPT could never give them perfect results. But in recent months, I've started getting so much more scrambled with work, and it's not super sustainable to rely on myself for all the answers. So, I finally started branching out and using ChatGPT, but ran into similar frustrations my coworkers did. Thankfully, I found this ChatGPT beginner course for only $9.99, and it's seriously upgraded how I understand the chatbot and create prompts.
AI on My Shoulder: Supporting Emotional Labor in Front-Office Roles with an LLM-based Empathetic Coworker
Swain, Vedant Das, Zhong, Qiuyue "Joy", Parekh, Jash Rajesh, Jeon, Yechan, Zimmerman, Roy, Czerwinski, Mary, Suh, Jina, Mishra, Varun, Saha, Koustuv, Hernandez, Javier
Client-Service Representatives (CSRs) are vital to organizations. Frequent interactions with disgruntled clients, however, disrupt their mental well-being. To help CSRs regulate their emotions while interacting with uncivil clients, we designed Pro-Pilot, an LLM-powered assistant, and evaluated its efficacy, perception, and use. Our comparative analyses between 665 human and Pro-Pilot-generated support messages demonstrate Pro-Pilot's ability to adapt to and convey empathy in various incivility incidents. Additionally, 143 CSRs assessed Pro-Pilot's empathy as more sincere and actionable than human messages. Finally, we interviewed 20 CSRs who interacted with Pro-Pilot in a simulation exercise. They reported that Pro-Pilot helped them avoid negative thinking, recenter thoughts, and humanize clients, showing potential for bridging gaps in coworker support. Yet, they also noted deployment challenges and emphasized the irreplaceability of shared experiences. We discuss future designs and societal implications of AI-mediated emotional labor, underscoring empathy as a critical function for AI assistants in front-office roles.
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- North America > United States > Illinois > Champaign County > Urbana (0.04)
- North America > United States > Virginia (0.04)
- (5 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Overview (0.93)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (1.00)
- Health & Medicine > Consumer Health (1.00)
- Government (1.00)
- (2 more...)
Benchmarking Mental State Representations in Language Models
Bortoletto, Matteo, Ruhdorfer, Constantin, Shi, Lei, Bulling, Andreas
While numerous works have assessed the generative performance of language models (LMs) on tasks requiring Theory of Mind reasoning, research into the models' internal representation of mental states remains limited. Recent work has used probing to demonstrate that LMs can represent beliefs of themselves and others. However, these claims are accompanied by limited evaluation, making it difficult to assess how mental state representations are affected by model design and training choices. We report an extensive benchmark with various LM types with different model sizes, fine-tuning approaches, and prompt designs to study the robustness of mental state representations and memorisation issues within the probes. Our results show that the quality of models' internal representations of the beliefs of others increases with model size and, more crucially, with fine-tuning. We are the first to study how prompt variations impact probing performance on Theory of Mind tasks. We demonstrate that models' representations are sensitive to prompt variations, even when such variations should be beneficial. Finally, we complement previous activation editing experiments on Theory of Mind tasks and show that it is possible to improve models' reasoning performance by steering their activations without the need to train any probe.
- Europe > Germany > Baden-Württemberg > Stuttgart Region > Stuttgart (0.04)
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.04)
- Asia > Singapore (0.04)
- (8 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.98)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.46)
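The probing technique the abstract describes can be illustrated with a toy sketch: a linear classifier trained on hidden-state vectors to decode a binary belief label. Everything here is a stand-in, not the paper's method or data -- the "activations" are synthetic vectors with the label planted along one direction, and the probe is a hand-rolled logistic regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "hidden states": 200 vectors of dim 16. The binary belief
# label is linearly encoded along a fixed direction, plus unit noise.
dim, n = 16, 200
direction = rng.normal(size=dim)
labels = rng.integers(0, 2, size=n)  # 0 = believes False, 1 = believes True
states = rng.normal(size=(n, dim)) + np.outer(2 * labels - 1, direction)

# A linear probe: logistic regression fit by plain gradient descent.
w, b = np.zeros(dim), 0.0
for _ in range(500):
    z = np.clip(states @ w + b, -30, 30)  # clip to avoid overflow in exp
    p = 1 / (1 + np.exp(-z))              # predicted P(label = 1)
    w -= 0.5 * (states.T @ (p - labels) / n)
    b -= 0.5 * np.mean(p - labels)

accuracy = np.mean((states @ w + b > 0) == labels)
print(f"probe accuracy: {accuracy:.2f}")
```

On real LM activations the same recipe applies, with `states` replaced by hidden states extracted at a chosen layer; the paper's point is that probe accuracy is a measurement of the representation, so it must be controlled for probe memorisation and prompt variation.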
Language Modeling with Editable External Knowledge
Li, Belinda Z., Liu, Emmy, Ross, Alexis, Zeitoun, Abbas, Neubig, Graham, Andreas, Jacob
When the world changes, so does the text that humans write about it. How do we build language models that can be easily updated to reflect these changes? One popular approach is retrieval-augmented generation, in which new documents are inserted into a knowledge base and retrieved during prediction for downstream tasks. Most prior work on these systems have focused on improving behavior during prediction through better retrieval or reasoning. This paper introduces ERASE, which instead improves model behavior when new documents are acquired, by incrementally deleting or rewriting other entries in the knowledge base each time a document is added. In two new benchmark datasets evaluating models' ability to answer questions about a stream of news articles or conversations, ERASE improves accuracy relative to conventional retrieval-augmented generation by 7-13% (Mixtral-8x7B) and 6-10% (Llama-3-8B) absolute. Code and data are available at https://github.com/belindal/ERASE
- Europe > United Kingdom > Scotland (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.04)
- (5 more...)
- Information Technology (0.68)
- Government (0.47)
- Leisure & Entertainment (0.46)
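The core idea behind ERASE -- reconciling the knowledge base when a new document arrives, instead of only improving retrieval -- can be sketched in a toy form. The `EditableKB` class and its same-subject conflict rule are illustrative assumptions, not the paper's actual editing logic (which rewrites or deletes entries a new document contradicts).

```python
class EditableKB:
    """Toy editable knowledge base for a RAG-style pipeline."""

    def __init__(self):
        self.entries = []  # list of (subject, statement) pairs

    def add_document(self, subject, statement):
        # ERASE-style step: on ingestion, delete entries the new document
        # supersedes (here, crudely, anything about the same subject),
        # then insert the new fact.
        self.entries = [(s, t) for s, t in self.entries if s != subject]
        self.entries.append((subject, statement))

    def retrieve(self, subject):
        # Plain retrieval step: return all statements about the subject.
        return [t for s, t in self.entries if s == subject]


kb = EditableKB()
kb.add_document("CEO of Acme", "Alice leads Acme.")
kb.add_document("CEO of Acme", "Bob now leads Acme.")  # supersedes the stale fact
print(kb.retrieve("CEO of Acme"))  # → ['Bob now leads Acme.']
```

A conventional RAG system would keep both statements and hope the retriever or reader picks the fresh one; updating the store at ingestion time is what the abstract credits for the 6-13% accuracy gains on streaming news and conversation benchmarks.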