This Chrome Extension Turns LinkedIn Posts About AI Into Facts About Allen Iverson
The developers of a browser tool that changes AI-centric LinkedIn posts to Allen Iverson facts want to help "take back control of your experience of the internet."

Give yourself a nice gift this holiday season. Download a free Chrome extension that replaces those incessant LinkedIn posts about artificial intelligence with facts about a very different kind of AI: Allen Iverson. Yes, the answer to your generative AI woes is "The Answer," the crossover king, the four-time NBA scoring champ.

One of the defining traits of LinkedIn has always been unhinged posts from power users--the r/LinkedInLunatics subreddit exists for a reason--but the obsessive tenor of LinkedIn posting has become, somehow, more unbearable over the past few years as the generative AI hype cycle has grown.
- North America > United States > Virginia (0.05)
- North America > United States > California (0.05)
- Europe > Slovakia (0.05)
- Europe > Czechia (0.05)
- Information Technology > Services (0.91)
- Information Technology > Security & Privacy (0.70)
The Adoption and Usage of AI Agents: Early Evidence from Perplexity
Yang, Jeremy, Yonack, Noah, Zyskowski, Kate, Yarats, Denis, Ho, Johnny, Ma, Jerry
This paper presents the first large-scale field study of the adoption, usage intensity, and use cases of general-purpose AI agents operating in open-world web environments. Our analysis centers on Comet, an AI-powered browser developed by Perplexity, and its integrated agent, Comet Assistant. Drawing on hundreds of millions of anonymized user interactions, we address three fundamental questions: Who is using AI agents? How intensively are they using them? And what are they using them for? Our findings reveal substantial heterogeneity in adoption and usage across user segments. Earlier adopters, users in countries with higher GDP per capita and educational attainment, and individuals working in digital or knowledge-intensive sectors -- such as digital technology, academia, finance, marketing, and entrepreneurship -- are more likely to adopt or actively use the agent. To systematically characterize the substance of agent usage, we introduce a hierarchical agentic taxonomy that organizes use cases across three levels: topic, subtopic, and task. The two largest topics, Productivity & Workflow and Learning & Research, account for 57% of all agentic queries, while the two largest subtopics, Courses and Shopping for Goods, make up 22%. The top 10 out of 90 tasks represent 55% of queries. Personal use constitutes 55% of queries, while professional and educational contexts comprise 30% and 16%, respectively. In the short term, use cases exhibit strong stickiness, but over time users tend to shift toward more cognitively oriented topics. The diffusion of increasingly capable AI agents carries important implications for researchers, businesses, policymakers, and educators, inviting new lines of inquiry into this rapidly emerging class of AI capabilities.
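The hierarchical taxonomy described above rolls task-level query counts up into subtopic and topic shares. A minimal sketch of that rollup, using hypothetical (topic, subtopic, task) labels rather than the paper's actual taxonomy entries:

```python
from collections import Counter

# Hypothetical (topic, subtopic, task) labels per query -- illustrative
# names only, not entries from the paper's actual taxonomy or data.
queries = [
    ("Productivity & Workflow", "Email", "Draft message"),
    ("Productivity & Workflow", "Email", "Summarize thread"),
    ("Learning & Research", "Courses", "Explain concept"),
    ("Shopping", "Shopping for Goods", "Compare prices"),
]

def share_by_level(labels, level):
    """Fraction of queries per category at one taxonomy level
    (0 = topic, 1 = subtopic, 2 = task)."""
    counts = Counter(label[level] for label in labels)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

topic_share = share_by_level(queries, 0)
subtopic_share = share_by_level(queries, 1)
```

Computing shares at each level independently is what lets the paper report statements like "the two largest topics account for 57% of queries" alongside subtopic- and task-level figures.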
- Media (1.00)
- Leisure & Entertainment (1.00)
- Banking & Finance (1.00)
- (5 more...)
Are tech companies using your private data to train AI models?
Leading tech companies are in a race to release and improve artificial intelligence (AI) products, leaving users in the United States to puzzle out how much of their personal data could be extracted to train AI tools. Meta (which owns Facebook, Instagram, Threads and WhatsApp), Google and LinkedIn have all rolled out AI app features that have the capacity to draw on users' public profiles or emails. Google and LinkedIn offer users ways to opt out of the AI features, while Meta's AI tool provides no means for its users to say "no, thanks." Posts warned that the platforms' AI tool rollouts make most private information available for tech company harvesting.
- South America (0.05)
- North America > United States > California > Alameda County > Berkeley (0.05)
- North America > Central America (0.05)
- (10 more...)
LinkedIn is using your data to train its AI models. Here's how to opt out
Disabling this setting prevents your data from being used, but data already used for training can't be taken back retroactively. Microsoft-owned social networking site LinkedIn will soon start using the data of its users to train its AI models, reports Windows Latest. The platform has sent out emails to users about the change, which will start November 3, 2025 and apply to the US, EU, UK, and Switzerland.
- Europe > Switzerland (0.25)
- North America > United States > California (0.05)
- Information Technology > Services (1.00)
- Information Technology > Security & Privacy (0.82)
Generative Sequential Notification Optimization via Multi-Objective Decision Transformers
Ocejo, Borja, Wang, Ruofan, Liu, Ke, Patra, Rohit K., Shen, Haotian, Liu, David, Yuan, Yiwen, Mohanasundaram, Gokulraj, Borisyuk, Fedor, Prabhakar, Prakruthi
Notifications are an important communication channel for delivering timely and relevant information. Optimizing their delivery involves addressing complex sequential decision-making challenges under constraints such as message utility and user fatigue. Offline reinforcement learning (RL) methods, such as Conservative Q-Learning (CQL), have been applied to this problem but face practical challenges at scale, including instability, sensitivity to distribution shifts, limited reproducibility, and difficulties with explainability in high-dimensional recommendation settings. We present a Decision Transformer (DT) based framework that reframes policy learning as return-conditioned supervised learning, improving robustness, scalability, and modeling flexibility. Our contributions include a real-world comparison with CQL, a multi-reward design suitable for non-episodic tasks, a quantile regression approach to return-to-go conditioning, and a production-ready system with circular buffer-based sequence processing for near-real-time inference. Extensive offline and online experiments in a deployed notification system show that our approach improves notification utility and overall session activity while minimizing user fatigue. Compared to a multi-objective CQL-based agent, the DT-based approach achieved a +0.72% increase in sessions for notification decision-making at LinkedIn by making notification recommendations more relevant.
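The circular buffer-based sequence processing mentioned in the abstract keeps only the most recent K events per user so the transformer's input sequence can be assembled in near-real time. A minimal sketch of such a buffer, under the assumption of a fixed-capacity ring with overwrite-oldest semantics (not the deployed system's actual implementation):

```python
class CircularBuffer:
    """Fixed-capacity ring buffer that retains the most recent `capacity`
    events and returns them in chronological order -- a sketch of the
    sequence store described in the abstract, not LinkedIn's production code."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = [None] * capacity
        self.size = 0
        self.head = 0  # index where the next event will be written

    def append(self, event):
        # Overwrite the oldest slot once the buffer is full.
        self.data[self.head] = event
        self.head = (self.head + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)

    def sequence(self):
        """Events oldest-to-newest, ready to feed the sequence model."""
        start = (self.head - self.size) % self.capacity
        return [self.data[(start + i) % self.capacity] for i in range(self.size)]
```

Because appends are O(1) and reads never copy more than `capacity` items, per-user state stays bounded regardless of how long the interaction history grows, which is what makes near-real-time inference tractable.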
- North America > United States (0.04)
- Asia > Middle East > Republic of Türkiye > Bingöl Province > Bingöl (0.04)
- Africa > Togo (0.04)
The Strange Ways Writers Are Proving That Their Writing Isn't ChatGPT
Sign up for the Slatest to get the most insightful analysis, criticism, and advice out there, delivered to your inbox daily. The other week, I was reading an email I'd written when a strange notion occurred to me. Would it perhaps be better, an unsettling new voice suddenly whispered, to leave it in? This is a thought that would've appalled me a year ago. As a professional writer, I have long prided myself on impeccable grammar, judiciously wielded punctuation, and (at times indulgent) verbosity.
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.61)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.61)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.50)
Large Scalable Cross-Domain Graph Neural Networks for Personalized Notification at LinkedIn
He, Shihai, Choi, Julie, Li, Tianqi, Ding, Zhiwei, Du, Peng, Bannur, Priya, Liang, Franco, Borisyuk, Fedor, Jaikumar, Padmini, Xue, Xiaobing, Gupta, Viral
Notification recommendation systems are critical to driving user engagement on professional platforms like LinkedIn. Designing such systems involves integrating heterogeneous signals across domains, capturing temporal dynamics, and optimizing for multiple, often competing, objectives. Graph Neural Networks (GNNs) provide a powerful framework for modeling complex interactions in such environments. In this paper, we present a cross-domain GNN-based system deployed at LinkedIn that unifies user, content, and activity signals into a single, large-scale graph. By training on this cross-domain structure, our model significantly outperforms single-domain baselines on key tasks, including click-through rate (CTR) prediction and professional engagement. We introduce architectural innovations including temporal modeling and multi-task learning, which further enhance performance. Deployed in LinkedIn's notification system, our approach led to a 0.10% lift in weekly active users and a 0.62% improvement in CTR. We detail our graph construction process, model design, training pipeline, and both offline and online evaluations. Our work demonstrates the scalability and effectiveness of cross-domain GNNs in real-world, high-impact applications.
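Unifying user, content, and activity signals into a single graph means nodes of different types share one edge set, and a GNN layer aggregates across domain boundaries. A toy sketch of one unweighted mean-aggregation step over such a heterogeneous graph (node ids, features, and edges are hypothetical, not LinkedIn's schema):

```python
# Hypothetical cross-domain edges: users, content, and activity signals
# live in one graph, so message passing mixes information across domains.
edges = [
    ("user:1", "post:7"),      # user engaged with content
    ("user:1", "user:2"),      # social connection
    ("post:7", "activity:3"),  # content linked to an activity signal
]

# Scalar stand-ins for node feature vectors.
features = {"user:1": 1.0, "user:2": 3.0, "post:7": 5.0, "activity:3": 7.0}

def neighbors(node):
    """All nodes adjacent to `node`, regardless of domain."""
    return [b for a, b in edges if a == node] + \
           [a for a, b in edges if b == node]

def message_pass(node):
    """One GNN layer with mean aggregation and no learned weights:
    the node's new representation is the average of its neighbors'."""
    nbrs = neighbors(node)
    return sum(features[n] for n in nbrs) / len(nbrs)
```

In a real system each layer would apply learned, possibly edge-type-specific transformations before aggregating, but the cross-domain benefit comes from exactly this structure: a user node's update draws on content and activity nodes it could never see in a single-domain graph.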
- North America > United States > California > Yolo County > Davis (0.14)
- North America > United States > California > Santa Clara County > Sunnyvale (0.06)
- North America > United States > District of Columbia > Washington (0.05)
- Asia > Myanmar > Tanintharyi Region > Dawei (0.04)
Will AI wipe out the first rung of the career ladder?
This week, I'm wondering what my first jobs in journalism would have been like had generative AI been around. In other news: Elon Musk leaves a trail of chaos, and influencers are selling the text they fed to AI to make art. Generative artificial intelligence may eliminate the job you got with your diploma still in hand, say executives who offered grim assessments of the entry-level job market last week in multiple forums. Dario Amodei, CEO of Anthropic, which makes the multifunctional AI model Claude, told Axios last week that he believes that AI could cut half of all entry-level white-collar jobs and send overall unemployment rocketing to 20% within the next five years. One explanation for why an AI company CEO might make such a dire prediction is that it hypes the capabilities of his product.
- North America > United States > New York > New York County > New York City (0.04)
- Indian Ocean (0.04)
- Asia > Pakistan (0.04)
- (2 more...)
- Government > Regional Government > North America Government > United States Government (1.00)
- Banking & Finance > Economy (1.00)
- Media > News (0.90)
- Information Technology > Services (0.70)
Efficient AI in Practice: Training and Deployment of Efficient LLMs for Industry Applications
Behdin, Kayhan, Dai, Yun, Fatahibaarzi, Ata, Gupta, Aman, Song, Qingquan, Tang, Shao, Sang, Hejian, Dexter, Gregory, Zhu, Sirou, Zhu, Siyu, Dharamsi, Tejas, Sanjabi, Maziar, Kothapalli, Vignesh, Firooz, Hamed, Fu, Zhoutong, Cao, Yihan, Hsu, Pin-Lun, Borisyuk, Fedor, Wang, Zhipeng, Mazumder, Rahul, Pillai, Natesh, Simon, Luke
Large language models (LLMs) have demonstrated remarkable performance across a wide range of industrial applications, from search and recommendations to generative tasks. Although scaling laws indicate that larger models generally yield better generalization and performance, their substantial computational requirements often render them impractical for many real-world scenarios at scale. In this paper, we present methods and insights for training small language models (SLMs) that deliver high performance and efficiency in deployment. We focus on two key techniques: (1) knowledge distillation and (2) model compression via quantization and pruning. These approaches enable SLMs to retain much of the quality of their larger counterparts while significantly reducing training costs, serving costs, and latency. We detail the impact of these techniques on a variety of use cases at a large professional social network platform and share deployment lessons - including hardware optimization strategies that enhance speed and throughput for both predictive and reasoning-based applications.
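The quantization technique the abstract names can be illustrated with the simplest variant: symmetric per-tensor int8 quantization, which stores a weight tensor as integers plus a single scale. This is a minimal sketch of the compression idea, not the paper's exact scheme:

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map floats into [-127, 127]
    integers plus one float scale. A textbook sketch, not the paper's method."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard all-zero tensors
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Approximate reconstruction of the original floats."""
    return [v * scale for v in q]

q, scale = quantize_int8([-1.0, 0.5, 1.0])
```

Storing int8 values instead of float32 cuts memory roughly 4x, and the reconstruction error is bounded by half the scale per weight; production schemes typically add per-channel scales and calibration on top of this basic recipe.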
- North America > United States > California > Santa Clara County > Sunnyvale (0.04)
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.04)
- Asia > Myanmar > Tanintharyi Region > Dawei (0.04)
- Asia > Middle East > Jordan (0.04)
360Brew: A Decoder-only Foundation Model for Personalized Ranking and Recommendation
Firooz, Hamed, Sanjabi, Maziar, Englhardt, Adrian, Gupta, Aman, Levine, Ben, Olgiati, Dre, Polatkan, Gungor, Melnychuk, Iuliia, Ramgopal, Karthik, Talanine, Kirill, Srinivasan, Kutta, Simon, Luke, Sivasubramoniapillai, Natesh, Ayan, Necip Fazil, Song, Qingquan, Sriram, Samira, Ghosh, Souvik, Song, Tao, Dharamsi, Tejas, Kothapalli, Vignesh, Zhai, Xiaoling, Xu, Ya, Wang, Yu, Dai, Yun
Ranking and recommendation systems are the foundation for numerous online experiences, ranging from search results to personalized content delivery. These systems have evolved into complex, multilayered architectures that leverage vast datasets and often incorporate thousands of predictive models. The maintenance and enhancement of these models is a labor-intensive process that requires extensive feature engineering. This approach not only exacerbates technical debt but also hampers innovation in extending these systems to emerging problem domains. In this report, we present our research to address these challenges by utilizing a large foundation model with a textual interface for ranking and recommendation tasks. We illustrate several key advantages of our approach: (1) a single model can manage multiple predictive tasks involved in ranking and recommendation, (2) decoder models with a textual interface, due to their comprehension and reasoning capabilities, can generalize to new recommendation surfaces and out-of-domain problems, and (3) by employing natural language interfaces for task definitions and verbalizing member behaviors and their social connections, we eliminate the need for feature engineering and the maintenance of complex directed acyclic graphs of model dependencies. We introduce our research pre-production model, 360Brew V1.0, a 150B parameter, decoder-only model that has been trained and fine-tuned on LinkedIn's data and tasks. This model is capable of solving over 30 predictive tasks across various segments of the LinkedIn platform, achieving performance levels comparable to or exceeding those of current production systems based on offline metrics, without task-specific fine-tuning. Notably, each of these tasks is conventionally addressed by dedicated models that have been developed and maintained over multiple years by teams of a similar or larger size than our own.
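"Verbalizing member behaviors" means rendering structured activity into natural-language text so one decoder-only model can consume it in place of engineered features. A hedged sketch of what such a prompt builder might look like; the field names, template wording, and example data are all hypothetical, not 360Brew's actual interface:

```python
# Hypothetical prompt builder: turns structured member activity into a
# textual task description for a single decoder-only ranking model.
def verbalize(member, actions, candidate):
    """Render member profile + recent actions + candidate item as one prompt.
    All field names and template text are illustrative assumptions."""
    history = "; ".join(f"{a['verb']} '{a['item']}'" for a in actions)
    return (
        f"Member is a {member['title']} in {member['industry']} who recently "
        f"{history}. Task: predict whether they will click '{candidate}'. "
        f"Answer yes or no."
    )

prompt = verbalize(
    {"title": "data engineer", "industry": "finance"},
    [{"verb": "liked", "item": "ETL best practices"},
     {"verb": "commented on", "item": "Spark tuning tips"}],
    "Streaming pipelines at scale",
)
```

Because the task definition itself is natural language, adding a new predictive task becomes a matter of writing a new template rather than building a new feature pipeline and model, which is the maintenance advantage the abstract claims.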
- North America > United States > Texas (0.04)
- North America > United States > New York (0.04)
- North America > United States > California > Santa Clara County > Sunnyvale (0.04)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Personal Assistant Systems (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.94)