Implementing advanced AI technologies in finance

MIT Technology Review

Successful AI implementation requires shifts in workplace culture as well as use cases that can scale across the enterprise. In finance departments that have long been defined by precision and control, AI has arrived less as a neatly managed upgrade than as a quiet insurgency. Employees are already using it while leadership races to impose structure, governance, and strategy after the fact. The result is a paradox: one of the most tightly regulated functions in the enterprise is now among the most experimentally transformed. What's emerging is a layered shift in how work gets done. From variance commentary and fraud detection to contract review and close narrative drafting, AI is embedding itself across workflows, particularly where unstructured data once slowed everything down.


Good Luck Getting a Mac Mini for the Next 'Several Months'

WIRED

Apple CEO Tim Cook told analysts that AI adoption has happened faster than expected. Apple CEO Tim Cook said on the company's earnings call on Thursday that it could take "several months" to meet skyrocketing demand for the Mac Mini, the company's compact but mighty, screen-free desktop computer. Cook's remarks come after coders determined in recent months that the Mac Mini was the perfect machine for agentic AI tasks. "On the Mac Mini and Mac Studio, both of these are amazing platforms for AI and agentic tools," Cook said on the earnings call, in response to analyst questions. "And customer adoption of that is happening faster than we expected." The news comes amid another record-setting quarter for the company.


Cold-Start Forecasting of New Product Life-Cycles via Conditional Diffusion Models

Zhou, Ruihan, Zhang, Zishi, Han, Jinhui, Peng, Yijie, Zhang, Xiaowei

arXiv.org Machine Learning

Forecasting the life-cycle trajectory of a newly launched product is important for launch planning, resource allocation, and early risk assessment. This task is especially difficult in the pre-launch and early post-launch phases, when product-specific outcome history is limited or unavailable, creating a cold-start problem. In these phases, firms must make decisions before demand patterns become reliably observable, while early signals are often sparse, noisy, and unstable. We propose the Conditional Diffusion Life-cycle Forecaster (CDLF), a conditional generative framework for forecasting new-product life-cycle trajectories under cold start. CDLF combines three sources of information: static descriptors, reference trajectories from similar products, and newly arriving observations when available. Here, static descriptors refer to structured pre-launch characteristics of the product, such as category, price tier, brand or organization identity, scale, and access conditions. This structure allows the model to condition forecasts on relevant product context and to update them adaptively over time without retraining, yielding flexible multi-modal predictive distributions under extreme data scarcity. The method is provably consistent, with a horizon-uniform distributional error bound for recursive generation. Across studies on Intel microprocessor stock keeping unit (SKU) life cycles and the platform-mediated adoption of open large language model repositories, CDLF delivers more accurate point forecasts and higher-quality probabilistic forecasts than classical diffusion models, Bayesian updating approaches, and other state-of-the-art machine-learning baselines.
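The "classical diffusion models" used as baselines here typically refer to the Bass model, which produces the characteristic rise-peak-decline life-cycle curve from just two parameters. As a point of reference, here is a minimal Bass-model sketch; the parameter values (p = 0.03, q = 0.38) are commonly cited defaults, not values from the paper.

```python
import math

def bass_cumulative(t, p=0.03, q=0.38, m=1.0):
    """Cumulative adoption at time t under the Bass diffusion model.

    p: coefficient of innovation (external influence)
    q: coefficient of imitation (word of mouth)
    m: total market potential
    Note: p and q here are illustrative defaults, not fitted values.
    """
    e = math.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

# Per-period adoptions form the life-cycle trajectory: rise, peak, decline.
traj = [bass_cumulative(t + 1) - bass_cumulative(t) for t in range(20)]
peak_period = max(range(20), key=lambda t: traj[t])
```

The analytic peak of the adoption rate falls at t* = ln(q/p)/(p+q), around period 6 for these defaults, which is the shape a cold-start forecaster must recover before such data exists.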


Tech billionaires fly in for Delhi AI expo as Modi jostles to lead in south

The Guardian

Campaigners fear Narendra Modi could use AI to increase state surveillance and sway elections. Silicon Valley tech billionaires will land in Delhi this week for an AI summit hosted by India's prime minister, Narendra Modi, where leaders of the global south will wrestle for control over the fast-developing technology. During the week-long AI Impact Summit, attended by thousands of tech executives, government officials and AI safety experts, tech companies valued at trillions of dollars will rub along with leaders of countries such as Kenya and Indonesia, where average wages dip well below $1,000 a month. Amid a push to speed up AI adoption across the globe, Sundar Pichai, Sam Altman and Dario Amodei, the heads of Google, OpenAI and Anthropic, will all be there.


In the AI gold rush, tech firms are embracing 72-hour weeks

BBC News

The recruitment website is jazzy, awash with pictures of happy young workers, and festooned with upbeat mini-slogans such as "insane speed", "infinite curiosity" and "customer obsession". Read a bit lower, and there are promises of perks galore: competitive compensation, free meals, free gym membership, free health and dental care and so on. But then comes the catch. Each job ad contains a warning: "Please don't join if you're not excited about working ~70 hrs/week in person with some of the most ambitious people in NYC." The website belongs to Rilla, a New York-based tech business which sells AI-based systems that allow employers to monitor sales representatives when they are out and about, interacting with clients. The company has become something of a poster child for a fast-paced workplace culture known as "996", also sometimes referred to as hustle culture or grindcore.


Probing Social Bias in Labor Market Text Generation by ChatGPT: A Masked Language Model Approach

Neural Information Processing Systems

As generative large language models (LLMs) such as ChatGPT gain widespread adoption in various domains, their potential to propagate and amplify social biases, particularly in high-stakes areas such as the labor market, has become a pressing concern. Not only are AI algorithms widely used in the selection of job applicants; individual job seekers may also make use of generative LLMs to help develop their job application materials. Against this backdrop, this research builds on a novel experimental design to examine social biases within ChatGPT-generated job applications in response to real job advertisements. By simulating the process of job application creation, we examine the language patterns and biases that emerge when the model is prompted with diverse job postings. Notably, we present a novel bias evaluation framework based on Masked Language Models to quantitatively assess social bias based on validated inventories of social cues/words, enabling a systematic analysis of the language used. Our findings show that the increasing adoption of generative AI, not only by employers but also increasingly by individual job seekers, can reinforce and exacerbate gender and social inequalities in the labor market through the use of biased and gendered language.
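The paper's actual framework scores bias with Masked Language Models against validated cue-word inventories; as a much simpler illustration of the underlying idea, the sketch below just counts agentic versus communal cue words in generated text. The word lists are invented for illustration and are far shorter than any validated inventory.

```python
import re
from collections import Counter

# Illustrative cue lists only (assumption) -- real studies use
# validated inventories of agentic/communal (gendered) language.
AGENTIC = {"ambitious", "assertive", "competitive", "decisive", "independent"}
COMMUNAL = {"collaborative", "supportive", "empathetic", "nurturing", "loyal"}

def cue_score(text):
    """Return (agentic_count, communal_count) for a generated application."""
    tokens = Counter(re.findall(r"[a-z]+", text.lower()))
    agentic = sum(tokens[w] for w in AGENTIC)
    communal = sum(tokens[w] for w in COMMUNAL)
    return agentic, communal

a, c = cue_score(
    "A competitive, decisive and independent leader who is also supportive."
)
# a == 3 agentic cues, c == 1 communal cue
```

Comparing such scores across applications generated for otherwise-identical job ads is one way to surface systematic skews in the model's word choice; the MLM-based approach in the paper replaces raw counts with model-derived probabilities.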


Neural Image Compression: Generalization, Robustness, and Spectral Biases

Neural Information Processing Systems

Recent advances in neural image compression (NIC) have produced models that are starting to outperform classic codecs. While this has led to growing excitement about using NIC in real-world applications, the successful adoption of any machine learning system in the wild requires it to generalize (and be robust) to unseen distribution shifts at deployment. Unfortunately, current research lacks comprehensive datasets and informative tools to evaluate and understand NIC performance in real-world settings. To bridge this crucial gap, first, this paper presents a comprehensive benchmark suite to evaluate the out-of-distribution (OOD) performance of image compression methods. Specifically, we provide CLIC-C and Kodak-C by introducing 15 corruptions to the popular CLIC and Kodak benchmarks.
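A corruption benchmark like CLIC-C or Kodak-C is built by applying synthetic distortions to clean images at several severity levels. The paper's exact corruption set and severity scheme are not given here, so the sketch below shows just one plausible member, additive Gaussian noise, applied to an 8-bit grayscale image represented as a plain list of rows.

```python
import random

def gaussian_noise(pixels, sigma=25.0, seed=0):
    """Apply a Gaussian-noise corruption to an 8-bit grayscale image.

    pixels: list of rows of ints in [0, 255]
    sigma: noise standard deviation -- severity knob (the benchmark's
           actual severity levels are an assumption here)
    """
    rng = random.Random(seed)  # seeded for reproducible corruptions
    return [
        [min(255, max(0, round(v + rng.gauss(0.0, sigma)))) for v in row]
        for row in pixels
    ]

# A flat mid-gray 4x4 "image" before and after corruption.
clean = [[128] * 4 for _ in range(4)]
noisy = gaussian_noise(clean)
```

Evaluating a codec on both `clean` and `noisy` inputs, and comparing rate-distortion curves, is the kind of out-of-distribution probe the benchmark suite systematizes.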


Katakomba: Tools and Benchmarks for Data-Driven NetHack

Neural Information Processing Systems

NetHack is known as the frontier of reinforcement learning research where learning-based methods still need to catch up to rule-based solutions. One of the promising directions for a breakthrough is using pre-collected datasets similar to recent developments in robotics, recommender systems, and more under the umbrella of offline reinforcement learning (ORL). Recently, a large-scale NetHack dataset was released; while it was a necessary step forward, it has yet to gain wide adoption in the ORL community. In this work, we argue that there are three major obstacles to adoption: tool-wise, implementation-wise, and benchmark-wise. To address them, we develop an open-source library that provides workflow fundamentals familiar to the ORL community: pre-defined D4RL-style tasks, uncluttered baseline implementations, and reliable evaluation tools with accompanying configs and logs synced to the cloud.


CrypTen: Secure Multi-Party Computation Meets Machine Learning

Neural Information Processing Systems

Secure multi-party computation (MPC) allows parties to perform computations on data while keeping that data private. This capability has great potential for machine-learning applications: it facilitates training of machine-learning models on private data sets owned by different parties, evaluation of one party's private model using another party's private data, etc. Although a range of studies implement machine-learning models via secure MPC, such implementations are not yet mainstream. Adoption of secure MPC is hampered by the absence of flexible software frameworks that "speak the language" of machine-learning researchers and engineers. To foster adoption of secure MPC in machine learning, we present CrypTen: a software framework that exposes popular secure MPC primitives via abstractions that are common in modern machine-learning frameworks, such as tensor computations, automatic differentiation, and modular neural networks. This paper describes the design of CrypTen and measures its performance on state-of-the-art models for text classification, speech recognition, and image classification. Our benchmarks show that CrypTen's GPU support and high-performance communication between (an arbitrary number of) parties allow it to perform efficient private evaluation of modern machine-learning models under a semi-honest threat model. For example, two parties using CrypTen can securely predict phonemes in speech recordings using Wav2Letter faster than real-time. We hope that CrypTen will spur adoption of secure MPC in the machine-learning community.
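The core primitive beneath frameworks like CrypTen is additive secret sharing: a value is split into random shares held by different parties, and linear operations can be performed on the shares without ever reconstructing the secret. The toy sketch below illustrates that idea in plain Python; it is not CrypTen's API, and the field size is an arbitrary choice for illustration.

```python
import random

P = 2**61 - 1  # toy prime modulus for the share field (assumption)

def share(x, n_parties=2):
    """Split secret x into n additive shares that sum to x mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % P)  # last share fixes the sum
    return shares

def reveal(shares):
    """Reconstruct the secret by summing all shares mod P."""
    return sum(shares) % P

# Addition is "free": each party adds its own shares locally,
# and only the final revealed result is ever disclosed.
xs, ys = share(42), share(100)
zs = [(a + b) % P for a, b in zip(xs, ys)]
```

Multiplication, comparisons, and nonlinearities require interactive protocols on top of this, which is exactly the machinery a framework like CrypTen packages behind familiar tensor abstractions.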


The Adoption Paradox for Veterinary Professionals in China: High Use of Artificial Intelligence Despite Low Familiarity

Li, Shumin, Lai, Xiaoyun

arXiv.org Artificial Intelligence

While the global integration of artificial intelligence (AI) into veterinary medicine is accelerating, its adoption dynamics in major markets such as China remain uncharacterized. This paper presents the first exploratory analysis of AI perception and adoption among veterinary professionals in China, based on a cross-sectional survey of 455 practitioners conducted in mid-2025. We identify a distinct "adoption paradox": although 71.0% of respondents have incorporated AI into their workflows, 44.6% of these active users report low familiarity with the technology. In contrast to the administrative-focused patterns observed in North America, adoption in China is practitioner-driven and centers on core clinical tasks, such as disease diagnosis (50.1%) and prescription calculation (44.8%). However, concerns regarding reliability and accuracy remain the primary barrier (54.3%), coexisting with a strong consensus (93.8%) for regulatory oversight. These findings suggest a unique "inside-out" integration model in China, characterized by high clinical utility but restricted by an "interpretability gap," underscoring the need for specialized tools and robust regulatory frameworks to safely harness AI's potential in this expanding market.