adoption
Tech billionaires fly in for Delhi AI expo as Modi jostles to lead in south
Campaigners fear Narendra Modi could use AI to increase state surveillance and sway elections. Silicon Valley tech billionaires will land in Delhi this week for an AI summit hosted by India's prime minister, Narendra Modi, where leaders of the global south will wrestle for control over the fast-developing technology. During the week-long AI Impact Summit, attended by thousands of tech executives, government officials and AI safety experts, tech companies valued at trillions of dollars will rub along with leaders of countries such as Kenya and Indonesia, where average wages dip well below $1,000 a month. Amid a push to speed up AI adoption across the globe, Sundar Pichai, Sam Altman and Dario Amodei, the heads of Google, OpenAI and Anthropic, will all be there.
- Asia > Indonesia (0.25)
- Africa > Kenya (0.25)
- North America > United States > California (0.25)
- (14 more...)
In the AI gold rush, tech firms are embracing 72-hour weeks
The recruitment website is jazzy, awash with pictures of happy young workers, and festooned with upbeat mini-slogans such as "insane speed", "infinite curiosity" and "customer obsession". Read a bit lower, and there are promises of perks galore: competitive compensation, free meals, free gym membership, free health and dental care and so on. But then comes the catch. Each job ad contains a warning: "Please don't join if you're not excited about working ~70 hrs/week in person with some of the most ambitious people in NYC." The website belongs to Rilla, a New York-based tech business which sells AI-based systems that allow employers to monitor sales representatives when they are out and about, interacting with clients. The company has become something of a poster child for a fast-paced workplace culture known as 996, also sometimes referred to as hustle culture or grindcore.
- North America > United States > New York (0.24)
- North America > Central America (0.14)
- Asia > Japan (0.14)
- (15 more...)
- Law (1.00)
- Information Technology (1.00)
- Banking & Finance (0.94)
- (4 more...)
Probing Social Bias in Labor Market Text Generation by ChatGPT: A Masked Language Model Approach
As generative large language models (LLMs) such as ChatGPT gain widespread adoption in various domains, their potential to propagate and amplify social biases, particularly in high-stakes areas such as the labor market, has become a pressing concern. Not only are AI algorithms widely used in the selection of job applicants; individual job seekers may also make use of generative LLMs to help develop their job application materials. Against this backdrop, this research builds on a novel experimental design to examine social biases within ChatGPT-generated job applications in response to real job advertisements. By simulating the process of job application creation, we examine the language patterns and biases that emerge when the model is prompted with diverse job postings. Notably, we present a novel bias evaluation framework based on Masked Language Models to quantitatively assess social bias based on validated inventories of social cues/words, enabling a systematic analysis of the language used. Our findings show that the increasing adoption of generative AI, not only by employers but also increasingly by individual job seekers, can reinforce and exacerbate gender and social inequalities in the labor market through the use of biased and gendered language.
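The inventory-based scoring idea can be illustrated with a much simpler lexicon count. This is a pedagogical sketch in the spirit of "validated inventories of social cues/words", not the paper's Masked-Language-Model framework; the word lists below are small illustrative assumptions, not the validated inventories the authors use.

```python
# Hypothetical mini-inventories of agentic vs. communal cue words
# (illustrative only; the paper uses validated inventories).
AGENTIC = {"ambitious", "assertive", "competitive", "decisive", "independent"}
COMMUNAL = {"supportive", "collaborative", "caring", "helpful", "warm"}

def gendered_language_score(text: str) -> dict:
    """Return raw cue-word counts and the agentic share of all matched cues."""
    tokens = [t.strip(".,;:!?()").lower() for t in text.split()]
    agentic = sum(t in AGENTIC for t in tokens)
    communal = sum(t in COMMUNAL for t in tokens)
    total = agentic + communal
    return {
        "agentic": agentic,
        "communal": communal,
        "agentic_share": agentic / total if total else 0.0,
    }

cover_letter = "I am an ambitious, competitive candidate who is also supportive."
print(gendered_language_score(cover_letter))  # agentic: 2, communal: 1
```

Comparing such scores across applications generated for different job postings is one cheap way to surface skewed language before applying a heavier MLM-based evaluation.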
Neural Image Compression: Generalization, Robustness, and Spectral Biases
Recent advances in neural image compression (NIC) have produced models that are starting to outperform classic codecs. While this has led to growing excitement about using NIC in real-world applications, the successful adoption of any machine learning system in the wild requires it to generalize (and be robust) to unseen distribution shifts at deployment. Unfortunately, current research lacks comprehensive datasets and informative tools to evaluate and understand NIC performance in real-world settings. To bridge this crucial gap, first, this paper presents a comprehensive benchmark suite to evaluate the out-of-distribution (OOD) performance of image compression methods. Specifically, we provide CLIC-C and Kodak-C by introducing 15 corruptions to the popular CLIC and Kodak benchmarks.
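The CLIC-C/Kodak-C construction can be sketched as "apply a corruption at several severities, then measure degradation". The severity scale and noise levels below are illustrative assumptions, not the benchmark's actual parameters.

```python
# Sketch of a corruption-benchmark step: Gaussian noise at increasing
# severity applied to an image in [0, 1], scored by PSNR against the clean copy.
import numpy as np

def gaussian_noise(img: np.ndarray, severity: int, seed: int = 0) -> np.ndarray:
    """Corrupt an image with zero-mean Gaussian noise (assumed sigma scale)."""
    sigma = [0.04, 0.06, 0.08, 0.10, 0.12][severity - 1]
    rng = np.random.default_rng(seed)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def psnr(clean: np.ndarray, corrupted: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB for images with peak value 1.0."""
    mse = float(np.mean((clean - corrupted) ** 2))
    return float("inf") if mse == 0 else 10.0 * np.log10(1.0 / mse)

clean = np.full((32, 32, 3), 0.5)  # stand-in for a real benchmark image
for s in (1, 3, 5):
    print(f"severity {s}: PSNR = {psnr(clean, gaussian_noise(clean, s)):.1f} dB")
```

Running a codec on both the clean and corrupted inputs and comparing rate-distortion curves is then one way to probe the OOD robustness the paper targets.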
Katakomba: Tools and Benchmarks for Data-Driven NetHack
NetHack is known as the frontier of reinforcement learning research where learning-based methods still need to catch up to rule-based solutions. One of the promising directions for a breakthrough is using pre-collected datasets similar to recent developments in robotics, recommender systems, and more under the umbrella of offline reinforcement learning (ORL). Recently, a large-scale NetHack dataset was released; while it was a necessary step forward, it has yet to gain wide adoption in the ORL community. In this work, we argue that there are three major obstacles for adoption: tool-wise, implementation-wise, and benchmark-wise. To address them, we develop an open-source library that provides workflow fundamentals familiar to the ORL community: pre-defined D4RL-style tasks, uncluttered baseline implementations, and reliable evaluation tools with accompanying configs and logs synced to the cloud.
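The "pre-defined D4RL-style tasks" idea boils down to a registry that maps a task id string to a dataset spec. The names and fields below are illustrative assumptions, not Katakomba's actual API.

```python
# Minimal sketch of a D4RL-style task registry for offline-RL datasets.
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskSpec:
    character: str   # NetHack role, e.g. "valkyrie" (illustrative)
    quality: str     # dataset quality tier, e.g. "expert" (illustrative)
    url: str         # where the trajectories would be fetched from (hypothetical)

_REGISTRY: dict[str, TaskSpec] = {}

def register(name: str, spec: TaskSpec) -> None:
    if name in _REGISTRY:
        raise ValueError(f"task {name!r} already registered")
    _REGISTRY[name] = spec

def make(name: str) -> TaskSpec:
    """D4RL-style entry point: look up a pre-defined task by id."""
    return _REGISTRY[name]

register("nethack-valkyrie-expert-v0",
         TaskSpec("valkyrie", "expert", "https://example.org/data.hdf5"))
print(make("nethack-valkyrie-expert-v0").character)  # valkyrie
```

The point of such a registry is that baseline implementations and evaluation tools can all refer to datasets by one stable string id.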
CrypTen: Secure Multi-Party Computation Meets Machine Learning
Secure multi-party computation (MPC) allows parties to perform computations on data while keeping that data private. This capability has great potential for machine-learning applications: it facilitates training of machine-learning models on private data sets owned by different parties, evaluation of one party's private model using another party's private data, etc. Although a range of studies implement machine-learning models via secure MPC, such implementations are not yet mainstream. Adoption of secure MPC is hampered by the absence of flexible software frameworks that "speak the language" of machine-learning researchers and engineers. To foster adoption of secure MPC in machine learning, we present CrypTen: a software framework that exposes popular secure MPC primitives via abstractions that are common in modern machine-learning frameworks, such as tensor computations, automatic differentiation, and modular neural networks. This paper describes the design of CrypTen and measures its performance on state-of-the-art models for text classification, speech recognition, and image classification. Our benchmarks show that CrypTen's GPU support and high-performance communication between (an arbitrary number of) parties allow it to perform efficient private evaluation of modern machine-learning models under a semi-honest threat model. For example, two parties using CrypTen can securely predict phonemes in speech recordings using Wav2Letter faster than real-time. We hope that CrypTen will spur adoption of secure MPC in the machine-learning community.
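The core MPC primitive behind frameworks like CrypTen can be illustrated with additive secret sharing over a ring: each party holds a random-looking share, and sums of shares reveal sums of secrets while no single share reveals anything. This is a toy pedagogical sketch, not CrypTen's actual protocol or API.

```python
# Additive secret sharing: split a value into n shares that sum to it mod Q.
import secrets

Q = 2**64  # ring modulus (common choice; assumed here for illustration)

def share(x: int, n_parties: int = 2) -> list[int]:
    """Split x into n additive shares; any n-1 shares look uniformly random."""
    shares = [secrets.randbelow(Q) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % Q)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % Q

# Private addition: each party adds its shares locally, with no communication.
a, b = 42, 100
a_sh, b_sh = share(a), share(b)
sum_sh = [(x + y) % Q for x, y in zip(a_sh, b_sh)]
print(reconstruct(sum_sh))  # 142
```

Multiplication is where real protocols earn their keep (e.g. via Beaver triples); addition, as above, is essentially free, which is why tensor-level abstractions over these primitives can stay efficient.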
Rare polar bear adoption could save cub's life
Scientists in Churchill, Manitoba, Canada (aka the polar bear capital of the world) have confirmed that a wild female polar bear has adopted a cub that is not her own. The cubs were born into a well-studied 'celebration' of polar bears in Canada. This rare behavior was captured on cameras during the polar bear's annual migration along Western Hudson Bay. Researchers from Environment and Climate Change Canada and Polar Bears International spotted the mother bear (designated as bear X33991) during spring 2025, when she came out of her maternity den.
- Atlantic Ocean > North Atlantic Ocean > Hudson Bay (0.26)
- North America > Canada > Manitoba (0.25)
- North America > Canada > Alberta (0.15)
- (4 more...)
The Adoption Paradox for Veterinary Professionals in China: High Use of Artificial Intelligence Despite Low Familiarity
While the global integration of artificial intelligence (AI) into veterinary medicine is accelerating, its adoption dynamics in major markets such as China remain uncharacterized. This paper presents the first exploratory analysis of AI perception and adoption among veterinary professionals in China, based on a cross-sectional survey of 455 practitioners conducted in mid-2025. We identify a distinct "adoption paradox": although 71.0% of respondents have incorporated AI into their workflows, 44.6% of these active users report low familiarity with the technology. In contrast to the administrative-focused patterns observed in North America, adoption in China is practitioner-driven and centers on core clinical tasks, such as disease diagnosis (50.1%) and prescription calculation (44.8%). However, concerns regarding reliability and accuracy remain the primary barrier (54.3%), coexisting with a strong consensus (93.8%) for regulatory oversight. These findings suggest a unique "inside-out" integration model in China, characterized by high clinical utility but restricted by an "interpretability gap," underscoring the need for specialized tools and robust regulatory frameworks to safely harness AI's potential in this expanding market.
- North America > United States (0.04)
- North America > Canada (0.04)
- Asia > China > Jilin Province (0.04)
- (4 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Law (1.00)
- Health & Medicine > Diagnostic Medicine (1.00)
- Information Technology (0.94)
- (3 more...)
The State of AI: A vision of the world in 2030
Senior AI editor Will Douglas Heaven talks with Tim Bradshaw, FT global tech correspondent, about what our world will look like in the next five years. Welcome back to The State of AI, a new collaboration between the and . You can read the rest of the series here. (This is a subscriber-only event and you can sign up here.) Every time I'm asked what's coming next, I get a Luke Haines song stuck in my head: "Please don't ask me about the future / I am not a fortune teller." What will things be like in 2030?
- North America > United States > California > San Francisco County > San Francisco (0.05)
- Asia > China (0.05)
- North America > United States > New York (0.04)
- (2 more...)
An AI Implementation Science Study to Improve Trustworthy Data in a Large Healthcare System
Marteau, Benoit L., Hornback, Andrew, Tan, Shaun Q., Lowson, Christian, Woloff, Jason, Wang, May D.
The rapid growth of Artificial Intelligence (AI) in healthcare has sparked interest in Trustworthy AI and AI Implementation Science, both of which are essential for accelerating clinical adoption. However, strict regulations, gaps between research and clinical settings, and challenges in evaluating AI systems continue to hinder real-world implementation. This study presents an AI implementation case study within Shriners Children's (SC), a large multisite pediatric system, showcasing the modernization of SC's Research Data Warehouse (RDW) to OMOP CDM v5.4 within a secure Microsoft Fabric environment. We introduce a Python-based data quality assessment tool compatible with SC's infrastructure, extending OHDSI's R/Java-based Data Quality Dashboard (DQD) and integrating Trustworthy AI principles using the METRIC framework. This extension enhances data quality evaluation by addressing informative missingness, redundancy, timeliness, and distributional consistency. We also compare systematic and case-specific AI implementation strategies for Craniofacial Microsomia (CFM) using the FHIR standard. Our contributions include a real-world evaluation of AI implementations, integration of Trustworthy AI principles into data quality assessment, and insights into hybrid implementation strategies that blend systematic infrastructure with use-case-driven approaches to advance AI in healthcare.
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- North America > United States > Florida > Hillsborough County > Tampa (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Information Technology > Data Science > Data Quality (1.00)
- Information Technology > Artificial Intelligence > Applied AI (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.89)
- Information Technology > Artificial Intelligence > Machine Learning > Ensemble Learning (0.68)
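Two of the data quality dimensions the abstract names, missingness and redundancy, are simple to state as checks. This is a pure-Python illustration of the idea, not the paper's actual DQD extension or its METRIC-based metrics; the field names are hypothetical.

```python
# Toy data quality checks over a list of record dicts.
def missingness(rows: list[dict], column: str) -> float:
    """Fraction of rows where `column` is absent or None."""
    if not rows:
        return 0.0
    return sum(1 for r in rows if r.get(column) is None) / len(rows)

def redundancy(rows: list[dict]) -> int:
    """Count exact duplicate records (later copies of an earlier row)."""
    seen: set[tuple] = set()
    dupes = 0
    for r in rows:
        key = tuple(sorted(r.items()))
        dupes += key in seen
        seen.add(key)
    return dupes

records = [
    {"patient_id": 1, "dx": "CFM"},
    {"patient_id": 2, "dx": None},
    {"patient_id": 1, "dx": "CFM"},  # exact duplicate of the first row
]
print(missingness(records, "dx"))  # 0.333...
print(redundancy(records))         # 1
```

Real tools run batteries of such checks per table and column and roll the results into dashboard-level pass/fail summaries; "informative missingness" additionally asks whether the gaps correlate with outcomes rather than being random.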