

Three things in AI to watch, according to a Nobel-winning economist

MIT Technology Review

Daron Acemoglu is more cautious than most about predictions of a jobs apocalypse. A few months before he was awarded the Nobel Prize in economics in 2024, Acemoglu published a paper that earned him few fans in Silicon Valley. Contrary to what Big Tech CEOs had been promising--an overhaul of all white-collar work--Acemoglu estimated that AI would give only a small boost to US productivity and would not obviate the need for human work. AI is decent at automating certain tasks, he wrote, but some jobs will be perfectly fine. Two years later, Acemoglu's measured take has not caught on. Chatter about an AI jobs apocalypse pops up everywhere from Senator Bernie Sanders's rallies to conversations I overhear in line at the grocery store.



UK agrees drone defence plan with four EU allies

BBC News

Britain is to develop new air defence weapons alongside the EU's four biggest military powers, deepening ties with the European defence sector. The project will invite manufacturers in the UK, Germany, France, Italy and Poland to submit plans to build low-cost missiles and autonomous drones. The allies are pledging a speedy process to build the weapons together, inspired by Ukraine's development of cheap drones to counter attacks from Russia. The UK's Ministry of Defence (MoD) says the programme will prioritise a lightweight, affordable surface-to-air weapon, with the first project to be delivered by next year. The plan, announced at a meeting of the five countries' defence ministers in the Polish city of Krakow, marks a boost to UK-Europe ties after the failure of talks last year over UK participation in the EU's new €150bn (£130bn) defence fund.




Training for Obsolescence? The AI-Driven Education Trap

Peterson, Andrew J.

arXiv.org Artificial Intelligence

Artificial intelligence is simultaneously transforming the production function of human capital in schools and the return to skills in the labor market. We develop a theoretical model to analyze the potential for misallocation when these two forces are considered in isolation. We study an educational planner who observes AI's immediate productivity benefits in teaching specific skills but fails to fully internalize the technology's future wage-suppressing effects on those same skills. Motivated by a pre-registered pilot study suggesting a positive correlation between a skill's "teachability" by AI and its vulnerability to automation, we show that this information friction leads to a systematic skill mismatch. The planner over-invests in skills destined for obsolescence, a distortion that increases monotonically with AI prevalence. Extensions demonstrate that this mismatch is exacerbated by the neglect of unpriced non-cognitive skills and by the endogenous over-adoption of educational technology. Our findings caution that policies promoting AI in education, if not paired with forward-looking labor market signals, may paradoxically undermine students' long-term human capital, such as by crowding out skills like persistence that are forged through intellectual struggle.
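The planner's distortion can be made concrete with a toy numerical sketch (our own construction, not the paper's formal model): AI raises the teaching productivity of an automation-exposed skill, and a planner who holds future wages fixed over-invests in it relative to a forward-looking planner.

```python
# Toy illustration (not the paper's model): a planner splits one unit of
# teaching effort between skill A (AI-teachable but automation-exposed)
# and skill B. Human capital has diminishing returns in effort.

def best_share(objective, grid=1000):
    """Grid-search the effort share in skill A that maximizes an objective."""
    shares = [i / grid for i in range(grid + 1)]
    return max(shares, key=objective)

ai = 0.8                  # AI prevalence (illustrative parameter)
teach_a = 1.0 + ai        # AI boosts skill A's teachability...
wage_a = 1.0 - 0.6 * ai   # ...but suppresses its future wage

# Myopic planner: sees the teaching boost, holds wages fixed at 1.
myopic = best_share(lambda s: teach_a * s**0.5 + (1 - s)**0.5)
# Forward-looking planner: prices in the wage suppression.
optimal = best_share(lambda s: wage_a * teach_a * s**0.5 + (1 - s)**0.5)

print(myopic, optimal)  # the myopic share exceeds the forward-looking one
```

The gap between the two shares widens as `ai` grows, mirroring the paper's claim that the mismatch increases monotonically with AI prevalence.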


When Should Neural Data Inform Welfare? A Critical Framework for Policy Uses of Neuroeconomics

Zhu, Yiven

arXiv.org Artificial Intelligence

Neuroeconomics promises to ground welfare analysis in neural and computational evidence about how people value outcomes, learn from experience and exercise self-control. At the same time, policy and commercial actors increasingly invoke neural data to justify paternalistic regulation, "brain-based" interventions and new welfare measures. This paper asks under what conditions neural data can legitimately inform welfare judgements for policy rather than merely describing behaviour. I develop a non-empirical, model-based framework that links three levels: neural signals, computational decision models and normative welfare criteria. Within an actor-critic reinforcement-learning model, I formalise the inference path from neural activity to latent values and prediction errors and then to welfare claims. I show that neural evidence constrains welfare judgements only when the neural-computational mapping is well validated, the decision model identifies "true" interests versus context-dependent mistakes, and the welfare criterion is explicitly specified and defended. Applying the framework to addiction, neuromarketing and environmental policy, I derive a Neuroeconomic Welfare Inference Checklist for regulators and for designers of NeuroAI systems. The analysis treats brains and artificial agents as value-learning systems while showing that internal reward signals, whether biological or artificial, are computational quantities and cannot be treated as welfare measures without an explicit normative model.
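The inference path the author formalises runs through the reward prediction error of an actor-critic learner. A minimal sketch (our own notation, not the paper's) shows why that signal is a computational quantity rather than a welfare measure:

```python
# Minimal actor-critic update (illustrative; names are ours, not the
# paper's). The prediction error delta drives value learning -- reading
# it directly as a welfare measure would require a separate, explicitly
# defended normative model.

def td_error(reward, value_s, value_next, gamma=0.95):
    """Reward prediction error: delta = r + gamma*V(s') - V(s)."""
    return reward + gamma * value_next - value_s

def critic_update(value_s, delta, lr=0.1):
    """Critic nudges V(s) toward the better-informed target."""
    return value_s + lr * delta

# A positive delta means "better than expected", not "welfare increased":
delta = td_error(reward=1.0, value_s=0.5, value_next=0.4)
# delta = 1.0 + 0.95*0.4 - 0.5 = 0.88
```

The same arithmetic holds for a biological dopamine signal or an artificial critic, which is the paper's point: the quantity is defined by the learning model, not by any welfare criterion.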


DMind Benchmark: Toward a Holistic Assessment of LLM Capabilities across the Web3 Domain

Huang, Enhao, Sun, Pengyu, Lin, Zixin, Chen, Alex, Ouyang, Joey, Wang, Haobo, Hu, Kaichun, Yi, James, Li, Frank, Zhang, Zhiyu, Xu, Tianxiang, Zhao, Gang, Ling, Ziang, Yang, Lowes

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have achieved impressive performance in diverse natural language processing tasks, but specialized domains such as Web3 present new challenges and require more tailored evaluation. Despite the significant user base and capital flows in Web3, encompassing smart contracts, decentralized finance (DeFi), non-fungible tokens (NFTs), decentralized autonomous organizations (DAOs), on-chain governance, and novel token-economics, no comprehensive benchmark has systematically assessed LLM performance in this domain. To address this gap, we introduce the DMind Benchmark, a holistic Web3-oriented evaluation suite covering nine critical subfields: fundamental blockchain concepts, blockchain infrastructure, smart contracts, DeFi mechanisms, DAOs, NFTs, token economics, meme concepts, and security vulnerabilities. Beyond multiple-choice questions, DMind Benchmark features domain-specific tasks such as contract debugging and on-chain numeric reasoning, mirroring real-world scenarios. We evaluated 26 models, including ChatGPT, Claude, DeepSeek, Gemini, Grok, and Qwen, uncovering notable performance gaps in specialized areas like token economics and security-critical contract analysis. While some models excel in blockchain infrastructure tasks, advanced subfields remain challenging. Our benchmark dataset and evaluation pipeline are open-sourced on https://huggingface.co/datasets/DMindAI/DMind_Benchmark, reaching number one in Hugging Face's trending dataset charts within a week of release.


Data for Inclusion: The Redistributive Power of Data Economics

Vallarino, Diego

arXiv.org Artificial Intelligence

While credit is often portrayed as the fuel of development, access to credit is unevenly distributed -- not merely as a function of income or collateral, but increasingly as a function of data visibility. In this context, the core hypothesis of this paper is that data, when governed ethically and reused efficiently, operates as a redistributive economic asset. The idea that being poor is more expensive is not new; it has been conceptualized as the "poverty premium" -- where low-income individuals pay higher effective prices for credit, insurance, and other services (Carrière-Swallow & Haksar, 2019). Yet what has changed is the infrastructure of decision-making: creditworthiness is increasingly determined by algorithmic systems whose inputs are not equitably distributed. Individuals with limited credit histories or fragmented digital footprints remain invisible, not due to financial incapacity, but due to informational exclusion. This asymmetry is not merely a market failure -- it is a structural inequality encoded in data regimes. We argue that positive credit data -- payment histories, utilization patterns, and account stability -- constitutes a nonrival input that, once generated, can be reused across institutions at near-zero marginal cost without diminishing its value (Jones & Tonetti, 2020; Acemoglu et al., 2023). However, the ability to extract value from such data remains highly uneven. In traditional credit markets, the absence of negative signals penalizes borrowers more than the presence of positive behavior benefits them.


From Individual Learning to Market Equilibrium: Correcting Structural and Parametric Biases in RL Simulations of Economic Models

Chen, Ruxin, Zhang, Zeqiang

arXiv.org Artificial Intelligence

The application of Reinforcement Learning (RL) to economic modeling reveals a fundamental conflict between the assumptions of equilibrium theory and the emergent behavior of learning agents. While canonical economic models assume atomistic agents act as "takers" of aggregate market conditions, a naive single-agent RL simulation incentivizes the agent to become a "manipulator" of its environment. This paper first demonstrates this discrepancy within a search-and-matching model with concave production, showing that a standard RL agent learns a non-equilibrium, monopsonistic policy. Additionally, we identify a parametric bias arising from the mismatch between economic discounting and RL's treatment of intertemporal costs. To address both issues, we propose a calibrated Mean-Field Reinforcement Learning framework that embeds a representative agent in a fixed macroeconomic field and adjusts the cost function to reflect economic opportunity costs. Our iterative algorithm converges to a self-consistent fixed point where the agent's policy aligns with the competitive equilibrium. This approach provides a tractable and theoretically sound methodology for modeling learning agents in economic systems within the broader domain of computational social science.
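The iterative scheme described above can be sketched as a damped fixed-point loop (a toy instantiation under our own assumed functional forms, not the authors' code):

```python
# Mean-field loop sketch: the representative agent best-responds to a
# FIXED aggregate field (acting as a "taker", not a "manipulator"), the
# field is then updated toward the implied aggregate behaviour, and the
# loop repeats until policy and field are self-consistent.

def best_response(field):
    """Agent's optimal action, taking the aggregate field as given (toy form)."""
    return 0.5 * (1.0 + field)

def solve_equilibrium(field=0.0, damping=0.5, tol=1e-10, max_iter=10_000):
    """Damped fixed-point iteration over the macroeconomic field."""
    for _ in range(max_iter):
        action = best_response(field)                          # agent step
        new_field = (1 - damping) * field + damping * action   # field update
        if abs(new_field - field) < tol:                       # self-consistency
            return new_field
        field = new_field
    raise RuntimeError("fixed-point iteration did not converge")
```

With this toy best response the self-consistent field is 1.0; in the paper's setting the inner step would be an RL policy update against the calibrated macroeconomic field rather than a closed-form best response.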