economics
UK agrees drone defence plan with four EU allies
Britain is to develop new air defence weapons alongside the EU's four biggest military powers, deepening ties with the European defence sector. The project will invite manufacturers in the UK, Germany, France, Italy and Poland to submit plans to build low-cost missiles and autonomous drones. The allies are pledging a speedy process to build the weapons together, inspired by Ukraine's development of cheap drones to counter attacks from Russia. The UK's Ministry of Defence (MoD) says the programme will prioritise a lightweight, affordable surface-to-air weapon, with the first project to be delivered by next year. The plan, announced at a meeting of the five countries' defence ministers in the Polish city of Krakow, marks a boost to UK-Europe ties after the failure of talks last year over UK participation in the EU's new €150bn (£130bn) defence fund.
- North America > United States (0.50)
- Asia > Russia (0.37)
- Europe > Ukraine (0.27)
- (18 more...)
- Leisure & Entertainment (1.00)
- Government > Military (1.00)
Training for Obsolescence? The AI-Driven Education Trap
Artificial intelligence is simultaneously transforming the production function of human capital in schools and the return to skills in the labor market. We develop a theoretical model to analyze the potential for misallocation when these two forces are considered in isolation. We study an educational planner who observes AI's immediate productivity benefits in teaching specific skills but fails to fully internalize the technology's future wage-suppressing effects on those same skills. Motivated by a pre-registered pilot study suggesting a positive correlation between a skill's "teachability" by AI and its vulnerability to automation, we show that this information friction leads to a systematic skill mismatch. The planner over-invests in skills destined for obsolescence, a distortion that increases monotonically with AI prevalence. Extensions demonstrate that this mismatch is exacerbated by the neglect of unpriced non-cognitive skills and by the endogenous over-adoption of educational technology. Our findings caution that policies promoting AI in education, if not paired with forward-looking labor market signals, may paradoxically undermine students' long-term human capital, such as by crowding out skills like persistence that are forged through intellectual struggle.
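The planner's misallocation claim lends itself to a small numerical sketch. The functional forms below are hypothetical (a productivity boost of (1 + a) and a wage-suppression factor (1 - b·a), with a the AI prevalence), chosen only to illustrate the mechanism: a myopic planner allocating on observed productivity over-weights the AI-teachable skill, and the gap versus an informed planner grows monotonically with a.

```python
# Illustrative sketch (not the paper's model): a planner splits a unit budget
# between an AI-teachable skill (1) and a hard-to-teach skill (2).
# Hypothetical assumptions: AI raises teaching productivity of skill 1 by
# (1 + a) but suppresses its future wage by (1 - b*a); the myopic planner
# sees only the productivity gain.

def myopic_share(a, alpha=0.5):
    """Budget share for the AI-teachable skill, chosen on productivity alone."""
    perceived_return_1 = (1 + a) ** alpha
    return_2 = 1.0
    return perceived_return_1 / (perceived_return_1 + return_2)

def informed_share(a, alpha=0.5, b=0.6):
    """Budget share when the wage-suppressing effect of AI is internalised."""
    true_return_1 = ((1 + a) * (1 - b * a)) ** alpha
    return_2 = 1.0
    return true_return_1 / (true_return_1 + return_2)

# The misallocation (myopic minus informed share) grows with AI prevalence a.
gaps = [myopic_share(a) - informed_share(a) for a in (0.0, 0.3, 0.6, 0.9)]
assert gaps[0] == 0.0                                   # no AI, no distortion
assert all(g2 > g1 for g1, g2 in zip(gaps, gaps[1:]))   # monotone in a
```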
- North America > United States > Tennessee > Davidson County > Nashville (0.04)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- Europe > Switzerland (0.04)
- (2 more...)
- Research Report > Experimental Study (1.00)
- Research Report > New Finding (0.66)
- Education > Educational Technology (0.88)
- Banking & Finance > Economy (0.88)
- Information Technology > Artificial Intelligence > Machine Learning (0.93)
- Information Technology > Artificial Intelligence > Natural Language (0.68)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.68)
- Information Technology > Artificial Intelligence > Cognitive Science (0.68)
When Should Neural Data Inform Welfare? A Critical Framework for Policy Uses of Neuroeconomics
Neuroeconomics promises to ground welfare analysis in neural and computational evidence about how people value outcomes, learn from experience and exercise self-control. At the same time, policy and commercial actors increasingly invoke neural data to justify paternalistic regulation, "brain-based" interventions and new welfare measures. This paper asks under what conditions neural data can legitimately inform welfare judgements for policy rather than merely describing behaviour. I develop a non-empirical, model-based framework that links three levels: neural signals, computational decision models and normative welfare criteria. Within an actor-critic reinforcement-learning model, I formalise the inference path from neural activity to latent values and prediction errors and then to welfare claims. I show that neural evidence constrains welfare judgements only when the neural-computational mapping is well validated, the decision model identifies "true" interests versus context-dependent mistakes, and the welfare criterion is explicitly specified and defended. Applying the framework to addiction, neuromarketing and environmental policy, I derive a Neuroeconomic Welfare Inference Checklist for regulators and for designers of NeuroAI systems. The analysis treats brains and artificial agents as value-learning systems while showing that internal reward signals, whether biological or artificial, are computational quantities and cannot be treated as welfare measures without an explicit normative model.
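The actor-critic inference path can be made concrete with a minimal tabular sketch (illustrative only, not the paper's formal model): the critic's reward prediction errors drive value learning, and the learned values V(s) are precisely the kind of internal computational quantity the paper argues cannot be read off as a welfare measure without an explicit normative model.

```python
# Minimal tabular critic (illustrative): state values V(s) are learned from
# reward prediction errors (RPEs), delta = r + gamma*V(s') - V(s).
import random

random.seed(0)
V = {0: 0.0, 1: 0.0}            # critic: latent state values
alpha, gamma = 0.1, 0.9

def td_error(s, r, s_next):
    """Reward prediction error: the computational signal, not a welfare measure."""
    return r + gamma * V[s_next] - V[s]

for _ in range(500):
    s = random.choice([0, 1])
    r = 1.0 if s == 1 else 0.0   # state 1 yields reward, state 0 does not
    s_next = random.choice([0, 1])
    V[s] += alpha * td_error(s, r, s_next)   # critic update driven by the RPE

# The critic ranks states by learned value; whether that ranking tracks the
# agent's "true" interests is exactly the normative question left open.
assert V[1] > V[0]
```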
- Law (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
d6d231705f96d5a35aeb3a76402e49a3-AuthorFeedback.pdf
- We will be sure to cite this work and explain the relationship with ours.
- We will replace the original version with these (Appendix F).
- It is true there can be oscillation under the conditions in Thm. 3, as they only guarantee ... We discuss stability in Appendix F (lines 662-677), and will clarify this in the main body.
- "Harm/benefit of fairness if natural equality is not broken": this is examined in Thm. 4 (lines 218-219); equality is ...
- As long as an appropriate "state" (sufficient statistics) can be identified, the Markov ... More discussion is in Appendix F (lines 662-677).
- North America > United States (1.00)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Government > Regional Government > North America Government > United States Government (0.93)
- Health & Medicine > Therapeutic Area (0.73)
- Banking & Finance > Economy (0.68)
DMind Benchmark: Toward a Holistic Assessment of LLM Capabilities across the Web3 Domain
Huang, Enhao, Sun, Pengyu, Lin, Zixin, Chen, Alex, Ouyang, Joey, Wang, Haobo, Hu, Kaichun, Yi, James, Li, Frank, Zhang, Zhiyu, Xu, Tianxiang, Zhao, Gang, Ling, Ziang, Yang, Lowes
Large Language Models (LLMs) have achieved impressive performance in diverse natural language processing tasks, but specialized domains such as Web3 present new challenges and require more tailored evaluation. Despite the significant user base and capital flows in Web3, encompassing smart contracts, decentralized finance (DeFi), non-fungible tokens (NFTs), decentralized autonomous organizations (DAOs), on-chain governance, and novel token economics, no comprehensive benchmark has systematically assessed LLM performance in this domain. To address this gap, we introduce the DMind Benchmark, a holistic Web3-oriented evaluation suite covering nine critical subfields: fundamental blockchain concepts, blockchain infrastructure, smart contracts, DeFi mechanisms, DAOs, NFTs, token economics, meme concepts, and security vulnerabilities. Beyond multiple-choice questions, DMind Benchmark features domain-specific tasks such as contract debugging and on-chain numeric reasoning, mirroring real-world scenarios. We evaluated 26 models, including ChatGPT, Claude, DeepSeek, Gemini, Grok, and Qwen, uncovering notable performance gaps in specialized areas like token economics and security-critical contract analysis. While some models excel in blockchain infrastructure tasks, advanced subfields remain challenging. Our benchmark dataset and evaluation pipeline are open-sourced at https://huggingface.co/datasets/DMindAI/DMind_Benchmark; the dataset reached number one on Hugging Face's trending dataset charts within a week of release.
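The scoring loop for the multiple-choice portion of such a benchmark can be sketched as follows. The field names and toy items below are hypothetical, not the DMind dataset's actual schema, and `predict` stands in for an actual LLM call.

```python
# Hypothetical scoring sketch for a multiple-choice Web3 benchmark.

def score_mcq(items, predict):
    """Fraction of items where the predicted letter matches the answer key."""
    correct = sum(
        1 for it in items
        if predict(it["question"], it["choices"]) == it["answer"]
    )
    return correct / len(items)

# Toy items standing in for Web3 questions (hypothetical schema).
items = [
    {"question": "What secures a blockchain?",
     "choices": {"A": "Consensus", "B": "A password"}, "answer": "A"},
    {"question": "What is an NFT?",
     "choices": {"A": "A fungible coin", "B": "A unique token"}, "answer": "B"},
]

always_a = lambda question, choices: "A"   # trivial baseline "model"
assert score_mcq(items, always_a) == 0.5
```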
- Asia > China > Zhejiang Province > Hangzhou (0.04)
- Europe > Switzerland (0.04)
- Asia > China > Guangdong Province > Shenzhen (0.04)
- Information Technology > Security & Privacy (1.00)
- Education (1.00)
- Banking & Finance > Trading (1.00)
- Information Technology > Services > e-Commerce Services (0.66)
From Individual Learning to Market Equilibrium: Correcting Structural and Parametric Biases in RL Simulations of Economic Models
The application of Reinforcement Learning (RL) to economic modeling reveals a fundamental conflict between the assumptions of equilibrium theory and the emergent behavior of learning agents. While canonical economic models assume atomistic agents act as "takers" of aggregate market conditions, a naive single-agent RL simulation incentivizes the agent to become a "manipulator" of its environment. This paper first demonstrates this discrepancy within a search-and-matching model with concave production, showing that a standard RL agent learns a non-equilibrium, monopsonistic policy. Additionally, we identify a parametric bias arising from the mismatch between economic discounting and RL's treatment of intertemporal costs. To address both issues, we propose a calibrated Mean-Field Reinforcement Learning framework that embeds a representative agent in a fixed macroeconomic field and adjusts the cost function to reflect economic opportunity costs. Our iterative algorithm converges to a self-consistent fixed point where the agent's policy aligns with the competitive equilibrium. This approach provides a tractable and theoretically sound methodology for modeling learning agents in economic systems within the broader domain of computational social science.
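The iterative fixed-point idea can be sketched in a few lines (all functional forms below are hypothetical, not the paper's calibration): hold the aggregate field fixed, let the representative agent best-respond as a price taker, update the field from the implied aggregate behaviour, and repeat until the two are self-consistent.

```python
# Toy mean-field fixed-point iteration (hypothetical functional forms).

def best_response(wage):
    """Agent's labour demand as a price taker, given the fixed field (wage).
    FOC of concave production f(n) = 2*sqrt(n): f'(n) = 1/sqrt(n) = wage."""
    return 1.0 / wage ** 2

def field_update(labour):
    """Wage implied by aggregate behaviour (toy upward-sloping inverse supply)."""
    return 0.8 + 0.2 * labour

wage = 2.0                       # arbitrary starting field
for _ in range(100):
    new_wage = field_update(best_response(wage))
    if abs(new_wage - wage) < 1e-10:
        break                    # self-consistent fixed point reached
    wage = new_wage

# At the fixed point, the policy the agent learns against the field
# reproduces the field itself, as in the competitive equilibrium.
assert abs(field_update(best_response(wage)) - wage) < 1e-8
```

With these toy forms the iteration contracts to the unique fixed point at wage = 1; in general, convergence depends on the field-update map being a contraction.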
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- North America > Puerto Rico > San Juan > San Juan (0.04)
- (2 more...)
Data for Inclusion: The Redistributive Power of Data Economics
While credit is often portrayed as the fuel of development, access to credit is unevenly distributed -- not merely as a function of income or collateral, but increasingly as a function of data visibility. In this context, the core hypothesis of this paper is that data, when governed ethically and reused efficiently, operates as a redistributive economic asset. The idea that being poor is more expensive is not new; it has been conceptualized as the "poverty premium" -- where low-income individuals pay higher effective prices for credit, insurance, and other services (Carrière-Swallow & Haksar, 2019). Yet what has changed is the infrastructure of decision-making: creditworthiness is increasingly determined by algorithmic systems whose inputs are not equitably distributed. Individuals with limited credit histories or fragmented digital footprints remain invisible, not due to financial incapacity, but due to informational exclusion. This asymmetry is not merely a market failure -- it is a structural inequality encoded in data regimes. We argue that positive credit data -- payment histories, utilization patterns, and account stability -- constitutes a nonrival input that, once generated, can be reused across institutions at near-zero marginal cost without diminishing its value (Jones & Tonetti, 2020; Acemoglu et al., 2023). However, the ability to extract value from such data remains highly uneven. In traditional credit markets, the absence of negative signals penalizes borrowers more than the presence of positive behavior benefits them.
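The asymmetry in the last sentence can be illustrated with a toy scoring rule (all numbers hypothetical): under negative-only reporting, a clean thin file earns nothing beyond the baseline, while reported positive history, being a nonrival input, lifts the score without costing the originating lender anything.

```python
# Toy credit score (hypothetical numbers): negatives are always reported,
# positives count only where positive-data sharing exists.

def score(neg_marks, pos_history_months=None):
    """Stylised score: penalise negatives; reward positives only if reported."""
    s = 600 - 80 * neg_marks
    if pos_history_months is not None:
        s += min(pos_history_months, 24) * 5   # capped reward for on-time history
    return s

thin_file = score(neg_marks=0)                            # invisible borrower
with_positive = score(neg_marks=0, pos_history_months=24) # same borrower, data shared
assert thin_file == 600 and with_positive == 720
```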
- South America > Uruguay (0.06)
- North America > United States > Tennessee > Davidson County > Nashville (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- (3 more...)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Data Science > Data Mining (0.46)
The Economics of AI Foundation Models: Openness, Competition, and Governance
Xu, Fasheng, Wang, Xiaoyu, Chen, Wei, Xie, Karen
The strategic choice of model "openness" has become a defining issue for the foundation model (FM) ecosystem. While this choice is intensely debated, its underlying economic drivers remain underexplored. We construct a two-period game-theoretic model to analyze how openness shapes competition in an AI value chain, featuring an incumbent developer, a downstream deployer, and an entrant developer. Openness exerts a dual effect: it amplifies knowledge spillovers to the entrant, but it also enhances the incumbent's advantage through a "data flywheel effect," whereby greater user engagement today further lowers the deployer's future fine-tuning cost. Our analysis reveals that the incumbent's optimal first-period openness is surprisingly non-monotonic in the strength of the data flywheel effect. When the data flywheel effect is either weak or very strong, the incumbent prefers a higher level of openness; however, for an intermediate range, it strategically restricts openness to impair the entrant's learning. This dynamic gives rise to an "openness trap," a critical policy paradox where transparency mandates can backfire by removing firms' strategic flexibility, reducing investment, and lowering welfare. We extend the model to show that other common interventions can be similarly ineffective. Vertical integration, for instance, only benefits the ecosystem when the data flywheel effect is strong enough to overcome the loss of a potentially more efficient competitor. Likewise, government subsidies intended to spur adoption can be captured entirely by the incumbent through strategic price and openness adjustments, leaving the rest of the value chain worse off. By modeling the developer's strategic response to competitive and regulatory pressures, we provide a robust framework for analyzing competition and designing effective policy in the complex and rapidly evolving FM ecosystem.
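The non-monotonic openness result can be reproduced qualitatively in a toy one-period version of the trade-off. All functional forms below are hypothetical, deliberately chosen so the entrant's spillover benefit peaks at intermediate flywheel strength phi; they are not the paper's model.

```python
# Toy openness choice (hypothetical payoff): openness w in [0, 1] raises
# current adoption and compounds via the data flywheel (strength phi), but
# also feeds spillovers to the entrant, worst at intermediate phi.

def incumbent_profit(w, phi):
    adoption_gain = 0.5 * w                       # openness raises engagement today
    flywheel_gain = 0.5 * phi * w                 # engagement compounds via the flywheel
    spillover_loss = 4.8 * phi * (1 - phi) * w    # entrant learning, peaks at phi = 0.5
    return 1.0 + adoption_gain + flywheel_gain - spillover_loss

def optimal_openness(phi, grid=101):
    ws = [i / (grid - 1) for i in range(grid)]
    return max(ws, key=lambda w: incumbent_profit(w, phi))

# Open when the flywheel is weak or very strong; restrict in between.
assert optimal_openness(0.05) == 1.0
assert optimal_openness(0.40) == 0.0
assert optimal_openness(0.90) == 1.0
```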
- Asia > Japan (0.14)
- North America > United States > Connecticut (0.04)
- Europe > France (0.04)
- (2 more...)
- Law (1.00)
- Information Technology (1.00)
- Government (1.00)
- Banking & Finance > Trading (0.67)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.46)