foundation
Epstein's shadow: Why Bill Gates pulled out of Modi's AI summit
Microsoft co-founder Bill Gates has cancelled his keynote speech at India's flagship AI summit just hours before he was due to take the stage on Thursday. Gates, who has faced renewed scrutiny over his past ties to the late sex offender Jeffrey Epstein, withdrew to "ensure the focus remains on the AI Summit's key priorities", the Gates Foundation said in a statement. India's Prime Minister Narendra Modi had billed the summit as an opportunity for India to shape the future of AI, drawing high-profile attendees, including French President Emmanuel Macron and Brazilian President Luiz Inacio Lula da Silva. Instead, it has been dogged by controversy, from Gates's abrupt exit to an incident in which an Indian university tried to pass off a Chinese-made robotic dog as its own innovation. So, what exactly went wrong at India's flagship AI gathering and why has it drawn such intense scrutiny?
- North America > United States (1.00)
- South America > Brazil (0.89)
- Europe > France (0.55)
- (9 more...)
How to Organize Safely in the Age of Surveillance
From threat modeling to encrypted collaboration apps, we've collected experts' tips and tools for safely and effectively building a group--even while being targeted and tracked by the powerful. Rarely in modern US history have so many Americans opposed the actions of the federal government with so little hope for a top-down political solution. That's left millions of people seeking a bottom-up approach to resistance: grassroots organizing. Yet as Americans assemble their own movements to protect and support immigrants, push back against the Department of Homeland Security's dangerous incursions into cities, and protest for civil rights and policy changes, they face a federal government that possesses vast surveillance powers and sweeping cooperation from the Silicon Valley companies that hold Americans' data. That means political, social, and economic organizing presents a risky dilemma. How do you bring people of all ages, backgrounds, and technical abilities into a mass movement without exposing them to monitoring and targeting by a government--and in particular Immigration and Customs Enforcement and Customs and Border Protection, agencies with paramilitary ambitions, a tendency to break the law, and more funding than some countries' militaries? Organizing safely in an age of surveillance increasingly requires not only technical security know-how, but also a tricky balance between secrecy and openness, says Eva Galperin, the director of cybersecurity at the Electronic Frontier Foundation, a nonprofit focused on digital civil liberties.
- North America > United States > California (0.34)
- Europe > Switzerland (0.14)
- North America > United States > Arizona (0.04)
- (4 more...)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Communications > Mobile (0.68)
The era of agentic chaos and how data will save us
Autonomous agents will soon run thousands of enterprise workflows, and only organizations with unified, trusted, context-rich data will prevent chaos and unlock reliable value at scale. AI agents are moving beyond coding assistants and customer service chatbots into the operational core of the enterprise. The ROI is promising, but autonomy without alignment is a recipe for chaos. Business leaders need to lay the essential foundations now. Agents are independently handling end-to-end processes across lead generation, supply chain optimization, customer support, and financial reconciliation. A mid-sized organization could easily run 4,000 agents, each making decisions that affect revenue, compliance, and customer experience.
- North America > United States > Massachusetts (0.05)
- Asia > China (0.05)
Mathematicians spent 2025 exploring the edge of mathematics
In 2025, the edges of mathematics came a little more sharply into view when members of the online Busy Beaver Challenge community closed in on a huge number that threatens to defy the logical underpinnings of the subject. This number is the next in the "Busy Beaver" sequence, a series of ever-larger numbers that emerges from a seemingly simple question: how do we know if a computer program will run forever? To find out, researchers turn to the work of mathematician Alan Turing, who showed that any computer algorithm can be mimicked by imagining a simplified device called a Turing machine. More complex algorithms correspond to Turing machines with larger sets of instructions or, in mathematical parlance, more states. For example, BB(1) is 1 and BB(2) is 6, so moving from one state to two already increases the maximum runtime sixfold.
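The idea can be made concrete with a short simulation. The sketch below (plain Python, with an encoding of the transition table chosen for illustration) runs the known two-state, two-symbol busy beaver champion machine and confirms it halts after exactly 6 steps, matching BB(2) = 6:

```python
from collections import defaultdict

# Transition table of the 2-state, 2-symbol busy beaver champion:
# (state, symbol read) -> (symbol written, head move, next state)
RULES = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}

def run(rules, max_steps=10_000):
    """Simulate a Turing machine on an initially blank tape.

    Returns (steps taken, number of 1s left on the tape), or None if
    the machine is still running after max_steps -- the whole point of
    the Busy Beaver question is that no general test can decide this.
    """
    tape = defaultdict(int)   # unwritten cells read as 0
    head, state = 0, "A"
    for step in range(1, max_steps + 1):
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
        if state == "HALT":
            return step, sum(tape.values())
    return None

steps, ones = run(RULES)
print(steps, ones)  # the champion halts after 6 steps, leaving four 1s
```

Finding BB(n) means proving, for every n-state machine, either that it halts (and after how many steps) or that it never will; the Busy Beaver Challenge community does this machine by machine.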
Credal Learning Theory
Statistical learning theory is the foundation of machine learning, providing theoretical bounds for the risk of models learned from a (single) training set, assumed to issue from an unknown probability distribution. In actual deployment, however, the data distribution may (and often does) vary, causing domain adaptation/generalization issues. In this paper we lay the foundations for a `credal' theory of learning, using convex sets of probabilities (credal sets) to model the variability in the data-generating distribution. Such credal sets, we argue, may be inferred from a finite sample of training sets. Bounds are derived for the case of finite hypothesis spaces (both with and without the realizability assumption), as well as infinite model spaces, which directly generalize classical results.
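For reference, the classical result being generalized here is the realizable-case PAC bound for a finite hypothesis class $H$: drawing $m \ge \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)$ i.i.d. samples suffices for any consistent hypothesis to have true error at most $\epsilon$ with probability at least $1-\delta$. A minimal sketch of the computation (the function name is ours):

```python
import math

def pac_sample_bound(num_hypotheses: int, eps: float, delta: float) -> int:
    """Smallest integer m with m >= (ln|H| + ln(1/delta)) / eps.

    Classical realizable-case PAC bound for a finite hypothesis class:
    with m samples, any hypothesis consistent with the training data has
    true error at most eps, with probability at least 1 - delta.
    """
    return math.ceil((math.log(num_hypotheses) + math.log(1 / delta)) / eps)

print(pac_sample_bound(1000, eps=0.1, delta=0.05))
```

The credal setting replaces the single unknown distribution in this bound with a convex set of candidate distributions, which is why the paper's bounds reduce to the classical ones when the credal set shrinks to a point.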
Optimal approximation using complex-valued neural networks
Complex-valued neural networks (CVNNs) have recently shown promising empirical success, for instance for increasing the stability of recurrent neural networks and for improving the performance in tasks with complex-valued inputs, such as MRI fingerprinting. While the overwhelming success of Deep Learning in the real-valued case is supported by a growing mathematical foundation, such a foundation is still largely lacking in the complex-valued case. We thus analyze the expressivity of CVNNs by studying their approximation properties. Our results yield the first quantitative approximation bounds for CVNNs that apply to a wide class of activation functions including the popular modReLU and complex cardioid activation functions. Precisely, our results apply to any activation function that is smooth but not polyharmonic on some non-empty open set; this is the natural generalization of the class of smooth and non-polynomial activation functions to the complex setting. Our main result shows that the approximation error scales as $m^{-k/(2n)}$ for $m \to \infty$ where $m$ is the number of neurons, $k$ the smoothness of the target function and $n$ is the (complex) input dimension. Under a natural continuity assumption, we show that this rate is optimal; we further discuss the optimality when dropping this assumption. Moreover, we prove that the problem of approximating $C^k$-functions using continuous approximation methods unavoidably suffers from the curse of dimensionality.
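To make the activation functions concrete: modReLU thresholds the magnitude of a complex input while preserving its phase, $\mathrm{modReLU}(z) = \mathrm{ReLU}(|z| + b)\,\frac{z}{|z|}$ with a (typically negative) learnable bias $b$. A minimal NumPy sketch, our own illustration rather than any reference implementation:

```python
import numpy as np

def modrelu(z: np.ndarray, b: float) -> np.ndarray:
    """modReLU: shrink the magnitude of z by -b, keep the phase.

    Inputs with |z| + b <= 0 are zeroed out, so the nonlinearity acts
    only on the magnitude and is equivariant to phase rotations.
    """
    mag = np.abs(z)
    # Guard the division for z == 0; the scale is 0 there anyway when b <= 0.
    scale = np.maximum(mag + b, 0.0) / np.maximum(mag, np.finfo(float).tiny)
    return scale * z

z = np.array([3 + 4j, 0.5 + 0.0j])
print(modrelu(z, b=-1.0))  # magnitudes 5 -> 4 and 0.5 -> 0
```

Note that modReLU is smooth away from the circle $|z| = -b$ but not holomorphic, which is exactly the kind of activation the paper's "smooth but not polyharmonic on some non-empty open set" condition is built to cover.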
TranSimHub: A Unified Air-Ground Simulation Platform for Multi-Modal Perception and Decision-Making
Maonan Wang, Yirong Chen, Yuxin Cai, Aoyu Pang, Yuejiao Xie, Zian Ma, Chengcheng Xu, Kemou Jiang, Ding Wang, Laurent Roullet, Chung Shue Chen, Zhiyong Cui, Yuheng Kan, Michael Lepech, Man-On Pun
Air-ground collaborative intelligence is becoming a key approach for next-generation urban intelligent transportation management, where aerial and ground systems work together on perception, communication, and decision-making. However, the lack of a unified multi-modal simulation environment has limited progress in studying cross-domain perception, coordination under communication constraints, and joint decision optimization. To address this gap, we present TranSimHub, a unified simulation platform for air-ground collaborative intelligence. TranSimHub offers synchronized multi-view rendering across RGB, depth, and semantic segmentation modalities, ensuring consistent perception between aerial and ground viewpoints. It also supports information exchange between the two domains and includes a causal scene editor that enables controllable scenario creation and counterfactual analysis under diverse conditions such as different weather, emergency events, and dynamic obstacles. We release TranSimHub as an open-source platform that supports end-to-end research on perception, fusion, and control across realistic air and ground traffic scenes. Our code is available at https://github.com/Traffic-Alpha/TransSimHub.
- North America > United States > District of Columbia > Washington (0.04)
- Europe > Italy (0.04)
- Asia > China > Shanghai > Shanghai (0.04)
- (2 more...)
- Transportation > Infrastructure & Services (1.00)
- Transportation > Air (1.00)
- Transportation > Ground > Road (0.70)
A Unified Formal Theory on the Logical Limits of Symbol Grounding
This paper synthesizes a series of formal proofs to construct a unified theory on the logical limits of the Symbol Grounding Problem. We distinguish between internal meaning (sense), which formal systems can possess via axioms, and external grounding (reference), which is a necessary condition for connecting symbols to the world. We demonstrate through a four-stage argument that meaningful grounding within a formal system must arise from a process that is external, dynamic, and not reducible to a fixed algorithm. First, we show that for a purely symbolic system, the impossibility of grounding is a direct consequence of its definition. Second, we extend this limitation to systems with any finite, static set of pre-established meanings (Semantic Axioms). By formally modeling the computationalist hypothesis, which equates grounding with internal derivation, we prove via Gödelian arguments that such systems cannot consistently and completely define a "groundability predicate" for all truths. Third, we demonstrate that the "grounding act" for emergent meanings cannot be inferred from internal rules but requires an axiomatic, meta-level update. Drawing on Turing's concept of Oracle Machines and Piccinini's analysis of the mathematical objection, we identify this update as physical transduction. Finally, we prove that this process cannot be simulated by a fixed judgment algorithm, validating the logical necessity of embodied interaction.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
OpenAI, Anthropic, and Block Are Teaming Up to Make AI Agents Play Nice
American AI giants are backing a new effort to establish open standards for building agentic software and tools. OpenAI, Anthropic, and Block have cofounded a new open source organization--the Agentic AI Foundation--to promote standards for artificial intelligence agents. The three companies are also transferring ownership of some widely used agentic technologies over to the foundation, including Anthropic's Model Context Protocol (MCP), which allows agents to connect and interact, and OpenAI's Agents.md. These technologies were already free to use, but through the new foundation it will be possible for others to contribute to their development.
- Asia > China (0.06)
- North America > United States > California (0.05)
- Europe > Slovakia (0.05)
- Europe > Czechia (0.05)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.86)
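MCP is built on JSON-RPC 2.0, so the messages an agent exchanges with a tool server are plain JSON. The sketch below builds an illustrative `tools/call` request; the envelope shape follows the MCP specification, but the tool name and arguments are made up for illustration:

```python
import json

def mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool and arguments, purely for illustration.
msg = mcp_tool_call(1, "search_files", {"query": "quarterly report"})
print(msg)
```

Because the wire format is this simple, any agent that speaks JSON-RPC can call any conforming tool server, which is what makes MCP a candidate for a vendor-neutral standard under the new foundation.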
Beyond the Black Box: A Cognitive Architecture for Explainable and Aligned AI
Current AI paradigms, as "architects of experience," face fundamental challenges in explainability and value alignment. This paper introduces "Weight-Calculatism," a novel cognitive architecture grounded in first principles, and demonstrates its potential as a viable pathway toward Artificial General Intelligence (AGI). The architecture deconstructs cognition into indivisible Logical Atoms and two fundamental operations: Pointing and Comparison. Decision-making is formalized through an interpretable Weight-Calculation model (Weight = Benefit * Probability), where all values are traceable to an auditable set of Initial Weights. This atomic decomposition enables radical explainability, intrinsic generality for novel situations, and traceable value alignment. We detail its implementation via a graph-algorithm-based computational engine and a global workspace workflow, supported by a preliminary code implementation and scenario validation. Results indicate that the architecture achieves transparent, human-like reasoning and robust learning in unprecedented scenarios, establishing a practical and theoretical foundation for building trustworthy and aligned AGI.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > New Jersey > Bergen County > Mahwah (0.04)
- (3 more...)
- Health & Medicine (0.46)
- Transportation > Air (0.40)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.95)
- Information Technology > Artificial Intelligence > Cognitive Science > Problem Solving (0.68)
- Information Technology > Artificial Intelligence > Cognitive Science > Cognitive Architectures (0.61)
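The paper's decision rule, Weight = Benefit * Probability, can be sketched in a few lines. Everything below (names, values, data layout) is our own illustration of that rule, not the paper's implementation:

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    benefit: float      # in the paper's terms, traceable to Initial Weights
    probability: float  # estimated chance the benefit is realized

def weight(opt: Option) -> float:
    """The interpretable rule: Weight = Benefit * Probability."""
    return opt.benefit * opt.probability

def decide(options: list[Option]) -> Option:
    """Pick the option with the highest weight.

    The decision trace is just the per-option products, which is what
    the paper means by radical explainability: every choice reduces to
    auditable numbers.
    """
    return max(options, key=weight)

options = [Option("risky", 10.0, 0.2), Option("safe", 5.0, 0.9)]
print(decide(options).name)  # 'safe': 4.5 beats 2.0
```

An expected-value rule like this is of course standard decision theory; the paper's contribution is the claim that all of cognition, not just choice among fixed options, can be decomposed down to such auditable comparisons.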