Collaborating Authors: Kratsios


Trump launches 'Genesis Mission' to harness AI for scientific breakthroughs

Al Jazeera

Trump launches 'Genesis Mission' to harness AI for scientific breakthroughs United States President Donald Trump has unveiled a national initiative to mobilise artificial intelligence (AI) for accelerating scientific breakthroughs. Trump signed an executive order on Monday to establish "The Genesis Mission", the latest iteration of his administration's aggressive strategy for spurring AI development through deregulation, infrastructure investment and public-private collaboration. Under the initiative, US supercomputers and data resources will be integrated to create a "closed-loop AI experimentation platform", according to the order. The White House, which likened the initiative to the Apollo programme that put the first man on the moon, said priority areas of focus would include the "greatest scientific challenges of our time," such as nuclear fusion, semiconductors, critical materials and space exploration. Michael Kratsios, the White House's top science adviser, said the initiative took a "revolutionary approach" to scientific research.


One model to solve them all: 2BSDE families via neural operators

Furuya, Takashi, Kratsios, Anastasis, Possamaï, Dylan, Raonić, Bogdan

arXiv.org Artificial Intelligence

We introduce a mild generative variant of the classical neural operator model, which leverages Kolmogorov--Arnold networks to solve infinite families of second-order backward stochastic differential equations ($2$BSDEs) on regular bounded Euclidean domains with random terminal time. Our first main result shows that the solution operator associated with a broad range of $2$BSDE families is approximable by appropriate neural operator models. We then identify a structured subclass of (infinite) families of $2$BSDEs whose neural operator approximation requires only a polynomial number of parameters in the reciprocal approximation rate, as opposed to the exponential requirement in general worst-case neural operator guarantees.
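To make the Kolmogorov--Arnold ingredient concrete, the following minimal PyTorch sketch (hypothetical names and sizes; not the authors' architecture) implements a KAN-style layer in which every input-output edge carries a learnable univariate function, here a linear combination of fixed Gaussian radial basis functions, and stacks two such layers into a toy operator acting on functions sampled at 32 grid points:

import torch

class KANLayer(torch.nn.Module):
    # One Kolmogorov-Arnold-style layer: each input-output edge carries a
    # learnable univariate function, parameterized as a linear combination of
    # fixed Gaussian radial basis functions; outputs sum their incoming edges.
    def __init__(self, d_in, d_out, n_basis=8):
        super().__init__()
        self.register_buffer("centers", torch.linspace(-1.0, 1.0, n_basis))
        self.coef = torch.nn.Parameter(0.1 * torch.randn(d_in, d_out, n_basis))

    def forward(self, x):  # x: (batch, d_in), assumed roughly in [-1, 1]
        phi = torch.exp(-((x.unsqueeze(-1) - self.centers) ** 2) / 0.1)  # (batch, d_in, n_basis)
        return torch.einsum("bik,iok->bo", phi, self.coef)  # sum over edges and basis

# Hypothetical usage: map a terminal condition sampled at 32 grid points to a
# solution sampled at the same points, by stacking two such layers.
model = torch.nn.Sequential(KANLayer(32, 64), KANLayer(64, 32))
u = model(torch.rand(4, 32) * 2 - 1)  # (4, 32)

The operator view enters only through the discretization: the layer's inputs and outputs are function samples on a grid, not Euclidean feature vectors.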


Quantifying The Limits of AI Reasoning: Systematic Neural Network Representations of Algorithms

Kratsios, Anastasis, Zvigelsky, Dennis, Hart, Bradd

arXiv.org Artificial Intelligence

A main open question in contemporary AI research is quantifying the forms of reasoning neural networks can perform when perfectly trained. This paper answers it by interpreting reasoning tasks as circuit emulation, where the gates define the type of reasoning; e.g. Boolean gates for predicate logic, tropical circuits for dynamic programming, arithmetic and analytic gates for symbolic mathematical representation, and hybrids thereof for deeper reasoning; e.g. higher-order logic. We present a systematic meta-algorithm that converts essentially any circuit into a feedforward neural network (NN) with ReLU activations by iteratively replacing each gate with a canonical ReLU MLP emulator. We show that, on any digital computer, our construction emulates the circuit exactly--no approximation, no rounding, modular overflow included--demonstrating that no reasoning task lies beyond the reach of neural networks. The number of neurons in the resulting network (parametric complexity) scales with the circuit's complexity, and the network's computational graph (structure) mirrors that of the emulated circuit. This formalizes the folklore that NNs trade algorithmic run-time (circuit runtime) for space complexity (number of neurons). We derive a range of applications of our main result, from emulating shortest-path algorithms on graphs with cubic-size NNs, to simulating stopped Turing machines with roughly quadratically large NNs, and even the emulation of randomized Boolean circuits. Lastly, we demonstrate that our result is strictly more powerful than a classical universal approximation theorem: any universal function approximator can be encoded as a circuit and directly emulated by a NN.
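The gate-by-gate replacement is easy to see in miniature. On Boolean inputs in {0, 1}, AND(x, y) = ReLU(x + y - 1) holds exactly, NOT is affine, and compositions of exact emulators stay exact; the short Python check below (an illustration in the spirit of the abstract, not the paper's canonical emulators) builds an exact XOR this way:

# Exact gate emulation on Boolean inputs {0, 1}: AND(x, y) = ReLU(x + y - 1),
# NOT(x) = 1 - x (affine), and XOR composed from them, with no approximation
# and no rounding.
def relu(z):
    return max(z, 0.0)

def AND(x, y):
    return relu(x + y - 1.0)

def XOR(x, y):
    # x + y - 2*AND(x, y) equals XOR exactly when x, y are in {0, 1}
    return x + y - 2.0 * AND(x, y)

for x in (0.0, 1.0):
    for y in (0.0, 1.0):
        assert XOR(x, y) == float(x != y)  # exact on every Boolean input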


Details of Trump's highly anticipated AI plan revealed by White House ahead of major speech

FOX News

The Trump administration revealed details of its highly anticipated artificial intelligence plan of action ahead of President Donald Trump's major speech later on Wednesday, which is expected to also include the president signing at least one executive order related to the U.S.' artificial intelligence race. Administration leaders, including White House Office of Science and Technology Policy Director Michael Kratsios and AI and crypto czar David Sacks, held a background call with the media Wednesday morning and outlined a three-pillar plan of action for artificial intelligence focused on American workers, free speech and protecting U.S.-built technologies. "We want to center America's workers, and make sure they benefit from AI," Sacks said on the call while describing the three pillars. "The second is that we believe that AI systems should be free of ideological bias and not be designed to pursue socially engineered agendas," Sacks said. "And so we have a number of proposals there on how to make sure that AI remains truth-seeking and trustworthy. And then the third principle that cuts across the pillars is that we believe we have to prevent our advanced technologies from being misused or stolen by malicious actors. And we also have to monitor for emerging and unforeseen risks from AI." President Donald Trump is expected to deliver a major speech focused on artificial intelligence on July 23, 2025.


Trump signs new executive orders intended to make flying cars a reality, slash flight times

FOX News

An aviation company is turning heads with an electric vertical take-off and landing vehicle. President Donald Trump signed three new executive orders on Friday aimed at accelerating American drone innovation and supersonic air travel, while also restoring security to American airspace. The three orders will be critical to American safety and security, White House officials involved in the drafting of the orders indicated, particularly in light of major worldwide events coming to the United States in the next few years, such as the World Cup and the Olympics. In addition to bolstering safety and security, the new orders will also spur greater innovation in the aerospace and drone sectors, something White House officials said has been stifled in recent years as a result of burdensome regulations. "Flying cars are not just for the Jetsons," Michael Kratsios, a lead tech policy adviser at the White House, said.


US government announces it has achieved ability to 'manipulate space and time' with new technology

Daily Mail - Science & tech

The Trump Administration quietly revealed it has futuristic technologies that literally bend time during a speech on 'the golden age of American innovation.' The director of the White House Office of Science and Technology Policy, Michael Kratsios, declared that the US currently has the ability to 'manipulate time and space' and 'leave distance annihilated.' Kratsios made the bold statement on Monday during the Endless Frontiers Retreat, a scientific conference in Texas focused on promoting US technological innovations to maintain global competitiveness. The rest of the director's speech touched on American breakthroughs of the past and undoing Biden-era policies that the Trump Administration claims stifled innovation - adding that the regulatory process on new tech has been a burden since the 1970s. Kratsios actually referenced this again at the end of his speech, saying that Americans will soon have the choice to 'craft new technologies and give themselves to scientific discoveries that will bend time and space.'


White House: US will lead in AI, but China is catching up

FOX News

Kurt 'CyberGuy' Knutsson on President-elect Trump's plan to deregulate cryptocurrency and A.I. in his second administration. EXCLUSIVE: China's innovation in artificial intelligence is "accelerating," according to Michael Kratsios, director of the White House Office of Science and Technology Policy. He told Fox News Digital that the United States' "promote and protect" strategy will solidify its standing as the world's dominant power in AI. Kratsios, who served as chief technology officer during the first Trump administration, sat for an exclusive interview with Fox News Digital on Monday. FLASHBACK: US TECHNOLOGY CHIEF WARNS CHINA 'TWISTING' ARTIFICIAL INTELLIGENCE TO TARGET CRITICS, AS AMERICA JOINS GLOBAL PACT "The White House in the first Trump administration redefined national tech policy to focus on American leadership in emerging technologies, and those were technologies like artificial intelligence, quantum computing and 5G, [which] were big back then," Kratsios said.


Guiding Two-Layer Neural Network Lipschitzness via Gradient Descent Learning Rate Constraints

Sung, Kyle, Kratsios, Anastasis, Forman, Noah

arXiv.org Machine Learning

We demonstrate that applying an eventual decay to the learning rate (LR) in empirical risk minimization (ERM), where the mean-squared-error loss is minimized using standard gradient descent (GD) for training a two-layer neural network with Lipschitz activation functions, ensures that the resulting network exhibits a high degree of Lipschitz regularity, that is, a small Lipschitz constant. Moreover, we show that this decay does not hinder the convergence rate of the empirical risk, now measured with the Huber loss, toward a critical point of the non-convex empirical risk. From these findings, we derive generalization bounds for two-layer neural networks trained with GD and a decaying LR with a sub-linear dependence on their number of trainable parameters, suggesting that the statistical behaviour of these networks is independent of overparameterization. We validate our theoretical results with a series of toy numerical experiments, where, surprisingly, we observe that networks trained with constant step size GD exhibit similar learning and regularity properties to those trained with a decaying LR. This suggests that neural networks trained with standard GD may already be highly regular learners.
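As a toy rendering of the training scheme studied here (full-batch GD with an eventual LR decay; the data, width, and schedule below are arbitrary illustrative choices, not the paper's experiments), one can train a small two-layer network with a Lipschitz activation and then probe its empirical Lipschitz constant on a fine grid:

import torch

torch.manual_seed(0)
X = torch.rand(256, 1) * 2 - 1                        # toy 1-d regression data
y = torch.sin(3 * X) + 0.05 * torch.randn_like(X)

net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
steps, decay_start = 2000, 1000                       # decay kicks in only eventually
for t in range(steps):
    lr = 0.1 if t < decay_start else 0.1 / (1.0 + 0.01 * (t - decay_start))
    loss = torch.nn.functional.mse_loss(net(X), y)
    net.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in net.parameters():
            p -= lr * p.grad                          # plain full-batch gradient descent

# Crude empirical Lipschitz estimate: the largest slope over a fine grid.
g = torch.linspace(-1.0, 1.0, 2001).unsqueeze(1)
with torch.no_grad():
    out = net(g).squeeze()
print("empirical Lipschitz estimate:",
      ((out[1:] - out[:-1]).abs().max() / (g[1, 0] - g[0, 0])).item())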


Low-dimensional approximations of the conditional law of Volterra processes: a non-positive curvature approach

Arabpour, Reza, Armstrong, John, Galimberti, Luca, Kratsios, Anastasis, Livieri, Giulia

arXiv.org Artificial Intelligence

Predicting the conditional evolution of Volterra processes with stochastic volatility is a crucial challenge in mathematical finance. While deep neural network models offer promise in approximating the conditional law of such processes, their effectiveness is hindered by the curse of dimensionality caused by the infinite dimensionality and non-smooth nature of these problems. To address this, we propose a two-step solution. Firstly, we develop a stable dimension reduction technique, projecting the law of a reasonably broad class of Volterra processes onto a low-dimensional statistical manifold of non-positive sectional curvature. Next, we introduce a sequential deep learning model tailored to the manifold's geometry, which we show can approximate the projected conditional law of the Volterra process. Our model leverages an auxiliary hypernetwork to dynamically update its internal parameters, allowing it to encode non-stationary dynamics of the Volterra process, and it can be interpreted as a gating mechanism in a mixture-of-experts model where each expert is specialized at a specific point in time. Our hypernetwork further allows us to achieve approximation rates that would seemingly only be possible with very large networks.
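The hypernetwork mechanism can be sketched in a few lines: an auxiliary network maps the time index t to all the weights of a small primary network, so the primary network's parameters, and hence the "expert" being applied, change continuously with t. The PyTorch toy below uses hypothetical names and sizes and is not the authors' model:

import torch

class HyperModel(torch.nn.Module):
    # Toy hypernetwork: an auxiliary network maps the time index t to all the
    # weights of a small two-layer primary network, so the primary network's
    # parameters evolve with t (a continuum of time-specialized "experts").
    def __init__(self, d_in=4, d_hid=16, d_out=4):
        super().__init__()
        self.shapes = [(d_hid, d_in), (d_hid,), (d_out, d_hid), (d_out,)]
        n_params = sum(torch.Size(s).numel() for s in self.shapes)
        self.hyper = torch.nn.Sequential(
            torch.nn.Linear(1, 32), torch.nn.ReLU(), torch.nn.Linear(32, n_params))

    def forward(self, x, t):  # x: (batch, d_in), t: float time index
        flat = self.hyper(torch.tensor([[float(t)]])).squeeze(0)
        ws, i = [], 0
        for s in self.shapes:                         # unflatten generated weights
            n = torch.Size(s).numel()
            ws.append(flat[i:i + n].view(s))
            i += n
        W1, b1, W2, b2 = ws
        h = torch.relu(x @ W1.T + b1)                 # primary net, weights depend on t
        return h @ W2.T + b2

model = HyperModel()
y_early = model(torch.randn(8, 4), 0.1)               # different t, different weights
y_late = model(torch.randn(8, 4), 0.9)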


Digital Computers Break the Curse of Dimensionality: Adaptive Bounds via Finite Geometry

Kratsios, Anastasis, Neuman, A. Martina, Pammer, Gudmund

arXiv.org Artificial Intelligence

Many of the foundations of machine learning rely on the idealized premise that all input and output spaces are infinite, e.g.~$\mathbb{R}^d$. This core assumption is systematically violated in practice due to digital computing limitations from finite machine precision, rounding, and limited RAM. In short, digital computers operate on finite grids in $\mathbb{R}^d$. By exploiting these discrete structures, we show the curse of dimensionality in statistical learning is systematically broken when models are implemented on real computers. Consequently, we obtain new generalization bounds with dimension-free rates for kernel and deep ReLU MLP regressors, which are implemented on real-world machines. Our results are derived using a new non-asymptotic concentration of measure result between a probability measure over any finite metric space and its empirical version associated with $N$ i.i.d. samples when measured in the $1$-Wasserstein distance. Unlike standard concentration of measure results, the concentration rates in our bounds do not hold uniformly for all sample sizes $N$; instead, our rates can adapt to any given $N$. This yields significantly tighter bounds for realistic sample sizes while achieving the optimal worst-case rate of $\mathcal{O}(1/N^{1/2})$ for massive $N$. Our results are built on new techniques combining metric embedding theory with optimal transport.
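The central quantity, the $1$-Wasserstein distance between a measure on a finite grid and its $N$-sample empirical version, is easy to watch numerically. The SciPy snippet below (a numerical illustration, not the paper's proof technique) draws growing samples from a uniform measure on a finite one-dimensional grid and prints the shrinking distance, roughly of order $1/N^{1/2}$:

import numpy as np
from scipy.stats import wasserstein_distance

# On a finite 1-d grid (what a digital machine actually represents), watch the
# 1-Wasserstein distance between a fixed measure and its N-sample empirical
# version shrink as N grows.
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 256)                 # finite grid standing in for R
p = np.full(grid.size, 1.0 / grid.size)           # uniform reference measure on the grid
for N in (100, 1000, 10000, 100000):
    sample = rng.choice(grid, size=N, p=p)        # N i.i.d. draws from the grid measure
    print(N, wasserstein_distance(grid, sample, u_weights=p))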