First billion-dollar business with one human employee will happen in 2026, says Anthropic CEO
AI can perform tasks such as writing, coding, reasoning, and researching with great accuracy -- all tasks that are key to starting your own company. That raises the question: Can AI help people start their very own billion-dollar business? Anthropic CEO Dario Amodei believes the answer is yes, and sooner than you may think. When asked at Code with Claude, Anthropic's first developer conference, when the first billion-dollar company with a single human employee would appear, Amodei confidently responded, "2026." At the same event, Anthropic unveiled its most powerful family of models yet -- Claude Opus 4 and Sonnet 4 -- which can code, reason, and support agentic capabilities better than ever before.
Curvature Regularization to Prevent Distortion in Graph Embedding
Hongbin Pei, Bingzhe Wei, Kevin Chen-Chuan Chang
Recent research on graph embedding has achieved success in various applications. Most graph embedding methods preserve the proximity structure of a graph in a manifold in an embedding space. We identify an important but neglected problem with this proximity-preserving strategy: graph topology patterns, while preserved well in an embedding manifold, may be distorted in the ambient Euclidean embedding space, and hence become difficult for machine learning models to detect. To address this problem, we propose curvature regularization, which enforces flatness of embedding manifolds and thereby prevents the distortion. We present a novel angle-based sectional curvature, termed ABS curvature, and accordingly three kinds of curvature regularization to induce flat embedding manifolds during graph embedding. We integrate curvature regularization into five popular proximity-preserving embedding methods, and empirical results in two applications show significant improvements on a wide range of open graph datasets.
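The ABS curvature itself is defined in the paper's full text; as a loose, hypothetical sketch of the general pattern only (not the authors' method), a flatness penalty can be added to any proximity-preserving loss. All names here (proximity_loss, angle_defect_penalty, lam, triples) are invented for illustration, and the angle-based term is a crude stand-in that pushes sampled neighbor angles toward pi:

```python
# Illustrative sketch only: a generic pattern for adding a flatness /
# curvature-style regularizer to a proximity-preserving embedding loss.
# This is NOT the paper's ABS curvature, just a toy angle-based proxy.
import torch

def proximity_loss(Z, edges):
    # First-order proximity: pull embeddings of connected nodes together.
    src, dst = edges
    return ((Z[src] - Z[dst]) ** 2).sum(dim=1).mean()

def angle_defect_penalty(Z, triples):
    # For sampled triples (i, j, k) with j, k neighbors of i, penalize
    # deviation of the angle at i from pi (a crude "flatness" proxy).
    i, j, k = triples
    u = torch.nn.functional.normalize(Z[j] - Z[i], dim=1)
    v = torch.nn.functional.normalize(Z[k] - Z[i], dim=1)
    cos = (u * v).sum(dim=1).clamp(-1 + 1e-6, 1 - 1e-6)
    return ((torch.acos(cos) - torch.pi) ** 2).mean()

# Z: (n_nodes, dim) trainable embedding; lam trades proximity vs flatness.
# loss = proximity_loss(Z, edges) + lam * angle_defect_penalty(Z, triples)
```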
List-Decodable Sparse Mean Estimation
In this paper, we study list-decodable sparse mean estimation, where the underlying distribution D is Gaussian with a k-sparse mean. Our main contribution is the first polynomial-time algorithm with sample complexity O(poly(k, log d)), i.e., poly-logarithmic in the dimension d. One of our core algorithmic ingredients is the use of low-degree sparse polynomials to filter outliers, which may find further applications.
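As a toy illustration of the filtering ingredient only (the paper's polynomial construction is more involved), one might score points with a degree-2 polynomial supported on a guessed sparse coordinate set and drop high scorers; sparse_filter, its support heuristic, and thresh are all invented here:

```python
# Toy illustration of low-degree sparse-polynomial filtering, NOT the
# paper's algorithm: score points on a guessed k-sparse support and
# discard points whose score is anomalously large.
import numpy as np

def sparse_filter(X, k, thresh=3.0):
    # X: (n, d) samples. Guess the support as the k coordinates with the
    # largest empirical second moment (a crude heuristic, invented here).
    support = np.argsort(-np.mean(X ** 2, axis=0))[:k]
    centered = X[:, support] - np.median(X[:, support], axis=0)
    scores = np.sum(centered ** 2, axis=1)   # degree-2 polynomial score
    keep = scores <= thresh * np.median(scores)
    return X[keep], support
```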
In the appendix, we present details of the selective search algorithm, the ImageNet linear evaluation results, and the broader impact of this work.
A.1 Implementation Details
There are mainly three parameters in the selective search approach: scale, σ, and min_size. The parameter scale controls the number and size of the produced segments: a higher scale means fewer but larger segments. The parameter σ is the diameter of the Gaussian kernel used for smoothing the image prior to segmentation. We use the default values of this approach: scale = 500, σ = 0.9, and min_size = 10 (a usage sketch follows the visualization note below).

A.2 Visualization
Figure 1 shows the proposals generated by the selective search approach, which cover the objects reasonably well.
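A minimal usage sketch with these parameter values, assuming the `selectivesearch` package on PyPI (whose selective_search function exposes exactly these three parameters); this may differ from the exact implementation the authors used:

```python
# Generate region proposals with the `selectivesearch` package
# (pip install selectivesearch), using the values from A.1.
import selectivesearch
import skimage.data

img = skimage.data.astronaut()  # example RGB image, shape (H, W, 3)

# Parameters from A.1: scale=500, sigma=0.9, min_size=10.
_, regions = selectivesearch.selective_search(
    img, scale=500, sigma=0.9, min_size=10)

# Each region carries a bounding box (x, y, w, h); deduplicate them.
boxes = {r['rect'] for r in regions}
print(len(boxes), "proposals")
```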
Generative Forests
We focus on generative AI for a type of data that still represents one of the most prevalent forms of data: tabular data. Our paper introduces two key contributions: a new, powerful class of forest-based models fit for such tasks, and a simple training algorithm with strong convergence guarantees in a boosting model that parallels that of the original weak / strong supervised learning setting. This algorithm can be implemented by a few tweaks to the most popular induction scheme for decision trees, i.e., greedy top-down induction.
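As a hypothetical toy sketch of tree-based tabular generation in general (not the paper's generative forests or its boosting algorithm), leaves can store rows and generation can resample from a mass-weighted leaf; Leaf and sample_row are invented names:

```python
# Toy sketch: leaves hold the rows routed to them; to generate a row,
# pick a leaf proportionally to its data mass, then resample columns.
import random

class Leaf:
    def __init__(self, rows, weight):
        self.rows, self.weight = rows, weight  # rows: list of tuples

def sample_row(leaves):
    leaf = random.choices(leaves, weights=[l.weight for l in leaves])[0]
    cols = list(zip(*leaf.rows))               # column-wise view of rows
    return tuple(random.choice(col) for col in cols)

leaves = [Leaf([(1, "a"), (2, "b")], weight=2), Leaf([(9, "z")], weight=1)]
print(sample_row(leaves))
```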
Geometry Processing with Neural Fields
Most existing geometry processing algorithms use meshes as the default shape representation. Manipulating meshes, however, requires one to maintain high quality in the surface discretization. For example, changing the topology of a mesh usually requires additional procedures such as remeshing. This paper instead proposes the use of neural fields for geometry processing. Neural fields can compactly store complicated shapes without spatial discretization. Moreover, neural fields are infinitely differentiable, which allows them to be optimized for objectives that involve higher-order derivatives.
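A minimal sketch of the higher-order-derivative point, assuming a PyTorch MLP with a smooth activation as the neural field; the architecture and the eikonal-style objective below are illustrative choices, not necessarily the paper's:

```python
# A smooth neural field whose higher-order derivatives are available
# through autograd (names and architecture invented for illustration).
import torch

field = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Softplus(beta=100),
    torch.nn.Linear(64, 64), torch.nn.Softplus(beta=100),
    torch.nn.Linear(64, 1))  # e.g., signed distance at a 3D point

x = torch.randn(128, 3, requires_grad=True)
sdf = field(x)
# First derivative: spatial gradient of the field at the query points.
grad = torch.autograd.grad(sdf.sum(), x, create_graph=True)[0]
# create_graph=True lets us backprop through the gradient itself, so an
# objective built from derivatives (here eikonal-style) is optimizable.
eikonal = ((grad.norm(dim=1) - 1) ** 2).mean()
eikonal.backward()
```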
Task-Free Continual Learning via Online Discrepancy Distance Learning
Learning from non-stationary data streams, also called Task-Free Continual Learning (TFCL), remains challenging due to the absence of explicit task information in most applications. Even though some algorithms have recently been proposed for TFCL, these methods lack theoretical guarantees. Moreover, there are no theoretical studies of forgetting during TFCL. This paper develops a new theoretical analysis framework that derives generalization bounds based on the discrepancy distance between the visited samples and the entire information made available for training the model. This analysis provides new insights into the forgetting behaviour in classification tasks. Inspired by this theoretical model, we propose a new approach enabled with a dynamic component expansion mechanism for a mixture model, namely Online Discrepancy Distance Learning (ODDL). ODDL estimates the discrepancy between the current memory and the already accumulated knowledge as an expansion signal, aiming to ensure a compact network architecture with optimal performance. We then propose a new sample selection approach that selectively stores samples in the memory buffer through the discrepancy-based measure, further improving performance. We perform several TFCL experiments with the proposed methodology, which demonstrate that it achieves state-of-the-art performance.
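As a crude, invented sketch of the expansion signal only: the paper's discrepancy distance is a hypothesis-class-based quantity, which the toy below replaces with a mean-shift proxy purely for illustration; all names are hypothetical:

```python
# Toy discrepancy-triggered expansion check (NOT the paper's measure).
import numpy as np

def discrepancy(a, b):
    # Crude proxy: distance between sample means of two buffers.
    return float(np.linalg.norm(np.mean(a, axis=0) - np.mean(b, axis=0)))

def maybe_expand(memory, component_data, threshold=1.0):
    # Expansion signal: if the current memory drifts far from the
    # knowledge the current component has absorbed, add a component.
    return discrepancy(memory, component_data) > threshold

memory = np.random.randn(64, 8)            # current memory buffer
component_data = np.random.randn(256, 8)   # already-accumulated samples
if maybe_expand(memory, component_data):
    print("expand mixture with a new component")
```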