Jeff Bridges Is Digging It
The interior of Jeff Bridges's garage, in Santa Barbara, California, has the ramshackle ease of an extravagant dorm room: a tiger-print rug, a potter's wheel, guitars, a rogue toothbrush, taped-up printouts of ideas he finds provocative or perhaps grounding ("Enlightenment is a communal experience"), and piles of books, from Richard Powers's "Bewilderment" to "Who Cares?!" A black-and-white portrait of Captain Beefheart, incongruously dressed in a jacket and tie, hangs on a wall near an electric piano. When I arrived, on a recent afternoon, I did not take note of a lava lamp, but its presence didn't feel out of the question. Bridges was wearing rubber slides and a periwinkle-blue cardigan. He excitedly spread out a large furry blanket on a recliner and invited me to sit down: "Your throne, man!" he said.
Earlier this month, Bridges released "Slow Magic, 1977-1978," a series of songs he recorded when he was in his late twenties, an emergent movie star, and involved in a regular Wednesday-night jam session with a coterie of musicians and oddballs from the west side of Los Angeles (the jams were organized by Steve Baim, who attended University High School with Bridges; they took place in various beach houses and, occasionally, at the Village, the recording studio where, around the same time, Fleetwood Mac was making "Tusk").
"Slow Magic" is great and also bonkers. On "Kong," Bridges recounts a story line he pitched for a potential "King Kong" sequel (in 1976, Bridges starred as the long-haired primatologist Jack Prescott in a "Kong" remake produced by Dino De Laurentiis); the track features animated narration from the actor Burgess Meredith, and its lyrics are centered on the revelation that Kong is actually a robot. "It's a sad story, but he was just a monkey machine!" Bridges wails in a tottering falsetto. On "Obnoxious," a weirdly tender song about feeling sad and having a stomachache ("I went to the bathroom / And threw up"), there are echoes of Frank Zappa and the Band.
What I like most about the record is how social it feels: friends in a room, being dumb, intermittently (even inadvertently) doing something miraculous. "When recording technology kept improving, I said, 'Oh, I don't need anybody!
Pro-life journalist assaulted on street assigns blame to Democratic rhetoric
'Live Action' journalist Savannah Craven Antao speaks out after being punched by an interviewee on 'The Will Cain Show.' Pro-life activist Savannah Craven Antao believes the Democratic Party's recent rhetoric about "punching" at their Republican opponents contributed to the attack that left her bloody during a recent interview. Antao, a young pro-life influencer who was punched in the face by a woman she was interviewing in New York City earlier this month, pointed to a recent line from Rep. Jasmine Crockett, D-Texas, about Democrats "punching" as inspiring the attack. "She said, 'I think that you punch,'" Antao told Fox News Digital. "'I think you're okay with punching.' So yeah, pretty much just describes the left at this point. They're totally fine with just using force like that to hurt people if they don't agree with them."
There's a way to get all your favorite AI tools for life
TL;DR: 1min.AI combines popular AI tools like GPT-4.0 and Midjourney, and lifetime access is only $79.97. AI tools like ChatGPT popped into existence, totally changed the professional world, and then immediately became very expensive. It's hard to get by without them now, but that doesn't mean you have to pay for each one individually. Instead of shelling out for OpenAI, Midjourney, and everything else, you can now get the same AI models all under one umbrella. This platform goes way beyond just text generation.
Small Language Models Are the New Rage, Researchers Say
The original version of this story appeared in Quanta Magazine. Large language models work well because they're so large. The latest models from OpenAI, Meta, and DeepSeek use hundreds of billions of "parameters"--the adjustable knobs that determine connections among data and get tweaked during the training process. With more parameters, the models are better able to identify patterns and connections, which in turn makes them more powerful and accurate. But this power comes at a cost.
Optimal sparse phase retrieval via a quasi-Bayesian approach
This paper addresses the problem of sparse phase retrieval, a fundamental inverse problem in applied mathematics, physics, and engineering, where a signal needs to be reconstructed using only the magnitude of its transformation while phase information remains inaccessible. Leveraging the inherent sparsity of many real-world signals, we introduce a novel sparse quasi-Bayesian approach and provide the first theoretical guarantees for such an approach. Specifically, we employ a scaled Student distribution as a continuous shrinkage prior to enforce sparsity and analyze the method using the PAC-Bayesian inequality framework. Our results establish that the proposed Bayesian estimator achieves minimax-optimal convergence rates under sub-exponential noise, matching those of state-of-the-art frequentist methods. To ensure computational feasibility, we develop an efficient Langevin Monte Carlo sampling algorithm. Through numerical experiments, we demonstrate that our method performs comparably to existing frequentist techniques, highlighting its potential as a principled alternative for sparse phase retrieval in noisy settings.
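For intuition, here is a minimal sketch of the kind of Langevin Monte Carlo sampler the abstract alludes to, applied to a toy phaseless-measurement problem. The quasi-posterior, the scaled Student-t prior parameters, the inverse temperature, and the step size are all illustrative choices and do not reflect the paper's actual specification.

```python
# Hypothetical sketch: unadjusted Langevin Monte Carlo for a quasi-Bayesian
# sparse phase retrieval target with a scaled Student-t shrinkage prior.
# All names (lam, nu, s, step, n_iter) are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def langevin_sparse_pr(A, y, lam=1.0, nu=3.0, s=0.05, step=1e-4, n_iter=5000):
    """Sample from exp(-lam * sum_i (y_i - (a_i^T x)^2)^2) times a Student prior."""
    n, d = A.shape
    x = rng.normal(scale=0.1, size=d)           # initial guess
    for _ in range(n_iter):
        u = A @ x                               # a_i^T x for all i
        # gradient of the (negative squared-error) quasi-likelihood
        grad_lik = 4.0 * lam * (A.T @ ((y - u**2) * u))
        # gradient of the log scaled Student-t prior (sparsity-inducing)
        grad_prior = -(nu + 1.0) * x / (nu * s**2 + x**2)
        x = x + step * (grad_lik + grad_prior) + np.sqrt(2.0 * step) * rng.normal(size=d)
    return x

# toy problem: 5-sparse signal in dimension 100, phaseless measurements
d, k, n = 100, 5, 400
x_true = np.zeros(d)
x_true[:k] = rng.normal(size=k)
A = rng.normal(size=(n, d))
y = (A @ x_true) ** 2 + 0.01 * rng.normal(size=n)
x_hat = langevin_sparse_pr(A, y)
# phase retrieval only recovers the signal up to a global sign
err = min(np.linalg.norm(x_hat - x_true), np.linalg.norm(x_hat + x_true))
print(f"recovery error up to sign: {err:.3f}")
```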
Constants of motion network revisited
Fang, Wenqi, Chen, Chao, Yang, Yongkui, Wang, Zheng
Discovering constants of motion is valuable for understanding dynamical systems, but it traditionally demands proficient mathematical skill and keen analytical insight. With the prevalence of deep learning, methods employing neural networks, such as the Constant Of Motion nETwork (COMET), are promising for handling this scientific problem. Although the COMET method can produce better predictions of dynamics by exploiting the discovered constants of motion, there is still plenty of room to sharpen it. In this paper, we propose a novel neural network architecture, built using the singular-value-decomposition (SVD) technique, and a two-phase training algorithm to improve the performance of COMET. Extensive experiments show that our approach not only retains the advantages of COMET, such as applying to non-Hamiltonian systems and indicating the number of constants of motion, but is also more lightweight and noise-robust than COMET.
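As a rough illustration of the underlying idea (not the SVD-based architecture or two-phase training proposed here), the sketch below fits a small network C(x) whose directional derivative along observed velocities is driven to zero on a harmonic-oscillator trajectory; the unit-gradient penalty is one ad hoc way to exclude the trivial constant solution.

```python
# Illustrative sketch of learning a constant of motion from trajectory data:
# fit a scalar network C(x) with grad C . xdot ~ 0 along the observed flow.
# Losses, architecture, and hyperparameters are invented for this toy example.
import torch
import torch.nn as nn

torch.manual_seed(0)

# toy data: a harmonic oscillator (q, p) with qdot = p, pdot = -q,
# whose energy C = (q^2 + p^2) / 2 is conserved
t = torch.linspace(0, 20, 2000)
states = torch.stack([torch.sin(t), torch.cos(t)], dim=1)       # (q, p)
velocities = torch.stack([torch.cos(t), -torch.sin(t)], dim=1)  # (qdot, pdot)

C = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(C.parameters(), lr=1e-3)

for epoch in range(2000):
    x = states.clone().requires_grad_(True)
    c = C(x).sum()
    grad_c = torch.autograd.grad(c, x, create_graph=True)[0]
    # conservation loss: dC/dt = grad C . xdot should vanish along trajectories
    conserve = (grad_c * velocities).sum(dim=1).pow(2).mean()
    # keep the gradient at unit norm to rule out the trivial C = const
    nontrivial = (grad_c.norm(dim=1) - 1.0).pow(2).mean()
    loss = conserve + nontrivial
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final loss:", loss.item())
```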
aweSOM: a CPU/GPU-accelerated Self-organizing Map and Statistically Combined Ensemble Framework for Machine-learning Clustering Analysis
Ha, Trung, Nättilä, Joonas, Davelaar, Jordy
We introduce aweSOM, an open-source Python package for machine learning (ML) clustering and classification, using a Self-Organizing Map (SOM) algorithm that incorporates CPU/GPU acceleration to accommodate large ($N > 10^6$, where $N$ is the number of data points), multidimensional datasets. aweSOM consists of two main modules: one handles the initialization and training of the SOM, and the other stacks the results of multiple SOM realizations to obtain more statistically robust clusters. Existing Python-based SOM implementations (e.g., POPSOM, Yuan (2018); MiniSom, Vettigli (2018); sklearn-som) primarily serve as proof-of-concept demonstrations, optimized for smaller datasets but lacking scalability for large, multidimensional data. aweSOM fills this gap in capability, with good performance scaling up to $\sim 10^8$ individual points and the ability to utilize multiple features per point. We compare the code performance against the legacy implementations it is based on and find a 10-100x speed-up, as well as significantly improved memory efficiency, due to several built-in optimizations.
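For readers unfamiliar with the method, the following is a bare-bones NumPy version of the classic SOM training loop that packages like aweSOM accelerate; the grid size, decay schedules, and function names are illustrative and are not aweSOM's API.

```python
# Minimal NumPy sketch of the classic self-organizing map update:
# find the best matching unit for a sample, then pull a Gaussian
# neighborhood of lattice nodes toward that sample.
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(10, 10), n_iter=5000, lr0=0.5, sigma0=3.0):
    n, d = data.shape
    rows, cols = grid
    weights = rng.random((rows, cols, d))
    # coordinates of each node on the 2-D lattice
    yy, xx = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    coords = np.stack([yy, xx], axis=-1).astype(float)
    for t in range(n_iter):
        frac = t / n_iter
        lr = lr0 * (1.0 - frac)              # linearly decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5  # shrinking neighborhood radius
        x = data[rng.integers(n)]
        # best matching unit: node whose weight vector is closest to x
        dist = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dist), dist.shape)
        # Gaussian neighborhood pull toward x, centered on the BMU
        lattice_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
        h = np.exp(-lattice_d2 / (2.0 * sigma**2))
        weights += lr * h[..., None] * (x - weights)
    return weights

# toy data: three Gaussian blobs in a 4-D feature space
data = np.vstack([rng.normal(loc=c, scale=0.3, size=(300, 4)) for c in (0.0, 2.0, 4.0)])
som = train_som(data)
print("trained SOM weight grid shape:", som.shape)
```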
Offline Dynamic Inventory and Pricing Strategy: Addressing Censored and Dependent Demand
In this paper, we study the offline sequential feature-based pricing and inventory control problem where the current demand depends on the past demand levels and any demand exceeding the available inventory is lost. Our goal is to leverage the offline dataset, consisting of past prices, ordering quantities, inventory levels, covariates, and censored sales levels, to estimate the optimal pricing and inventory control policy that maximizes long-term profit. While the underlying dynamics without censoring can be modeled by a Markov decision process (MDP), the primary obstacle arises from the observed process where demand censoring is present, resulting in missing profit information, the failure of the Markov property, and a non-stationary optimal policy. To overcome these challenges, we first approximate the optimal policy by solving a high-order MDP characterized by the number of consecutive censoring instances, which ultimately boils down to solving a specialized Bellman equation tailored for this problem. Inspired by offline reinforcement learning and survival analysis, we propose two novel data-driven algorithms to solve these Bellman equations and, thus, estimate the optimal policy. Furthermore, we establish finite sample regret bounds to validate the effectiveness of these algorithms. Finally, we conduct numerical experiments to demonstrate the efficacy of our algorithms in estimating the optimal policy. To the best of our knowledge, this is the first data-driven approach to learning optimal pricing and inventory control policies in a sequential decision-making environment characterized by censored and dependent demand. The implementations of the proposed algorithms are available at https://github.com/gundemkorel/Inventory_Pricing_Control
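To make the censoring issue concrete, the toy simulation below generates the kind of offline log the paper assumes: demand depends on the previous demand level, but only sales = min(demand, inventory) are recorded, so lost sales and the true demand state are never observed. The functional forms and parameter values are invented purely for illustration and are not the paper's model.

```python
# Illustrative data-generating process for censored, dependent demand.
# Every coefficient and distribution here is a made-up placeholder.
import numpy as np

rng = np.random.default_rng(0)

T = 1000
log = []            # offline dataset of (price, order, inventory, covariate, sales, censored)
prev_demand = 10.0
for t in range(T):
    covariate = rng.normal()
    price = rng.uniform(5.0, 15.0)          # logged behavior policy
    order = rng.integers(5, 30)
    inventory = float(order)                # assume no carry-over for simplicity
    # demand depends on price, a covariate, and the previous demand level
    demand = max(0.0, 0.6 * prev_demand - 0.8 * price + 2.0 * covariate
                 + rng.normal(scale=2.0) + 10.0)
    sales = min(demand, inventory)          # censoring: lost sales are unobserved
    censored = demand > inventory
    log.append((price, order, inventory, covariate, sales, censored))
    prev_demand = demand                    # the true demand state never enters the log

censor_rate = np.mean([rec[-1] for rec in log])
print(f"fraction of censored periods: {censor_rate:.2f}")
```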
Ordinary Least Squares as an Attention Mechanism
I show that ordinary least squares (OLS) predictions can be rewritten as the output of a restricted attention module, akin to those forming the backbone of large language models. This connection offers an alternative perspective on attention beyond the conventional information retrieval framework, making it more accessible to researchers and analysts with a background in traditional statistics. It falls into place when OLS is framed as a similarity-based method in a transformed regressor space, distinct from the standard view based on partial correlations. In fact, the OLS solution can be recast as the outcome of an alternative problem: minimizing squared prediction errors by optimizing the embedding space in which training and test vectors are compared via inner products. Rather than estimating coefficients directly, we equivalently learn optimal encoding and decoding operations for predictors. From this vantage point, OLS maps naturally onto the query-key-value structure of attention mechanisms. Building on this foundation, I discuss key elements of Transformer-style attention and draw connections to classic ideas from time series econometrics.
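The central identity is easy to verify numerically: the OLS prediction at a test point equals a score-weighted sum of the training targets, where the scores are inner products between the test regressor and the training regressors in a transformed space, which is exactly the query/key/value pattern of a linear attention head. In the sketch below, whitening by the inverse Gram matrix is one concrete way to realize that transformed space; the variable names are mine, not the paper's.

```python
# Numeric check: OLS prediction == (linear) attention over training targets.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + 0.1 * rng.normal(size=n)
x_test = rng.normal(size=p)

# standard OLS prediction
beta = np.linalg.solve(X.T @ X, X.T @ y)
pred_ols = x_test @ beta

# attention view: score each training example against the test "query",
# then take the score-weighted sum of the training targets (the "values")
G_inv = np.linalg.inv(X.T @ X)
scores = X @ (G_inv @ x_test)        # keys: rows of X; query: whitened x_test
pred_attention = scores @ y

print(np.isclose(pred_ols, pred_attention))   # True: identical predictions
```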
Conditional Independence Test Based on Transport Maps
He, Chenxuan, Gao, Yuan, Zhu, Liping, Huang, Jian
Testing conditional independence between two random vectors given a third is a fundamental and challenging problem in statistics, particularly in multivariate nonparametric settings due to the complexity of conditional structures. We propose a novel framework for testing conditional independence using transport maps. At the population level, we show that two well-defined transport maps can transform the conditional independence test into an unconditional independence test, which substantially simplifies the problem. These transport maps are estimated from data using conditional continuous normalizing flow models. Within this framework, we derive a test statistic and prove its consistency under both the null and alternative hypotheses. A permutation-based procedure is employed to evaluate the significance of the test. We validate the proposed method through extensive simulations and real-data analysis. Our numerical studies demonstrate the practical effectiveness of the proposed method for conditional independence testing.
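The overall recipe (transform away the dependence on Z, then run an unconditional permutation test) can be illustrated with a deliberately crude stand-in for the transport maps: linear residualization, which is only adequate in a linear-Gaussian setting. The paper instead estimates the maps with conditional continuous normalizing flow models; everything below is an illustrative toy, not the proposed test.

```python
# Toy version of "map away Z, then permutation-test the transformed pair".
# Linear residualization stands in for the flow-based transport maps.
import numpy as np

rng = np.random.default_rng(0)

def permutation_ci_test(X, Y, Z, n_perm=500):
    # crude stand-in transport maps: residuals of X and Y after regressing on Z
    Zb = np.column_stack([np.ones(len(Z)), Z])
    rx = X - Zb @ np.linalg.lstsq(Zb, X, rcond=None)[0]
    ry = Y - Zb @ np.linalg.lstsq(Zb, Y, rcond=None)[0]
    stat = abs(np.corrcoef(rx, ry)[0, 1])
    # permutation null: shuffling ry breaks any remaining dependence
    null = [abs(np.corrcoef(rx, rng.permutation(ry))[0, 1]) for _ in range(n_perm)]
    return (1 + sum(s >= stat for s in null)) / (1 + n_perm)

n = 500
Z = rng.normal(size=n)
X = Z + 0.5 * rng.normal(size=n)
Y_indep = Z + 0.5 * rng.normal(size=n)        # X independent of Y given Z
Y_dep = Z + X + 0.5 * rng.normal(size=n)      # X not independent of Y given Z
print("p-value (CI holds):   ", permutation_ci_test(X, Y_indep, Z))
print("p-value (CI violated):", permutation_ci_test(X, Y_dep, Z))
```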