townsend
You're storing your pans wrong! Expert reveals why you should NEVER stack pots on top of each other in the cupboard
They're heavy, bulky and awkward to put away - but you should never pile your pans on top of each other, an expert has warned. Kitchen storage is a challenge in most homes and stacking items may seem like the most obvious solution. But Chris Townsend, a home moving expert, said this is one of the most common and damaging kitchen mistakes people can make. He warned the weight and friction involved with putting pots and pans on top of each other can cause a surprising amount of damage over time. 'The inside of your pans takes the brunt of the damage when they're stacked,' Mr Townsend, from Three Movers, said.
- Asia > Middle East > Iran (0.26)
- North America > United States > Kentucky (0.24)
- Europe > Middle East > Malta > Port Region > Southern Harbour District > Valletta (0.24)
- (17 more...)
- Media > Television (1.00)
- Media > Music (1.00)
- Media > Film (1.00)
- (6 more...)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Communications > Mobile (0.69)
Russia says talks to end Ukraine war 'serious' but rules out concessions
Russia says the United States-brokered talks to end the war with Ukraine are "serious", but its officials caution that an agreement is a long way off and Moscow would offer no major concessions to Kyiv. Kremlin spokesman Dmitry Peskov said in televised comments on Wednesday that the negotiations were ongoing and "the process is serious."
- Asia > Russia (1.00)
- North America > United States (0.93)
- Europe > Russia > Central Federal District > Moscow Oblast > Moscow (0.29)
- (7 more...)
- Government > Regional Government > Europe Government > Russia Government (0.36)
- Government > Regional Government > Asia Government > Russia Government (0.36)
- Government > Regional Government > North America Government > United States Government (0.32)
- Government > Regional Government > Europe Government > Ukraine Government (0.31)
Operator learning without the adjoint
Boullé, Nicolas, Halikias, Diana, Otto, Samuel E., Townsend, Alex
There is a mystery at the heart of operator learning: how can one recover a non-self-adjoint operator from data without probing the adjoint? Current practical approaches suggest that one can accurately recover an operator while only using data generated by the forward action of the operator without access to the adjoint. However, naively, it seems essential to sample the action of the adjoint. In this paper, we partially explain this mystery by proving that without querying the adjoint, one can approximate a family of non-self-adjoint infinite-dimensional compact operators via projection onto a Fourier basis. We then apply the result to recovering Green's functions of elliptic partial differential operators and derive an adjoint-free sample complexity bound. While existing theory justifies low sample complexity in operator learning, ours is the first adjoint-free analysis that attempts to close the gap between theory and practice.
- North America > United States > Washington > King County > Seattle (0.14)
- North America > United States > New York > Tompkins County > Ithaca (0.04)
- North America > United States > California (0.04)
- (2 more...)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.93)
- Information Technology > Mathematics of Computing (0.87)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.67)
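The adjoint-free recovery described in the abstract above has a simple finite-dimensional analogue that may help build intuition. The sketch below is purely illustrative (it is not the paper's infinite-dimensional construction): it approximates an unknown non-symmetric matrix using only its forward action on Fourier basis vectors, never the adjoint.

```python
import numpy as np

# Hypothetical finite-dimensional analogue of adjoint-free operator
# recovery: reconstruct an unknown non-self-adjoint matrix from its
# forward action on Fourier basis vectors alone.
rng = np.random.default_rng(0)
n = 64
A = rng.standard_normal((n, n))      # the "unknown" non-self-adjoint operator

def forward(x):
    """Black-box access: forward action only, never the adjoint A.T."""
    return A @ x

# Columns of the unitary DFT matrix play the role of the Fourier basis.
F = np.fft.fft(np.eye(n), axis=0) / np.sqrt(n)

# Query the forward action on each basis vector, then change basis back.
AF = np.column_stack([forward(F[:, k]) for k in range(n)])
A_hat = AF @ F.conj().T              # F is unitary, so F^{-1} = F.conj().T

rel_err = np.linalg.norm(A - A_hat.real) / np.linalg.norm(A)
print(rel_err)                       # machine-precision small
```

In finite dimensions the reconstruction is exact once the basis is complete; the substance of the paper is that a truncated Fourier projection still controls the error for the compact infinite-dimensional operators it considers.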
DCR-Consistency: Divide-Conquer-Reasoning for Consistency Evaluation and Improvement of Large Language Models
Cui, Wendi, Zhang, Jiaxin, Li, Zhuohang, Damien, Lopez, Das, Kamalika, Malin, Bradley, Kumar, Sricharan
Evaluating the quality and variability of text generated by Large Language Models (LLMs) poses a significant, yet unresolved, research challenge. Traditional evaluation methods such as ROUGE and BERTScore, which measure token similarity, often fail to capture holistic semantic equivalence. This results in low correlation with human judgments and intuition, which is especially problematic in high-stakes applications like healthcare and finance, where reliability, safety, and robust decision-making are critical. This work proposes DCR, an automated framework for evaluating and improving the consistency of LLM-generated texts using a divide-conquer-reasoning approach. Unlike existing LLM-based evaluators that operate at the paragraph level, our method employs a divide-and-conquer evaluator (DCE) that breaks the paragraph-to-paragraph comparison between two generated responses down into individual sentence-to-paragraph comparisons, each evaluated against predefined criteria. To facilitate this approach, we introduce an automatic metric converter (AMC) that translates the output of DCE into an interpretable numeric score. Beyond consistency evaluation, we further present a reason-assisted improver (RAI) that leverages the analytical reasons and explanations identified by DCE to generate new responses aimed at reducing these inconsistencies. Through comprehensive and systematic empirical analysis, we show that our approach outperforms state-of-the-art methods by a large margin (e.g., +19.3% and +24.3% on the SummEval dataset) in evaluating the consistency of LLM generation across multiple benchmarks in semantic, factual, and summarization consistency tasks. Our approach also eliminates nearly 90% of output inconsistencies, showing promise for effective hallucination mitigation.
- North America > The Bahamas (0.14)
- Asia > Japan (0.14)
- Europe > United Kingdom > England (0.05)
- (6 more...)
- Leisure & Entertainment > Sports > Olympic Games (0.69)
- Government (0.68)
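The divide step and the metric conversion described in the DCR abstract can be sketched in a few lines. In the toy below a trivial word-overlap check stands in for the LLM judge, and every function name is illustrative, not from the paper: the real DCE prompts an LLM with predefined criteria.

```python
import re

def split_sentences(paragraph):
    """Naive divide step: break a paragraph into sentences."""
    return [s for s in re.split(r'(?<=[.!?])\s+', paragraph.strip()) if s]

def judge(sentence, reference):
    """Stand-in for the LLM judge: call a sentence 'consistent' if all
    of its words already appear somewhere in the reference paragraph."""
    words = set(re.findall(r'\w+', sentence.lower()))
    ref_words = set(re.findall(r'\w+', reference.lower()))
    return words <= ref_words

def consistency_score(candidate, reference):
    """Toy analogue of the metric converter (AMC): the fraction of
    candidate sentences judged consistent with the reference."""
    sentences = split_sentences(candidate)
    return sum(judge(s, reference) for s in sentences) / len(sentences)

reference = "the cat sat on the mat. it was asleep."
candidate = "the cat sat on the mat. the cat flew to mars."
print(consistency_score(candidate, reference))  # 0.5
```

The point of the sentence-to-paragraph decomposition is visible even in this toy: the first sentence passes, the invented second sentence fails, and the aggregate score localizes the inconsistency instead of averaging it away over the whole paragraph.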
Operator learning for hyperbolic partial differential equations
Wang, Christopher, Townsend, Alex
We construct the first rigorously justified probabilistic algorithm for recovering the solution operator of a hyperbolic partial differential equation (PDE) in two variables from input-output training pairs. The primary challenge of recovering the solution operator of hyperbolic PDEs is the presence of characteristics, along which the associated Green's function is discontinuous. Therefore, a central component of our algorithm is a rank detection scheme that identifies the approximate location of the characteristics. By combining the randomized singular value decomposition with an adaptive hierarchical partition of the domain, we construct an approximant to the solution operator using $O(\Psi_\epsilon^{-1}\epsilon^{-7}\log(\Xi_\epsilon^{-1}\epsilon^{-1}))$ input-output pairs with relative error $O(\Xi_\epsilon^{-1}\epsilon)$ in the operator norm as $\epsilon\to0$, with high probability. Here, $\Psi_\epsilon$ represents the existence of degenerate singular values of the solution operator, and $\Xi_\epsilon$ measures the quality of the training data. Our assumptions on the regularity of the coefficients of the hyperbolic PDE are relatively weak given that hyperbolic PDEs do not have the ``instantaneous smoothing effect'' of elliptic and parabolic PDEs, and our recovery rate improves as the regularity of the coefficients increases.
- North America > United States > New York > Tompkins County > Ithaca (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
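One building block the abstract above relies on is the randomized singular value decomposition. The sketch below is the standard Halko-Martinsson-Tropp range-finder applied to an explicit matrix, under the assumption that this is only an illustration: the paper's rank-detection scheme and adaptive hierarchical partitioning are omitted.

```python
import numpy as np

# Minimal randomized-SVD sketch (one ingredient of randomized
# numerical linear algebra; not the paper's exact algorithm).
rng = np.random.default_rng(1)

def randomized_svd(A, rank, oversample=10):
    """Approximate the top-`rank` SVD of A from a few random range samples."""
    n = A.shape[1]
    Omega = rng.standard_normal((n, rank + oversample))  # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)                       # basis for range(A)
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ U_small, s, Vt

# On an exactly rank-5 matrix the randomized factorization is near-exact.
A = rng.standard_normal((80, 5)) @ rng.standard_normal((5, 80))
U, s, Vt = randomized_svd(A, rank=5)
rel_err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

For the hyperbolic problem the interesting work is in choosing where to apply this: the Green's function is discontinuous along characteristics, so the matrix is only numerically low-rank on subdomains away from them, which is what the hierarchical partition provides.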
Random Edge Coding: One-Shot Bits-Back Coding of Large Labeled Graphs
Severo, Daniel, Townsend, James, Khisti, Ashish, Makhzani, Alireza
We present a one-shot method for compressing large labeled graphs called Random Edge Coding. When paired with a parameter-free model based on P\'olya's Urn, the worst-case computational and memory complexities scale quasi-linearly and linearly with the number of observed edges, making it efficient on sparse graphs and requiring only integer arithmetic. Key to our method is bits-back coding, which is used to sample edges and vertices without replacement from the edge-list in a way that preserves the structure of the graph. Optimality is proven under a class of random graph models that are invariant to permutations of the edges and of vertices within an edge. Experiments indicate Random Edge Coding can achieve competitive compression performance on real-world network datasets and scales to graphs with millions of nodes and edges.
- North America > Canada > Ontario > Toronto (0.14)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- Asia > Middle East > Jordan (0.04)
- (4 more...)
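The parameter-free model the abstract pairs with Random Edge Coding is based on Pólya's urn. As a rough aid to intuition, the toy below simulates only the urn's rich-get-richer dynamic (each draw is returned with an extra ball of its color); the bits-back entropy coder itself is not sketched here.

```python
import random

# Toy Polya urn: draws reinforce themselves, producing the heavy-tailed
# "rich get richer" behavior the urn model captures. Illustrative only.
random.seed(0)
urn = [0, 1]                   # one ball of each color to start
for _ in range(1000):
    ball = random.choice(urn)  # draw a ball uniformly at random...
    urn.append(ball)           # ...and return it plus one more of its color
counts = (urn.count(0), urn.count(1))
```

After many draws the color proportions settle to a random limit rather than 50/50, which is the exchangeability property that makes urn-based models a natural fit for permutation-invariant graph compression.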
Live facial recognition labelled 'Orwellian' as Met police push ahead with use
Live facial recognition cameras are a form of mass surveillance, human rights campaigners have said, as the Met police said it would press ahead with its use of the "gamechanging" technology. Britain's largest force said the technology could be used to catch terrorists and find missing people after research published on Wednesday reported a "substantial improvement" in its accuracy. The research, carried out by the National Physical Laboratory (NPL), found there were minimal discrepancies for race and sex when the technology was used at certain settings. It was commissioned by the Met and South Wales police in late 2021 after fierce public debate about police use of the technology. But the human rights groups Liberty, Big Brother Watch and Amnesty have said the technology is oppressive and "turns us into walking ID cards".
- Europe > United Kingdom > Wales (0.25)
- Europe > United Kingdom > Scotland (0.05)
- Europe > Russia (0.05)
- (2 more...)
Learning Green's functions associated with time-dependent partial differential equations
Boullé, Nicolas, Kim, Seick, Shi, Tianyi, Townsend, Alex
Neural operators are a popular technique in scientific machine learning to learn a mathematical model of the behavior of unknown physical systems from data. Neural operators are especially useful to learn solution operators associated with partial differential equations (PDEs) from pairs of forcing functions and solutions when numerical solvers are not available or the underlying physics is poorly understood. In this work, we attempt to provide theoretical foundations to understand the amount of training data needed to learn time-dependent PDEs. Given input-output pairs from a parabolic PDE in any spatial dimension $n\geq 1$, we derive the first theoretically rigorous scheme for learning the associated solution operator, which takes the form of a convolution with a Green's function $G$. Until now, rigorously learning Green's functions associated with time-dependent PDEs has been a major challenge in the field of scientific machine learning because $G$ may not be square-integrable when $n>1$, and time-dependent PDEs have transient dynamics. By combining the hierarchical low-rank structure of $G$ together with randomized numerical linear algebra, we construct an approximant to $G$ that achieves a relative error of $\smash{\mathcal{O}(\Gamma_\epsilon^{-1/2}\epsilon)}$ in the $L^1$-norm with high probability by using at most $\smash{\mathcal{O}(\epsilon^{-\frac{n+2}{2}}\log(1/\epsilon))}$ input-output training pairs, where $\Gamma_\epsilon$ is a measure of the quality of the training dataset for learning $G$, and $\epsilon>0$ is sufficiently small.
- North America > United States > New York > Tompkins County > Ithaca (0.04)
- North America > United States > Indiana (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Asia > South Korea > Seoul > Seoul (0.04)
Rational neural network advances partial differential equation learning
Math is the language of the physical world, and Alex Townsend sees mathematical patterns everywhere: in weather, in the way soundwaves move, and even in the spots or stripes zebrafish develop as embryos. "Since Newton wrote down calculus, we have been deriving calculus equations called differential equations to model physical phenomena," said Townsend, associate professor of mathematics in the College of Arts and Sciences. This way of deriving laws of calculus works, Townsend said, if you already know the physics of the system. But what about learning physical systems for which the physics remains unknown? In the new and growing field of partial differential equation (PDE) learning, mathematicians collect data from natural systems and then use trained neural networks to try to derive the underlying mathematical equations.