trough
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > California (0.04)
Predicting Market Troughs: A Machine Learning Approach with Causal Interpretation
Rao, Peilin, Rojas, Randall R.
This paper provides robust new evidence on the causal drivers of market troughs. We demonstrate that conclusions about these triggers are critically sensitive to model specification, moving beyond restrictive linear models to a flexible double machine learning (DML) average partial effect framework. Our robust estimates identify the volatility of options-implied risk appetite and market liquidity as key causal drivers, relationships misrepresented or obscured by simpler models. These findings provide high-frequency empirical support for intermediary asset pricing theories. The causal analysis is enabled by a high-performance nowcasting model that accurately identifies capitulation events in real time.
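The partialling-out logic behind DML average partial effect estimates can be sketched on synthetic data. Everything below is an illustrative assumption: the data-generating process and parameter values are invented for the demo, the bin-mean nuisance estimators are a crude stand-in for the flexible machine learners such a paper would use, and cross-fitting is omitted for brevity.

```python
import random

random.seed(0)
n = 4000
x = [random.random() for _ in range(n)]            # confounder
d = [xi + random.gauss(0, 0.1) for xi in x]        # treatment depends on x
theta_true = 2.0
y = [theta_true * di + 3.0 * xi + random.gauss(0, 0.1)
     for di, xi in zip(d, x)]                      # outcome

def bin_mean_predict(x, target, bins=20):
    """Crude nuisance estimator: mean of target within equal-width x bins."""
    sums = [0.0] * bins
    counts = [0] * bins
    for xi, ti in zip(x, target):
        b = min(int(xi * bins), bins - 1)
        sums[b] += ti
        counts[b] += 1
    means = [s / c if c else 0.0 for s, c in zip(sums, counts)]
    return [means[min(int(xi * bins), bins - 1)] for xi in x]

# Residualize outcome and treatment on the confounder, then regress
# residual on residual; a naive regression of y on d would be biased
# because x drives both.
ry = [yi - mi for yi, mi in zip(y, bin_mean_predict(x, y))]
rd = [di - mi for di, mi in zip(d, bin_mean_predict(x, d))]
theta_hat = sum(a * b for a, b in zip(ry, rd)) / sum(b * b for b in rd)
```

With the confounder partialled out, `theta_hat` recovers the true effect up to noise and the coarseness of the bin-mean nuisance fits.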
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- North America > United States > New York (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Banking & Finance > Trading (1.00)
- Banking & Finance > Economy (1.00)
Reliability, Embeddedness, and Agency: A Utility-Driven Mathematical Framework for Agent-Centric AI Adoption
We formalize three design axioms for sustained adoption of agent-centric AI systems executing multi-step tasks: (A1) Reliability > Novelty; (A2) Embed > Destination; (A3) Agency > Chat. We model adoption as a sum of a decaying novelty term and a growing utility term and derive the phase conditions for troughs/overshoots with full proofs. We introduce: (i) an identifiability/confounding analysis for $(\alpha,\beta,N_0,U_{\max})$ with delta-method gradients; (ii) a non-monotone comparator (logistic-with-transient-bump) evaluated on the same series to provide additional model comparison; (iii) ablations over hazard families $h(\cdot)$ mapping $\Delta V \to \beta$; (iv) a multi-series benchmark (varying trough depth, noise, AR structure) reporting coverage (type-I error, power); (v) calibration of friction proxies against time-motion/survey ground truth with standard errors; (vi) residual analyses (autocorrelation and heteroskedasticity) for each fitted curve; (vii) preregistered windowing choices for pre/post estimation; (viii) Fisher information & CRLB for $(\alpha,\beta)$ under common error models; (ix) microfoundations linking $\mathcal{T}$ to $(N_0,U_{\max})$; (x) explicit comparison to bi-logistic, double-exponential, and mixture models; and (xi) threshold sensitivity to $C_f$ heterogeneity. Figures and tables are reflowed for readability, and the bibliography restores and extends non-logistic/Bass adoption references (Gompertz, Richards, Fisher-Pry, Mansfield, Griliches, Geroski, Peres). All code and logs necessary to reproduce the synthetic analyses are embedded as LaTeX listings.
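The adoption model described above, a decaying novelty term plus a growing utility term, can be sketched numerically. The parameter values below are illustrative assumptions, not estimates from the paper; the phase condition noted in the comment follows from the sign of the derivative at t = 0.

```python
import math

def adoption(t, N0=1.0, U_max=2.0, alpha=1.5, beta=0.3):
    """A(t) = N0*exp(-alpha*t) + U_max*(1 - exp(-beta*t)):
    decaying novelty plus growing utility."""
    novelty = N0 * math.exp(-alpha * t)
    utility = U_max * (1.0 - math.exp(-beta * t))
    return novelty + utility

def find_trough(ts, **params):
    """Locate the interior minimum (the adoption 'trough') on a grid."""
    values = [adoption(t, **params) for t in ts]
    i = min(range(1, len(ts) - 1), key=lambda k: values[k])
    return ts[i], values[i]

ts = [k * 0.01 for k in range(1, 2000)]
t_star, a_star = find_trough(ts)
# An interior trough exists when novelty decay initially outpaces utility
# growth, i.e. N0*alpha > U_max*beta at t = 0 (sign of A'(0)); otherwise
# the curve rises monotonically toward U_max.
```

For these illustrative parameters the analytic trough sits at $t^* = \ln(N_0\alpha / U_{\max}\beta)/(\alpha-\beta)$, and the grid search above lands within one step of it.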
Delving into: the quantification of AI-generated content on the internet (synthetic data)
While it is increasingly evident that the internet is becoming saturated with content produced by generative AI large language models, accurately measuring the scale of this phenomenon has proven challenging. By analyzing the frequency of specific keywords commonly used by ChatGPT, this paper demonstrates that such linguistic markers can effectively be used to estimate the presence of generative AI content online. The findings suggest that at least 30% of text on active web pages originates from AI-generated sources, with the actual proportion likely approaching 40%. Given the implications of autophagous loops, this is a sobering realization.
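The keyword-frequency idea can be sketched as follows. The marker list is a hypothetical stand-in for the paper's actual lexicon (words anecdotally over-represented in ChatGPT output), and the token-rate computation is a deliberately minimal proxy for the paper's methodology, not a reproduction of it.

```python
import re

# Hypothetical marker lexicon for illustration only; the paper's real
# keyword list is not reproduced here.
MARKERS = {"delve", "tapestry", "intricate", "showcasing", "underscores"}

def marker_rate(text):
    """Fraction of tokens that are suspected AI-marker words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for tok in tokens if tok in MARKERS)
    return hits / len(tokens)

# Toy inputs: a plain sentence versus one stuffed with marker words.
human_like = "the cat sat on the mat and watched the rain"
ai_like = "we delve into the intricate tapestry that underscores modern life"
```

Calibrating such a rate against corpora of known provenance is what turns it into a population-level estimate; the sketch only shows the counting step.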
- North America > United States > California > San Francisco County > San Francisco (0.15)
- Oceania > Australia > New South Wales > Goulburn County > Albury (0.04)
- North America > United States > California > Los Angeles County > Beverly Hills (0.04)
- Europe > Ireland (0.04)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.55)
Statistical Model Criticism using Kernel Two Sample Tests
We propose an exploratory approach to statistical model criticism using maximum mean discrepancy (MMD) two sample tests. Typical approaches to model criticism require a practitioner to select a statistic by which to measure discrepancies between data and a statistical model. MMD two sample tests are instead constructed as an analytic maximisation over a large space of possible statistics and therefore automatically select the statistic that best reveals any discrepancy. We demonstrate on synthetic data that the selected statistic, called the witness function, can be used to identify where a statistical model most misrepresents the data it was trained on. We then apply the procedure to real data where the models being assessed are restricted Boltzmann machines, deep belief networks and Gaussian process regression, and demonstrate the ways in which these models fail to capture the properties of the data they are trained on.
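A minimal sketch of the MMD estimate and its witness function, assuming a fixed-bandwidth RBF kernel and one-dimensional toy samples. This is the biased V-statistic version for brevity, not the authors' implementation, and the sample values are invented for the demo.

```python
import math

def rbf(x, y, sigma=1.0):
    """Gaussian (RBF) kernel between two scalars."""
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))

def mmd2(xs, ys, sigma=1.0):
    """Biased (V-statistic) estimate of squared MMD between two samples."""
    kxx = sum(rbf(a, b, sigma) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(rbf(a, b, sigma) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(rbf(a, b, sigma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy

def witness(t, xs, ys, sigma=1.0):
    """Witness function: large |value| where the two samples differ most."""
    return (sum(rbf(t, a, sigma) for a in xs) / len(xs)
            - sum(rbf(t, b, sigma) for b in ys) / len(ys))

xs = [-2.0, -1.5, -1.0, -0.5, 0.0]   # stand-in for "data"
ys = [0.0, 0.5, 1.0, 1.5, 2.0]       # stand-in for "model" samples
```

The witness is positive where the data is over-represented relative to the model and negative where the model over-generates, which is what lets it localise where a fitted model misrepresents its training data.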
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > California (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.87)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.67)
A world suffused with AI probably wouldn't be good for us – or the planet
John Naughton
What to do when surrounded by people who are losing their minds about the Newest New Thing? Answer: reach for the Gartner Hype Cycle, an ingenious diagram that maps the progress of an emerging technology through five phases: the "technology trigger", which is followed by a rapid rise to the "peak of inflated expectations"; this is succeeded by a rapid decline into the "trough of disillusionment", after which begins a gentle climb up the "slope of enlightenment" – before eventually (often years or decades later) reaching the "plateau of productivity". Given the current hysteria about AI, I thought I'd check to see where it is on the chart. It shows that generative AI (the polite term for ChatGPT and co) has just reached the peak of inflated expectations. That squares with the fevered predictions of the tech industry (not to mention governments) that AI will be transformative and will soon be ubiquitous.
- North America > United States > New York (0.05)
- North America > Canada > Quebec > Montreal (0.05)
- Asia > Middle East > Israel (0.05)
- Asia > China > Beijing > Beijing (0.05)
Fortify the Shortest Stave in Attention: Enhancing Context Awareness of Large Language Models for Effective Tool Use
Chen, Yuhan, Lv, Ang, Lin, Ting-En, Chen, Changyu, Wu, Yuchuan, Huang, Fei, Li, Yongbin, Yan, Rui
Recent advancements in large language models (LLMs) have significantly expanded their functionality and skills as tool agents. In this paper, we argue that a waveform pattern in the model's attention allocation has an impact on the tool use performance, which degrades when the position of essential information hits the trough zone. To address this issue, we propose a novel inference method named Attention Buckets. This approach enables LLMs to handle context by conducting parallel processes, each featuring a unique RoPE angle base that shapes the attention waveform. Attention Buckets ensures that an attention trough of a particular process can be compensated with an attention peak of another run, reducing the risk of the LLM missing essential information residing within the attention trough. Our extensive experiments on the widely recognized tool use benchmark demonstrate the efficacy of our approach, where a 7B-parameter open-source model enhanced by Attention Buckets achieves SOTA performance on par with GPT-4.
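The core intuition, that different RoPE angle bases shift where attention peaks and troughs fall over relative distance, can be sketched with the rotary frequency formula. The head dimension, the two bases, and the cosine-sum proxy for an attention score are illustrative assumptions, not the paper's configuration.

```python
import math

def rope_inv_freqs(base, dim=64):
    """Rotary inverse frequencies theta_i = base^(-2i/dim)."""
    return [base ** (-2 * i / dim) for i in range(dim // 2)]

def score_vs_distance(base, distance, dim=64):
    """Toy attention-score proxy at a relative distance: the rotation-only
    part of a RoPE dot product, sum over pairs of cos(theta_i * d)."""
    return sum(math.cos(f * distance) for f in rope_inv_freqs(base, dim))

# Two parallel "buckets" with different bases: the oscillation over
# relative distance lands at different positions, so a trough under one
# base can coincide with a peak under another, which is the compensation
# effect the abstract describes.
wave_a = [score_vs_distance(10000.0, d) for d in range(512)]
wave_b = [score_vs_distance(2000.0, d) for d in range(512)]
```

At distance zero both waveforms equal `dim // 2` (every cosine is 1); beyond that the two bases trace different peak/trough patterns over the context.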
What is the hype cycle for robotics?
We've all seen or heard of the Hype Cycle. It's a visual depiction of the lifecycle stages a technology goes through, from initial development to commercial maturity. It's a useful way to track which technologies are compatible with your organization's needs. There are five stages of the Hype Cycle, which take us through the initial excitement trigger, which leads to the peak of inflated expectations followed by disillusionment. It's only as a product moves into more tangible market use, sometimes called 'The Slope of Enlightenment', that we start to reach full commercial viability.
Tuning into brainwave rhythms speeds up learning in adults, study finds
Scientists have shown for the first time that briefly tuning into a person's individual brainwave cycle before they perform a learning task dramatically boosts the speed at which cognitive skills improve. Calibrating rates of information delivery to match the natural tempo of our brains increases our capacity to absorb and adapt to new information, according to the team behind the study. University of Cambridge researchers say that these techniques could help us retain "neuroplasticity" much later in life and advance lifelong learning. "Each brain has its own natural rhythm, generated by the oscillation of neurons working together," said Prof Zoe Kourtzi, senior author of the study from Cambridge's Department of Psychology. "We simulated these fluctuations so the brain is in tune with itself – and in the best state to flourish."
Pioneering Machine Learning Technique on the Hypothalamus Gives Insight Into Nature of Aggression - Neuroscience News
Summary: Using machine learning technology, researchers provide new insight into the neural mechanisms that govern anger and aggression. Have you ever been cut off while driving and found yourself swearing and laying on the horn? Or come home from a long day at work and lashed out at whoever left the dishes unwashed? From petty anger to the devastating violence we see in the news, acts of aggression can be difficult to comprehend. Research has yielded puzzling paradoxes about how rage works in the brain.