Approaching Human-Level Forecasting with Language Models Danny Halawi * Fred Zhang * Chen Yueh-Han
Forecasting future events is important for policy and decision making. In this work, we study whether language models (LMs) can forecast at the level of competitive human forecasters. Towards this goal, we develop a retrieval-augmented LM system designed to automatically search for relevant information, generate forecasts, and aggregate predictions. To facilitate our study, we collect a large dataset of questions from competitive forecasting platforms. On a test set published after the knowledge cut-offs of our LMs, we evaluate the end-to-end performance of our system against the aggregates of human forecasts. On average, the system nears the crowd aggregate of competitive forecasters, and in some settings surpasses it. Our work suggests that using LMs to forecast the future could provide accurate predictions at scale and help inform institutional decision making.
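The search-then-forecast-then-aggregate loop the abstract describes can be sketched in a few lines. This is an illustrative pipeline under our own assumptions, not the paper's system: `call_lm` is a hypothetical stand-in for a real LM API, and the median is one simple, outlier-robust aggregation rule (the paper's exact rule is not reproduced here).

```python
from statistics import median

def aggregate_forecasts(probs):
    """Aggregate several model forecasts into one probability.
    The median is a simple, outlier-robust choice; it is a stand-in
    for the paper's aggregation step, not a reproduction of it."""
    return median(probs)

def forecast(question, articles, n_samples=3):
    """Sketch of a retrieval-augmented forecasting loop:
    retrieve -> assemble context -> sample several LM forecasts -> aggregate."""
    def call_lm(prompt, seed):
        # Placeholder: a real system would query a language model here.
        # Returns dummy probabilities 0.4, 0.5, 0.6 for seeds 0, 1, 2.
        return 0.5 + 0.1 * (seed - 1)

    context = " ".join(articles)  # retrieval output, naively concatenated
    prompt = f"Question: {question}\nContext: {context}\nProbability:"
    samples = [call_lm(prompt, s) for s in range(n_samples)]
    return aggregate_forecasts(samples)

p = forecast("Will X happen by 2025?", ["article one", "article two"])
```

With the dummy model above, the three sampled forecasts are 0.4, 0.5, and 0.6, so the aggregated prediction is their median, 0.5.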
Online Linear Optimization with Many Hints
Ashok Cutkosky, Dept. of Electrical and Computer Engineering, Boston University
Department of Computer Science, University of Utah, Salt Lake City, UT
We study an online linear optimization (OLO) problem in which the learner is provided access to K "hint" vectors in each round prior to making a decision. In this setting, we devise an algorithm that obtains logarithmic regret whenever there exists a convex combination of the K hints that has positive correlation with the cost vectors. This significantly extends prior work that considered only the case K = 1. To accomplish this, we develop a way to combine many arbitrary OLO algorithms so as to obtain regret that is only a logarithmic factor worse than the minimum regret of the original algorithms in hindsight; this result is of independent interest.
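The combining idea can be illustrated with a toy one-dimensional sketch: treat each of the K hints as its own "base algorithm" that simply plays that hint, and run multiplicative weights over them so the combination tracks the best hint. This is a simplified illustration of the high-level idea, not the paper's actual algorithm or regret analysis.

```python
import math

def hedge_combine(hints, costs, eta=0.5):
    """Combine K base strategies (each plays its own scalar hint) with
    multiplicative weights on linear losses x * c. Returns cumulative
    cost of the combined play; lower (more negative) is better."""
    K = len(hints[0])
    w = [1.0] * K
    total_cost = 0.0
    for h_t, c_t in zip(hints, costs):
        Z = sum(w)
        p = [wi / Z for wi in w]
        # Play the weighted (convex) combination of the K hints.
        x_t = sum(p[i] * h_t[i] for i in range(K))
        total_cost += x_t * c_t
        # Reweight each hint by its own linear loss this round.
        for i in range(K):
            w[i] *= math.exp(-eta * h_t[i] * c_t)
    return total_cost

# Hint 0 is anti-correlated with the costs (good); hint 1 is correlated (bad).
total = hedge_combine(hints=[[-1, 1], [1, -1], [-1, 1], [1, -1]],
                      costs=[1, -1, 1, -1])
```

After a few rounds, almost all weight concentrates on the helpful hint, so the combined play accumulates negative (i.e., beneficial) cost.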
Israel kills municipal worker at water well in south Lebanon: Mayor
An Israeli drone strike that killed one person in a south Lebanon village targeted a municipal worker operating a water well, not a Hezbollah member as the Israeli military had claimed, according to Zein Ali Ghandour, the mayor of Nabatieh al-Fawqa. Ghandour said on Thursday that the victim, Mahmoud Hasan Atwi, was "martyred" while on his official duty of trying to provide water for the people of the town. "We condemn in the strongest terms this blatant aggression against civilians and civilian infrastructure as well as the Lebanese state and its institutions," the mayor said in a statement. Ghandour called on the international community to press the issue and put an end to Israeli violations. The Israeli military had claimed that it fired at a "Hezbollah operative" who it said was "rehabilitating a site" used by the group.
Splatter a Video: Video Gaussian Representation for Versatile Processing
Video representation is a long-standing problem that is crucial for various downstream tasks, such as tracking, depth prediction, segmentation, view synthesis, and editing. However, current methods either struggle to model complex motions due to the absence of 3D structure or rely on implicit 3D representations that are ill-suited for manipulation tasks. To address these challenges, we introduce a novel explicit 3D representation--video Gaussian representation--that embeds a video into 3D Gaussians. Our proposed representation models video appearance in a 3D canonical space using explicit Gaussians as proxies and associates each Gaussian with 3D motions for video motion. This approach offers a more intrinsic and explicit representation than layered atlases or volumetric pixel matrices. To obtain such a representation, we distill 2D priors, such as optical flow and depth, from foundation models to regularize learning in this ill-posed setting. Extensive applications demonstrate the versatility of our new video representation. It proves effective in numerous video processing tasks, including tracking, consistent video depth and feature refinement, motion and appearance editing, and stereoscopic video generation.
Momentum Aggregation for Private Non-convex ERM
We introduce new algorithms and convergence guarantees for privacy-preserving non-convex Empirical Risk Minimization (ERM) on smooth d-dimensional objectives. We develop an improved sensitivity analysis of stochastic gradient descent on smooth objectives that exploits the recurrence of examples in different epochs.
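The standard recipe underlying private ERM methods of this kind combines per-example gradient clipping (to bound sensitivity), Gaussian noise (for privacy), and a momentum buffer that aggregates the noisy gradients. The sketch below shows that generic recipe on a 1-D parameter; it illustrates the standard DP-SGD-with-momentum template, not the paper's specific momentum-aggregation algorithm or its sensitivity analysis.

```python
import random

def dp_sgd_momentum(grads, lr=0.1, clip=1.0, sigma=0.0, beta=0.9, seed=0):
    """Generic sketch of private SGD with momentum on a scalar parameter:
    clip each gradient, add Gaussian noise, fold into a momentum buffer,
    then take a step. Illustrative only."""
    rng = random.Random(seed)
    theta, m = 0.0, 0.0
    for g in grads:
        g = max(-clip, min(clip, g))   # clip to bound per-example sensitivity
        g += rng.gauss(0.0, sigma)     # Gaussian noise for privacy
        m = beta * m + (1 - beta) * g  # momentum aggregation of noisy grads
        theta -= lr * m
    return theta

# With sigma=0 the updates are deterministic and easy to check by hand.
theta = dp_sgd_momentum([1.0, 1.0], sigma=0.0)
```

With two unit gradients and no noise, the momentum buffer takes values 0.1 and 0.19, so the parameter ends at -0.029.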
Using Set Operations to Evaluate the Lexical and Semantic Robustness of Language Models
Set theory is foundational to mathematics and, when sets are finite, to reasoning about the world. An intelligent system should perform set operations consistently, regardless of superficial variations in the operands. Initially designed for semantically-oriented NLP tasks, large language models (LLMs) are now being evaluated on algorithmic tasks. Because sets are composed of arbitrary symbols (e.g.
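The consistency property the abstract describes can be made concrete: a model's answer to a set operation should be invariant under a renaming of the symbols in the operands. The harness below is a hypothetical probe of our own construction, not the paper's benchmark; `model` stands in for any system that maps two sets to their union.

```python
def consistent_under_renaming(model, a, b, rename):
    """Check that a model's union output is invariant under a symbol
    renaming of the operands: renaming after the operation should give
    the same result as renaming before it."""
    out_after = {rename[x] for x in model(a, b)}                    # op, then rename
    out_before = model({rename[x] for x in a},
                       {rename[x] for x in b})                      # rename, then op
    return out_after == out_before

# An ideal "model" that computes the true union passes trivially.
ground_truth = lambda a, b: a | b
ok = consistent_under_renaming(ground_truth,
                               {"a", "b"}, {"b", "c"},
                               {"a": "x", "b": "y", "c": "z"})
```

A real evaluation would replace `ground_truth` with an LLM call and report how often the equality holds across many renamings.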
R1, R4: The results are very specific to the particular model: Indeed, it is the case that our theoretical results assume that data providers are constrained in l
Firstly, we thank the reviewers for their valuable comments. Whilst it is not reasonable in practice to assume that data is sampled i.i.d., as previously stated, we believe our work forms a first step in achieving this goal. Note that SPGs are bilevel optimisation problems, which are, in general, NP-hard. R2: Why would the learner ever evaluate on both the manipulated and unmanipulated data in practice?: We believe that our theoretical model captures this dynamic.
Gaussian Graph Network: Learning Efficient and Generalizable Gaussian Representations from Multi-view Images
While conventional methods require per-scene optimization, more recently several feed-forward methods have been proposed to generate pixel-aligned Gaussian representations with a learnable network, which are generalizable to different scenes. However, these methods simply combine pixel-aligned Gaussians from multiple views as scene representations, leading to artifacts and extra memory cost without fully capturing the relations of Gaussians from different images. In this paper, we propose Gaussian Graph Network (GGN) to generate efficient and generalizable Gaussian representations. Specifically, we construct Gaussian Graphs to model the relations of Gaussian groups from different views. To support message passing at the Gaussian level, we reformulate the basic graph operations over Gaussian representations, enabling each Gaussian to benefit from its connected Gaussian groups with Gaussian feature fusion.
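The feature-fusion step can be pictured as ordinary message passing on a graph whose nodes are Gaussians. The toy sketch below fuses each node's scalar feature with the mean of its neighbors'; it is a generic GNN-style illustration under our own fusion rule, not the graph operations GGN actually defines.

```python
def gaussian_feature_fusion(features, edges):
    """Toy message passing at the Gaussian level: each Gaussian's
    feature is averaged with the mean feature of its neighbors.
    `edges` is a list of directed (src, dst) pairs; node i receives
    messages from j whenever (i, j) is present."""
    fused = []
    for i, f in enumerate(features):
        nbrs = [features[j] for (a, j) in edges if a == i]
        if nbrs:
            mean_nbr = sum(nbrs) / len(nbrs)
            fused.append(0.5 * (f + mean_nbr))  # simple symmetric fusion
        else:
            fused.append(f)                      # isolated node: unchanged
    return fused

# Two mutually connected Gaussians converge toward a shared feature.
fused = gaussian_feature_fusion([1.0, 3.0], [(0, 1), (1, 0)])
```

Here both nodes move to the midpoint 2.0 after one round of fusion, showing how connected Gaussian groups share information.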
SpeechAlign: Aligning Speech Generation to Human Preferences Dong Zhang
Speech language models have significantly advanced in generating realistic speech, with neural codec language models standing out. However, the integration of preference optimization to align speech outputs to human preferences is often neglected. This paper addresses this gap by first analyzing the distribution gap in codec language models, highlighting how it leads to discrepancies between the training and inference phases, which negatively affects performance. Then we explore leveraging preference optimization to bridge the distribution gap. We introduce SpeechAlign, an iterative self-improvement strategy that aligns speech language models to human preferences. SpeechAlign involves constructing a preference codec dataset contrasting golden codec tokens against synthetic tokens, followed by preference optimization to improve the codec language model. This cycle of improvement is carried out iteratively to steadily convert weak models into strong ones. Through both subjective and objective evaluations, we show that SpeechAlign can bridge the distribution gap and facilitate continuous self-improvement of the speech language model. Moreover, SpeechAlign exhibits robust generalization capabilities and works for smaller models.
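The dataset-construction step is the concrete part of the loop: pair each prompt's golden codec tokens (preferred) against the model's own synthetic tokens (dispreferred), then feed the pairs to a preference-optimization step. The sketch below shows only the pairing; the field names (`prompt`, `chosen`, `rejected`) are our own convention, not something the paper specifies.

```python
def build_preference_dataset(prompts, golden, synth):
    """Pair golden codec tokens against synthetic ones for each prompt,
    producing (chosen, rejected) preference pairs. Each iteration of the
    self-improvement cycle would rebuild this with fresh synthetic tokens."""
    return [{"prompt": p, "chosen": g, "rejected": s}
            for p, g, s in zip(prompts, golden, synth)]

# One prompt whose golden and synthetic token streams differ at one step.
ds = build_preference_dataset(["say hi"], [[1, 2, 3]], [[1, 2, 9]])
```

A preference-optimization objective (e.g., a DPO-style loss) would then push the model's likelihood toward `chosen` and away from `rejected`, and the cycle repeats with the improved model's outputs.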