On Giant's Shoulders: Effortless Weak to Strong by Dynamic Logits Fusion
Efficient fine-tuning of large language models for task-specific applications is imperative, yet the vast number of parameters in these models makes their training increasingly challenging. Despite numerous proposals for effective methods, a substantial memory overhead remains for gradient computations during updates. Can we fine-tune a series of task-specific small models and transfer their knowledge directly to a much larger model without additional training? In this paper, we explore weak-to-strong specialization using logit arithmetic, facilitating a direct answer to this question. Existing weak-to-strong methods often employ a static knowledge transfer ratio and a single small model for transferring complex knowledge, which leads to suboptimal performance.
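The logit-arithmetic question the abstract poses can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's Dynamic Logits Fusion method: it shifts the large model's next-token logits by the difference between a task-fine-tuned small model and its untuned base, scaled by a fixed transfer weight alpha, i.e. exactly the kind of static transfer ratio the abstract identifies as suboptimal. All names are illustrative.

```python
# Minimal sketch of weak-to-strong transfer via logit arithmetic (illustrative,
# not the paper's exact Dynamic Logits Fusion algorithm).
import numpy as np

def fused_logits(large_logits, small_tuned_logits, small_base_logits, alpha=1.0):
    """Shift the large model's next-token logits by the task-specific delta
    of a small fine-tuned model, scaled by a fixed transfer weight alpha."""
    task_delta = small_tuned_logits - small_base_logits  # task knowledge of the weak model
    return large_logits + alpha * task_delta

# Toy example over a 5-token vocabulary.
rng = np.random.default_rng(0)
large = rng.normal(size=5)   # logits of the large, untuned model
tuned = rng.normal(size=5)   # logits of the small, task-fine-tuned model
base = rng.normal(size=5)    # logits of the small, untuned base model
probs = np.exp(fused_logits(large, tuned, base))
probs /= probs.sum()
print(probs)
```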
Distributionally Robust Imitation Learning
We consider the imitation learning problem of learning a policy in a Markov Decision Process (MDP) setting where the reward function is not given, but demonstrations from experts are available. Although the goal of imitation learning is to learn a policy that produces behaviors nearly as good as the experts' for a desired task, assumptions of consistent optimality for demonstrated behaviors are often violated in practice. Finding a policy that is distributionally robust against noisy demonstrations based on an adversarial construction potentially solves this problem by avoiding optimistic generalizations of the demonstrated data.
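As a rough reference, the distributionally robust idea described in the abstract is typically written as a min-max problem over an ambiguity set around the empirical demonstration distribution; the notation below is a schematic sketch, not necessarily the paper's exact formulation:

\[
\min_{\pi \in \Pi} \; \max_{P \in \mathcal{B}_{\epsilon}(\hat{P}_{E})} \; \mathbb{E}_{(s,a) \sim P}\!\left[\ell(\pi; s, a)\right],
\]

where \(\hat{P}_{E}\) is the empirical state-action distribution of the (possibly noisy) expert demonstrations, \(\mathcal{B}_{\epsilon}(\hat{P}_{E})\) is an ambiguity set of distributions within radius \(\epsilon\) of it, and \(\ell\) is an imitation loss measuring how far the learned policy \(\pi\) deviates from the demonstrated behavior.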
Hard Negative Mixing for Contrastive Learning
The uniformity experiment is based on Wang and Isola [53]. We follow the same definitions of the losses/metrics as presented in that paper, setting α = 2 and t = 2. All features were L2-normalized, as the metrics are defined on the hypersphere.

B.1 Proxy task: Effect of MLP and Stronger Augmentation

Following our discussion in Section 3, we wanted to verify that the hardness of the proxy task for MoCo [19] is directly correlated with the difficulty of the transformation set, i.e. that proxy task hardness can be modulated via the positive pair.
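For reference, a minimal sketch of the alignment and uniformity metrics of Wang and Isola cited above, with α = 2, t = 2 and L2-normalized features; the helper names and shapes are illustrative, not the exact evaluation code.

```python
# Sketch of the Wang & Isola alignment / uniformity metrics (alpha = 2, t = 2),
# assuming L2-normalized features; names and shapes are illustrative.
import torch
import torch.nn.functional as F

def alignment(x, y, alpha=2):
    # x, y: (N, D) unit-norm embeddings of positive pairs
    return (x - y).norm(dim=1).pow(alpha).mean()

def uniformity(x, t=2):
    # x: (N, D) unit-norm embeddings; log of the mean Gaussian potential
    sq_dists = torch.pdist(x, p=2).pow(2)
    return sq_dists.mul(-t).exp().mean().log()

# Toy usage with random unit vectors standing in for encoder features.
x = F.normalize(torch.randn(256, 128), dim=1)
y = F.normalize(x + 0.1 * torch.randn(256, 128), dim=1)
print(alignment(x, y).item(), uniformity(x).item())
```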
MediQ: Question-Asking LLMs and a Benchmark for Reliable Interactive Clinical Reasoning
Users typically engage with LLMs interactively, yet most existing benchmarks evaluate them in a static, single-turn format, posing reliability concerns in interactive scenarios. We identify a key obstacle towards reliability: LLMs are trained to answer any question, even with incomplete context or insufficient knowledge.
Regret in Online Recommendation Systems
This paper proposes a theoretical analysis of recommendation systems in an online setting, where items are sequentially recommended to users over time. In each round, a user, randomly picked from a population of m users, requests a recommendation. The decision-maker observes the user and selects an item from a catalogue of n items. Importantly, an item cannot be recommended twice to the same user. The probabilities that a user likes each item are unknown. The performance of the recommendation algorithm is captured through its regret, considering as a reference an Oracle algorithm aware of these probabilities. We investigate various structural assumptions on these probabilities: we derive for each structure regret lower bounds, and devise algorithms achieving these limits. Interestingly, our analysis reveals the relative weights of the different components of regret: the component due to the constraint of not presenting the same item twice to the same user, that due to learning the chances users like items, and finally that arising when learning the underlying structure.
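In this setup, the regret against the Oracle can be written schematically as follows; the notation is illustrative rather than necessarily the paper's. With \(\rho_{u,i}\) the unknown probability that user \(u\) likes item \(i\), \(u_t\) the user arriving at round \(t\), \(i_t\) the item chosen by the algorithm, and \(i_t^{\star}\) the item the Oracle would recommend under the same no-repeat constraint,

\[
R(T) \;=\; \mathbb{E}\!\left[\sum_{t=1}^{T} \left(\rho_{u_t, i_t^{\star}} - \rho_{u_t, i_t}\right)\right].
\]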
Task-Free Continual Learning via Online Discrepancy Distance Learning
Learning from non-stationary data streams, also called Task-Free Continual Learning (TFCL), remains challenging due to the absence of explicit task information in most applications. Even though some algorithms have recently been proposed for TFCL, these methods lack theoretical guarantees. Moreover, there are no theoretical studies of forgetting during TFCL. This paper develops a new theoretical analysis framework that derives generalization bounds based on the discrepancy distance between the visited samples and the entire information made available for training the model. This analysis provides new insights into the forgetting behaviour in classification tasks. Inspired by this theoretical model, we propose a new approach equipped with a dynamic component expansion mechanism for a mixture model, namely Online Discrepancy Distance Learning (ODDL). ODDL estimates the discrepancy between the current memory and the already accumulated knowledge and uses it as an expansion signal, aiming to ensure a compact network architecture with optimal performance. We then propose a new sample selection approach that selectively stores samples in the memory buffer through this discrepancy-based measure, further improving performance. Several TFCL experiments with the proposed methodology demonstrate that it achieves state-of-the-art performance.
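The expansion signal can be pictured with a toy sketch. The snippet below uses a crude stand-in for the discrepancy distance (the gap between mean feature embeddings) purely to illustrate the expand-when-discrepancy-exceeds-a-threshold logic; it is not ODDL's actual estimator, and every name in it is illustrative.

```python
# Toy illustration of a discrepancy-triggered expansion signal in the spirit of
# ODDL. The "discrepancy" here is a crude proxy (distance between mean feature
# embeddings), not the discrepancy distance used in the paper.
import numpy as np

def discrepancy_proxy(memory_feats, accumulated_feats):
    # Simple two-sample statistic: distance between the two mean embeddings.
    return float(np.linalg.norm(memory_feats.mean(0) - accumulated_feats.mean(0)))

def should_expand(memory_feats, accumulated_feats, threshold=0.5):
    # Expansion signal: add a new mixture component when the current memory
    # looks sufficiently different from the knowledge already accumulated.
    return discrepancy_proxy(memory_feats, accumulated_feats) > threshold

rng = np.random.default_rng(0)
old_knowledge = rng.normal(0.0, 1.0, size=(512, 16))  # features of past data
new_memory = rng.normal(1.0, 1.0, size=(128, 16))      # features of a shifted stream
print(should_expand(new_memory, old_knowledge))         # True -> expand the mixture
```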
Chatbots will be able to teach children TWICE as fast as teachers in the next 10 years, says the 'godfather of AI'
Chatbots will be able to teach children more than twice as fast as teachers can within the next decade, the so-called godfather of AI has predicted. Geoffrey Hinton, who won a Nobel Prize for his work on the technology, also claimed AI personal tutors would 'be much more efficient and less boring'. Speaking at Gitex Europe, the British computer scientist said: 'It's not there yet, but it's coming, and so we'll get much better education at many levels.' AI personal tutors are already being trialled in UK schools, with the technology now able to talk directly to the student and adapt lesson plans to their knowledge level. The government has already funnelled millions of pounds into AI education initiatives – though it has claimed the technology will 'absolutely not' replace teachers.
The best live Memorial Day mattress deals in 2025: Shop Nectar, Brooklyn Bedding, Purple, and more
Just a few weeks left in the school year, warmer temperatures, and weekend barbecues on the calendar mean we've made it out of winter's hibernation. But that doesn't mean sleep should get put on the backburner. Sleep is one of life's basic pillars, and it impacts our mood, health, brain function, and much more. If you've ever had a terrible month of sleep, you know how detrimental a sleep deficit can be to pretty much every aspect of your waking hours. Instead of putting the milk in the cupboard on account of a sleepy brain, prioritize sleep this summer by snagging a luxurious new mattress while it's on sale.
Chicago paper publishes AI-generated 'summer reading list' with books that don't exist
The Chicago Sun-Times admitted on Tuesday that it published an AI-generated list of books that don't exist for its summer reading list. On Sunday, the publication released a special 64-page section titled "Heat Index: Your Guide to the Best of Summer," which featured a list of 15 recommended books for summer. However, upon closer inspection, it was found that 10 of the 15 books on the list were not real. One example was a book called "Nightshade Market" by Min Jin Lee, which was described as a "riveting tale set in Seoul's underground economy" and follows "three women whose paths intersect in an illegal night market" exploring "class, gender and the shadow economies beneath prosperous societies."
ColdGANs: Taming Language GANs with Cautious Sampling Strategies
Thomas Scialom, Paul-Alexis Dray
Training regimes based on Maximum Likelihood Estimation (MLE) suffer from known limitations, often leading to poorly generated text sequences. At the root of these limitations is the mismatch between training and inference, i.e. the so-called exposure bias, exacerbated by considering only the reference texts as correct, while in practice several alternative formulations could be as good. Generative Adversarial Networks (GANs) can mitigate those limitations but the discrete nature of text has hindered their application to language generation: the approaches proposed so far, based on Reinforcement Learning, have been shown to underperform MLE. Departing from previous works, we analyze the exploration step in GANs applied to text generation, and show how classical sampling results in unstable training. We propose to consider alternative exploration strategies in a GAN framework that we name ColdGANs, where we force the sampling to be close to the distribution modes to get smoother learning dynamics. For the first time, to the best of our knowledge, the proposed language GANs compare favorably to MLE, and obtain improvements over the state-of-the-art on three generative tasks, namely unconditional text generation, question generation, and abstractive summarization.
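One simple way to keep samples close to the distribution modes, in the spirit of the cautious exploration described above, is to sharpen the model's softmax with a temperature below 1. The sketch below is an illustrative 'cold' sampler over raw next-token logits, not the paper's exact training or exploration procedure.

```python
# Illustrative "cold" sampling: a temperature T < 1 sharpens the softmax,
# concentrating probability mass on the distribution modes.
import numpy as np

def cold_sample(logits, temperature=0.5, rng=None):
    rng = rng or np.random.default_rng()
    z = logits / temperature            # T < 1 makes the distribution peakier
    probs = np.exp(z - z.max())         # softmax with max-subtraction for stability
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

logits = np.array([2.0, 1.0, 0.2, -1.0])
samples = [cold_sample(logits, temperature=0.5, rng=np.random.default_rng(i)) for i in range(5)]
print(samples)  # mostly index 0, the mode of the distribution
```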