Joint Entropy Search for Multi-Objective Bayesian Optimization
Ben Tu

Neural Information Processing Systems

Many real-world problems can be phrased as multi-objective optimization problems, where the goal is to identify the best set of compromises between competing objectives. Multi-objective Bayesian optimization (BO) is a sample-efficient strategy for solving these vector-valued optimization problems when access is limited to a number of noisy objective function evaluations. In this paper, we propose a novel information-theoretic acquisition function for BO called Joint Entropy Search (JES), which considers the joint information gain for the optimal set of inputs and outputs. We present several analytical approximations to the JES acquisition function and also introduce an extension to the batch setting.
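
To make the idea concrete, the joint information gain the abstract describes can be written schematically as an entropy reduction. The notation below is a hedged sketch assumed for illustration (dataset D_n, candidate input x, Pareto-optimal inputs and outputs (X*, Y*) sampled from the model posterior), not the paper's exact formulation:

% Schematic form of a joint entropy search acquisition (notation assumed).
\alpha_{\mathrm{JES}}(x)
  = H\!\big[\, p(y \mid \mathcal{D}_n, x) \,\big]
  \;-\; \mathbb{E}_{(X^*,\,Y^*) \sim p(\cdot \mid \mathcal{D}_n)}
        \Big[ H\!\big[\, p(y \mid \mathcal{D}_n \cup (X^*, Y^*), x) \,\big] \Big]

The first term is the predictive entropy at x; the second averages the entropy after conditioning on a sampled optimal set, so the acquisition prefers points whose outcome is most informative about the joint optimum.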


Plan-on-Graph: Self-Correcting Adaptive Planning of Large Language Model on Knowledge Graphs

Neural Information Processing Systems

Large Language Models (LLMs) have shown remarkable reasoning capabilities on complex tasks, but they still suffer from out-of-date knowledge, hallucinations, and opaque decision-making. In contrast, Knowledge Graphs (KGs) can provide explicit and editable knowledge for LLMs to alleviate these issues. The existing paradigm of KG-augmented LLMs manually predefines the breadth of the exploration space and requires flawless navigation in KGs. However, this paradigm cannot adaptively explore reasoning paths in KGs based on the question semantics or self-correct erroneous reasoning paths, resulting in a bottleneck in both efficiency and effectiveness. To address these limitations, we propose a novel self-correcting adaptive planning paradigm for KG-augmented LLMs named Plan-on-Graph (PoG), which first decomposes the question into several sub-objectives and then repeats the process of adaptively exploring reasoning paths, updating memory, and reflecting on the need to self-correct erroneous reasoning paths until it arrives at the answer. Specifically, three mechanisms, Guidance, Memory, and Reflection, are designed to work together to guarantee the adaptive breadth of self-correcting planning for graph reasoning. Finally, extensive experiments on three real-world datasets demonstrate the effectiveness and efficiency of PoG.
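
As a rough illustration of the loop the abstract describes (decompose, explore, update memory, reflect, self-correct), here is a minimal Python sketch. Every helper name below is a hypothetical stub invented for illustration; none of it is the authors' actual API.

# Hypothetical sketch of a PoG-style loop; all helpers are illustrative stubs.

def decompose(question):
    """Guidance: split the question into sub-objectives (stub)."""
    return [question]

def explore(sub_objectives, memory, kg):
    """Adaptively pick candidate reasoning paths in the KG (stub)."""
    return [("head_entity", "relation", "tail_entity")]

def reflect(paths, memory):
    """Decide whether the current paths suffice or need correction (stub)."""
    return "answer" if paths else "correct"

def plan_on_graph(question, kg, max_steps=10):
    memory = {"paths": []}                      # Memory: evolving reasoning state
    sub_objectives = decompose(question)        # Guidance
    for _ in range(max_steps):
        paths = explore(sub_objectives, memory, kg)
        memory["paths"].extend(paths)
        if reflect(paths, memory) == "answer":  # Reflection
            return memory["paths"]
        memory["paths"].pop()                   # self-correct: discard a bad path
    return memory["paths"]

print(plan_on_graph("Who directed Inception?", kg={}))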


Liner Shipping Network Design with Reinforcement Learning

arXiv.org Artificial Intelligence

This paper proposes a novel reinforcement learning framework to address the Liner Shipping Network Design Problem (LSNDP), a challenging combinatorial optimization problem focused on designing cost-efficient maritime shipping routes. Traditional methods for solving the LSNDP typically involve decomposing the problem into sub-problems, such as network design and multi-commodity flow, which are then tackled using approximate heuristics or large neighborhood search (LNS) techniques. In contrast, our approach employs a model-free reinforcement learning algorithm on the network design, integrated with a heuristic-based multi-commodity flow solver, to produce competitive results on the publicly available LINERLIB benchmark. Additionally, our method demonstrates generalization capabilities by producing competitive solutions on the benchmark instances after training on perturbed instances.
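
The decomposition described above (an agent proposes a network; a heuristic multi-commodity flow solver scores it) can be sketched as follows. This is a toy sketch with invented names: the "policy" here is a plain random sampler standing in for a model-free RL agent, and the cost function is a placeholder, not LINERLIB's.

import random

def heuristic_mcf_cost(network, demands):
    """Placeholder for the heuristic multi-commodity flow solver."""
    if not network:
        return float("inf")
    return sum(demands.values()) / len(network)

def propose_network(ports):
    """Stub policy: sample a set of directed port-to-port legs."""
    return [(a, b) for a in ports for b in ports
            if a != b and random.random() < 0.3]

def train(ports, demands, episodes=200, seed=0):
    random.seed(seed)
    best_net, best_cost = None, float("inf")
    for _ in range(episodes):
        net = propose_network(ports)
        cost = heuristic_mcf_cost(net, demands)  # reward would be -cost
        if cost < best_cost:                     # a real agent would run a
            best_net, best_cost = net, cost      # policy update here instead
    return best_net, best_cost

print(train(["SHA", "SIN", "RTM"], {"SHA-RTM": 100, "SIN-RTM": 40}))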


Testing the Efficacy of Hyperparameter Optimization Algorithms in Short-Term Load Forecasting

arXiv.org Artificial Intelligence

Accurate forecasting of electrical demand is essential for maintaining a stable and reliable power grid, optimizing the allocation of energy resources, and promoting efficient energy consumption practices. This study investigates the effectiveness of five hyperparameter optimization (HPO) algorithms, namely Random Search, Covariance Matrix Adaptation Evolution Strategy (CMA-ES), Bayesian Optimization, Particle Swarm Optimization (PSO), and the Nevergrad Optimizer (NGOpt), across univariate and multivariate Short-Term Load Forecasting (STLF) tasks. Using the Panama Electricity dataset (n=48,049), we evaluate the HPO algorithms' performance on a surrogate forecasting algorithm, XGBoost, in terms of accuracy (i.e., MAPE, $R^2$) and runtime. Performance plots visualize these metrics across sample sizes varying from 1,000 to 20,000, and Kruskal-Wallis tests assess the statistical significance of the performance differences. Results reveal significant runtime advantages for HPO algorithms over Random Search. In univariate models, Bayesian optimization exhibited the lowest accuracy among the tested methods. This study provides valuable insights for optimizing XGBoost in the STLF context and identifies areas for future research.
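
As a concrete illustration of one arm of such a comparison, the sketch below tunes XGBoost with scikit-learn's randomized search on synthetic stand-in data. The feature construction and the search space are assumptions for illustration; the Panama Electricity dataset and the paper's exact grids are not reproduced here.

import numpy as np
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import RandomizedSearchCV, TimeSeriesSplit
from xgboost import XGBRegressor

# Placeholder data standing in for lag/calendar features and demand targets.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))
y = 3.0 * X[:, 0] + rng.normal(scale=0.5, size=2000)

search = RandomizedSearchCV(
    XGBRegressor(objective="reg:squarederror"),
    param_distributions={
        "n_estimators": [100, 300, 500],
        "max_depth": [3, 5, 7, 9],
        "learning_rate": [0.01, 0.05, 0.1, 0.3],
        "subsample": [0.6, 0.8, 1.0],
    },
    n_iter=20,
    cv=TimeSeriesSplit(n_splits=3),  # preserve temporal order for STLF
    scoring="neg_mean_absolute_percentage_error",
    random_state=0,
)
search.fit(X, y)
pred = search.best_estimator_.predict(X)
print(search.best_params_, mean_absolute_percentage_error(y, pred))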


Bridging the Training-Inference Gap in LLMs by Leveraging Self-Generated Tokens

arXiv.org Artificial Intelligence

Language models are often trained to maximize the likelihood of the next token given past tokens in the training dataset. During inference, however, they are used differently, generating text sequentially and auto-regressively by feeding previously generated tokens back in as input to predict the next one. Marginal differences in predictions at each step can cascade over successive steps, producing distributions different from those the models were trained on and potentially leading to unpredictable behavior. This paper proposes two simple approaches based on the model's own generations to address this discrepancy between training and inference time. Our first approach is Batch-Scheduled Sampling, where, during training, we stochastically choose between the ground-truth token from the dataset and the model's own generated token as input to predict the next token. This is done in an offline manner, modifying the context window by interleaving ground-truth tokens with those generated by the model. Our second approach is Reference-Answer-based Correction, where we explicitly incorporate a self-correction capability into the model during training. This enables the model to effectively self-correct the gaps between the generated sequences and the ground-truth data without relying on an external oracle model. By incorporating our proposed strategies during training, we observe an overall improvement in performance compared to baseline methods, as demonstrated by our extensive experiments on summarization, general question-answering, and math question-answering tasks.
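
The first approach lends itself to a very small sketch: with some probability, the ground-truth input token is swapped for one the model previously generated for that position, done offline over the context window. The function and token sequences below are invented for illustration.

import random

def scheduled_inputs(gt_tokens, model_tokens, p=0.25, seed=0):
    """Interleave ground-truth tokens with model-generated ones: with
    probability p, use the model's token as the next-step input instead."""
    random.seed(seed)
    return [m if random.random() < p else g
            for g, m in zip(gt_tokens, model_tokens)]

ground_truth = ["the", "cat", "sat", "on", "the", "mat"]
model_outputs = ["the", "dog", "sat", "in", "a", "mat"]  # hypothetical generations
print(scheduled_inputs(ground_truth, model_outputs))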


On the Benefits of Robot Platooning for Navigating Crowded Environments

arXiv.org Artificial Intelligence

This paper studies how groups of robots can effectively navigate through a crowd of agents. It quantifies the performance of platooning and less constrained, greedy strategies, and the extent to which these strategies disrupt the crowd agents. Three scenarios are considered: (i) passive crowds, (ii) counter-flow crowds, and (iii) perpendicular-flow crowds. Through simulations with up to 200 robots, we show that for navigating passive and counter-flow crowds, the platooning strategy is less disruptive and more effective in dense crowds than the greedy strategy, whereas for navigating perpendicular-flow crowds, the greedy strategy outperforms the platooning strategy in both respects. Moreover, we propose an adaptive strategy that can switch between platooning and greedy behavioral states, and demonstrate that it combines the strengths of both strategies in all the scenarios considered.
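
A toy sketch of the adaptive switching idea: estimate the angle between the group's heading and the crowd's mean motion, then platoon for aligned or counter flow and go greedy for perpendicular flow. The flow estimate and the threshold are invented for illustration, not taken from the paper.

import math

def flow_angle(heading, crowd_velocity):
    """Angle in radians between the robots' heading and the crowd's mean motion."""
    hx, hy = heading
    cx, cy = crowd_velocity
    norm = math.hypot(hx, hy) * math.hypot(cx, cy)
    if norm == 0.0:
        return 0.0  # passive crowd: treat as aligned
    cosine = max(-1.0, min(1.0, (hx * cx + hy * cy) / norm))
    return math.acos(cosine)

def choose_strategy(heading, crowd_velocity, band=math.pi / 6):
    """Greedy when flow is roughly perpendicular, platooning otherwise."""
    angle = flow_angle(heading, crowd_velocity)
    return "greedy" if abs(angle - math.pi / 2) < band else "platoon"

print(choose_strategy((1, 0), (0, 1)))   # perpendicular flow -> greedy
print(choose_strategy((1, 0), (-1, 0)))  # counter flow -> platoon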


Top scientist warns AI could surpass human intelligence by 2027 - decades earlier than previously predicted

Daily Mail - Science & tech

The computer scientist and CEO who popularized the term 'artificial general intelligence' (AGI) believes AI is verging on an exponential 'intelligence explosion.' The PhD mathematician and futurist Ben Goertzel made the prediction while closing out a summit on AGI this month: 'It seems quite plausible we could get to human-level AGI within, let's say, the next three to eight years.' 'Once you get to human-level AGI,' added Goertzel, sometimes called the 'father of AGI,' 'within a few years you could get a radically superhuman AGI.' While the futurist admitted that he 'could be wrong,' he went on to predict that the only impediment to a runaway, ultra-advanced AI, far more advanced than its human makers, would be if the bot's 'own conservatism' advised caution. Goertzel made his predictions during his closing remarks last week at the '2024 Beneficial AGI Summit and Unconference,' partially sponsored by his own firm SingularityNET, where he is CEO.


Google wants you to listen to coral reefs. It just might help restore them.

Mashable

Google wants your help in preserving and restoring coral reefs, and has designed a platform to help with this mission in mere minutes. All you have to do is tune in. Called "Calling in Our Corals," the new citizen science project is a collaboration between Google Arts and Culture and marine biologists across the globe. People are being asked to listen to underwater recordings of coral reefs in Marine Protected Areas through an online platform, identifying sounds made by fish, shrimp, and other marine creatures in order to monitor ecosystems and identify opportunities for reef restoration. Scientists have placed hydrophones in 10 reefs across Australia, Indonesia, the Philippines, the U.S., Panama, and Sweden, which record 24 hours a day, meaning there are hundreds of hours to sift through.