

Chain of Grounded Objectives: Bridging Process and Goal-oriented Prompting for Code Generation

Yeo, Sangyeop, Hwang, Seung-won, Ma, Yu-Seung

arXiv.org Artificial Intelligence

The use of Large Language Models (LLMs) for code generation has gained significant attention in recent years. Existing methods often aim to improve the quality of generated code by incorporating additional contextual information or guidance into input prompts. Many of these approaches adopt sequential reasoning strategies, mimicking human-like step-by-step thinking. However, such strategies may constrain flexibility, as they do not always align with the structured characteristics of programming languages. This paper introduces the Chain of Grounded Objectives (CGO), a method that embeds functional objectives into input prompts to enhance code generation. By leveraging appropriately structured objectives as input and avoiding explicit sequential procedures, CGO adapts effectively to the structured nature of programming tasks. Empirical evaluations demonstrate that CGO improves the quality of generated code, addressing limitations of existing approaches.
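The idea of listing functional objectives rather than step-by-step reasoning can be illustrated with a minimal sketch. The prompt format, function name, and objective wording below are assumptions for illustration, not the paper's actual template:

```python
# Illustrative CGO-style prompt construction (assumed format, not the
# paper's exact template): the prompt states *what* the code must satisfy
# as grounded objectives, rather than prescribing sequential reasoning steps.

def build_cgo_prompt(task: str, objectives: list[str]) -> str:
    """Embed functional objectives into a code-generation prompt."""
    lines = [f"Task: {task}", "Objectives:"]
    lines += [f"- {obj}" for obj in objectives]
    lines.append("Write a Python function that satisfies all objectives above.")
    return "\n".join(lines)

prompt = build_cgo_prompt(
    "Remove duplicates from a list while preserving order.",
    [
        "Return a new list; do not mutate the input.",
        "Preserve the first occurrence of each element.",
        "Run in O(n) time using a set for membership checks.",
    ],
)
print(prompt)
```

The objectives read like a specification, which matches the structured character of programming tasks better than an imposed chain of intermediate reasoning steps.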


Tailoring Vaccine Messaging with Common-Ground Opinions

Stureborg, Rickard, Chen, Sanxing, Xie, Ruoyu, Patel, Aayushi, Li, Christopher, Zhu, Chloe Qinyu, Hu, Tingnan, Yang, Jun, Dhingra, Bhuwan

arXiv.org Artificial Intelligence

One way to personalize chatbot interactions is by establishing common ground with the intended reader. A domain where establishing mutual understanding could be particularly impactful is vaccine concerns and misinformation. Vaccine interventions are forms of messaging which aim to answer concerns expressed about vaccination. Tailoring responses in this domain is difficult, since opinions often have seemingly little ideological overlap. We define the task of tailoring vaccine interventions to a Common-Ground Opinion (CGO). Tailoring responses to a CGO involves meaningfully improving the answer by relating it to an opinion or belief the reader holds. In this paper we introduce TAILOR-CGO, a dataset for evaluating how well responses are tailored to provided CGOs. We benchmark several major LLMs on this task, finding that GPT-4-Turbo performs significantly better than the others. We also build automatic evaluation metrics, including an efficient and accurate BERT model that outperforms finetuned LLMs; investigate how to successfully tailor vaccine messaging to CGOs; and provide actionable recommendations from this investigation. Code and model weights: https://github.com/rickardstureborg/tailor-cgo Dataset: https://huggingface.co/datasets/DukeNLP/tailor-cgo


Functional Constrained Optimization for Risk Aversion and Sparsity Control

Cheng, Yi, Lan, Guanghui, Romeijn, H. Edwin

arXiv.org Artificial Intelligence

Risk and sparsity requirements often need to be enforced simultaneously in many applications, e.g., in portfolio optimization, assortment planning, and treatment planning. Properly balancing these potentially conflicting requirements entails the formulation of functional constrained optimization with either convex or nonconvex objectives. In this paper, we focus on projection-free methods that can generate a sparse trajectory for solving these challenging functional constrained optimization problems. Specifically, for the convex setting, we propose a Level Conditional Gradient (LCG) method, which leverages a level-set framework to update the approximation of the optimal value and an inner conditional gradient oracle (CGO) for solving mini-max subproblems. We show that the method achieves $\mathcal{O}\big(\frac{1}{\epsilon^2}\log\frac{1}{\epsilon}\big)$ iteration complexity for solving both smooth and nonsmooth cases without dependency on a possibly large size of the optimal dual Lagrange multiplier. For the nonconvex setting, we introduce the Level Inexact Proximal Point (IPP-LCG) method and the Direct Nonconvex Conditional Gradient (DNCG) method. The first approach taps into the advantage of LCG by transforming the problem into a series of convex subproblems and exhibits an $\mathcal{O}\big(\frac{1}{\epsilon^3}\log\frac{1}{\epsilon}\big)$ iteration complexity for finding an ($\epsilon,\epsilon$)-KKT point. DNCG is the first single-loop projection-free method, with iteration complexity bounded by $\mathcal{O}\big(1/\epsilon^4\big)$ for computing a so-called $\epsilon$-Wolfe point. We demonstrate the effectiveness of LCG, IPP-LCG and DNCG by devising formulations and conducting numerical experiments on two risk-averse sparse optimization applications: a portfolio selection problem with and without a cardinality requirement, and a radiation therapy planning problem in healthcare.
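The sparse-trajectory property of projection-free methods can be seen in a textbook conditional-gradient (Frank-Wolfe) sketch: each linear-minimization oracle call adds at most one vertex to the iterate, so iterates over the simplex stay sparse. This is a generic illustration under standard assumptions, not the paper's LCG, IPP-LCG, or DNCG methods:

```python
import numpy as np

# Generic Frank-Wolfe (conditional gradient) on the probability simplex.
# Projection-free: each step calls a linear minimization oracle (LMO) that
# returns a single simplex vertex, so an iterate started at a vertex has
# at most t+1 nonzero coordinates after t steps.

def frank_wolfe_simplex(grad, x0, steps=50):
    """Minimize a smooth convex f over the simplex, given its gradient."""
    x = x0.copy()
    for t in range(steps):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0          # LMO: best vertex of the simplex
        gamma = 2.0 / (t + 2.0)        # standard open-loop step size
        x = (1 - gamma) * x + gamma * s
    return x

# Example: least squares over the simplex, f(x) = 0.5 * ||A x - b||^2,
# with b chosen so the optimum lies at the uniform distribution.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = A @ np.full(10, 0.1)
grad = lambda v: A.T @ (A @ v - b)
x = frank_wolfe_simplex(grad, np.eye(10)[0], steps=200)
```

Because the iterate is always a convex combination of the vertices visited so far, sparsity control comes for free, which is exactly what makes these oracles attractive inside level-set frameworks like the one described above.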


Cooperative Group Optimization System

#artificialintelligence

The cooperative group optimization (CGO) system consists of a group of intelligent agents cooperating with their peers in a sharing environment to realize a common intention: finding high-quality solution(s) based on the landscape representation of an optimization task. CGO has also been applied to numerical optimization problems (NOPs) to find solutions in high-dimensional nonlinear continuous spaces. Several algorithms, including Dissipative Particle Swarm Optimization (DPSO), Differential Evolution (DE), Social Cognitive Optimization (SCO), Genetic Algorithms (GA), and the Electromagnetism-like Mechanism (EM) heuristic, as well as their hybrids (e.g., DEPSO), can be easily implemented within CGO. Both SCO and DEPSO have been incorporated into the NLPSolver extension of Calc in Apache OpenOffice. DEPSO was also used for finding narrow admissible k-tuples.
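As a flavor of the population-based algorithms the CGO framework can host, here is a textbook sketch of Differential Evolution (DE/rand/1/bin) on a simple test function. This is a generic illustration, not the CGO system's actual agent and sharing-environment implementation; parameter values are conventional defaults:

```python
import numpy as np

# Textbook DE/rand/1/bin: each individual cooperates with three random peers
# to form a mutant vector, then binomial crossover and greedy selection
# decide whether the trial replaces the current individual.

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           gens=100, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # Pick three distinct peers other than individual i.
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True   # at least one gene from mutant
            trial = np.where(cross, mutant, pop[i])
            if (ft := f(trial)) < fitness[i]:
                pop[i], fitness[i] = trial, ft
    best = int(np.argmin(fitness))
    return pop[best], float(fitness[best])

# Example: minimize the 5-dimensional sphere function.
x_best, f_best = differential_evolution(lambda x: float(np.sum(x * x)),
                                        bounds=[(-5, 5)] * 5)
```

The peer-sampling step is where a framework like CGO generalizes the design: the "sharing environment" determines which solutions an agent may draw on, so swapping that policy turns DE-style recombination into other cooperative search behaviors.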