Collaborating Authors

Cao, Yijie


Character-based Outfit Generation with Vision-augmented Style Extraction via LLMs

arXiv.org Artificial Intelligence

The outfit generation problem involves recommending a complete outfit to a user based on their interests. Existing approaches focus on recommending items based on anchor items or specific query styles but do not consider customer interest in famous characters from movies, social media, etc. In this paper, we define a new Character-based Outfit Generation (COG) problem, designed to accurately interpret character information and generate complete outfit sets according to customer specifications such as age and gender. To tackle this problem, we propose a novel framework, LVA-COG, that leverages Large Language Models (LLMs) to extract insights from customer interests (e.g., character information) and employs prompt engineering techniques to accurately understand customer preferences. Additionally, we incorporate text-to-image models to enhance the visual understanding and generation (factual or counterfactual) of cohesive outfits. Our framework integrates LLMs with text-to-image models and improves the customer's shopping experience by generating personalized fashion recommendations. Through experiments and case studies, we demonstrate the effectiveness of our solution from multiple dimensions.
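The abstract describes extracting character style via prompt engineering and turning the LLM's answer into outfit items. A minimal sketch of that pipeline is below; the prompt wording, the `call_llm` stub, and the comma-separated reply format are all assumptions for illustration, since the paper's actual prompts and parsing are not given here.

```python
def build_style_prompt(character: str, age: int, gender: str) -> str:
    """Assemble a hypothetical style-extraction prompt conditioned on the
    customer specifications (age, gender) the abstract mentions."""
    return (
        f"Describe the signature clothing style of {character} as a short, "
        f"comma-separated list of garments suitable for a {age}-year-old "
        f"{gender} customer."
    )

def parse_outfit_items(llm_reply: str) -> list[str]:
    """Split an assumed comma-separated LLM reply into individual items,
    dropping empty fragments and surrounding whitespace."""
    return [item.strip() for item in llm_reply.split(",") if item.strip()]

def call_llm(prompt: str) -> str:
    """Placeholder for the LLM call; a real system would query an actual
    model here. Returns a canned reply so the sketch is runnable."""
    return "deerstalker hat, wool trench coat, leather gloves"

# End-to-end: prompt -> LLM reply -> structured outfit list.
outfit = parse_outfit_items(call_llm(build_style_prompt("Sherlock Holmes", 30, "male")))
```

The parsed item list could then be passed to a text-to-image model to render the cohesive outfit, as the framework proposes.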


A General Framework on Enhancing Portfolio Management with Reinforcement Learning

arXiv.org Artificial Intelligence

Portfolio management is the art and science in finance of continuously reallocating funds and assets across financial instruments to meet a desired return-to-risk profile. Deep reinforcement learning (RL) has gained increasing interest in portfolio management, where RL agents are trained on financial data to optimize the asset reallocation process. Although there have been prior efforts to combine RL and portfolio management, previous works did not consider practical aspects such as transaction costs or short-selling restrictions, limiting their applicability. To address these limitations, we propose a general RL framework for asset management that enables continuous asset weights, short selling, and decision-making with relevant features. We compare the performance of three RL algorithms: Policy Gradient with Actor-Critic (PGAC), Proximal Policy Optimization (PPO), and Evolution Strategies (ES), and demonstrate their advantages in a simulated environment with transaction costs. Our work aims to provide more options for utilizing RL frameworks in real-life asset management scenarios and can benefit further research in financial applications.
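The abstract's key practical ingredients are continuous (possibly negative, i.e. short) asset weights and transaction costs. A minimal sketch of one rebalancing step in such a simulated environment follows; the proportional-turnover cost model and the `cost_rate` value are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def step_portfolio(weights_old: np.ndarray,
                   weights_new: np.ndarray,
                   asset_returns: np.ndarray,
                   cost_rate: float = 0.001) -> float:
    """One rebalancing step. Weights are continuous and may be negative
    (short positions). Reward is the gross portfolio return minus a
    proportional transaction cost on turnover (an assumed cost model)."""
    turnover = np.abs(weights_new - weights_old).sum()
    gross = float(np.dot(weights_new, asset_returns))
    return gross - cost_rate * turnover

# Example: shift from an even split to a 150%/-50% long-short allocation.
reward = step_portfolio(np.array([0.5, 0.5]),
                        np.array([1.5, -0.5]),
                        np.array([0.02, -0.01]))
```

An RL agent (PGAC, PPO, or ES) would emit `weights_new` as its action and receive this cost-adjusted return as its reward, so the policy learns to trade off expected return against rebalancing cost.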