Zhang, Xiaoli
FinRobot: An Open-Source AI Agent Platform for Financial Applications using Large Language Models
Yang, Hongyang, Zhang, Boyu, Wang, Neng, Guo, Cheng, Zhang, Xiaoli, Lin, Likun, Wang, Junlin, Zhou, Tianyu, Guan, Mao, Zhang, Runjia, Wang, Christina Dan
As financial institutions and professionals increasingly incorporate Large Language Models (LLMs) into their workflows, substantial barriers, including proprietary data and specialized knowledge, persist between the finance sector and the AI community. These challenges impede the AI community's ability to enhance financial tasks effectively. Acknowledging financial analysis's critical role, we aim to devise financial-specialized LLM-based toolchains and democratize access to them through open-source initiatives, promoting wider AI adoption in financial decision-making. In this paper, we introduce FinRobot, a novel open-source AI agent platform supporting multiple financially specialized AI agents, each powered by an LLM. Specifically, the platform consists of four major layers: 1) the Financial AI Agents layer, which formulates a Financial Chain-of-Thought (CoT) by breaking sophisticated financial problems down into logical sequences; 2) the Financial LLM Algorithms layer, which dynamically configures appropriate model application strategies for specific tasks; 3) the LLMOps and DataOps layer, which produces accurate models by applying training/fine-tuning techniques and using task-relevant data; and 4) the Multi-source LLM Foundation Models layer, which integrates various LLMs and enables the above layers to access them directly. Finally, FinRobot provides hands-on support for both professional-grade analysts and laypersons to utilize powerful AI techniques for advanced financial analysis. We open-source FinRobot at \url{https://github.com/AI4Finance-Foundation/FinRobot}.
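To make the four-layer decomposition concrete, here is a minimal Python sketch of the routing idea: an agent formulates a Chain-of-Thought plan, an algorithms layer picks a model strategy per step, and a foundation-model layer answers. All names here (FinancialAgent, select_model, call_foundation_model) are hypothetical illustrations, not the actual FinRobot API.

    from dataclasses import dataclass, field

    @dataclass
    class FinancialAgent:
        """Financial AI Agents layer: breaks a task into a Chain-of-Thought plan."""
        name: str
        cot_steps: list = field(default_factory=list)

        def plan(self, task: str) -> list:
            # Decompose the problem into a logical sequence of sub-questions.
            self.cot_steps = [f"{step} for: {task}"
                              for step in ("gather filings and market data",
                                           "analyze fundamentals",
                                           "forecast next-quarter outlook")]
            return self.cot_steps

    def select_model(step: str) -> str:
        """Financial LLM Algorithms layer: choose a model strategy per sub-task."""
        return "finance-tuned-llm" if "forecast" in step else "general-llm"

    def call_foundation_model(model: str, prompt: str) -> str:
        """Multi-source LLM Foundation Models layer (stubbed for illustration)."""
        return f"[{model}] answer to: {prompt}"

    agent = FinancialAgent("MarketForecaster")
    for step in agent.plan("assess AAPL earnings outlook"):
        print(call_foundation_model(select_model(step), step))
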
Curriculum-based Sensing Reduction in Simulation to Real-World Transfer for In-hand Manipulation
Tao, Lingfeng, Zhang, Jiucai, Zheng, Qiaojie, Zhang, Xiaoli
Simulation-to-Real-World Transfer (Sim2Real) allows affordable and fast training of learning-based robots for manipulation tasks using Deep Reinforcement Learning (DRL) methods. Currently, Sim2Real uses Asymmetric Actor-Critic approaches to reduce the rich, idealized features available in simulation to the ones accessible in the real world. However, this feature reduction is conducted through a single, empirically defined curtailment step. A small feature reduction may not sufficiently remove the actor's dependence on hard-to-extract features, causing difficulty in setting up the physical system, while a large feature reduction may make training difficult and inefficient. To address this issue, we propose Curriculum-based Sensing Reduction, which lets the actor start with the same rich feature space as the critic and then removes the hard-to-extract features step by step, yielding higher training performance and better adaptation to the real-world feature space. The reduced features are replaced with random signals from a Deep Random Generator to remove the dependency between the policy output and the removed features and to avoid creating new dependencies. The methods are evaluated on the Allegro robot hand in a real-world in-hand manipulation task. The results show that our methods train faster and achieve higher task performance than the baselines, and can solve real-world tasks when selected tactile features are reduced.
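As a minimal sketch of the step-by-step reduction (assuming NumPy and purely illustrative feature indices; a plain random generator stands in for the paper's Deep Random Generator), each curriculum stage replaces one more hard-to-extract feature group in the actor's observation with a random signal:

    import numpy as np

    rng = np.random.default_rng(0)  # stands in for the Deep Random Generator
    TACTILE_GROUPS = [slice(24, 32), slice(32, 40)]  # hypothetical tactile dims

    def reduce_obs(obs: np.ndarray, stage: int) -> np.ndarray:
        """Replace the first `stage` hard-to-extract feature groups with noise."""
        obs = obs.copy()
        for group in TACTILE_GROUPS[:stage]:
            obs[group] = rng.standard_normal(obs[group].shape)
        return obs

    obs = np.ones(40)  # stage 0: the actor still sees all 40 features
    for stage in range(len(TACTILE_GROUPS) + 1):
        print(stage, reduce_obs(obs, stage)[24:40].round(2))
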
Stable In-hand Manipulation with Finger Specific Multi-agent Shadow Reward
Tao, Lingfeng, Zhang, Jiucai, Zhang, Xiaoli
Deep Reinforcement Learning (DRL) has shown its capability to handle the high degrees of freedom of control and the complex interactions with the object in multi-finger dexterous in-hand manipulation tasks. Current DRL approaches prefer sparse rewards to dense rewards for ease of training, but they lack behavior constraints during the manipulation process, leading to aggressive and unstable policies that are insufficient for safety-critical in-hand manipulation tasks. Dense rewards can regulate the policy to learn stable manipulation behaviors through continuous reward constraints, but they are hard to define empirically and slow to converge to an optimal policy. This work proposes the Finger-specific Multi-agent Shadow Reward (FMSR) method, which determines stable manipulation constraints in the form of a dense reward based on the state-action occupancy measure, a general utility of DRL that is approximated during the learning process. Information Sharing (IS) across neighboring agents enables consensus training that accelerates convergence. The methods are evaluated in two in-hand manipulation tasks on the Shadow Hand. The results show that FMSR+IS converges faster in training, achieving a higher task success rate and better manipulation stability than a conventional dense reward. Compared with a policy trained on a sparse reward, FMSR+IS achieves a comparable success rate even under the behavior constraints, with much better manipulation stability.
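A minimal sketch of the two ingredients, under strong simplifications (a discretized per-finger occupancy estimate and ring-neighbor averaging; the paper's FMSR approximates the occupancy measure during learning rather than by counting):

    import numpy as np

    N_AGENTS, N_BINS = 5, 16
    occ = np.ones((N_AGENTS, N_BINS)) / N_BINS  # occupancy estimate per finger

    def update_occupancy(agent, bin_idx, lr=0.05):
        """Track how often this agent visits each discretized state-action bin."""
        occ[agent] = (1 - lr) * occ[agent] + lr * np.eye(N_BINS)[bin_idx]

    def shadow_reward(agent, bin_idx):
        """Dense reward: favor well-occupied (consistent, stable) behavior."""
        return float(np.log(occ[agent, bin_idx] + 1e-8))

    def share_information():
        """Consensus step: each agent averages its estimate with ring neighbors."""
        occ[:] = (occ + np.roll(occ, 1, axis=0) + np.roll(occ, -1, axis=0)) / 3

    rng = np.random.default_rng(1)
    for _ in range(100):
        for a in range(N_AGENTS):
            b = int(rng.integers(N_BINS))
            update_occupancy(a, b)
            r = shadow_reward(a, b)  # would be added to the task reward
        share_information()
    print(occ.round(3))
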
Physics-Guided Hierarchical Reward Mechanism for Learning-Based Robotic Grasping
Jung, Yunsik, Tao, Lingfeng, Bowman, Michael, Zhang, Jiucai, Zhang, Xiaoli
Learning-based grasping can afford real-time grasp motion planning for multi-fingered robotic hands thanks to its high computational efficiency. However, learning-based methods must explore large search spaces during the learning process. The large search space causes low learning efficiency, which has been the main barrier to practical adoption. In addition, the trained policy does not generalize to objects that differ from those used in training. In this work, we develop a novel Physics-Guided Deep Reinforcement Learning method with a Hierarchical Reward Mechanism to improve the learning efficiency and generalizability of learning-based autonomous grasping. Unlike conventional observation-based grasp learning, physics-informed metrics are utilized to convey correlations between features associated with hand structures and objects, improving learning efficiency and outcomes. Further, the hierarchical reward mechanism enables the robot to learn the prioritized components of grasping tasks. Our method is validated in robotic grasping tasks with a 3-finger MICO robot arm. The results show that our method outperforms standard Deep Reinforcement Learning methods in various robotic grasping tasks.
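The gating idea behind a hierarchical reward can be illustrated in a few lines: lower-priority reward terms contribute only once the higher-priority ones are satisfied, so the agent learns the prioritized components first. The thresholds and terms below are hypothetical stand-ins, not the paper's physics-informed metrics.

    def hierarchical_reward(terms, thresholds, weights):
        """terms/thresholds/weights ordered from highest to lowest priority."""
        total = 0.0
        for r, thresh, w in zip(terms, thresholds, weights):
            total += w * r
            if r < thresh:  # a higher-priority objective is not yet met:
                break       # ignore all lower-priority terms for now
        return total

    # e.g., reach the object, then align the palm, then close stable contacts
    print(hierarchical_reward([0.9, 0.4, 0.7], [0.8, 0.8, 0.8], [1.0, 0.5, 0.25]))
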
A Multi-Agent Approach for Adaptive Finger Cooperation in Learning-based In-Hand Manipulation
Tao, Lingfeng, Zhang, Jiucai, Bowman, Michael, Zhang, Xiaoli
In-hand manipulation is challenging for a multi-finger robotic hand due to its high degrees of freedom and the complex interaction with the object. To enable in-hand manipulation, existing deep reinforcement learning based approaches mainly focus on training a single robot-structure-specific policy through a centralized learning mechanism, which lacks adaptability to changes such as robot malfunction. To address this limitation, this work treats each finger as an individual agent and trains multiple agents to control their assigned fingers to complete the in-hand manipulation task cooperatively. We propose the Multi-Agent Global-Observation Critic and Local-Observation Actor (MAGCLA) method, where the critic observes all agents' actions globally, while each actor only observes its neighbors' actions locally. In addition, conventional individual experience replay may cause unstable cooperation due to the asynchronous performance increments of the agents, which is critical for in-hand manipulation tasks. To solve this issue, we propose the Synchronized Hindsight Experience Replay (SHER) method to synchronize and efficiently reuse replayed experience across all agents. The methods are evaluated in two in-hand manipulation tasks on the Shadow dexterous hand. The results show that SHER helps MAGCLA achieve learning efficiency comparable to a single policy, and that the MAGCLA approach generalizes better across different tasks. The trained policies show higher adaptability in the robot malfunction test compared to the baseline multi-agent and single-agent approaches.
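A minimal sketch of the synchronization idea behind SHER, under simplifying assumptions: the whole episode is relabeled with one shared achieved goal, so every agent replays a consistent experience and receives the same relabeled reward (the 0.05 success threshold is illustrative):

    import numpy as np

    def sher_relabel(episode):
        """episode: list of per-step dicts holding all agents' data and the
        object's 'achieved_goal'; returns a relabeled copy shared by all agents."""
        new_goal = episode[-1]["achieved_goal"]  # one shared relabeled goal
        relabeled = []
        for step in episode:
            s = dict(step)
            s["goal"] = new_goal                 # same goal for every agent
            # one shared sparse reward keeps the agents' learning signals in sync
            s["reward"] = float(np.linalg.norm(s["achieved_goal"] - new_goal) < 0.05)
            relabeled.append(s)
        return relabeled

    episode = [{"achieved_goal": np.array([0.1 * t, 0.0])} for t in range(5)]
    for s in sher_relabel(episode):
        print(s["goal"], s["reward"])
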
Multi-Phase Multi-Objective Dexterous Manipulation with Adaptive Hierarchical Curriculum
Tao, Lingfeng, Zhang, Jiucai, Zhang, Xiaoli
Dexterous manipulation tasks usually have multiple objectives, and the priorities of these objectives may vary at different phases of a manipulation task. Varying priorities make it hard, or even impossible, for a robot to learn an optimal policy with a deep reinforcement learning (DRL) method. To solve this problem, we develop a novel Adaptive Hierarchical Reward Mechanism (AHRM) to guide the DRL agent to learn manipulation tasks with multiple prioritized objectives. The AHRM determines the objective priorities during the learning process and updates the reward hierarchy to adapt to the changing objective priorities at different phases. The proposed method is validated in a multi-objective manipulation task with a JACO robot arm in which the robot must manipulate a target surrounded by obstacles. Simulation and physical experiment results show that the proposed method improves robot learning in both task performance and learning efficiency.
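A toy sketch of the adaptation step (the objective names, moving-average update, and power-of-two weights are illustrative assumptions, not the AHRM's actual update rule): the objective with the least recent progress is promoted to the top of the reward hierarchy.

    objectives = ["reach_target", "avoid_obstacles", "minimize_jerk"]
    progress = {o: 0.0 for o in objectives}  # moving average of per-objective success

    def update_hierarchy(recent_scores, alpha=0.2):
        for o, score in recent_scores.items():
            progress[o] = (1 - alpha) * progress[o] + alpha * score
        # the lowest-progress objective gets the highest priority weight
        order = sorted(objectives, key=lambda o: progress[o])
        return {o: 2.0 ** (len(order) - 1 - rank) for rank, o in enumerate(order)}

    weights = update_hierarchy({"reach_target": 0.9,
                                "avoid_obstacles": 0.3,
                                "minimize_jerk": 0.7})
    print(weights)  # avoid_obstacles now carries the largest weight
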
Comprehensive process-molten pool relations modeling using CNN for wire-feed laser additive manufacturing
Jamnikar, Noopur, Liu, Sen, Brice, Craig, Zhang, Xiaoli
Wire-feed laser additive manufacturing (WLAM) is gaining wide interest due to its high level of automation, high deposition rates, and good quality of printed parts. In-process monitoring and feedback controls that would reduce the uncertainty in material quality are in the early stages of development. Machine learning promises to accelerate the adoption of new processes and property design in additive manufacturing by making process-structure-property connections between process setting inputs and material quality outcomes. The molten pool's dimensional information and temperature are the indicators for achieving a high-quality build, and they can be directly controlled by the processing parameters. For in situ quality control, the process parameters should be controlled in real time based on information sensed from the process, in particular the molten pool. Thus, the molten pool-process relations are of primary importance. This paper analyzes experimentally collected in situ sensing data from the molten pool under a set of controlled process parameters in a WLAM system. The variations in the steady state and transient state of the molten pool are presented with respect to changes in the independent process parameters. A multi-modality convolutional neural network (CNN) architecture is proposed for predicting the control parameters directly from the measurable molten pool sensor data to achieve the desired geometric and microstructural properties. Dropout and regularization are applied to the CNN architecture to avoid overfitting. The results highlight that the multi-modal CNN, which receives the temperature profile as an external feature alongside the features extracted from the image data, has improved prediction performance compared to the image-based uni-modality CNN approach.
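A minimal PyTorch sketch of the multi-modality idea: convolutional features from the molten pool image are concatenated with the temperature profile before the regression head. The input sizes, layer widths, and the three predicted process parameters are assumptions for illustration, not the paper's exact architecture.

    import torch
    import torch.nn as nn

    class MoltenPoolNet(nn.Module):
        def __init__(self, temp_len=16, n_params=3):
            super().__init__()
            self.image_branch = nn.Sequential(  # features from pool images
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Flatten(),
            )
            self.head = nn.Sequential(          # fuse image + temperature features
                nn.Linear(32 * 16 * 16 + temp_len, 128), nn.ReLU(),
                nn.Dropout(0.3),                # regularization, as the text notes
                nn.Linear(128, n_params),
            )

        def forward(self, image, temperature):
            x = self.image_branch(image)
            x = torch.cat([x, temperature], dim=1)  # temperature as external feature
            return self.head(x)

    net = MoltenPoolNet()
    out = net(torch.randn(4, 1, 64, 64), torch.randn(4, 16))
    print(out.shape)  # torch.Size([4, 3])
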
Forming Human-Robot Cooperation for Tasks with General Goal using Evolutionary Value Learning
Tao, Lingfeng, Bowman, Michael, Zhang, Jiucai, Zhang, Xiaoli
In human-robot cooperation (HRC), the robot cooperates with the human to accomplish the task together. Existing approaches assume the human has a specific goal during the cooperation, which the robot infers and acts toward. However, in real-world environments, a human usually has only a general goal (e.g., a general direction or area in motion planning) at the beginning of the cooperation, which needs to be clarified into a specific goal (e.g., an exact position) during cooperation. This specification process is interactive and dynamic and depends on the environment and the behavior of the partners. A robot that does not consider the goal specification process may frustrate the human partner, prolong the time needed to reach an agreement, and degrade or even fail team performance. We present the Evolutionary Value Learning (EVL) approach, which uses a State-based Multivariate Bayesian Inference method to model the dynamics of the goal specification process in HRC, and an Evolutionary Value Updating method to actively enhance goal specification and cooperation formation. This enables the robot to simultaneously help the human specify the goal and learn a cooperative policy in a Reinforcement Learning manner. In experiments with real human subjects, the robot equipped with EVL outperforms existing methods, with faster goal specification and better team performance.
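The Bayesian half of the approach can be sketched as a belief over candidate specific goals that sharpens as the human moves; the heading-alignment likelihood below is an illustrative stand-in for the paper's State-based Multivariate Bayesian Inference model.

    import numpy as np

    goals = np.array([[1.0, 0.0], [0.8, 0.6], [0.0, 1.0]])  # candidate positions
    belief = np.ones(len(goals)) / len(goals)               # uniform prior

    def update_belief(belief, human_pos, human_vel, beta=4.0):
        # likelihood: motion heading toward a goal makes that goal more probable
        to_goal = goals - human_pos
        to_goal /= np.linalg.norm(to_goal, axis=1, keepdims=True)
        align = to_goal @ (human_vel / (np.linalg.norm(human_vel) + 1e-8))
        posterior = belief * np.exp(beta * align)
        return posterior / posterior.sum()

    belief = update_belief(belief, np.array([0.0, 0.0]), np.array([0.9, 0.1]))
    print(belief.round(3))  # mass shifts toward the goal the human heads to
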
A Review of Methodologies for Natural-Language-Facilitated Human-Robot Cooperation
Liu, Rui, Zhang, Xiaoli
Natural-language-facilitated human-robot cooperation (NLC) refers to using natural language (NL) to facilitate interactive information sharing and task execution, under a common goal constraint, between robots and humans. Recently, NLC research has received increasing attention. Typical NLC scenarios include robotic daily assistance, robotic health caregiving, intelligent manufacturing, autonomous navigation, and robotic social companionship. However, a thorough review that reveals the latest methodologies for using NL to facilitate human-robot cooperation is missing. This review presents a comprehensive summary of methodologies for NLC, which has three main research focuses: NL instruction understanding, NL-based execution plan generation, and knowledge-world mapping. In-depth analyses of theoretical methods, applications, and model advantages and disadvantages are made. Based on our paper review and perspective, potential research directions for NLC are summarized.
Systems of natural-language-facilitated human-robot cooperation: A review
Liu, Rui, Zhang, Xiaoli
Natural-language-facilitated human-robot cooperation (NLC), in which natural language (NL) is used to share knowledge between a human and a robot for conducting intuitive human-robot cooperation (HRC), has developed continuously over the recent decade. Currently, NLC is used in several robotic domains such as manufacturing, daily assistance, and health caregiving. It is necessary to summarize current NLC-based robotic systems and discuss future development trends, providing helpful information for future NLC research. In this review, we first analyze the driving forces behind NLC research. With regard to a robot's cognition level during the cooperation, the NLC implementations are then categorized into four types (NL-based control, NL-based robot training, NL-based task execution, and NL-based social companionship) for comparison and discussion. Last, based on our perspective and comprehensive paper review, future research trends are discussed.