Yu, Shiyu
DT4ECG: A Dual-Task Learning Framework for ECG-Based Human Identity Recognition and Human Activity Detection
You, Siyu, Gu, Boyuan, Yang, Yanhui, Yu, Shiyu, Guo, Shisheng
This article introduces DT4ECG, an innovative dual-task learning framework for electrocardiogram (ECG)-based human identity recognition and activity detection. The framework employs a robust one-dimensional convolutional neural network (1D-CNN) backbone integrated with residual blocks to extract discriminative ECG features. To enhance feature representation, we propose a novel Sequence Channel Attention (SCA) mechanism, which combines channel-wise and sequential context attention to prioritize informative features across both temporal and channel dimensions. Furthermore, to address gradient imbalance in multi-task learning, we integrate GradNorm, a technique that dynamically adjusts loss weights based on gradient magnitudes to ensure balanced training across tasks. Experimental results demonstrate the superior performance of our model, which achieves accuracy rates of 99.12% in ID classification and 90.11% in activity classification. These findings underscore the potential of the DT4ECG framework to enhance security and user experience in applications such as fitness monitoring and personalized healthcare, presenting a transformative approach to integrating ECG-based biometrics into everyday technologies.
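The abstract names GradNorm as the multi-task balancing technique but gives no implementation details. As a rough illustration only, here is a minimal PyTorch sketch of a GradNorm-style loss balancer for two task heads sharing a backbone, in the spirit of the ID + activity setup above; the class name, hyperparameters, and training wiring are assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class GradNormWeights(nn.Module):
    """Hypothetical GradNorm-style balancer for task losses sharing a backbone."""
    def __init__(self, num_tasks: int = 2, alpha: float = 1.5):
        super().__init__()
        self.w = nn.Parameter(torch.ones(num_tasks))  # learnable per-task loss weights
        self.alpha = alpha                            # GradNorm asymmetry hyperparameter
        self.initial_losses = None                    # L_i(0), recorded on the first step

    def weighted_loss(self, task_losses):
        # Total training loss for the shared network: sum_i w_i * L_i
        return sum(self.w[i] * L for i, L in enumerate(task_losses))

    def gradnorm_loss(self, task_losses, shared_weight: torch.Tensor):
        losses = torch.stack([L.detach() for L in task_losses])
        if self.initial_losses is None:
            self.initial_losses = losses.clone()

        # Gradient norm of each weighted task loss w.r.t. a shared layer's weight.
        norms = torch.stack([
            torch.autograd.grad(self.w[i] * L, shared_weight,
                                retain_graph=True, create_graph=True)[0].norm()
            for i, L in enumerate(task_losses)
        ])

        # Relative inverse training rate: r_i = (L_i / L_i(0)) / mean_j(L_j / L_j(0)).
        ratios = losses / self.initial_losses
        r = ratios / ratios.mean()

        # Pull each gradient norm toward the mean norm scaled by r_i ** alpha.
        target = (norms.mean() * r ** self.alpha).detach()
        return (norms - target).abs().sum()
```

Following the original GradNorm recipe, the balancing loss would be backpropagated to the weights w alone (a separate optimizer step), after which the weights are renormalized to sum to the number of tasks.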
AutoManual: Generating Instruction Manuals by LLM Agents via Interactive Environmental Learning
Chen, Minghao, Li, Yihang, Yang, Yanting, Yu, Shiyu, Lin, Binbin, He, Xiaofei
Large Language Model (LLM)-based agents have shown promise in autonomously completing tasks across various domains, e.g., robotics, games, and web navigation. However, these agents typically require elaborate design and expert prompts to solve tasks in specific domains, which limits their adaptability. We introduce AutoManual, a framework that enables LLM agents to autonomously build their understanding through interaction and adapt to new environments. AutoManual categorizes environmental knowledge into diverse rules and optimizes them online through two agents: 1) the Planner codes actionable plans based on the current rules for interacting with the environment; 2) the Builder updates the rules through a well-structured rule system that facilitates online rule management and essential detail retention. To mitigate hallucinations in managing rules, we introduce a case-conditioned prompting strategy for the Builder. Finally, the Formulator agent compiles these rules into a comprehensive manual. The self-generated manual not only improves adaptability but also guides the planning of smaller LLMs, while remaining human-readable. Given only one simple demonstration, AutoManual significantly improves task success rates, achieving 97.4% with GPT-4-turbo and 86.2% with GPT-3.5-turbo on ALFWorld benchmark tasks. The source code will be available soon.
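To make the Planner/Builder/Formulator interplay concrete, here is a heavily simplified Python sketch of an AutoManual-style loop. Everything here is an assumption for illustration: `llm` stands for any chat-completion callable, `env.reset`/`env.execute` and `apply_rule_update` are hypothetical helpers, and the prompts are placeholders, not the paper's actual prompts or rule schema.

```python
def build_manual(llm, env, tasks, max_steps=20):
    rules = []  # the online rule system maintained by the Builder
    for task in tasks:
        obs = env.reset(task)
        # Planner: write an actionable plan (code) conditioned on the current rules.
        plan = llm(
            f"Known rules:\n{rules}\n\nTask: {task}\nInitial observation: {obs}\n"
            "Write executable code that attempts the task."
        )
        result = env.execute(plan, max_steps=max_steps)
        # Builder with case-conditioned prompting: the update prompt is chosen
        # by the episode's outcome, which helps curb hallucinated rules.
        case = "success" if result.success else "failure"
        update = llm(
            f"Episode outcome: {case}\nTrajectory:\n{result.trajectory}\n"
            f"Current rules:\n{rules}\n"
            "Propose rule additions, merges, or revisions, preserving key details."
        )
        rules = apply_rule_update(rules, update)  # structured rule management
    # Formulator: compile the optimized rules into a human-readable manual.
    return llm(f"Compile these rules into an instruction manual:\n{rules}")
```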