
Supplementary Materials: Improving Deep Learning Interpretability by Saliency Guided Training

Neural Information Processing Systems

This would be particularly useful for large datasets like ImageNet. Table 2 shows the area under the accuracy-drop curve (AUC) on MNIST (Figure 4) for the gradient method when training traditionally, training with the saliency guided procedure, and fine-tuning (a smaller AUC indicates better performance).
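The accuracy-drop metric mentioned here removes the features an attribution method ranks as most important and tracks how quickly accuracy falls. Below is a minimal sketch of how such an AUC can be computed, assuming a precomputed per-example saliency ranking and a generic `accuracy_fn` helper; the names are illustrative, not from the paper's code.

```python
import numpy as np

def accuracy_drop_auc(model, X, y, saliency_rank, fractions, accuracy_fn, mask_value=0.0):
    """Area under the accuracy-vs-fraction-masked curve (trapezoidal rule).

    saliency_rank: (n_samples, n_features) feature indices sorted from most
    to least salient, e.g. np.argsort(-saliency, axis=1). A smaller AUC
    means the attributions point at features the model truly relies on.
    """
    n_samples, n_features = X.shape
    rows = np.arange(n_samples)[:, None]
    accs = []
    for frac in fractions:
        k = int(frac * n_features)
        X_masked = X.copy()
        X_masked[rows, saliency_rank[:, :k]] = mask_value  # drop top-k salient features
        accs.append(accuracy_fn(model, X_masked, y))
    return np.trapz(accs, fractions)

# Usage sketch: accuracy_drop_auc(..., fractions=np.linspace(0.0, 1.0, 11), ...)
```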


A Survey of Reasoning and Agentic Systems in Time Series with Large Language Models

Chang, Ching, Shi, Yidan, Cao, Defu, Yang, Wei, Hwang, Jeehyun, Wang, Haixin, Pang, Jiacheng, Wang, Wei, Liu, Yan, Peng, Wen-Chih, Chen, Tien-Fu

arXiv.org Artificial Intelligence

Time series reasoning treats time as a first-class axis and incorporates intermediate evidence directly into the answer. This survey defines the problem and organizes the literature by reasoning topology with three families: direct reasoning in one step, linear chain reasoning with explicit intermediates, and branch-structured reasoning that explores, revises, and aggregates. The topology is crossed with the main objectives of the field, including traditional time series analysis, explanation and understanding, causal inference and decision making, and time series generation, while a compact tag set spans these axes and captures decomposition and verification, ensembling, tool use, knowledge access, multimodality, agent loops, and LLM alignment regimes. Methods and systems are reviewed across domains, showing what each topology enables and where it breaks down in faithfulness or robustness, along with curated datasets, benchmarks, and resources that support study and deployment (https://github.com/blacksnail789521/Time-Series-Reasoning-Survey). Evaluation practices that keep evidence visible and temporally aligned are highlighted, and guidance is distilled on matching topology to uncertainty, grounding with observable artifacts, planning for shift and streaming, and treating cost and latency as design budgets. We emphasize that reasoning structures must balance capacity for grounding and self-correction against computational cost and reproducibility, while future progress will likely depend on benchmarks that tie reasoning quality to utility and on closed-loop testbeds that trade off cost and risk under shift-aware, streaming, and long-horizon settings. Taken together, these directions mark a shift from narrow accuracy toward reliability at scale, enabling systems that not only analyze but also understand, explain, and act on dynamic worlds with traceable evidence and credible outcomes.
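As a rough illustration of the three topologies the survey organizes the literature by, the sketch below expresses them as control flow around a single hypothetical `llm` callable; none of this is code from the survey or its repository.

```python
from typing import Callable, List

LLM = Callable[[str], str]  # hypothetical text-in/text-out interface

def direct(llm: LLM, question: str) -> str:
    # Direct reasoning: one step, no explicit intermediates.
    return llm(f"Answer directly: {question}")

def linear_chain(llm: LLM, question: str, steps: int = 3) -> str:
    # Linear chain: each intermediate becomes visible evidence for the next step.
    evidence = ""
    for i in range(steps):
        evidence += llm(f"Step {i + 1} toward: {question}\nEvidence so far:\n{evidence}") + "\n"
    return llm(f"Given this chain of evidence:\n{evidence}\nFinal answer to: {question}")

def branch_structured(llm: LLM, question: str, branches: int = 3) -> str:
    # Branch-structured: explore several chains, then revise and aggregate.
    candidates: List[str] = [linear_chain(llm, f"(variant {b}) {question}")
                             for b in range(branches)]
    return llm("Aggregate and reconcile these candidate answers:\n" + "\n---\n".join(candidates))
```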



TRAD: Enhancing LLM Agents with Step-Wise Thought Retrieval and Aligned Decision

Zhou, Ruiwen, Yang, Yingxuan, Wen, Muning, Wen, Ying, Wang, Wenhao, Xi, Chunling, Xu, Guoqiang, Yu, Yong, Zhang, Weinan

arXiv.org Artificial Intelligence

Numerous large language model (LLM) agents have been built for tasks like web navigation and online shopping, owing to LLMs' broad knowledge and text-understanding ability. Many of these works utilize in-context examples to achieve generalization without the need for fine-tuning, but few consider how to select and effectively utilize these examples. Recently, methods based on trajectory-level retrieval with task meta-data, using whole trajectories as in-context examples, have been proposed to improve the agent's overall performance in some sequential decision making tasks. However, these methods can be problematic: the retrieved examples may be merely plausible, lacking task-specific state transition dynamics, and the long inputs carry plenty of irrelevant context. In this paper, we propose a novel framework (TRAD) to address these issues. TRAD first conducts Thought Retrieval, achieving step-level demonstration selection via thought matching, leading to more helpful demonstrations and less irrelevant input noise. Then, TRAD introduces Aligned Decision, complementing retrieved demonstration steps with their previous or subsequent steps, which enables tolerance for imperfect thoughts and provides a choice for balancing more context against less noise. Extensive experiments on the ALFWorld and Mind2Web benchmarks show that TRAD not only outperforms state-of-the-art models but also effectively reduces noise and promotes generalization. Furthermore, TRAD has been deployed in real-world scenarios at a global business insurance company and improves the success rate of robotic process automation.
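A minimal sketch of the step-level retrieval idea described above, assuming a hypothetical `embed` text-embedding function and demonstrations stored as lists of (thought, action) steps; the layout and names are illustrative, not TRAD's implementation.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def retrieve_aligned_steps(current_thought, demos, embed, top_k=3, window=1):
    """Thought Retrieval + Aligned Decision, schematically.

    demos: list of trajectories, each a list of (thought, action) pairs.
    embed: text -> vector (hypothetical embedding function).
    Each of the top_k best-matching demonstration steps is returned together
    with its neighbors within `window` steps, trading context for noise.
    """
    q = embed(current_thought)
    scored = []
    for ti, traj in enumerate(demos):
        for si, (thought, _action) in enumerate(traj):
            scored.append((cosine(q, embed(thought)), ti, si))
    scored.sort(reverse=True)

    snippets = []
    for _score, ti, si in scored[:top_k]:
        lo, hi = max(0, si - window), min(len(demos[ti]), si + window + 1)
        snippets.append(demos[ti][lo:hi])  # matched step plus aligned neighbors
    return snippets
```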


Semi-Abstract Value-Based Argumentation Framework

Jeromela, Jovan

arXiv.org Artificial Intelligence

In his seminal paper, Phan Minh Dung (1995) proposed the abstract argumentation framework, which models argumentation using directed graphs where structureless arguments are the nodes and attacks among the arguments are the edges. In the following years, many extensions of this framework were introduced. These extensions typically add a certain form of structure to the arguments. This thesis showcases two such extensions -- the value-based argumentation framework by Trevor Bench-Capon (2002) and the semi-abstract argumentation framework by Esther Anna Corsi and Christian Fermüller (2017). The former introduces a mapping function that links individual arguments to a set of ordered values, enabling a distinction between objectively and subjectively acceptable arguments. The latter links claims of individual arguments to propositional formulae and then applies newly introduced attack principles in order to make implicit attacks explicit and to enable a definition of a consequence relation that relies on neither truth values nor interpretations in the usual sense. The contribution of this thesis is two-fold. Firstly, the new semi-abstract value-based argumentation framework is introduced. This framework maps propositional formulae associated with individual arguments to a set of ordered values. Secondly, a complex moral dilemma is formulated using the original and the value-based argumentation frameworks, showcasing the expressivity of these formalisms.
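The base formalism here is Dung's attack graph, and its semantics are easy to make concrete in code. The sketch below computes the grounded extension (the most skeptical set of acceptable arguments) of a plain abstract framework; it illustrates the 1995 foundation only, not the thesis's value-based or semi-abstract extensions, and the encoding is an assumption for illustration.

```python
def grounded_extension(arguments, attacks):
    """Grounded extension of a Dung-style abstract argumentation framework.

    arguments: set of argument labels (nodes).
    attacks: set of (attacker, target) pairs (directed edges).
    Repeatedly accept every argument all of whose attackers are already
    defeated (i.e., attacked by an accepted argument), until a fixpoint.
    """
    attackers = {a: {s for (s, t) in attacks if t == a} for a in arguments}
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments - accepted - defeated:
            if attackers[a] <= defeated:  # every attacker is defeated
                accepted.add(a)
                defeated |= {t for (s, t) in attacks if s == a}
                changed = True
    return accepted

# a attacks b, b attacks c  ->  grounded extension is {a, c}
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```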


Marrying Fairness and Explainability in Supervised Learning

Grabowicz, Przemyslaw, Perello, Nicholas, Mishra, Aarshee

arXiv.org Artificial Intelligence

Machine learning algorithms that aid human decision-making may inadvertently discriminate against certain protected groups. We formalize direct discrimination as a direct causal effect of the protected attributes on the decisions, and induced discrimination as a change in the causal influence of non-protected features associated with the protected attributes. Measurements of the marginal direct effect (MDE) and SHapley Additive exPlanations (SHAP) reveal that state-of-the-art fair learning methods can induce discrimination via association or reverse discrimination in synthetic and real-world datasets. To inhibit discrimination in algorithmic systems, we propose to nullify the influence of the protected attribute on the output of the system, while preserving the influence of the remaining features. We introduce and study post-processing methods achieving such objectives, finding that they yield relatively high model accuracy, prevent direct discrimination, and diminish various disparity measures, e.g., demographic disparity.
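One generic way to nullify the protected attribute's direct influence at prediction time is to marginalize the model's output over that attribute's distribution. The sketch below illustrates this idea under an assumed scikit-learn-style `predict_proba` interface; it is a simplified stand-in, not the paper's specific MDE-based post-processing method.

```python
import numpy as np

def marginalized_predict_proba(model, X, protected_col, attr_values, attr_probs):
    """Score X with the protected attribute integrated out.

    For each candidate value v of the protected attribute, overwrite the
    protected column with v, score, and average the results weighted by
    the attribute's marginal distribution. This removes the direct path
    from the protected attribute to the output while the remaining
    features keep their influence.
    """
    out = 0.0
    for v, p in zip(attr_values, attr_probs):
        X_v = X.copy()
        X_v[:, protected_col] = v
        out = out + p * model.predict_proba(X_v)
    return out
```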


Improving Deep Learning Interpretability by Saliency Guided Training

Ismail, Aya Abdelsalam, Bravo, Héctor Corrada, Feizi, Soheil

arXiv.org Artificial Intelligence

Saliency methods have been widely used to highlight important input features in model predictions. Most existing methods use backpropagation on a modified gradient function to generate saliency maps. Thus, noisy gradients can result in unfaithful feature attributions. In this paper, we tackle this issue and introduce a saliency guided training procedure for neural networks to reduce noisy gradients used in predictions while retaining the predictive performance of the model. Our saliency guided training procedure iteratively masks features with small and potentially noisy gradients while maximizing the similarity of model outputs for both masked and unmasked inputs. We apply the saliency guided training procedure to various synthetic and real data sets from computer vision, natural language processing, and time series across diverse neural architectures, including recurrent neural networks, convolutional networks, and Transformers. Through qualitative and quantitative evaluations, we show that saliency guided training significantly improves model interpretability across various domains while preserving predictive performance.
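A minimal PyTorch sketch of one saliency guided training step as the abstract describes it: mask the k input features with the smallest-magnitude gradients, then add a term that keeps outputs on masked and unmasked inputs similar. The mask value, the KL direction, and the hyperparameter names are assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def saliency_guided_step(model, x, y, optimizer, k, lam=1.0):
    """One training step: cross-entropy plus a KL term that keeps the
    model's output stable when low-saliency features are masked."""
    # Input gradients serve as the saliency signal (not part of the update).
    x = x.detach().requires_grad_(True)
    grads = torch.autograd.grad(F.cross_entropy(model(x), y), x)[0]

    # Mask the k features with the smallest |gradient| per example.
    low_idx = grads.abs().flatten(1).topk(k, dim=1, largest=False).indices
    x_masked = x.detach().flatten(1).clone()
    x_masked.scatter_(1, low_idx, 0.0)  # assumed mask value: zero
    x_masked = x_masked.view_as(x)

    # Combined objective: predictive loss + masked/unmasked output similarity.
    optimizer.zero_grad()
    logits = model(x.detach())
    log_p = F.log_softmax(logits, dim=1)
    log_p_masked = F.log_softmax(model(x_masked), dim=1)
    loss = F.cross_entropy(logits, y) + lam * F.kl_div(
        log_p_masked, log_p, log_target=True, reduction="batchmean")
    loss.backward()
    optimizer.step()
    return loss.item()
```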