Lyu, Cheng
AI-Driven Day-to-Day Route Choice
Wang, Leizhen, Duan, Peibo, He, Zhengbing, Lyu, Cheng, Chen, Xin, Zheng, Nan, Yao, Li, Ma, Zhenliang
Understanding individual travel behavior is critical for developing efficient and sustainable transportation systems. Travel behavior analysis aims to capture the decision-making process behind individual trips, including route choice, mode choice, departure time choice, and trip purpose. Among these choices, modeling route choice not only helps analyze and understand travelers' behavior but also constitutes an essential part of traffic assignment methods [1]. Specifically, it enables the evaluation of travelers' perceptions of route characteristics, the forecasting of behavior in hypothetical scenarios, the prediction of future traffic dynamics on transportation networks, and the understanding of travelers' responses to travel information. Real-world route choice is complex because of the inherent difficulty of accurately representing human behavior, travelers' limited knowledge of the network, uncertainty in their perceptions of route characteristics, and the lack of precise information about their preferences [1]. To overcome these limitations, Day-to-Day (DTD) traffic dynamics have attracted significant attention because they focus on drivers' dynamic shifts in route choice and the evolution of traffic flow over time, rather than merely on static equilibrium states. DTD models can flexibly incorporate diverse behavioral rules such as forecasting [2, 3], bounded rationality [4, 5], prospect-based decision-making [6, 7], marginal utility effects [8, 9], and social interactions [10]. Despite these advantages, identified in [11] and [12], DTD models still struggle to reproduce the fluctuations observed in traffic dynamics, particularly the persistent deviations around User Equilibrium (UE) noted in empirical studies [13, 14, 15]. To better understand traffic dynamics, Agent-Based Modeling (ABM) offers a promising alternative.
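As an illustration of the day-to-day dynamics described above, the following minimal sketch simulates travelers repeatedly choosing between two parallel routes with a logit rule and updating their perceived costs from experience. It is a generic textbook-style DTD loop, not the paper's AI-driven model; the free-flow times, capacities, logit sensitivity, and learning rate are assumed purely for illustration.

```python
import numpy as np

# Minimal day-to-day (DTD) route choice loop (illustrative, not the paper's
# AI-driven model): travelers choose between two parallel routes via a logit
# rule and update perceived costs from experienced travel times.

N = 1000                        # travelers per day
t0 = np.array([20.0, 25.0])     # assumed free-flow travel times (min)
cap = np.array([600.0, 800.0])  # assumed route capacities (veh/day)
theta = 0.5                     # logit sensitivity to cost differences
alpha = 0.3                     # learning rate for perceived-cost updates

def travel_time(flow):
    """BPR-type congestion function relating route flow to travel time."""
    return t0 * (1.0 + 0.15 * (flow / cap) ** 4)

perceived = t0.copy()           # day-1 perception equals free-flow times
for day in range(30):
    probs = np.exp(-theta * perceived)
    probs /= probs.sum()                       # logit route shares
    flow = N * probs
    experienced = travel_time(flow)
    # Day-to-day learning: blend yesterday's perception with today's experience
    perceived = (1 - alpha) * perceived + alpha * experienced
    print(f"day {day + 1:2d}  shares={probs.round(3)}  times={experienced.round(2)}")
```

Depending on the behavioral parameters, such a loop either settles near UE or keeps oscillating around it, which is exactly the kind of persistent deviation the abstract points to.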
EdgeQAT: Entropy and Distribution Guided Quantization-Aware Training for the Acceleration of Lightweight LLMs on the Edge
Shen, Xuan, Kong, Zhenglun, Yang, Changdi, Han, Zhaoyang, Lu, Lei, Dong, Peiyan, Lyu, Cheng, Li, Chih-hsiang, Guo, Xuehang, Shu, Zhihao, Niu, Wei, Leeser, Miriam, Zhao, Pu, Wang, Yanzhi
Despite the remarkable strides of Large Language Models (LLMs) in various fields, the wide application of LLMs on edge devices is limited by their massive parameter counts and computation. To address this, quantization is commonly adopted to generate lightweight LLMs with efficient computation and fast inference. However, Post-Training Quantization (PTQ) methods degrade dramatically in quality when weights, activations, and the KV cache are quantized together to below 8 bits. Besides, many Quantization-Aware Training (QAT) works quantize only the model weights and leave the activations untouched, which does not fully exploit the potential of quantization for inference acceleration on the edge. In this paper, we propose EdgeQAT, an Entropy and Distribution Guided QAT framework for the optimization of lightweight LLMs to achieve inference acceleration on edge devices. We first identify that the performance drop from quantization primarily stems from information distortion in quantized attention maps, demonstrated by the differing distributions of the quantized queries and keys in the self-attention mechanism. The entropy and distribution guided QAT is then proposed to mitigate this information distortion. Moreover, we design a token importance-aware adaptive method that dynamically quantizes tokens with different bit widths for further optimization and acceleration. Our extensive experiments verify substantial improvements with our framework across various datasets. Furthermore, we achieve an on-device speedup of up to 2.37x compared with the FP16 counterparts across multiple edge devices, signaling a groundbreaking advancement.
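For context on the kind of quantization the abstract refers to, the sketch below shows generic quantization-aware training with symmetric fake quantization of weights and activations and a straight-through estimator. It does not reproduce EdgeQAT's entropy- and distribution-guided objectives or its token importance-aware adaptive bit widths; the bit widths and layer shapes are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Generic quantization-aware training (QAT) sketch with symmetric fake
# quantization and a straight-through estimator (STE). EdgeQAT's entropy- and
# distribution-guided objectives and token-adaptive bit widths are not
# reproduced; bit widths and shapes below are assumptions.

def fake_quant(x, n_bits):
    """Symmetric per-tensor fake quantization; gradients pass through via STE."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale
    return x + (q - x).detach()  # forward: quantized value; backward: identity

class QuantLinear(nn.Module):
    """Linear layer that quantizes both its weights and its input activations."""
    def __init__(self, in_features, out_features, w_bits=4, a_bits=8):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.w_bits, self.a_bits = w_bits, a_bits

    def forward(self, x):
        x_q = fake_quant(x, self.a_bits)                   # quantize activations
        w_q = fake_quant(self.linear.weight, self.w_bits)  # quantize weights
        return F.linear(x_q, w_q, self.linear.bias)

# Toy training step on random data to show that gradients flow through the STE
layer = QuantLinear(16, 8)
optimizer = torch.optim.Adam(layer.parameters(), lr=1e-3)
x, y = torch.randn(32, 16), torch.randn(32, 8)
optimizer.zero_grad()
loss = F.mse_loss(layer(x), y)
loss.backward()
optimizer.step()
print(f"toy loss: {loss.item():.4f}")
```

Quantizing activations as well as weights, as in this sketch, is what enables low-bit integer kernels at inference time, which is the acceleration motivation stated in the abstract.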
Traffic4cast at NeurIPS 2022 -- Predict Dynamics along Graph Edges from Sparse Node Data: Whole City Traffic and ETA from Stationary Vehicle Detectors
Neun, Moritz, Eichenberger, Christian, Martin, Henry, Spanring, Markus, Siripurapu, Rahul, Springer, Daniel, Deng, Leyan, Wu, Chenwang, Lian, Defu, Zhou, Min, Lumiste, Martin, Ilie, Andrei, Wu, Xinhua, Lyu, Cheng, Lu, Qing-Long, Mahajan, Vishal, Lu, Yichao, Li, Jiezhang, Li, Junjun, Gong, Yue-Jiao, Grötschla, Florian, Mathys, Joël, Wei, Ye, Haitao, He, Fang, Hui, Malm, Kevin, Tang, Fei, Kopp, Michael, Kreil, David, Hochreiter, Sepp
The global trends of urbanization and increased personal mobility force us to rethink the way we live and use urban space. The Traffic4cast competition series tackles this problem in a data-driven way, advancing the latest methods in machine learning for modeling complex spatial systems over time. In this edition, our dynamic road graph data combine information from road maps, $10^{12}$ probe data points, and stationary vehicle detectors in three cities over the span of two years. While stationary vehicle detectors are the most accurate way to capture traffic volume, they are available at only a few locations. Traffic4cast 2022 explores models that can generalize from loosely related temporal vertex data on just a few nodes to predict dynamic future traffic states on the edges of the entire road graph. In the core challenge, participants are invited to predict the likelihoods of three congestion classes, derived from the speed levels in the GPS data, for the entire road graph in three cities 15 min into the future. As model input for this task, we provide only vehicle count data from spatially sparse stationary vehicle detectors in these three cities, aggregated in 15 min time bins for the hour prior to the prediction time. For the extended challenge, participants are tasked with predicting the average travel times on super-segments 15 min into the future, where super-segments are longer sequences of road segments in the graph. The competition results provide an important advance in the prediction of complex city-wide traffic states from publicly available, sparse vehicle data alone, without the need for large amounts of real-time floating vehicle data.
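The following sketch illustrates, under assumptions, two of the data-handling steps mentioned in the abstract: aggregating detector counts into 15 min bins and mapping segment speeds to three congestion classes. The column names and the speed-ratio thresholds are hypothetical; the competition defines its own derivation of congestion classes from GPS speed levels.

```python
import numpy as np
import pandas as pd

# Illustrative preprocessing for a Traffic4cast-style setup: aggregate raw
# detector counts into 15 min bins and map segment speeds to three congestion
# classes. Column names and speed-ratio thresholds are assumptions; the
# competition defines its own congestion-class derivation from GPS speeds.

# Hypothetical raw detector readings (timestamp, detector id, vehicle count)
counts = pd.DataFrame({
    "t": pd.date_range("2022-06-01 08:00", periods=12, freq="5min"),
    "detector": ["d1"] * 12,
    "count": np.random.poisson(30, size=12),
})

# 15 min aggregation per detector (one hour of history = four bins per detector)
binned = (
    counts.set_index("t")
          .groupby("detector")["count"]
          .resample("15min")
          .sum()
          .reset_index()
)

def congestion_class(speed_kmh, free_flow_kmh):
    """Map a segment speed to {0: green, 1: yellow, 2: red} using assumed thresholds."""
    ratio = speed_kmh / free_flow_kmh
    if ratio > 0.8:
        return 0
    if ratio > 0.4:
        return 1
    return 2

print(binned)
print(congestion_class(25.0, free_flow_kmh=50.0))  # -> 1 (yellow) under these thresholds
```

In the competition setting, the binned detector counts play the role of the sparse node-level input, while the congestion classes are the edge-level targets to be predicted 15 min ahead.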