
Collaborating Authors: wisconsin-madison


V2X-LLM: Enhancing V2X Integration and Understanding in Connected Vehicle Corridors

Wu, Keshu, Li, Pei, Zhou, Yang, Gan, Rui, You, Junwei, Cheng, Yang, Zhu, Jingwen, Parker, Steven T., Ran, Bin, Noyce, David A., Tu, Zhengzhong

arXiv.org Artificial Intelligence

The advancement of Connected and Automated Vehicles (CAVs) and Vehicle-to-Everything (V2X) communication offers significant potential for enhancing transportation safety, mobility, and sustainability. However, the integration and analysis of diverse and voluminous V2X data, including Basic Safety Messages (BSMs) and Signal Phase and Timing (SPaT) data, present substantial challenges, especially in connected vehicle corridors. These challenges include managing large data volumes, ensuring real-time data integration, and understanding complex traffic scenarios. Although existing deployments have developed an advanced CAV data pipeline that enables real-time communication between vehicles, infrastructure, and other road users for managing connected vehicle and roadside unit (RSU) data, significant hurdles in data comprehension and real-time scenario analysis and reasoning persist. To address these issues, we introduce the V2X-LLM framework, a novel enhancement to the existing connected vehicle (CV) data pipeline. V2X-LLM leverages Large Language Models (LLMs) to improve the understanding and real-time analysis of V2X data. The framework includes four key tasks: Scenario Explanation, offering detailed narratives of traffic conditions; V2X Data Description, detailing vehicle and infrastructure statuses; State Prediction, forecasting future traffic states; and Navigation Advisory, providing optimized routing instructions. By integrating LLM-driven reasoning with V2X data within the data pipeline, V2X-LLM offers real-time feedback and decision support for traffic management, improving the accuracy of traffic analysis, safety, and traffic optimization. Demonstrations in a real-world urban corridor highlight the framework's potential to advance intelligent transportation systems.
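All four tasks share one ingredient: condensing raw BSM and SPaT messages into textual context an LLM can reason over. A minimal sketch of that routing step, assuming hypothetical message fields and prompt templates (not the authors' actual implementation):

```python
# Illustrative sketch: routing V2X messages to task-specific LLM prompts.
# Message fields, task names, and templates are assumptions for illustration.

TASK_TEMPLATES = {
    "scenario_explanation": "Describe the current traffic scenario: {context}",
    "data_description": "Summarize vehicle and infrastructure status: {context}",
    "state_prediction": "Forecast the traffic state over the next minute: {context}",
    "navigation_advisory": "Suggest an optimized route given: {context}",
}

def summarize_bsm(bsm: dict) -> str:
    """Condense a Basic Safety Message into a short textual context."""
    return (f"vehicle {bsm['id']} at ({bsm['lat']:.5f}, {bsm['lon']:.5f}) "
            f"moving {bsm['speed_mps']:.1f} m/s heading {bsm['heading_deg']:.0f} deg")

def summarize_spat(spat: dict) -> str:
    """Condense a Signal Phase and Timing message."""
    return (f"intersection {spat['intersection']} phase {spat['phase']} "
            f"with {spat['remaining_s']}s remaining")

def build_prompt(task: str, bsms: list, spats: list) -> str:
    """Fuse BSM and SPaT summaries into one prompt for the chosen task."""
    context = "; ".join([summarize_bsm(b) for b in bsms] +
                        [summarize_spat(s) for s in spats])
    return TASK_TEMPLATES[task].format(context=context)

bsm = {"id": "veh-1", "lat": 43.07305, "lon": -89.40123,
       "speed_mps": 12.4, "heading_deg": 87}
spat = {"intersection": "Park-and-University", "phase": "green", "remaining_s": 14}
prompt = build_prompt("state_prediction", [bsm], [spat])
print(prompt)
```

The same context string can feed any of the four task templates, which is what lets one pipeline serve explanation, description, prediction, and advisory from the same data stream.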


Improving Bilingual Capabilities of Language Models to Support Diverse Linguistic Practices in Education

Syamkumar, Anand, Tseng, Nora, Barron, Kaycie, Yang, Shanglin, Karumbaiah, Shamya, Uppal, Rheeya, Hu, Junjie

arXiv.org Artificial Intelligence

Large language models (LLMs) offer promise in generating educational content, providing instructor feedback, and reducing teacher workload on assessments. While prior studies have examined LLM-powered learning analytics, limited research has assessed how effective LLMs are in bilingual contexts. In this paper, we study the effectiveness of multilingual large language models (MLLMs) across monolingual (English-only, Spanish-only) and bilingual (Spanglish) student writing. We present a learning analytics use case that details LLM performance in assessing acceptable and unacceptable explanations of Science and Social Science concepts. Our findings reveal a significant bias in the grading performance of pre-trained models on bilingual writing compared to English-only and Spanish-only writing. We then fine-tune open-source MLLMs, including Llama 3.1 and Mistral NeMo, using synthetic datasets generated in English, Spanish, and Spanglish. Our experiments indicate that the models perform significantly better in all three languages after fine-tuning with bilingual data. This study highlights the potential of enhancing MLLM effectiveness to support authentic language practices amongst bilingual learners, and illustrates the value of incorporating non-English languages into the design and implementation of language models in education.
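The bias finding above reduces to comparing grading accuracy per language group. A toy sketch of that comparison, with made-up predictions and gold labels (not the paper's data):

```python
# Toy sketch of a per-language bias check for an LLM grader.
# The records below are fabricated for illustration only.

def accuracy_by_language(records):
    """records: (language, predicted_label, gold_label) triples."""
    totals, correct = {}, {}
    for lang, pred, gold in records:
        totals[lang] = totals.get(lang, 0) + 1
        correct[lang] = correct.get(lang, 0) + (pred == gold)
    return {lang: correct[lang] / totals[lang] for lang in totals}

records = [
    ("english", "acceptable", "acceptable"),
    ("english", "unacceptable", "unacceptable"),
    ("spanish", "acceptable", "acceptable"),
    ("spanish", "acceptable", "unacceptable"),
    ("spanglish", "unacceptable", "acceptable"),
    ("spanglish", "unacceptable", "unacceptable"),
]
acc = accuracy_by_language(records)
```

A gap between the per-language accuracies (e.g. English well above Spanglish) is the kind of disparity the abstract reports for pre-trained models.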


A Digital Twin Framework for Physical-Virtual Integration in V2X-Enabled Connected Vehicle Corridors

Wu, Keshu, Li, Pei, Cheng, Yang, Parker, Steven T., Ran, Bin, Noyce, David A., Ye, Xinyue

arXiv.org Artificial Intelligence

Transportation Cyber-Physical Systems (T-CPS) are critical in improving traffic safety, reliability, and sustainability by integrating computing, communication, and control in transportation systems. The connected vehicle corridor is at the forefront of this transformation, where Cellular Vehicle-to-Everything (C-V2X) technology facilitates real-time data exchange between infrastructure, vehicles, and road users. However, challenges remain in processing and synchronizing the vast volumes of V2X data generated by vehicles and roadside units, particularly in ensuring scalability, data integrity, and operational resilience. This paper presents a digital twin framework for T-CPS, developed from a real-world connected vehicle corridor, to address these challenges. By leveraging C-V2X technology and real-time data from infrastructure, vehicles, and road users, the digital twin accurately replicates vehicle behaviors, signal phases, and traffic patterns within the CARLA simulation environment. Extensive experiments demonstrate high fidelity between the physical and digital systems and robust synchronization of vehicle trajectories and signal phases. Moreover, the digital twin's scalable and redundant architecture enhances data integrity, making it capable of supporting future large-scale C-V2X deployments. The digital twin is thus a vital tool in T-CPS, enabling real-time traffic monitoring, prediction, and optimization to enhance the reliability and safety of transportation systems.
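One concrete fidelity check such a twin needs is comparing a physical vehicle trajectory against its simulated replica after aligning timestamps. A minimal sketch, assuming hypothetical (timestamp, x, y) samples and simple linear interpolation rather than the paper's actual synchronization mechanism:

```python
# Sketch of a physical-vs-digital trajectory fidelity check.
# Sample data and the RMSE metric are illustrative assumptions.
import math
from bisect import bisect_left

def interpolate(track, t):
    """Linearly interpolate a sorted (t, x, y) sample list at time t."""
    times = [p[0] for p in track]
    i = bisect_left(times, t)
    if i == 0:
        return track[0][1:]
    if i >= len(track):
        return track[-1][1:]
    t0, x0, y0 = track[i - 1]
    t1, x1, y1 = track[i]
    a = (t - t0) / (t1 - t0)
    return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))

def trajectory_rmse(physical, digital):
    """RMSE of positions, sampling the digital twin at the physical timestamps."""
    errs = []
    for t, x, y in physical:
        dx, dy = interpolate(digital, t)
        errs.append((x - dx) ** 2 + (y - dy) ** 2)
    return math.sqrt(sum(errs) / len(errs))

physical = [(0.0, 0.0, 0.0), (1.0, 10.0, 0.0), (2.0, 20.0, 0.0)]
digital = [(0.0, 0.2, 0.0), (0.5, 5.1, 0.1), (1.5, 15.2, 0.0), (2.0, 20.1, 0.1)]
rmse = trajectory_rmse(physical, digital)
```

Resampling the digital track at the physical timestamps sidesteps the fact that the two systems log at different, unsynchronized rates.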


Goal-based Neural Physics Vehicle Trajectory Prediction Model

Gan, Rui, Shi, Haotian, Li, Pei, Wu, Keshu, An, Bocheng, Li, Linheng, Ma, Junyi, Ma, Chengyuan, Ran, Bin

arXiv.org Artificial Intelligence

Vehicle trajectory prediction plays a vital role in intelligent transportation systems and autonomous driving, as it significantly affects vehicle behavior planning and control, thereby influencing traffic safety and efficiency. Numerous studies have predicted short-term vehicle trajectories in the immediate future. However, long-term trajectory prediction remains a major challenge due to accumulated errors and uncertainties. Additionally, balancing accuracy with interpretability remains another challenge in trajectory prediction. To address these challenges, this paper proposes a Goal-based Neural Physics vehicle trajectory prediction model (GNP). The GNP model simplifies vehicle trajectory prediction into a two-stage process: determining the vehicle's goal and then choosing the appropriate trajectory to reach this goal. The model contains two sub-modules to achieve this. The first sub-module employs a multi-head attention mechanism to accurately predict goals. The second sub-module integrates a deep learning model with a physics-based social force model to progressively predict the complete trajectory using the generated goals. GNP demonstrates state-of-the-art long-term prediction accuracy compared to four baseline models. We provide interpretable visualization results to highlight the multi-modality and physically grounded nature of our neural physics framework. Additionally, ablation studies validate the effectiveness of our key designs.
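The physics half of the two-stage process can be illustrated with a plain social-force rollout toward a given goal. In GNP the goal comes from the learned attention module and the force model is fused with a deep network; here the goal is simply supplied as input, and all parameter values are illustrative:

```python
# Minimal social-force-style rollout toward a fixed goal (physics part only).
# tau (relaxation time) and desired_speed are illustrative parameters.
import math

def social_force_rollout(pos, vel, goal, steps=50, dt=0.1,
                         tau=0.5, desired_speed=10.0):
    """Relax velocity toward the goal direction and integrate position."""
    x, y = pos
    vx, vy = vel
    traj = [(x, y)]
    for _ in range(steps):
        dx, dy = goal[0] - x, goal[1] - y
        dist = math.hypot(dx, dy)
        if dist < 1e-6:
            break
        # Desired velocity: unit vector toward the goal at the preferred speed.
        dvx = desired_speed * dx / dist
        dvy = desired_speed * dy / dist
        # Relaxation force pulls the current velocity toward the desired one.
        vx += (dvx - vx) / tau * dt
        vy += (dvy - vy) / tau * dt
        x += vx * dt
        y += vy * dt
        traj.append((x, y))
    return traj

traj = social_force_rollout(pos=(0.0, 0.0), vel=(5.0, 0.0), goal=(40.0, 10.0))
```

Conditioning the rollout on an explicit goal is what makes the prediction interpretable: each candidate goal yields one physically plausible trajectory, giving the multi-modality the abstract describes.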


DMLR: Data-centric Machine Learning Research -- Past, Present and Future

Oala, Luis, Maskey, Manil, Bat-Leah, Lilith, Parrish, Alicia, Gürel, Nezihe Merve, Kuo, Tzu-Sheng, Liu, Yang, Dror, Rotem, Brajovic, Danilo, Yao, Xiaozhe, Bartolo, Max, Rojas, William A Gaviria, Hileman, Ryan, Aliment, Rainier, Mahoney, Michael W., Risdal, Meg, Lease, Matthew, Samek, Wojciech, Dutta, Debojyoti, Northcutt, Curtis G, Coleman, Cody, Hancock, Braden, Koch, Bernard, Tadesse, Girmaw Abebe, Karlaš, Bojan, Alaa, Ahmed, Dieng, Adji Bousso, Noy, Natasha, Reddi, Vijay Janapa, Zou, James, Paritosh, Praveen, van der Schaar, Mihaela, Bollacker, Kurt, Aroyo, Lora, Zhang, Ce, Vanschoren, Joaquin, Guyon, Isabelle, Mattson, Peter

arXiv.org Artificial Intelligence

Drawing from discussions at the inaugural DMLR workshop at ICML 2023 and meetings prior, in this report we outline the relevance of community engagement and infrastructure development for the creation of next-generation public datasets that will advance machine learning science. We chart a path forward as a collective effort to sustain the creation and maintenance of these datasets and methods towards positive scientific, societal and business impact.


Exploring the Use of Collaborative Robots in Cinematography

Praveena, Pragathi, Cagiltay, Bengisu, Gleicher, Michael, Mutlu, Bilge

arXiv.org Artificial Intelligence

Robotic technology can support the creation of new tools that improve the creative process of cinematography. It is crucial to consider the specific requirements and perspectives of industry professionals when designing and developing these tools. In this paper, we present the results from exploratory interviews with three cinematography practitioners, which included a demonstration of a prototype robotic system. We identified many factors that can impact the design, adoption, and use of robotic support for cinematography, including: (1) the ability to meet requirements for cost, quality, mobility, creativity, and reliability; (2) the compatibility and integration of tools with existing workflows, equipment, and software; and (3) the potential for new creative opportunities that robotic technology can open up. Our findings provide a starting point for future co-design projects that aim to support the work of cinematographers with collaborative robots.


Here's How Forbes Got The ChatGPT AI To Write 2 College Essays In 20 Minutes

#artificialintelligence

Not only does ChatGPT write clear, compelling essays, but it can also conjure up its own personal details and embellishments that could boost a student's chances of acceptance and would be difficult to verify. Forbes' full conversation with ChatGPT, OpenAI's newest natural language model, is pasted below. Each of the college admissions essays took less than 10 minutes to complete. Read our story about ChatGPT's capacity to write college applications here. Forbes: Hi GPT, I'd like you to write a college application essay as if you were an 18-year-old high school senior whose parents are from Bangalore, India but who now own a restaurant in Newton, Mass. He is a competitive swimmer, and in 10th grade he broke his shoulder. He is interested in majoring in business.


Exploring Generative Adversarial Networks for Image-to-Image Translation in STEM Simulation

Lawrence, Nick, Shen, Mingren, Yin, Ruiqi, Feng, Cloris, Morgan, Dane

arXiv.org Artificial Intelligence

Accurate scanning transmission electron microscopy (STEM) image simulation methods require large computation times that can make them infeasible for simulating many images. Other simulation methods based on linear imaging models, such as the convolution method, are much faster but are too inaccurate for practical use. In this paper, we explore deep learning models that attempt to translate a STEM image produced by the convolution method into a prediction of the high-accuracy multislice image. We then compare our results to those of regression methods. We find that a Generative Adversarial Network (GAN) provides the best results, performing at a similar accuracy level to previous regression models on the same dataset.


Artificial-intelligence tools supported

#artificialintelligence

Zhou Zhang, an assistant professor of biological systems engineering at the University of Wisconsin-Madison, recently was awarded three grants for her work to develop machine-learning models and artificial-intelligence tools to increase agricultural productivity and sustainability. The U.S. Department of Agriculture's National Institute of Food and Agriculture awarded the grants. Project Description: The research team comprises Zhang and Matthew Digman, both assistant professors of biological systems engineering, and Paul Mitchell, a professor of agricultural and applied economics, all at UW-Madison.


New machine learning and data science option offers ECE undergrads in-demand skills - College of Engineering - University of Wisconsin-Madison

#artificialintelligence

In the last couple of decades, technology has become very efficient at collecting information from the physical world, including wearable medical sensors, radar systems integrated into automobiles and satellites monitoring Earth's climate, as well as from humans by monitoring the decisions they make. But that massive trove of data is mostly useless on its own; sophisticated computer algorithms are needed to find patterns, extract meaning and make predictions from the data. That's why the University of Wisconsin-Madison Department of Electrical and Computer Engineering launched the machine learning and data science option for both undergraduate electrical engineering and computer engineering majors. The option requires 18 elective credits within the 120-credit bachelor's degree, consisting of courses focused on machine learning and data science in engineering. Courses in the option cover coding for data manipulation, analysis and visualization, and machine learning topics from applied linear algebra and probability through artificial neural networks and deep learning. When students graduate, the option is noted on their transcript, giving them a valuable credential in future employment searches.