Goto

Collaborating Authors: Liu, Alexander


Anticipating Driving Behavior through Deep Learning-Based Policy Prediction

arXiv.org Artificial Intelligence

We developed a system that processes fused visual features derived from video frames captured by a standard camera together with depth information obtained from a point cloud scanner. The system predicts driving actions, specifically vehicle speed and steering angle. To assess its reliability, we compared the predicted actions against the behavior of skilled real-world drivers. Our evaluation shows that the predictions are accurate in at least half of the test scenarios (roughly 50-80%, depending on the model). Notably, the fused features outperformed video frames alone in most cases.
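The abstract describes fusing camera features with point-cloud depth to regress speed and steering angle. The sketch below shows one plausible shape for such a fusion network in PyTorch; the CNN image branch, the grid-projected depth input, and all layer sizes are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of a fused-feature driving-policy regressor.
# All module names, input shapes, and dimensions are illustrative
# assumptions, not the authors' actual architecture.
import torch
import torch.nn as nn

class FusedPolicyNet(nn.Module):
    def __init__(self, img_feat_dim=256, depth_feat_dim=128):
        super().__init__()
        # Image branch: a small CNN over RGB camera frames.
        self.img_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, img_feat_dim), nn.ReLU(),
        )
        # Depth branch: an MLP over a depth map, assuming the point
        # cloud has been projected onto a 64x64 grid beforehand.
        self.depth_encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64, depth_feat_dim), nn.ReLU(),
        )
        # Head regresses the two continuous actions: speed and steering angle.
        self.head = nn.Linear(img_feat_dim + depth_feat_dim, 2)

    def forward(self, rgb, depth):
        # Concatenate the two feature vectors; this is the "fused features"
        # path, versus feeding the image branch alone.
        fused = torch.cat([self.img_encoder(rgb), self.depth_encoder(depth)], dim=1)
        return self.head(fused)  # [batch, 2] -> (speed, steering angle)

model = FusedPolicyNet()
rgb = torch.randn(4, 3, 128, 128)   # batch of camera frames
depth = torch.randn(4, 1, 64, 64)   # projected depth maps
print(model(rgb, depth).shape)      # torch.Size([4, 2])
```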


Crowdsourcing Evaluations of Classifier Interpretability

AAAI Conferences

This paper presents work using crowdsourcing to assess explanations for supervised text classification. Here, an explanation is defined as a set of words from the input text that a classifier or human considers most useful for making a classification decision. We compared two types of explanations for classification decisions, human-generated and computer-generated, on two criteria: whether the type of an explanation was identifiable and which type was preferred. Crowdsourcing was used to collect two kinds of data for these experiments. First, human-generated explanations were collected by having users select an appropriate category for a piece of text and highlight the words that best support that category. Second, users were asked to compare human- and computer-generated explanations and indicate which they preferred and why. The crowdsourced data was collected primarily via Amazon's Mechanical Turk, using several quality-control methods. We found that for one test corpus the two explanation types were virtually indistinguishable and participants had no significant preference for either type. For another corpus, the explanations were slightly more distinguishable, and participants preferred the computer-generated explanations at a small but statistically significant level. We conclude that computer-generated explanations for text classification can be comparable in quality to human-generated explanations.
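As defined above, a computer-generated explanation is the set of input words a classifier finds most useful for its decision. The paper does not specify which classifier it used; the sketch below illustrates the idea under the assumption of a TF-IDF plus logistic-regression model in scikit-learn, selecting the in-document words with the largest weights toward the predicted class. The training texts, labels, and function names are all illustrative.

```python
# Minimal sketch of "computer-generated explanations": the words in a
# document that a linear classifier weighs most heavily toward its
# predicted class. The classifier choice is an assumption, not the
# paper's actual method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = ["the team won the match", "stocks fell on weak earnings"]
train_labels = ["sports", "finance"]

vec = TfidfVectorizer()
X = vec.fit_transform(train_texts)
clf = LogisticRegression().fit(X, train_labels)

def explain(text, k=3):
    """Return the predicted class and the k in-document words that
    push hardest toward that class."""
    x = vec.transform([text])
    pred = clf.predict(x)[0]
    # For a binary LogisticRegression, coef_[0] points toward classes_[1],
    # so flip the sign when the prediction is classes_[0].
    sign = 1.0 if pred == clf.classes_[1] else -1.0
    vocab = vec.get_feature_names_out()
    present = x.nonzero()[1]  # vocabulary indices of words in this document
    ranked = sorted(present, key=lambda i: sign * clf.coef_[0][i], reverse=True)
    return pred, [vocab[i] for i in ranked[:k]]

print(explain("the match ended with a narrow win"))
```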