The clusters generated by clustering techniques in the latitude and longitude domain of fatal crash data are highly variable and unpredictable. This unpredictability, caused by the randomness of fatal crash incidents, reduces the reliability of the crash frequency (i.e., the count of fatal crashes per cluster), which is used in practice to measure traffic safety. In this paper, a quantitative measure of traffic safety that is not significantly affected by this variability is proposed. The measure builds on the concept of a fatal point (the segment with the highest fatality frequency within a cluster), which is detected by rounding the longitude to the hundredth decimal place. The frequencies of the cluster and of its fatal point are then combined into a low-sensitivity quantitative measure of traffic safety for that cluster. The performance of the proposed measure is studied by varying the parameter k of k-means clustering, with the expectation that other clustering techniques can be adopted in a similar fashion. The 2015 North Carolina fatal crash dataset of the Fatality Analysis Reporting System (FARS) is used to evaluate the proposed fatal point concept and to determine the effectiveness of the proposed measure experimentally. The empirical study shows that the average traffic safety, measured by the proposed quantitative measure over several clusters, is affected significantly less by the variability than the standard crash frequency.
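The fatal point detection described above can be sketched in a few lines. This is a hypothetical, stdlib-only illustration: the cluster coordinates are made up, and the way the paper combines cluster and fatal point frequencies into the final measure is not reproduced here.

```python
from collections import Counter

def fatal_point(cluster_points):
    """Find the fatal point of one cluster: the longitude segment,
    obtained by rounding to the hundredth decimal place, that
    contains the most fatal crashes."""
    segments = Counter(round(lon, 2) for _, lon in cluster_points)
    return segments.most_common(1)[0]  # (segment, frequency)

# hypothetical cluster of (latitude, longitude) crash coordinates
cluster = [(35.771, -78.638), (35.772, -78.639), (35.780, -78.641),
           (35.765, -78.634), (35.790, -78.712)]
seg, freq = fatal_point(cluster)  # three crashes share segment -78.64
```

In a full pipeline, `cluster_points` would be the members of one k-means cluster, and this count would be computed per cluster.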
This paper presents a mesoscopic stochastic model for the reconstruction of vehicle trajectories from data made available by subsets of (probe) vehicles. Long-range vehicle interactions are applied in a totally asymmetric simple exclusion process to capture information made available to connected and autonomous vehicles. The dynamics are represented by a factor graph, which enables learning of traffic dynamics from historical data using Bayesian belief propagation. Adequate probe penetration levels for faithful reconstruction on single-lane roads are investigated. The estimation technique is tested using a vehicle trajectory dataset generated by an independent microscopic traffic simulator. Although the parameters of the traffic state estimation model are learned from (simulated) historical data, the proposed algorithm is found to be robust to unpredictable conditions. Moreover, by exposing the algorithm to varying traffic conditions with increasingly large datasets, the probe penetration rates required to capture the traffic dynamics effectively can be substantially reduced. The results also highlight the need to account for randomness in the spatio-temporal coverage of probe data when building reliable state estimation algorithms.
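The exclusion dynamics at the core of the model can be illustrated with a minimal sketch. This shows only the base totally asymmetric simple exclusion process (TASEP) with a parallel update; the paper's long-range interactions, factor graph, and belief propagation are not reproduced.

```python
import random

def tasep_step(sites, p=1.0, rng=random):
    """One parallel-update sweep of a single TASEP lane: each vehicle
    (1) hops one cell to the right with probability p, but only if
    the next cell was empty (0) in the previous state."""
    new = sites[:]
    for i in range(len(sites) - 1):
        if sites[i] == 1 and sites[i + 1] == 0 and rng.random() < p:
            new[i], new[i + 1] = 0, 1
    return new

# with p = 1 the leading vehicle advances; its follower stays blocked
state = tasep_step([1, 1, 0, 0], p=1.0)
```

Because hops are conditioned on the previous state, the exclusion rule (at most one vehicle per cell) is preserved under the parallel update.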
This paper proposes a new method for an optimized mapping of temporal variables, describing temporal stream data, into the recently proposed NeuCube spiking neural network architecture. This optimized mapping extends the use of the NeuCube, which was initially designed for spatiotemporal brain data, to arbitrary stream data, achieving better accuracy of temporal pattern recognition, better and earlier event prediction, and a better understanding of complex temporal stream data through visualization of the NeuCube connectivity. The effect of the new mapping is demonstrated on three benchmark problems. The first is early prediction of patient sleep-stage events from temporal physiological data; the second is recognition of dynamic temporal traffic patterns in the Bay Area of California; and the last is the Challenge 2012 contest dataset. In all cases the use of the proposed mapping leads to improved accuracy of pattern recognition and event prediction and a better understanding of the data when compared to traditional machine learning techniques or spiking neural network reservoirs with arbitrary mapping of the variables.
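The guiding idea of an optimized mapping (placing variables with similar temporal profiles on nearby neurons, rather than mapping them arbitrarily) can be sketched with a hypothetical greedy assignment. This stdlib-only toy is not the NeuCube algorithm itself; the function names, example signals, and coordinates are invented for illustration.

```python
import math

def similarity(a, b):
    """Pearson correlation between two equal-length time series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb) if sa and sb else 0.0

def map_variables(series, coords):
    """Greedily assign each variable (a time series) to a 3-D neuron
    coordinate: each new variable takes the free coordinate nearest
    to the position of its most similar already-placed variable."""
    placed = {0: coords[0]}
    free = list(coords[1:])
    for v in range(1, len(series)):
        best = max(placed, key=lambda u: similarity(series[v], series[u]))
        anchor = placed[best]
        c = min(free, key=lambda q: sum((a - b) ** 2 for a, b in zip(q, anchor)))
        free.remove(c)
        placed[v] = c
    return placed

# two similar signals land on adjacent neurons; a dissimilar one lands far away
mapping = map_variables([[0, 1, 2, 3], [0, 1, 2, 4], [3, 2, 1, 0]],
                        [(0, 0, 0), (1, 0, 0), (5, 0, 0)])
```

The point of such a placement is that spatially local learning in the reservoir can then exploit correlations between neighboring inputs.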
It's not too hard to find real-time traffic data, but it's usually specific to one car make. Here has unveiled a Real-Time Traffic service that has cars from Audi, BMW and Mercedes-Benz (all co-owners of Here) sharing their live sensor data to provide more accurate traffic alerts than you'd get from external probes alone. And more than 30 of the 60-plus countries covered by the service can also take advantage of safety warnings based on the sensor data you'd expect from incidents, such as hard braking to avoid a crash. It's easy to imagine cities drawing on Here's info to adjust traffic light patterns and roadways, while ridesharing companies and navigation app developers could use it to provide better arrival time estimates.
Over the last few years, the amount of data held by companies has increased significantly, which is why data analysis methods have to evolve to meet new demands. In this article, we present a practical analysis of a large database from a telecommunications operator. The problem is to segment a territory and characterize the resulting areas by the behavior of their inhabitants in terms of mobile telephony. We use call detail records collected over five months in France and propose a two-stage analysis. The first stage groups source antennas whose originating calls are similarly distributed over target antennas, and conversely groups target antennas with respect to source antennas. A geographic projection of the data is used to display the results on a map of France. The second stage discretizes time into periods between which the distributions of calls emanating from the clusters of source antennas change. This enables an analysis of temporal changes in inhabitants' behavior in every area of the country.
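The first stage (grouping source antennas whose calls are similarly distributed over target antennas) can be illustrated with a small sketch. The call records and the greedy L1-distance grouping below are hypothetical, stdlib-only stand-ins, not the method or data of the paper.

```python
from collections import defaultdict

def call_distributions(cdrs, targets):
    """Turn (source, target) call records into one probability
    vector per source antenna over the given target antennas."""
    counts = defaultdict(lambda: defaultdict(int))
    for src, dst in cdrs:
        counts[src][dst] += 1
    return {src: [row[t] / sum(row.values()) for t in targets]
            for src, row in counts.items()}

def group_sources(dists, eps=0.2):
    """Greedy grouping: a source joins the first group whose seed
    distribution is within L1 distance eps, else starts a new group."""
    groups = []
    for src, vec in dists.items():
        for g in groups:
            seed = dists[g[0]]
            if sum(abs(a - b) for a, b in zip(vec, seed)) < eps:
                g.append(src)
                break
        else:
            groups.append([src])
    return groups

# hypothetical records: antennas A and B call targets alike, C does not
cdrs = [("A", "X"), ("A", "X"), ("A", "Y"),
        ("B", "X"), ("B", "X"), ("B", "Y"),
        ("C", "Y"), ("C", "Y")]
groups = group_sources(call_distributions(cdrs, ["X", "Y"]))
```

Normalizing counts into distributions first is the key step: it lets antennas with very different call volumes but similar calling patterns fall into the same group.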