
FARSEC: A Reproducible Framework for Automatic Real-Time Vehicle Speed Estimation Using Traffic Cameras

Liebe, Lucas, Sauerwald, Franz, Sawicki, Sylwester, Schneider, Matthias, Schuhmann, Leo, Buz, Tolga, Boes, Paul, Ahmadov, Ahmad, de Melo, Gerard

arXiv.org Artificial Intelligence

Estimating the speed of vehicles using traffic cameras is a crucial task for traffic surveillance and management, enabling smoother traffic flow, improved road safety, and lower environmental impact. Transportation-dependent systems, such as navigation and logistics services, stand to benefit greatly from reliable speed estimation. While prior research in this area reports competitive accuracy levels, the proposed solutions lack reproducibility and robustness across different datasets. To address this, we provide a novel framework for automatic real-time vehicle speed calculation, which copes with more diverse data from publicly available traffic cameras to achieve greater robustness. Our model employs novel techniques to estimate the length of road segments via depth map prediction. Additionally, our framework automatically handles realistic conditions such as camera movements and different video stream inputs. We compare our model to three well-known models in the field using their benchmark datasets. While our model does not set a new state of the art in prediction performance, its results are competitive on realistic CCTV videos. At the same time, our end-to-end pipeline offers more consistent results, easier implementation, and better compatibility. Its modular structure facilitates reproducibility and future improvements.
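
The abstract gives no implementation detail beyond the use of depth-map prediction to recover scene scale, but the final speed calculation such a pipeline implies can be sketched briefly. The following is a minimal illustration, assuming a per-pixel metres-per-pixel scale map has already been derived from the predicted depth; the function estimate_speed_kmh and all variable names are hypothetical, not FARSEC's actual API.

```python
# Minimal sketch of camera-based vehicle speed estimation, assuming a
# metres-per-pixel scale map has already been derived (e.g. from a
# predicted depth map, as the FARSEC abstract describes). All names
# here are illustrative assumptions, not the framework's actual API.
import numpy as np

def estimate_speed_kmh(track, scale_map, fps):
    """Estimate a vehicle's speed from its tracked pixel positions.

    track:     list of (x, y) pixel centroids, one per frame
    scale_map: 2D array of metres-per-pixel at each image location
    fps:       frames per second of the video stream
    """
    metres = 0.0
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        pixel_dist = np.hypot(x1 - x0, y1 - y0)
        # Use the scale at the segment midpoint to convert to metres.
        mx, my = int((x0 + x1) / 2), int((y0 + y1) / 2)
        metres += pixel_dist * scale_map[my, mx]
    seconds = (len(track) - 1) / fps
    return (metres / seconds) * 3.6  # m/s -> km/h

# Toy example: a vehicle tracked over 5 frames at 25 fps,
# with a uniform scale of 0.05 m per pixel.
track = [(100, 200), (110, 200), (120, 201), (131, 202), (141, 202)]
scale_map = np.full((480, 640), 0.05)
print(round(estimate_speed_kmh(track, scale_map, fps=25), 1), "km/h")
```

A real system would look the scale up from the depth prediction rather than a constant map, and would smooth the track to reduce detection jitter before differencing.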


Most Americans are recorded 238 TIMES a week by security cameras, study reveals

Daily Mail - Science & tech

The typical American is recorded by security cameras 238 times a week, according to a new report from Safety.com. That figure includes surveillance video taken at work, on the road, in stores, and in the home. The study found that Americans are filmed 160 times a week while driving, as there is an average of about 20 cameras along every 29 miles of road. The average employee is captured by surveillance cameras about 40 times a week. However, for those who frequently travel or work in heavily patrolled areas, the number of times they are captured on film skyrockets to more than 1,000 a week.


Traffic Prediction Framework for OpenStreetMap using Deep Learning based Complex Event Processing and Open Traffic Cameras

Yadav, Piyush, Sarkar, Dipto, Salwala, Dhaval, Curry, Edward

arXiv.org Artificial Intelligence

Displaying near-real-time traffic information is a useful feature of digital navigation maps. However, most commercial providers rely on privacy-compromising measures, such as deriving location information from cellphones, to estimate traffic. The lack of an open-source traffic estimation method using open data platforms is a bottleneck for building sophisticated navigation services on top of OpenStreetMap (OSM). We propose a deep learning-based Complex Event Processing (CEP) method that relies on publicly available video camera streams for traffic estimation. The proposed framework performs near-real-time object detection and object property extraction across camera clusters in parallel to derive multiple traffic-related measures, with the results visualized on OpenStreetMap. The estimation of object properties (e.g. vehicle speed, count, direction) provides multidimensional data that can be leveraged to create metrics and visualizations of congestion beyond commonly used density-based measures. Our approach couples flow and count measures during interpolation by treating each vehicle as a sample point and its speed as a weight. We demonstrate multidimensional traffic metrics (e.g. flow rate, congestion estimation) over OSM by processing 22 traffic cameras from London streets. The system achieves near-real-time performance with a median latency of 1.42 seconds and an average F-score of 0.80.
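
The abstract's interpolation idea, treating each detected vehicle as a sample point with its speed as a weight, resembles a weighted spatial interpolation. The sketch below uses inverse-distance weighting as one plausible reading of that description; the function interpolate_congestion and its parameters are assumptions for illustration, not the paper's published code.

```python
# Hedged sketch of the interpolation described in the abstract: each
# detected vehicle is a sample point and its speed acts as the sample
# value, so stretches with many slow vehicles pull the estimate toward
# "congested". Names are illustrative assumptions, not the paper's code.
import numpy as np

def interpolate_congestion(query_xy, vehicles, power=2.0, eps=1e-9):
    """Inverse-distance-weighted speed estimate at query_xy.

    vehicles: list of ((x, y), speed_kmh) detections from nearby cameras.
    Returns an interpolated speed; lower values suggest heavier congestion.
    """
    qx, qy = query_xy
    num, den = 0.0, 0.0
    for (x, y), speed in vehicles:
        d = np.hypot(x - qx, y - qy) + eps
        w = 1.0 / d ** power   # closer detections count more
        num += w * speed       # each vehicle contributes its speed
        den += w
    return num / den

# Toy example: three detections around a query point on the map.
vehicles = [((0.0, 0.0), 45.0), ((2.0, 0.0), 12.0), ((1.0, 2.0), 30.0)]
print(round(interpolate_congestion((1.0, 0.5), vehicles), 1), "km/h (est.)")
```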


This Small Company Is Turning Utah Into a Surveillance Panopticon

#artificialintelligence

The state of Utah has given an artificial intelligence company real-time access to state traffic cameras, CCTV and "public safety" cameras, 911 emergency systems, location data for state-owned vehicles, and other sensitive data. The company, called Banjo, says that it is combining this data with information collected from social media, satellites, and other apps, and claims its algorithms "detect anomalies" in the real world. The lofty goal of Banjo's system is to alert law enforcement to crimes as they happen. It claims it does this while somehow stripping all personal data from the system, allowing it to help cops without putting anyone's privacy at risk. As with other algorithmic crime systems, there is little public oversight or information about how, exactly, the system determines what is worth alerting cops to. In its pitches to prospective clients, Banjo promises its technology, called "Live Time Intelligence," can identify, and potentially help police solve, an incredible variety of crimes in real time. Banjo says its AI can help police solve child kidnapping cases "in seconds," identify active shooter situations as they happen, or send an alert when there is a traffic accident, airbag deployment, or fire, or when a car is driving the wrong way down the road. Banjo says it has "a solution for homelessness" and can help with the opioid epidemic by detecting "opioid events." It offers "artificial intelligence processing" of state-owned audio sensors that "include but may not be limited to speech recognition and natural language processing," as well as automatic scene detection, object recognition, and vehicle detection on real-time video footage pulled from Utah's cameras.


Patterns of Urban Foot Traffic Dynamics

Dobler, Gregory, Vani, Jordan, Dam, Trang Tran Linh

arXiv.org Machine Learning

Using publicly available traffic camera data in New York City, we quantify time-dependent patterns in aggregate pedestrian foot traffic. These patterns exhibit repeatable diurnal behaviors that differ for weekdays and weekends but are broadly consistent across neighborhoods in the borough of Manhattan. Weekday patterns contain a characteristic 3-peak structure with increased foot traffic around 9:00am, 12:00-1:00pm, and 5:00pm, aligned with the "9-to-5" work day in which pedestrians are on the street during their morning commute, during lunch hour, and during their evening commute. Weekend days do not show a peaked structure but rather increase steadily until sunset. Our study period of June 28, 2017 to September 11, 2017 contains two holidays, the 4th of July and Labor Day, and their foot traffic patterns are quantitatively similar to weekend days despite the fact that they fell on weekdays. Projecting all days in our study period onto the weekday/weekend phase space (by regressing against the average weekday and weekend day), we find that Friday foot traffic can be represented as a mixture of the 3-peak weekday structure and the non-peaked weekend structure. We also show that anomalies in the foot traffic patterns can be used to detect events and network-level disruptions. Finally, we show that clustering of foot traffic time series generates associations between cameras that are spatially aligned with Manhattan neighborhood boundaries, indicating that foot traffic dynamics encode information about neighborhood character.
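
The "weekday/weekend phase space" projection amounts to a two-coefficient least-squares regression of each day's time series against the mean weekday and mean weekend profiles. Here is a minimal sketch of that step under that reading; the array names and toy profiles are assumptions for illustration, not the authors' data or code.

```python
# Sketch of the phase-space projection the abstract describes: regress a
# day's foot-traffic counts against the mean weekday and mean weekend
# profiles; the two fitted coefficients place the day in a 2D phase space.
# All names and the synthetic profiles are illustrative assumptions.
import numpy as np

def project_day(day_counts, weekday_mean, weekend_mean):
    """Return (a, b) such that day ~= a * weekday_mean + b * weekend_mean."""
    X = np.column_stack([weekday_mean, weekend_mean])
    coef, *_ = np.linalg.lstsq(X, day_counts, rcond=None)
    return coef

# Toy example with hourly counts: a 3-peak weekday profile, a steadily
# ramping weekend profile, and a "Friday-like" day mixing the two.
hours = np.arange(24)
weekday_mean = (np.exp(-(hours - 9) ** 2 / 2) +
                np.exp(-(hours - 12.5) ** 2 / 2) +
                np.exp(-(hours - 17) ** 2 / 2))
weekend_mean = np.clip(hours - 6, 0, None) / 12.0
friday = 0.6 * weekday_mean + 0.4 * weekend_mean
print(np.round(project_day(friday, weekday_mean, weekend_mean), 2))  # ~[0.6 0.4]
```

A Friday landing near (0.6, 0.4) in this space would match the paper's observation that Fridays mix the weekday and weekend structures.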


Ford Using Artificial Intelligence to Solve Urban Driving Problems

#artificialintelligence

Ford's transition from automaker to mobility company took another step forward in a small office space in downtown Ann Arbor this week. Instead of a new car or fancy self-driving tech update, Ford's big news was, basically, an AI-powered database. Standing next to a big 3D model of the city, Ford's vice president of mobility, marketing and growth, Brett Wheatley, announced the Ford City Insights platform. It uses AI and data from various sources (among them traffic cameras, parking garages, and police reports) to analyze everything from where collisions are most likely to happen to which roads would be best served by microtransit shuttles or scooters. The City Insights platform is made up of four main sectors: safety, parking, transit, and a 3D model that makes sense of the other three.


IBM's AI Machine Learns to Debate Humans

#artificialintelligence

Both sides then delivered a four-minute opening statement, a four-minute rebuttal, and a two-minute summary. Project Debater pushes the frontiers of AI to facilitate intelligent debate so we can build well-informed arguments and make better decisions. The system digests massive volumes of text, constructs a well-structured speech on a given topic, delivers it with clarity and purpose, and rebuts its opponent. Eventually, Project Debater will help people reason by providing compelling, evidence-based arguments and limiting the influence of emotion, bias, or ambiguity. Project Debater made an opening argument that supported the statement with facts, including the points that space exploration benefits humankind because it can help advance scientific discoveries and it inspires young people to think beyond themselves.


Traffic data is abundant, Techies find ways to make it both valuable and fun - Mobility Lab

@machinelearnbot

Traffic experts met last week at Spaces NoMA for the fourth Playing with Traffic event hosted by Transportation Techies. A handful presented their latest work in a rapid-fire show-and-tell of the wide array of open-source mapping and imaging tools that can now inform how streets are planned for both current users and future technology. Mapillary's Janine Yoong explained how combining computer vision (using digital images to train computers to understand objects) with human collaboration can inform the development of autonomous vehicles. Yoong and her team hope to use street-view images from across the internet to help driverless cars better categorize items that they "see" while also creating fresher, more accurate, and more complete maps that can help computers understand their location. To that end, Mapillary pulls in images of streetscapes from around the world, including remote Arctic research bases, to train AV programs by exposing them to as many objects and situations as possible.