A Virtual Environment with Multi-Robot Navigation, Analytics, and Decision Support for Critical Incident Investigation Artificial Intelligence

Accidents and attacks that involve chemical, biological, radiological/nuclear or explosive (CBRNE) substances are rare, but can be of high consequence. Because investigating such events is not routine work for anyone, a range of AI techniques can reduce investigators' cognitive load and support decision-making, including: planning the assessment of the scene; ongoing evaluation and updating of risks; control of autonomous vehicles for collecting images and sensor data; reviewing images/videos for items of interest; identification of anomalies; and retrieval of relevant documentation. Because of the rare and high-risk nature of these events, realistic simulations can support the development and evaluation of AI-based tools. We have developed realistic models of CBRNE scenarios and implemented an initial set of tools.

Enabling Pedestrian Safety using Computer Vision Techniques: A Case Study of the 2018 Uber Inc. Self-driving Car Crash Artificial Intelligence

Human lives are important. The decision to allow self-driving vehicles to operate on our roads carries great weight. This has been a hot topic of debate between policy-makers, technologists and public safety institutions. The recent Uber Inc. self-driving car crash, resulting in the death of a pedestrian, has strengthened the argument that autonomous vehicle technology is still not ready for deployment on public roads. In this work, we analyze the Uber car crash and shed light on the question, "Could the Uber car crash have been avoided?". We apply state-of-the-art Computer Vision models to this highly practical scenario. More generally, our experimental results are an evaluation of various image enhancement and object recognition techniques for enabling pedestrian safety in low-lighting conditions, using the Uber crash as a case study.
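One family of image enhancement techniques the abstract alludes to brightens low-light frames before object recognition is run. A minimal sketch of one such technique, gamma correction via a lookup table (the function name, array sizes and pixel values below are illustrative, not from the paper):

```python
import numpy as np

def gamma_correct(image, gamma=0.5):
    """Brighten a low-light frame via gamma correction.

    image: uint8 array with values in [0, 255].
    gamma < 1 lifts dark regions; gamma > 1 darkens.
    """
    # Precompute the mapping for all 256 intensity levels.
    lut = np.array([((i / 255.0) ** gamma) * 255 for i in range(256)],
                   dtype=np.uint8)
    return lut[image]

# A synthetic dark frame: every pixel at intensity 40 is lifted to ~100.
frame = np.full((4, 4), 40, dtype=np.uint8)
enhanced = gamma_correct(frame, gamma=0.5)
```

A detector running on `enhanced` sees far more contrast in shadowed regions than on the raw frame, which is the motivation for evaluating such preprocessing on night-time footage.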

Artificial Intelligence: Science fiction to science fact - Connected Magazine


Artificial intelligence is quickly growing in importance in the 'smart building' sector. Paul Skelton looks at the road ahead for a complex technology. When Mark Chung received an unexpectedly high $500 monthly electricity bill, he turned to his utility for help and answers. However, despite 'smart' meters being installed in his home, they were no help. So Mark – an electrical engineer trained at Stanford University – took matters into his own hands.

Perspectives on the Validation and Verification of Machine Learning Systems in the Context of Highly Automated Vehicles

AAAI Conferences

Algorithms incorporating learned functionality play an increasingly important role for highly automated vehicles. Their impressive performance within environmental perception and other tasks central to automated driving comes at the price of a hitherto unsolved functional verification problem within safety analysis. We propose to combine statistical guarantee statements about the generalisation ability of learning algorithms with the functional architecture as well as constraints about the dynamics and ontology of the physical world, yielding an integrated formulation of the safety verification problem of functional architectures comprising artificial intelligence components. Its formulation as a probabilistic constraint system enables calculation of low-risk manoeuvres. We illustrate the proposed scheme on a simple automotive scenario featuring unreliable environmental perception.
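The core idea of combining per-component statistical guarantees into a system-level statement can be illustrated with the simplest composition rule, Boole's inequality (union bound): if each component's per-manoeuvre failure probability is bounded, the probability that any of them fails is bounded by the sum. The numbers below are hypothetical, and this is a deliberately simplified stand-in for the paper's probabilistic constraint system:

```python
def system_risk_bound(component_bounds):
    """Upper-bound the probability that at least one component fails
    during a manoeuvre, via Boole's inequality (union bound)."""
    return min(1.0, sum(component_bounds))

# Hypothetical per-manoeuvre error bounds for perception,
# prediction, and planning components.
bounds = [1e-4, 5e-5, 1e-5]
risk = system_risk_bound(bounds)  # 1.6e-4
```

A manoeuvre would then be admissible only if this bound stays below a prescribed risk threshold; the paper's formulation additionally folds in physical-dynamics constraints rather than treating components independently.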

vehicle-detection by JunshengFu


Anaconda is used for managing my dependencies. You can download the weights from here and save them to the weights folder. If a trained SVM classifier exists, it is loaded directly. Otherwise, I start by reading in all the vehicle and non-vehicle images, around 8000 images in each category. These datasets are comprised of images taken from the GTI vehicle image database and the KITTI vision benchmark suite.
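The load-if-cached, otherwise-train flow described above can be sketched as a small caching helper. The file name and the stand-in trainer below are placeholders, not taken from the repository; in the actual project the trainer would fit an SVM on features extracted from the ~8000 vehicle and ~8000 non-vehicle images:

```python
import os
import pickle

MODEL_PATH = "svc_pickle.p"  # hypothetical cache file name

def load_or_train(train_fn, path=MODEL_PATH):
    """Load a cached classifier if one exists; otherwise train and cache it."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    model = train_fn()
    with open(path, "wb") as f:
        pickle.dump(model, f)
    return model

# Stand-in trainer; the real one would return a fitted SVM classifier.
clf = load_or_train(lambda: {"kind": "svm", "trained": True})
```

On the second call the trainer is skipped entirely and the pickled model is returned, which is the behaviour the README describes.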

A General Pipeline for 3D Detection of Vehicles Machine Learning

Autonomous driving requires 3D perception of vehicles and other objects in the environment. Most current methods support only 2D vehicle detection. This paper proposes a flexible pipeline to adopt any 2D detection network and fuse it with a 3D point cloud to generate 3D information with minimal changes to the 2D detection networks. To identify the 3D box, an effective model fitting algorithm is developed based on generalised car models and score maps. A two-stage convolutional neural network (CNN) is proposed to refine the detected 3D box. This pipeline is tested on the KITTI dataset using two different 2D detection networks. The 3D detection results based on these two networks are similar, demonstrating the flexibility of the proposed pipeline. The results rank second among the 3D detection algorithms, indicating its competence in 3D detection.
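The fusion step can be made concrete with a heavily simplified stand-in for the paper's model fitting: given the LiDAR points associated with one 2D detection, fit an axis-aligned 3D bounding box (the paper instead fits generalised car models with score maps; the point coordinates below are made up):

```python
import numpy as np

def fit_axis_aligned_box(points):
    """Fit an axis-aligned 3D bounding box to the LiDAR points
    associated with one 2D detection.

    points: (N, 3) array of x, y, z coordinates.
    Returns (center, size) as two length-3 arrays.
    """
    lo, hi = points.min(axis=0), points.max(axis=0)
    return (lo + hi) / 2.0, hi - lo

pts = np.array([[1.0, 0.0, 0.0],
                [3.0, 1.5, 1.2],
                [2.0, 0.5, 0.4]])
center, size = fit_axis_aligned_box(pts)
# center (2.0, 0.75, 0.6), size (2.0, 1.5, 1.2)
```

A refinement stage, like the paper's two-stage CNN, would then adjust this coarse box's pose and extent.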

Google's artificial intelligence built an AI that outperforms any made by humans


In May 2017, researchers at Google Brain announced the creation of AutoML, an artificial intelligence (AI) that's capable of generating its own AIs. More recently, they decided to present AutoML with its biggest challenge to date, and the AI that can build AI created a "child" that outperformed all of its human-made counterparts.

Machine Learning is the Solution to the Big Data Problem Caused by the IoT 7wData


Big Data has already made fundamental changes to the way businesses operate. There are huge advantages for companies who can derive value from their data, but these opportunities come with challenges, too. For some, this is the challenge of acquiring data from new sources. For others, it is the task of building a scalable infrastructure that can manage the data in aggregate. For a brave few, it means extracting value from the data by implementing advanced analytic techniques and tools.

Find Your Own Way: Weakly-Supervised Segmentation of Path Proposals for Urban Autonomy Artificial Intelligence

We present a weakly-supervised approach to segmenting proposed drivable paths in images with the goal of autonomous driving in complex urban environments. Using recorded routes from a data collection vehicle, our proposed method generates vast quantities of labelled images containing proposed paths and obstacles without requiring manual annotation, which we then use to train a deep semantic segmentation network. With the trained network we can segment proposed paths and obstacles at run-time using a vehicle equipped with only a monocular camera without relying on explicit modelling of road or lane markings. We evaluate our method on the large-scale KITTI and Oxford RobotCar datasets and demonstrate reliable path proposal and obstacle segmentation in a wide variety of environments under a range of lighting, weather and traffic conditions. We illustrate how the method can generalise to multiple path proposals at intersections and outline plans to incorporate the system into a framework for autonomous urban driving.
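The key label-generation step, turning a recorded driving route into per-image path annotations, reduces to projecting 3D path points into the camera image. A minimal pinhole-projection sketch (the intrinsics and the path point below are hypothetical, and the real system must also handle occlusion and obstacle labels):

```python
import numpy as np

def project_path(points_cam, fx, fy, cx, cy):
    """Project 3D path points (camera frame, z pointing forward) into
    pixel coordinates with a pinhole camera model.

    points_cam: (N, 3) array; returns (N, 2) pixel coordinates (u, v).
    """
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    u = fx * x / z + cx
    v = fy * y / z + cy
    return np.stack([u, v], axis=1)

# A path point 10 m ahead, 1 m right, 1.5 m below the camera,
# with hypothetical intrinsics for a 1280x720 image.
pix = project_path(np.array([[1.0, 1.5, 10.0]]),
                   fx=700.0, fy=700.0, cx=640.0, cy=360.0)
# u = 700*1/10 + 640 = 710; v = 700*1.5/10 + 360 = 465
```

Rasterising the projected route into a mask yields the weak labels used to train the segmentation network, with no manual annotation in the loop.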