3D object detection is a core perceptual challenge for robotics and autonomous driving. However, the class taxonomies in modern autonomous driving datasets are significantly smaller than those of many influential 2D detection datasets. In this work, we address this long-tail problem by leveraging both the large class taxonomies of modern 2D datasets and the robustness of state-of-the-art 2D detection methods. We mine a large, unlabeled dataset of images and LiDAR and estimate 3D object bounding cuboids, seeded from an off-the-shelf 2D instance segmentation model. Critically, we constrain this ill-posed 2D-to-3D mapping using high-definition maps and object size priors. The result of the mining process is a set of 3D cuboids with varying confidence. The mining process is itself a 3D object detector, although not an especially accurate one when evaluated as such. However, when we train a 3D object detection model on these cuboids, we find, consistent with other recent observations in the deep learning literature, that the resulting model is fairly robust to the noisy supervision our mining process provides. We mine a collection of 1151 unlabeled, multimodal driving logs from an autonomous vehicle and use the discovered objects to train a LiDAR-based object detector. We show that detector performance increases as we mine more unlabeled data. With our full unlabeled dataset, our method performs competitively with fully supervised methods, even exceeding their performance for certain object categories, all without any human 3D annotations.
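The core lifting step, going from a 2D instance mask to a coarse 3D cuboid constrained by an object size prior, could be sketched as below. This is a minimal illustration, not the paper's actual method: the size-prior values, the `lift_mask_to_cuboid` function, and the precomputed mask-membership flags are all assumptions, and the HD-map constraint and image projection are omitted.

```python
import numpy as np

# Hypothetical per-class size priors (length, width, height in meters);
# illustrative values, not taken from the paper.
SIZE_PRIORS = {"car": (4.5, 1.8, 1.6), "pedestrian": (0.8, 0.8, 1.7)}

def lift_mask_to_cuboid(lidar_xyz, in_mask, class_name):
    """Estimate a coarse 3D cuboid from LiDAR points inside a 2D mask.

    lidar_xyz: (N, 3) LiDAR points in the ego frame.
    in_mask:   (N,) boolean, True for points whose image projection lies
               inside the 2D instance mask (projection step omitted here).
    Returns (center, dims), with dims taken from the class size prior.
    """
    pts = lidar_xyz[in_mask]
    if len(pts) == 0:
        return None
    # Median center is robust to background points leaking into the mask.
    center = np.median(pts, axis=0)
    # Constrain the ill-posed depth/extent with the class size prior.
    dims = np.array(SIZE_PRIORS[class_name])
    return center, dims
```

The size prior is what makes the otherwise ill-posed extent estimation tractable from sparse, partially observed LiDAR returns.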
Dynamic dispatching is one of the core problems of operation optimization in traditional industries such as mining: it asks how to allocate the right resources to the right place at the right time. Conventionally, the industry relies on heuristics or even human intuition, which often yield short-sighted, sub-optimal solutions. Leveraging the power of AI and the Internet of Things (IoT), data-driven automation is reshaping this area. However, because of challenges such as large fleets of heterogeneous trucks running in a highly dynamic environment, methods developed in other domains (e.g., ride-sharing) can hardly be adopted directly. In this paper, we propose a novel Deep Reinforcement Learning approach to the dynamic dispatching problem in mining. We first develop an event-based mining simulator with parameters calibrated in real mines. We then propose an experience-sharing Deep Q Network with a novel abstract state/action representation that learns from the memories of heterogeneous agents together and realizes learning in a centralized way. We demonstrate that the proposed method significantly outperforms the most widely adopted approaches in the industry, improving productivity by $5.56\%$. The proposed approach also has great potential as a general framework for dynamic resource allocation in a broader range of industries (e.g., manufacturing, logistics) that operate large-scale heterogeneous equipment in highly dynamic environments.
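The experience-sharing idea can be sketched with a single replay buffer pooled across trucks. This is an illustrative sketch only; the class name, capacity, and transition format are assumptions, and the Q-network itself is omitted.

```python
import random
from collections import deque

class SharedReplayBuffer:
    """One replay memory pooling transitions from all trucks.

    An abstract state/action representation (assumed here to be plain
    tuples) lets heterogeneous trucks write into a single memory, so one
    centralized Q-network can learn from every agent's experience.
    """
    def __init__(self, capacity=10000):
        self.memory = deque(maxlen=capacity)

    def push(self, truck_id, state, action, reward, next_state):
        # truck_id is kept only for bookkeeping; the learner treats all
        # transitions identically, which is what enables sharing.
        self.memory.append((truck_id, state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform sampling mixes experiences from all trucks in one batch.
        return random.sample(self.memory, batch_size)
```

Because every truck contributes to the same buffer, each gradient update of the centralized network is informed by the whole fleet rather than a single agent's trajectory.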
The mining industry is pivotal to the world's economy; its top companies had a total revenue of approximately 683 billion U.S. dollars in 2018. The deployment of artificial intelligence (AI) in mining is enabling companies to improve their efficiency and productivity, which is crucial to their profitability. Implementing AI in mining activities can push the industry even further forward by reducing operating costs and simplifying mining processes. Yet a majority of mining companies still depend on traditional mining practices.
Mercury Systems, Inc. today unveiled the EnterpriseSeries RES AI rugged rackmount server line, bringing High Performance Computing (HPC) capabilities to aerospace, defense and other mission-critical applications at the edge. "The proliferation of sensors, ever-growing data loads and the evolution of complex deep learning neural networks continues to increase computational demands, driving the need for supercomputing infrastructure closer to the edge," said Scott Orton, Vice President and General Manager of Mercury's Trusted Mission Solutions group. "Through close collaboration with technology leaders such as NVIDIA and Intel, we've developed reliable parallel computing systems that accelerate demanding artificial intelligence (AI), signal intelligence (SIGINT), and sensor fusion applications where it's needed the most." Why it Matters: Evolving compute-intensive AI, virtualization, big data analytics, SIGINT, autonomous vehicle, Electronic Warfare (EW) and sensor fusion applications require data center supercomputing capabilities closer to the source of data origin. Delivering HPC capabilities to the edge presents challenges as every application has its own security, performance, footprint, budget and reliability requirements.
Decades of research in artificial intelligence (AI) have produced formidable technologies that are providing immense benefit to industry, government, and society. AI systems can now translate across multiple languages, identify objects in images and video, streamline manufacturing processes, and control cars. The deployment of AI systems has not only created a trillion-dollar industry that is projected to quadruple in three years, but has also exposed the need to make AI systems fair, explainable, trustworthy, and secure. Future AI systems will rightfully be expected to reason effectively about the world in which they (and people) operate, handling complex tasks and responsibilities effectively and ethically, engaging in meaningful communication, and improving their awareness through experience. Achieving the full potential of AI technologies poses research challenges that require a radical transformation of the AI research enterprise, facilitated by significant and sustained investment. These are the major recommendations of a recent community effort coordinated by the Computing Community Consortium and the Association for the Advancement of Artificial Intelligence to formulate a Roadmap for AI research and development over the next two decades.
Recent studies have also found that a strong correlation exists between the viewing patterns of workers, captured using eye-tracking devices, and their hazard recognition performance. It is therefore important to analyze workers' viewing patterns to gain a better understanding of their hazard recognition performance. This paper proposes a method that automatically maps gaze fixations collected with a wearable eye-tracker to predefined areas of interest. The proposed method detects these areas or objects of interest (i.e., hazards) through a computer vision-based segmentation technique and transfer learning. The mapped fixation data is then used to analyze workers' viewing behaviors and compute their attention distribution. The method is implemented on a road under construction as a case study to evaluate its performance.

Keywords: hazard recognition, road construction safety, transfer learning, eye-tracking, machine vision

1 INTRODUCTION

With an average of nine fatalities every day, construction is one of the most dangerous industries in which to work (1).
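The fixation-to-AOI mapping and the attention distribution it yields can be sketched as follows. This is a minimal illustration under stated assumptions: the AOI labels, the boolean-mask format (as produced by some segmentation model), and the `attention_distribution` function are all hypothetical, not details from the paper.

```python
from collections import Counter

def attention_distribution(fixations, masks):
    """Map gaze fixations to labeled areas of interest (AOIs).

    fixations: list of (x, y) gaze points in image coordinates.
    masks: dict label -> 2D boolean mask (e.g., from a segmentation model).
    Returns the fraction of mapped fixations landing in each AOI.
    """
    counts = Counter()
    for x, y in fixations:
        for label, mask in masks.items():
            if mask[y][x]:
                counts[label] += 1
                break  # assign each fixation to at most one AOI
    total = sum(counts.values())
    return {label: c / total for label, c in counts.items()} if total else {}
```

The resulting per-AOI fractions are one simple way to quantify how a worker's visual attention is distributed over potential hazards.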
The following is the text of a talk I gave in San Francisco on December 1st, 2016. The audience was readers of my newsletter, Exponential View. You can sign up here. This is a long (7,500 word) transcript of the talk. You can scan it to see the slides and accompanying exhibits if that is easier. Or even read it in more than one sitting…. Exponential View has a purpose. In between all the emojis and all the spelling mistakes, this is what it's about: This is me on my first day at school, back when I was in Zambia in sub-Saharan Africa. On the right is my friend Rehan, with whom I reconnected recently through Facebook. He is now known as Dr. Freeze and he does non-invasive body sculpting in Orange County. So I can get you a good rate. But I think this starting point is important. We are often inspired by where we come from, so what the hell was I doing in Zambia? My dad was trained as an economist and accountant; he is retired now, but back then he was an economist down in Zambia building the kind of institutions that we take for granted in countries like the U.S. and the U.K. to make a country function. Zambia had just gained independence from the U.K. It needed a deeper civil service, it was having to build its legal system, create its system of distribution and so on. So I got an early exposure to the importance of economic institutions for making societies wealthier and making them work. Zambia is a land-locked country without great access to the sea, and this was the 1970s, so we didn't have a vast range of toys.
The relentless increase in computing power and the accumulation of big data over the years have sparked intense interest in machine learning and its associated techniques. The new SAS Visual Data Mining and Machine Learning software will feed this need for smarter analytics. Advanced analytics offer insight to businesses, but machine learning and deep learning algorithms take it deeper, revealing insights that were previously out of reach. For example, machine learning uses include facial recognition in security systems, speech recognition in customer service applications, accurate product recommendations in e-commerce, self-driving cars and medical diagnostics. "SAS Visual Data Mining and Machine Learning shatters barriers related to data volume and variety, limited analytical depth and computational bottlenecks."
The successful candidates will join the Data Mining & Machine Learning Group and contribute to a new research project, ROCSAFE (see below) funded by the European Union's Horizon 2020 Programme. The research is likely to involve one of: (1) advances in temporal Bayesian reasoning for decision support; (2) routing of autonomous vehicles for optimal collection of multi-resolution image and sensor data; (3) context-aware decision support driven by sensor data analytics.