AI perspectives in Smart Cities and Communities to enable road vehicle automation and smart traffic control

arXiv.org Artificial Intelligence

Smart Cities and Communities (SCC) constitute a new paradigm in urban development. SCC envisions a data-centered society aiming at improving efficiency by automating and optimizing activities and utilities. Information and communication technology, along with the internet of things, enables data collection, and with the help of artificial intelligence (AI) situation awareness can be obtained to feed the SCC actors with enriched knowledge. This paper describes AI perspectives in SCC and gives an overview of AI-based technologies used in traffic to enable road vehicle automation and smart traffic control. Perception, Smart Traffic Control and Driver Modelling are described along with open research challenges and standardization to help introduce advanced driver assistance systems and automated vehicle functionality in traffic. To fully realize the potential of SCC and to create a holistic view on a city level, the availability of data from different stakeholders is needed. Further, though AI technologies provide accurate predictions and classifications, there is ambiguity regarding the correctness of their outputs, which can make it difficult for the human operator to trust the system. Today there are no methods that can be used to match function requirements with the level of detail in data annotation in order to train an accurate model. Another challenge related to trust is explainability: as long as the models cannot explain how they reach a certain conclusion, it is difficult for humans to trust them.


Edge AI, is this the end of Cloud?

#artificialintelligence

These days, companies are using cloud services to receive and process the data they gather from sensors, cameras, and services. However, the amount of data is becoming so massive that sending and managing it is increasingly expensive. This is where Edge AI comes in, a combination of Edge Computing and Artificial Intelligence. Edge AI is a system of AI-equipped chips embedded in multiple devices. These devices can be installed and set up much closer to the sources of data. Although these chips have less processing power and may act more slowly, they can provide invaluable services in receiving and processing the data.
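To make the pattern concrete, the following Python sketch shows the on-device inference loop that Edge AI relies on: a small model runs directly on the device next to the sensor, so only compact results, not raw data, need to be sent to the cloud. The use of the tflite-runtime package and the model and file names are illustrative assumptions, not details taken from the article.

import numpy as np
import tflite_runtime.interpreter as tflite

# Load a small quantized model stored on the device itself (hypothetical file).
interpreter = tflite.Interpreter(model_path="edge_model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Pretend this frame came straight from an on-board camera or sensor.
frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

# Run inference locally; no raw sensor data leaves the device.
interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]["index"])
print(prediction.shape)  # only this compact result would be sent upstream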


Enhancing Object Detection for Autonomous Driving by Optimizing Anchor Generation and Addressing Class Imbalance

arXiv.org Artificial Intelligence

Object detection has been one of the most active topics in computer vision in recent years. Recent works have mainly focused on pushing the state of the art on the general-purpose COCO benchmark. However, the use of such detection frameworks in specific applications such as autonomous driving is still an area to be addressed. This study presents an enhanced 2D object detector based on Faster R-CNN that is better suited to the context of autonomous vehicles. Two main aspects are improved: the anchor generation procedure and the performance drop in minority classes. The default uniform anchor configuration is not suitable in this scenario due to the perspective projection of the vehicle cameras. Therefore, we propose a perspective-aware methodology that divides the image into key regions via clustering and uses evolutionary algorithms to optimize the base anchors for each of them. Furthermore, we add a module that enhances the precision of the second-stage header network by including the spatial information of the candidate regions proposed in the first stage. We also explore different re-weighting strategies to address the foreground-foreground class imbalance, showing that the use of a reduced version of focal loss can significantly improve the detection of difficult and underrepresented objects in two-stage detectors. Finally, we design an ensemble model to combine the strengths of the different learning strategies. Our proposal is evaluated on the Waymo Open Dataset, which is the most extensive and diverse to date. The results demonstrate an average accuracy improvement of 6.13% mAP when using the best single model, and of 9.69% mAP with the ensemble. The proposed modifications to Faster R-CNN do not increase the computational cost and can easily be extended to optimize other anchor-based detection frameworks.
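The foreground-foreground re-weighting mentioned in the abstract can be illustrated with a focal-loss-style classification term for the second-stage head. The following Python (PyTorch) sketch is a generic version of that idea; the gamma value, tensor shapes, and function name are assumptions for illustration and not the paper's exact configuration.

import torch
import torch.nn.functional as F

def reduced_focal_loss(logits, targets, gamma=1.0):
    # logits:  (N, C) raw class scores for N candidate regions
    # targets: (N,)   ground-truth class indices
    # gamma:   focusing parameter; gamma = 0 recovers plain cross-entropy,
    #          smaller values give a milder ("reduced") down-weighting
    ce = F.cross_entropy(logits, targets, reduction="none")  # per-region cross-entropy
    p_t = torch.exp(-ce)                                     # probability of the true class
    return (((1.0 - p_t) ** gamma) * ce).mean()              # down-weight easy regions

# Toy usage: 8 candidate regions, 4 object classes.
logits = torch.randn(8, 4)
targets = torch.randint(0, 4, (8,))
print(reduced_focal_loss(logits, targets))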


Revisiting Self-Supervised Monocular Depth Estimation

arXiv.org Artificial Intelligence

Self-supervised learning of depth map prediction and motion estimation from monocular video sequences is of vital importance, since it enables a broad range of tasks in robotics and autonomous vehicles. A large number of research efforts have enhanced performance by tackling illumination variation, occlusions, and dynamic objects, to name a few. However, each of those efforts targets an individual goal and stands as a separate work. Moreover, most previous works have adopted the same CNN architecture, not reaping architectural benefits. Therefore, the need to investigate the inter-dependency of the previous methods and the effect of architectural factors remains. To achieve these objectives, we revisit numerous previously proposed self-supervised methods for joint learning of depth and motion, perform a comprehensive empirical study, and unveil multiple crucial insights. Furthermore, we substantially enhance performance as a result of our study, outperforming the previous state of the art.
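For readers unfamiliar with this line of work, the supervision signal these joint depth-and-motion methods share is a photometric reconstruction error between a target frame and a source frame warped into the target view using the predicted depth and camera motion. The Python (PyTorch) sketch below shows a common formulation; the simplified 3x3 SSIM and the 0.85 weighting are typical choices in this literature and are used here as assumptions, not as details of this particular paper.

import torch
import torch.nn.functional as F

def ssim(x, y):
    # Simplified SSIM over 3x3 windows, using average pooling as the local mean.
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return torch.clamp((1 - num / den) / 2, 0, 1)

def photometric_loss(target, warped, alpha=0.85):
    # Weighted sum of SSIM and L1 between the target image and the source
    # image warped into the target view by the predicted depth and pose.
    l1 = (target - warped).abs().mean(1, keepdim=True)
    return alpha * ssim(target, warped).mean(1, keepdim=True) + (1 - alpha) * l1

# Toy usage on random images of shape (batch, channels, height, width).
target = torch.rand(2, 3, 64, 64)
warped = torch.rand(2, 3, 64, 64)
print(photometric_loss(target, warped).mean())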


White Paper Machine Learning in Certified Systems

arXiv.org Artificial Intelligence

Machine Learning (ML) seems to be one of the most promising solutions to partially or completely automate some of the complex tasks currently performed by humans, such as driving vehicles, recognizing voice, etc. It is also an opportunity to implement and embed new capabilities out of the reach of classical implementation techniques. However, ML techniques introduce new potential risks. Therefore, they have only been applied in systems where their benefits are considered worth the increase in risk. In practice, ML techniques raise multiple challenges that could prevent their use in systems subject to certification constraints. But what are the actual challenges? Can they be overcome by selecting appropriate ML techniques, or by adopting new engineering or certification practices? These are some of the questions addressed by the ML Certification Workgroup (WG) set up by the Institut de Recherche Technologique Saint Exupéry de Toulouse (IRT), as part of the DEEL Project.


Training artificial intelligence through synthetic data

#artificialintelligence

AI companies are generating synthetic data to train machine learning systems. Why it matters: Using computer-generated data to train AI systems can help address privacy concerns and cut down on bias while meeting the needs of models that operate in highly specific environments. How it works: A synthetic data set is artificially created, rather than scraped from the real world. For a computer vision system being trained on facial recognition, that might mean a dataset of artificially generated human faces in lieu of photos of real people pulled off the internet, often without their explicit consent. "This allows you to train systems in a completely virtual domain," says Yashar Behzadi, the CEO of Synthesis AI, which generates synthetic data for computer vision models. Details: Synthetic data has been used for some time in robotics and autonomous vehicles, which need to be trained with highly specific data, like the precise 3D position of an object, that can be expensive or difficult to pull from the real world. But as concerns about AI bias and privacy grow, synthetic data makes it possible to generate data sets that can be molded to specification, allowing AI researchers to counter the bias that can be built into the real world. "If we want to be robust against skin color or skin tone or demographics, any element that may not be well-represented, you can just model your distribution to equally representing each of those categories," says Behzadi. Yes, but: The real world contains outliers that synthetic data generators may not think to cover, which could leave models unprepared for certain situations. And it's still up to the generators of synthetic data to ensure that their datasets are fairer than what might be picked up in the real world. The bottom line: Synthetic data can be even better than the real thing, but only if it's designed the right way.
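The appeal for robotics and autonomous vehicles described above comes from the fact that a synthetic sample is drawn from a distribution the developer controls, so exact ground truth and balanced categories come for free. The following Python sketch illustrates that idea in miniature; the category names, coordinate ranges, and sample counts are invented purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
categories = ["pedestrian", "cyclist", "vehicle"]  # hypothetical classes
samples_per_class = 1000                           # equal representation by design

dataset = []
for label in categories:
    # Exact 3D positions (x, y, z) in metres, known precisely because we generated them.
    positions = rng.uniform(low=[-20.0, 0.0, 0.0], high=[20.0, 80.0, 3.0],
                            size=(samples_per_class, 3))
    dataset.extend({"label": label, "position_3d": pos.tolist()} for pos in positions)

labels = [sample["label"] for sample in dataset]
print({c: labels.count(c) for c in categories})  # perfectly balanced by construction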


The AI Index 2021 Annual Report

arXiv.org Artificial Intelligence

Welcome to the fourth edition of the AI Index Report. This year we significantly expanded the amount of data available in the report, worked with a broader set of external organizations to calibrate our data, and deepened our connections with the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Its mission is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. The report aims to be the most credible and authoritative source for data and insights about AI in the world.


Computer Vision: Python OCR & Object Detection Quick Starter

#artificialintelligence

Computer Vision: Python OCR & Object Detection Quick Starter is a quick starter for Optical Character Recognition, Image Recognition, Object Detection and Object Recognition using Python, created by Abhilash Nelson. Hi there! Welcome to my new course, 'Optical Character Recognition and Object Recognition Quick Start with Python'. This is the third course in my Computer Vision series. Image Recognition, Object Detection, Object Recognition and also Optical Character Recognition are among the most used applications of Computer Vision. Using these techniques, the computer will be able to recognize and classify either the whole image or multiple objects inside a single image, predicting the class of the objects with a percentage accuracy score. Using OCR, it can also recognize and convert text in images to a machine-readable format such as text or a document.
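As a concrete example of the OCR step the course covers, the following Python sketch recognizes text in an image and returns it as a plain string. The pytesseract and Pillow packages are a common choice for this task but are an assumption here, since the course description does not name the libraries it uses; the Tesseract engine must also be installed on the system.

from PIL import Image
import pytesseract

def image_to_text(path):
    # Open the image and run Tesseract OCR, returning the recognized text.
    image = Image.open(path)
    return pytesseract.image_to_string(image)

if __name__ == "__main__":
    print(image_to_text("sample_document.png"))  # hypothetical input file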


Image Classification using CNN for Traffic Signs in Pakistan

arXiv.org Artificial Intelligence

The autonomous automotive industry is one of the largest and most conventional projects worldwide, with many technology companies effectively designing and orienting their products towards automobile safety and accuracy. These products perform very well on the roads of developed countries, but they can fail within minutes in an underdeveloped country, because there is a large difference between the environments of developed and underdeveloped countries. The following study proposes to train these artificial intelligence models in the environment of an underdeveloped country such as Pakistan. The proposed approach uses convolutional neural networks for image classification. For model pre-training, the German traffic sign dataset was selected; the model was then fine-tuned on Pakistan's dataset. The experimental setup showed the best results and accuracy compared with previously conducted experiments. In this work, to increase accuracy, more data was collected to increase the number of images in every class of the dataset. In the future, classes with few images need to be expanded further, and more images of traffic signs need to be collected to improve training accuracy on the traffic signs of Pakistan's most used and popular roads, the motorway and national highway, whose traffic sign colors, sizes, and shapes differ from common traffic signs.
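The pre-train-then-fine-tune workflow described in the abstract can be sketched as follows in Python (PyTorch): a CNN backbone trained on another dataset is reused and its classifier head is replaced for the Pakistani traffic sign classes. The ResNet-18 backbone, the ImageNet weights standing in for the German traffic sign pre-training, the directory layout, and the hyperparameters are all illustrative assumptions rather than the paper's exact setup.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_PAKISTAN_CLASSES = 20          # hypothetical class count
transform = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])

# Hypothetical folder of Pakistani traffic sign images, one sub-folder per class.
train_set = datasets.ImageFolder("pakistan_signs/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Pretrained backbone; replace the final layer for the new classes.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, NUM_PAKISTAN_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in train_loader:  # one fine-tuning pass over the data
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()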


On the Philosophical, Cognitive and Mathematical Foundations of Symbiotic Autonomous Systems (SAS)

arXiv.org Artificial Intelligence

Symbiotic Autonomous Systems (SAS) are advanced intelligent and cognitive systems exhibiting autonomous collective intelligence enabled by coherent symbiosis of human-machine interactions in hybrid societies. Basic research in the emerging field of SAS has triggered advanced general AI technologies functioning without human intervention or hybrid symbiotic systems synergizing humans and intelligent machines into coherent cognitive systems. This work presents a theoretical framework of SAS underpinned by the latest advances in intelligence, cognition, computer, and system sciences. SAS are characterized by the composition of autonomous and symbiotic systems that adopt bio-brain-social-inspired and heterogeneously synergized structures and autonomous behaviors. This paper explores their cognitive and mathematical foundations. The challenge to seamless human-machine interactions in a hybrid environment is addressed. SAS-based collective intelligence is explored in order to augment human capability by autonomous machine intelligence towards the next generation of general AI, autonomous computers, and trustworthy mission-critical intelligent systems. Emerging paradigms and engineering applications of SAS are elaborated via an autonomous knowledge learning system that symbiotically works between humans and cognitive robots.