Top 100 Artificial Intelligence Companies in the World

#artificialintelligence

Artificial Intelligence (AI) is not just a buzzword, but a crucial part of the technology landscape. AI is changing every industry and business function, which results in increased interest in its applications, subdomains and related fields. This makes AI companies the top leaders driving this technology shift. AI helps us to optimize and automate crucial business processes, gather essential data and transform the world, one step at a time. From Google and Amazon to Apple and Microsoft, every major tech company is dedicating resources to breakthroughs in artificial intelligence. While big enterprises are busy acquiring or merging with emerging innovators, small AI companies are also working hard to develop their own intelligent technology and services. By leveraging artificial intelligence, organizations gain an innovative edge in the digital age. AI consultancies are also working to provide companies with expertise that can help them grow. In this digital era, AI is also a significant area for investment. AI companies are constantly developing new products to provide the simplest solutions. Hence, Analytics Insight brings you the list of the top 100 AI companies that are leading the technology drive towards a better tomorrow. AEye develops advanced vision hardware, software, and algorithms that act as the eyes and visual cortex of autonomous vehicles. AEye is an artificial perception pioneer and creator of iDAR, a new form of intelligent data collection that acts as the eyes and visual cortex of autonomous vehicles. Since demonstrating its solid-state LiDAR scanner in 2013, AEye has pioneered breakthroughs in intelligent sensing. Its mission is to acquire the most information with the fewest ones and zeros, allowing AEye to drive the automotive industry into the next realm of autonomy. Algorithmia invented the AI Layer.


Interactive Visualization System that Helps Students Better Understand and Learn CNNs

#artificialintelligence

This research summary is just one of many that are distributed weekly in the AI Scholar newsletter. To start receiving the weekly newsletter, sign up here. Artificial intelligence (AI) has grown tremendously in just a few years, ushering us into the AI era. We now have self-driving cars, contemporary chatbots, high-end robots, recommender systems, advanced diagnostic systems, and more. Almost every research field is now using AI.


Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges

arXiv.org Artificial Intelligence

As we make tremendous advances in machine learning and artificial intelligence technosciences, there is a renewed understanding in the AI community that we must ensure that human beings are at the center of our deliberations so that we don't end up in technology-induced dystopias. As strongly argued by Green in his book The Smart Enough City, the incorporation of technology in city environs does not automatically translate into prosperity, wellbeing, urban livability, or social justice. There is a great need to deliberate on the future of cities worth living in and on how to design them. There are philosophical and ethical questions involved, along with various challenges that relate to the security, safety, and interpretability of the AI algorithms that will form the technological bedrock of future cities. Several research institutes on human-centered AI have been established at top international universities. Globally, there are calls for technology to be made more humane and human-compatible. For example, Stuart Russell has written a book called Human Compatible. The Center for Humane Technology advocates for regulators and technology companies to avoid business models and product features that contribute to social problems such as extremism, polarization, misinformation, and Internet addiction. In this paper, we analyze and explore key challenges to the successful deployment of AI and ML in human-centric applications, including security, robustness, interpretability, and ethical challenges, with a particular emphasis on the convergence of these challenges. We provide a detailed review of the existing literature on these key challenges and analyze how one of these challenges may lead to others or help in solving other challenges. The paper also advises on the current limitations, pitfalls, and future directions of research in these domains, and how they can fill the current gaps and lead to better solutions.


Closeness and Uncertainty Aware Adversarial Examples Detection in Adversarial Machine Learning

arXiv.org Artificial Intelligence

Machine learning (ML) applications are transforming our everyday lives, and artificial intelligence technology is becoming an integral part of our civilization. As this technology advances, it becomes a key component of many sophisticated tasks that have a direct effect on humans. In the last few years, deep neural networks (DNNs) have achieved state-of-the-art performance on a wide range of supervised learning tasks, which has led them to become widely used in many fields such as medical diagnosis, computer vision, machine translation, speech recognition and autonomous vehicles [1, 2, 3, 4]. However, there are serious concerns about how to make deep neural networks an integral part of our lives while ensuring utmost security and reliability. Although DNNs have proven their usefulness in real-world applications for many complex problems, they have thus far failed to overcome the challenge posed by deliberately manipulated data, known as adversarial inputs [5].
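The abstract does not spell out the detection mechanism, but as a loose illustration of uncertainty-based screening (one of the cues the title refers to), here is a minimal Monte Carlo dropout sketch in TensorFlow/Keras; the function name, scoring rule, and number of samples are illustrative assumptions, not the paper's method.

    import numpy as np
    import tensorflow as tf

    def predictive_uncertainty(model, x, num_samples=20):
        """Score inputs by the spread of predictions under active dropout.

        Runs a dropout-containing Keras model several times with
        training=True and uses the variance of the softmax outputs as an
        uncertainty score; unusually uncertain inputs can then be flagged
        as candidate adversarial examples with a threshold of one's choosing.
        """
        preds = np.stack([model(x, training=True).numpy()
                          for _ in range(num_samples)])
        return preds.var(axis=0).mean(axis=-1)  # one score per input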


Adversarial images and attacks with Keras and TensorFlow - PyImageSearch

#artificialintelligence

In this tutorial, you will learn how to break deep learning models using image-based adversarial attacks. We will implement our adversarial attacks using the Keras and TensorFlow deep learning libraries. Imagine it's twenty years from now. Nearly all cars and trucks on the road have been replaced with autonomous vehicles, powered by Artificial Intelligence, deep learning, and computer vision -- every turn, lane switch, acceleration, and brake is powered by a deep neural network. Now, imagine you're on the highway. You're sitting in the "driver's seat" (is it really a "driver's seat" if the car is doing the driving?) while your spouse is in the passenger seat, and your kids are in the back. Looking ahead, you see a large sticker plastered on the lane your car is driving in.
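The tutorial goes on to build such an attack step by step. As a rough sketch of the core idea, here is a minimal fast gradient sign method (FGSM) perturbation in TensorFlow/Keras; the function name, epsilon value, and the assumption that the model outputs class probabilities are illustrative, and this is not the tutorial's exact code.

    import tensorflow as tf

    def fgsm_perturb(model, image, label, epsilon=0.01):
        # Compute the loss gradient with respect to the input image and step
        # in the direction that increases the loss (untargeted attack).
        image = tf.convert_to_tensor(image)
        with tf.GradientTape() as tape:
            tape.watch(image)
            prediction = model(image)  # assumed to output class probabilities
            loss = tf.keras.losses.sparse_categorical_crossentropy(label, prediction)
        gradient = tape.gradient(loss, image)
        adversarial = image + epsilon * tf.sign(gradient)
        return tf.clip_by_value(adversarial, 0.0, 1.0)  # keep valid pixel range

Even a perturbation this small can flip the model's prediction while remaining nearly invisible to a human observer.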


AutoSelect: Automatic and Dynamic Detection Selection for 3D Multi-Object Tracking

arXiv.org Artificial Intelligence

3D multi-object tracking is an important component of robotic perception systems such as self-driving vehicles. Recent work follows a tracking-by-detection pipeline, which aims to match past tracklets with detections in the current frame. To avoid matching with false positive detections, prior work filters out detections with low confidence scores via a threshold. However, finding a proper threshold is non-trivial and requires extensive manual search via ablation study. This threshold is also sensitive to many factors, such as the target object category, so it must be re-searched whenever these factors change. To ease this process, we propose to automatically select high-quality detections, removing the effort needed for manual threshold search. Also, prior work often uses a single threshold per data sequence, which is sub-optimal in particular frames or for certain objects. Instead, we dynamically search the threshold per frame or per object to further boost performance. Through experiments on KITTI and nuScenes, our method filters out 45.7% of false positives while maintaining recall, achieving new state-of-the-art performance and removing the need for manual threshold tuning.
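As a loose illustration of the kind of confidence-based detection selection the paper automates, here is a toy per-frame filtering rule in Python; the adaptive rule, constants, and data layout are placeholders rather than the paper's learned selection procedure.

    def select_detections(detections, base_threshold=0.5):
        """Keep detections for one frame whose score clears an adaptive threshold.

        'detections' is a list of (box, score) pairs. The per-frame rule below,
        which adapts to the frame's top score, only illustrates why a single
        fixed threshold per sequence can be sub-optimal.
        """
        if not detections:
            return []
        top_score = max(score for _, score in detections)
        threshold = min(base_threshold, 0.5 * top_score)  # hypothetical per-frame rule
        return [(box, score) for box, score in detections if score >= threshold]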


An Empirical Review of Adversarial Defenses

arXiv.org Artificial Intelligence

From face recognition systems installed in phones to self-driving cars, the field of AI is witnessing rapid transformations and is being integrated into our everyday lives at an incredible pace. Any major failure in these systems' predictions could be devastating, leaking sensitive information or even costing lives (as in the case of self-driving cars). However, deep neural networks, which form the basis of such systems, are highly susceptible to a specific type of attack called adversarial attacks. A hacker can, even with minimal computation, generate adversarial examples (images or data points that belong to another class but consistently fool the model into classifying them as genuine) and undermine the basis of such algorithms. In this paper, we compile and test numerous approaches to defend against such adversarial attacks. Of the ones explored, we found two effective techniques, namely Dropout and Denoising Autoencoders, and show their success in preventing such attacks from fooling the model. We demonstrate that these techniques are also resistant both to higher noise levels and to different kinds of adversarial attacks (although not tested against all). We also develop a framework for deciding on the suitable defense technique to use against attacks, based on the nature of the application and the resource constraints of the deep neural network.
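As a rough sketch of one of the defenses the paper finds effective, here is a small convolutional denoising autoencoder in TensorFlow/Keras that could be trained on (perturbed input, clean input) pairs and prepended to a classifier; the architecture, layer sizes, and input shape are assumptions, not the paper's configuration.

    import tensorflow as tf
    from tensorflow.keras import layers

    def build_denoising_autoencoder(input_shape=(28, 28, 1)):
        # Encoder: compress the (possibly perturbed) image.
        inputs = tf.keras.Input(shape=input_shape)
        x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
        x = layers.MaxPooling2D(2, padding="same")(x)
        # Decoder: reconstruct the clean image, discarding small perturbations.
        x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
        x = layers.UpSampling2D(2)(x)
        outputs = layers.Conv2D(input_shape[-1], 3, activation="sigmoid", padding="same")(x)
        model = tf.keras.Model(inputs, outputs)
        model.compile(optimizer="adam", loss="mse")  # reconstruction loss
        return model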


Driving Behavior Explanation with Multi-level Fusion

arXiv.org Artificial Intelligence

In this era of active development of autonomous vehicles, it becomes crucial to provide driving systems with the capacity to explain their decisions. In this work, we focus on generating high-level driving explanations as the vehicle drives. We present BEEF, for BEhavior Explanation with Fusion, a deep architecture which explains the behavior of a trajectory prediction model. Supervised by annotations of the justifications for human driving decisions, BEEF learns to fuse features from multiple levels. Leveraging recent advances in the multi-modal fusion literature, BEEF is carefully designed to model the correlations between high-level decision features and mid-level perceptual features. The flexibility and efficiency of our approach are validated with extensive experiments on the HDD and BDD-X datasets.
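As a loose, hypothetical illustration of fusing mid-level perceptual features with high-level decision features to predict an explanation, here is a minimal concatenate-then-MLP head in TensorFlow/Keras; the dimensions, fusion scheme, and output space are assumptions, and BEEF's actual fusion is more carefully designed than this.

    import tensorflow as tf
    from tensorflow.keras import layers

    def build_fusion_head(mid_dim=512, high_dim=64, num_explanations=8):
        # Two feature streams: mid-level perception and high-level decisions.
        mid_features = tf.keras.Input(shape=(mid_dim,), name="mid_level")
        high_features = tf.keras.Input(shape=(high_dim,), name="high_level")
        # Naive fusion by concatenation, followed by a small MLP classifier.
        fused = layers.Concatenate()([mid_features, high_features])
        fused = layers.Dense(256, activation="relu")(fused)
        outputs = layers.Dense(num_explanations, activation="softmax")(fused)
        return tf.keras.Model([mid_features, high_features], outputs)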


New – Profile Your Machine Learning Training Jobs With Amazon SageMaker Debugger

#artificialintelligence

Today, I'm extremely happy to announce that Amazon SageMaker Debugger can now profile machine learning models, making it much easier to identify and fix training issues caused by hardware resource usage. Despite its impressive performance on a wide range of business problems, machine learning (ML) remains a bit of a mysterious topic. Getting things right is an alchemy of science, craftsmanship (some would say wizardry), and sometimes luck. In particular, model training is a complex process whose outcome depends on the quality of your dataset, your algorithm, its parameters, and the infrastructure you're training on. As ML models become ever larger and more complex (I'm looking at you, deep learning), one growing issue is the amount of infrastructure required to train them.
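As a rough sketch of how the new profiling is switched on when launching a training job with the SageMaker Python SDK, assuming a TensorFlow estimator; the role ARN, S3 path, instance choices, and step counts are placeholders, and exact parameter names may vary with the SDK version.

    from sagemaker.debugger import ProfilerConfig, FrameworkProfile
    from sagemaker.tensorflow import TensorFlow

    # Collect system metrics every 500 ms and detailed framework metrics
    # for ten training steps starting at step 5.
    profiler_config = ProfilerConfig(
        system_monitor_interval_millis=500,
        framework_profile_params=FrameworkProfile(start_step=5, num_steps=10),
    )

    estimator = TensorFlow(
        entry_point="train.py",                               # hypothetical training script
        role="arn:aws:iam::111122223333:role/SageMakerRole",  # placeholder role
        instance_count=1,
        instance_type="ml.p3.2xlarge",
        framework_version="2.3",
        py_version="py37",
        profiler_config=profiler_config,
    )
    estimator.fit("s3://my-bucket/training-data")             # placeholder S3 prefix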


Learning from Experience for Rapid Generation of Local Car Maneuvers

arXiv.org Artificial Intelligence

Being able to respond rapidly to changing scenes and traffic situations by generating feasible local paths is of pivotal importance for car autonomy. We propose to train a deep neural network (DNN) to plan feasible and nearly-optimal paths for kinematically constrained vehicles in a small, constant time. Our DNN model is trained using a novel weakly supervised approach and a gradient-based policy search. On real and simulated scenes and a large set of local planning problems, we demonstrate that our approach outperforms existing planners with respect to the number of successfully completed tasks. The path generation time is only about 40 ms, and the generated paths are smooth and comparable to those obtained from conventional path planners.
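As a loose, hypothetical illustration of a network that regresses a fixed number of local path waypoints from an encoded scene and goal, here is a minimal TensorFlow/Keras sketch; the input encoding, architecture, and output parameterization are placeholders, not the paper's model or its weakly supervised training scheme.

    import tensorflow as tf
    from tensorflow.keras import layers

    def build_path_planner(state_dim=64, num_waypoints=20):
        # Input: a flat encoding of the local scene, vehicle state, and goal.
        inputs = tf.keras.Input(shape=(state_dim,))
        x = layers.Dense(256, activation="relu")(inputs)
        x = layers.Dense(256, activation="relu")(x)
        # Output: (x, y, heading) for each waypoint along the local path.
        outputs = layers.Dense(num_waypoints * 3)(x)
        outputs = layers.Reshape((num_waypoints, 3))(outputs)
        return tf.keras.Model(inputs, outputs)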