A good dataset serves as the backbone of an Artificial Intelligence system. Data helps in many ways: it shows how a system is performing, surfaces meaningful insights, and more. At the premier annual Conference on Computer Vision and Pattern Recognition (CVPR 2020), several datasets were open-sourced to help the community achieve higher accuracy and better insights. Below we list the top 10 computer vision datasets open-sourced at CVPR 2020. About: FaceScape is a large-scale detailed 3D face dataset that includes 18,760 textured 3D face models, captured from 938 subjects, each with 20 specific expressions.
This work addresses the problem of semantic scene understanding under foggy road conditions. Although marked progress has been made in semantic scene understanding in recent years, it has mainly concentrated on clear-weather outdoor scenes. Extending semantic segmentation methods to adverse weather conditions such as fog is crucially important for outdoor applications such as self-driving cars. In this paper, we propose a novel method that uses purely synthetic data to improve performance on unseen real-world foggy scenes captured in the streets of Zurich and its surroundings. Our results highlight the potential and power of photo-realistic synthetic images for training, and especially fine-tuning, deep neural nets. Our contributions are threefold: 1) we create a purely synthetic, high-quality foggy dataset of 25,000 unique outdoor scenes, which we call Foggy Synscapes and plan to release publicly; 2) we show that with this data we outperform previous approaches on real-world foggy test data; 3) we show that a combination of our data and previously used data can further improve performance on real-world foggy data. Recent years have seen tremendous progress in tasks relevant to autonomous driving.
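The recipe the abstract describes — pre-train on abundant synthetic data, then fine-tune on scarce real data — can be sketched with a toy logistic-regression "segmenter" in place of a deep net. All data, hyperparameters and function names below are illustrative assumptions, not the paper's setup:

```python
# Sketch: pre-train on synthetic data, fine-tune on real data.
# A toy 1-D logistic regression stands in for a segmentation network.
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sgd(data, w, lr, epochs):
    # Plain logistic-regression SGD on (feature, label) pairs.
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w[0] + w[1] * x)
            w[0] -= lr * (p - y)
            w[1] -= lr * (p - y) * x
    return w

def accuracy(data, w):
    return sum((sigmoid(w[0] + w[1] * x) > 0.5) == (y == 1)
               for x, y in data) / len(data)

# Synthetic "foggy" data: plentiful, slightly shifted distribution.
synthetic = [(random.gauss(-1.0, 1), 0) for _ in range(200)] + \
            [(random.gauss(1.5, 1), 1) for _ in range(200)]
# Real foggy data: scarce, the distribution we actually care about.
real = [(random.gauss(-0.5, 1), 0) for _ in range(20)] + \
       [(random.gauss(2.0, 1), 1) for _ in range(20)]

w = sgd(synthetic, [0.0, 0.0], lr=0.1, epochs=5)   # pre-train on synthetic
w = sgd(real, w, lr=0.02, epochs=5)                # fine-tune on real
print(accuracy(real, w) > 0.5)
```

The fine-tuning stage uses a smaller learning rate so the real data adjusts, rather than overwrites, what was learned from the synthetic set — the same intuition that applies when fine-tuning a deep network.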
Decades of research in artificial intelligence (AI) have produced formidable technologies that are providing immense benefit to industry, government, and society. AI systems can now translate across multiple languages, identify objects in images and video, streamline manufacturing processes, and control cars. The deployment of AI systems has not only created a trillion-dollar industry that is projected to quadruple in three years, but has also exposed the need to make AI systems fair, explainable, trustworthy, and secure. Future AI systems will rightfully be expected to reason effectively about the world in which they (and people) operate, handling complex tasks and responsibilities effectively and ethically, engaging in meaningful communication, and improving their awareness through experience. Achieving the full potential of AI technologies poses research challenges that require a radical transformation of the AI research enterprise, facilitated by significant and sustained investment. These are the major recommendations of a recent community effort coordinated by the Computing Community Consortium and the Association for the Advancement of Artificial Intelligence to formulate a Roadmap for AI research and development over the next two decades.
Event cameras are bio-inspired sensors that work radically differently from traditional cameras. Instead of capturing images at a fixed rate, they measure per-pixel brightness changes asynchronously. This results in a stream of events, which encode the time, location and sign of the brightness changes. Event cameras possess outstanding properties compared to traditional cameras: very high dynamic range (140 dB vs. 60 dB), high temporal resolution (on the order of microseconds), low power consumption, and freedom from motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as high speed and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
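The event stream described above — each event carrying a timestamp, pixel location and polarity — can be sketched as a simple data structure. A common first processing step is to integrate the signed events in a time window into a 2-D "event frame". The representation and function below are a minimal illustrative sketch, not a real event-camera SDK:

```python
# Sketch: an event as (t, x, y, polarity) and accumulation into a frame.
from collections import namedtuple

# t in microseconds; polarity is +1 (brightness up) or -1 (brightness down).
Event = namedtuple("Event", ["t", "x", "y", "polarity"])

def accumulate_frame(events, width, height, t_start, t_end):
    """Integrate the signed events inside [t_start, t_end) into a 2-D frame."""
    frame = [[0 for _ in range(width)] for _ in range(height)]
    for e in events:
        if t_start <= e.t < t_end:
            frame[e.y][e.x] += e.polarity
    return frame

# Toy stream: two positive and one negative event at pixel (x=1, y=0).
stream = [Event(10, 1, 0, +1), Event(25, 1, 0, +1), Event(40, 1, 0, -1)]
frame = accumulate_frame(stream, width=3, height=2, t_start=0, t_end=50)
print(frame[0][1])  # net signed event count at pixel (1, 0) -> 1
```

Note that unlike a conventional frame, the window boundaries here are arbitrary: the sensor itself imposes no frame rate, which is exactly what gives event cameras their microsecond temporal resolution.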
The new iPad Pro is a significant upgrade, adding an all-screen front, Face ID, smaller dimensions and weight, a new Pencil and Smart Keyboard, different connectivity and, oh yes, no headphone jack. I've been living with the new model in its larger, 12.9in-display version since just after it was announced last Tuesday. Almost everything here relates to the smaller edition too, apart from the size, of course. Not only does it remove the Home button at the bottom of the screen, it stretches the screen out to the edges, just like on the iPhone XS. The larger proportions of a tablet compared to a phone mean that there's room for the TrueDepth sensor and camera needed for the facial recognition system, Face ID, in the slightly wider bezel that runs evenly round the edge. Compared to the iPhone XS this bezel is noticeably wider, but proportionally it still looks narrow.
Apple has unveiled a completely redesigned iPad Pro, with a new look and entirely new insides. The most obvious changes to the tablet are the disappearance of the home button and a vast reduction in the bezels that sweep around the front, so that the screen can take up almost all of the front of the tablet. In place of the home button's Touch ID sensor, previously used to unlock it, has come the Face ID facial recognition technology that first arrived with the iPhone X. The new iPads borrow heavily from that phone, in both its design and the features contained within.
Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work. In this work, we tackle the more challenging problem of crafting physical adversarial perturbations to fool image-based object detectors like Faster R-CNN. Attacking an object detector is more difficult than attacking an image classifier, as it needs to mislead the classification results in multiple bounding boxes with different scales. Extending the digital attack to the physical world adds another layer of difficulty, because it requires the perturbation to be robust enough to survive real-world distortions due to different viewing distances and angles, lighting conditions, and camera limitations. We show that the Expectation over Transformation technique, which was originally proposed to enhance the robustness of adversarial perturbations in image classification, can be successfully adapted to the object detection setting. Our approach can generate adversarially perturbed stop signs that are consistently mis-detected by Faster R-CNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems.
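The Expectation over Transformation idea can be illustrated on a toy problem: instead of optimizing a perturbation against one fixed view, average the gradient over a sampled family of transformations so the perturbation survives all of them. The sketch below uses a toy linear "detector score" and a random brightness transformation — illustrative assumptions, not the paper's Faster R-CNN pipeline:

```python
# Sketch of Expectation over Transformation (EoT) on a toy linear detector.
import random

random.seed(0)
w = [0.5, -0.2, 0.8]        # toy detector weights (assumed)
x = [1.0, 2.0, 3.0]         # toy "stop sign" input (assumed)

def score(v):
    # Detector confidence: higher means "detected".
    return sum(wi * vi for wi, vi in zip(w, v))

def transform(v, a, b):
    # Simulated lighting change: scale by a, shift by b.
    return [a * vi + b for vi in v]

def eot_step(delta, lr=0.1, n_samples=32):
    # Average the gradient of the score over sampled transformations;
    # for score(a*(x+delta)+b), the gradient w.r.t. delta is a*w.
    grad = [0.0] * len(delta)
    for _ in range(n_samples):
        a = random.uniform(0.8, 1.2)
        for i in range(len(grad)):
            grad[i] += a * w[i] / n_samples
    # Descend to suppress the detection score across the whole family.
    return [d - lr * g for d, g in zip(delta, grad)]

delta = [0.0] * 3
for _ in range(50):
    delta = eot_step(delta)

clean = score(x)
adv = score([xi + di for xi, di in zip(x, delta)])
print(adv < clean)  # perturbed input scores lower than the clean one
```

In the real attack the score is a deep detector's classification loss over multiple bounding boxes, and the transformation family covers viewing distance, angle, lighting and camera effects, but the averaging structure is the same.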
John Chambers, Chairman of Cisco Systems. Welcome to the Digital Economy Compass. Less talking, more facts – that was our idea behind creating the Digital Economy Compass. It contains facts, trends and key players covering the entire digital economy; this very first edition provides everything you need to know about the digital economy. In a global comparison, broadband speed is fastest in East Asia and Scandinavia (source: Akamai Technologies, Q3 2016; 147 countries covered in the broadband ranking): the top-ranked countries are South Korea (1), Hong Kong (2), Norway (3) and Sweden (4), with average speeds in the range of roughly 18–24 Mbps. The sevenfold increase in global mobile data traffic is mainly driven by online video streaming (source: Cisco Systems; figures include only cellular mobile traffic, excluding Wi-Fi and small-cell traffic from dual-mode devices; other mobile devices include tablets, mobile PCs and M2M). Pokémon Go was the most popular mobile game for iPhone users across the globe. Most downloaded iPhone apps per category and country in 2016 (categories: music, shopping, news, gaming, social networks): U.S.: Pandora, Amazon, CNN, Pokémon Go, Messenger; China: Kugou Music, Taobao, Toutiao, King of Glory, WeChat; Germany: Spotify, eBay Kleinanzeigen, Spiegel Online, Pokémon Go, WhatsApp; U.K.: Spotify, eBay, BBC News, Pokémon Go, WhatsApp; France: Deezer, Wish, Le Monde, Pokémon Go, Messenger. "If you make customers unhappy in the physical world, they might each tell 6 friends. If you make customers unhappy on the Internet, they can each tell 6,000 friends."
The world is evolving at lightning speed, and an increasing number of machines are being designed to catch up with humans in many activities. Ever since IBM's Artificial Intelligence (AI) system Deep Blue defeated the world chess champion in 1997, a lot has been going on. Self-driving cars, close-to-perfect face-recognition software, stock-investment geniuses, chatbots and even AI doctors are close to being (or already are) part of our daily lives. One has only to see how Watson, another IBM AI system, "destroyed" the other (human) players in a game of Jeopardy! in 2011. A few years ago, AI's boundaries were set by the code written into a robot's programming.