If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
The human face has been a topic of interest for deep learning engineers for quite some time. Understanding the human face not only helps in facial recognition but also finds applications in face morphing, head-pose detection and virtual makeovers. If you are a regular user of social media apps such as Instagram or Snapchat, have you ever wondered how their filters fit each face so perfectly? Though every face on the planet is unique, these filters seem to align magically with your nose, lips and eyes. These filters and face-swapping applications make use of facial landmarks.
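As a toy sketch of how landmarks drive a filter (not any particular app's implementation), suppose a detector has already given us just two landmarks, the eye centres; the overlay's position, size and tilt all fall out of simple geometry. The coordinates and the `widen` factor below are illustrative assumptions.

```python
import math

def place_glasses(left_eye, right_eye, widen=2.0):
    """Return (center, width, angle_deg) for a glasses-style overlay
    anchored on two eye landmarks."""
    cx = (left_eye[0] + right_eye[0]) / 2   # midpoint between the eyes
    cy = (left_eye[1] + right_eye[1]) / 2
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    eye_dist = math.hypot(dx, dy)           # scale tracks the face size
    angle = math.degrees(math.atan2(dy, dx))  # rotation tracks head tilt
    return (cx, cy), eye_dist * widen, angle

center, width, angle = place_glasses((100, 200), (180, 200))
print(center, width, angle)  # (140.0, 200.0) 160.0 0.0
```

Real filters use dozens of landmarks per face, but the principle is the same: the overlay is parameterised by the detected points, so it follows the face as it moves.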
For a few seconds, it seems real. Then, on the horizon, the landscape gives way to rugged coastline, and, as the plane flies closer, we glimpse the rippling waves glinting in the evening sun. In real life, I have not seen the ocean for five months and, although I'm just sitting in my kitchen watching a virtual presentation of a video game, I feel a surge of emotion. When the latest instalment in Microsoft's decades-old Flight Simulator series was first shown at the E3 video game event last year, it drew gasps from the audience. Using two petabytes of geographic data culled from Bing Maps, together with cutting-edge machine learning algorithms running on the company's Azure cloud computing network, the game presents a near-photorealistic depiction of the entire planet.
Robots can learn how to find things faster by learning how different objects around the house are related, according to work from the University of Michigan. In one of the paper's examples, a new model provides robots with a visual search strategy that can teach them to look nearby for a coffee pot if they're already in sight of a refrigerator. The work, led by Prof. Chad Jenkins and CSE Ph.D. student Zhen Zeng, was recognized at the 2020 International Conference on Robotics and Automation with a Best Paper Award in Cognitive Robotics. A common aim of roboticists is to give machines the ability to navigate realistic settings, for example the disordered, imperfect households we spend our days in. These settings can be chaotic, with no two exactly alike, and robots in search of specific objects they've never seen before will need to pick them out of the noise.
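The coffee-pot-near-the-refrigerator idea can be illustrated with a toy co-occurrence table (this is an illustration of the general strategy, not the paper's actual model; the object names and scores below are made up):

```python
# Made-up co-occurrence scores: how often a target object is found
# near an "anchor" object the robot can already see.
CO_OCCURRENCE = {
    ("coffee pot", "refrigerator"): 0.8,
    ("coffee pot", "sink"): 0.6,
    ("coffee pot", "sofa"): 0.1,
}

def best_anchor(target, visible_objects):
    """Pick the visible object the target is most likely to be near,
    i.e. where the robot should search first."""
    return max(visible_objects,
               key=lambda obj: CO_OCCURRENCE.get((target, obj), 0.0))

print(best_anchor("coffee pot", ["sofa", "refrigerator"]))  # refrigerator
```

The point of learning such relations is that the robot can rank search locations before it has ever seen the specific target instance.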
One of the most important concepts in facial analysis using images is defining our region of interest (ROI): the specific part of the image where we will filter or perform some operation. For example, if we need to filter the license plate of a car, our ROI is only the license plate; the street, the body of the car and anything else present in the image plays only a supporting role in this operation. In our example, we will use the OpenCV library, which already has support for partitioning our image and helping us identify our ROI. In our project we will use a ready-made classifier known as the Haar cascade classifier. This classifier always works with grayscale images.
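A minimal sketch of the ROI idea, using plain Python lists as a stand-in "image" so it runs anywhere. With OpenCV, a detector such as a Haar cascade reports boxes as (x, y, w, h), and the same crop is just a NumPy slice: `roi = img[y:y+h, x:x+w]`.

```python
def crop_roi(image, x, y, w, h):
    """Return the rectangular region of interest from a 2D image."""
    return [row[x:x + w] for row in image[y:y + h]]

# A 4x4 grayscale "image"; suppose a detector reported a box at (1, 1, 2, 2).
img = [[0, 1, 2, 3],
       [4, 5, 6, 7],
       [8, 9, 10, 11],
       [12, 13, 14, 15]]
print(crop_roi(img, 1, 1, 2, 2))  # [[5, 6], [9, 10]]
```

Everything outside the ROI is simply ignored by later processing steps, which is what keeps the operation cheap and focused.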
We propose a convolutional neural network for localising the centres of the optic disc (OD) and fovea in ultra-wide field of view scanning laser ophthalmoscope (UWFoV-SLO) images of the retina. Images captured in both reflectance and autofluorescence (AF) modes, and with both central-pole and eye-steered gazes, were used. The method achieved an OD localisation accuracy of 99.4% within one OD radius, and a fovea localisation accuracy of 99.1% within one OD radius, on a test set comprising 1790 images. The performance of fovea localisation in AF images was comparable to the variation between human annotators at this task. The laterality of the image (whether it shows the left or right eye) was inferred from the OD and fovea coordinates with an accuracy of 99.9%.
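The laterality step can be sketched very simply: once the OD and fovea centres are known, the eye's side follows from their relative horizontal positions. Note that which side maps to which eye depends on the imaging orientation; the convention used below (OD to the left of the fovea means "right") is purely an assumption for illustration, not the paper's stated rule.

```python
def infer_laterality(od_x, fovea_x):
    """Infer eye laterality from the horizontal order of the optic disc
    and fovea centres. The mapping here is an assumed convention."""
    return "right" if od_x < fovea_x else "left"

print(infer_laterality(od_x=300, fovea_x=500))  # right
print(infer_laterality(od_x=500, fovea_x=300))  # left
```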
Diagnosing problems in Internet-scale services remains particularly difficult and costly for both content providers and ISPs. Because the Internet is decentralized, the cause of such problems might lie anywhere between an end-user's device and the service's datacenters. Further, the set of possible problems and causes is not known in advance, making it impossible in practice to train a classifier on all combinations of problems, causes and locations. In this paper, we explore how different machine learning techniques can be used for Internet-scale root cause analysis using measurements taken from end-user devices. We show how to build generic models that (i) are agnostic to the underlying network topology, (ii) do not require defining the full set of possible causes during training, and (iii) can be quickly adapted to diagnose new services. Our solution, DiagNet, adapts concepts from image processing research to handle network and system metrics. We evaluate DiagNet on a multi-cloud deployment of online services with injected faults and emulated clients with automated browsers. We demonstrate promising root cause analysis capabilities, with a recall of 73.9%, including for causes introduced only at inference time.
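One way to picture "adapting image processing concepts to metrics" is to lay measurements out on a fixed 2D grid so a convolutional model can consume them like an image. This is a generic sketch of that idea, not DiagNet's actual encoding; the metric names and grid layout are illustrative assumptions.

```python
# Illustrative fixed layout: each cell of the "image" holds one metric.
METRIC_GRID = [
    ["dns_latency_ms", "tcp_rtt_ms"],
    ["page_load_ms", "cpu_load"],
]

def metrics_to_grid(metrics):
    """Map a metrics dict onto the fixed grid; missing metrics become 0.0,
    so every sample has the same image-like shape."""
    return [[float(metrics.get(name, 0.0)) for name in row]
            for row in METRIC_GRID]

grid = metrics_to_grid({"dns_latency_ms": 12.0, "cpu_load": 0.7})
print(grid)  # [[12.0, 0.0], [0.0, 0.7]]
```

A fixed spatial layout is what lets the same convolutional model generalize across devices that report overlapping but not identical sets of metrics.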
It's hard keeping your kids entertained during the coronavirus quarantine, but here are some ways parents have figured out how to make it fun. Purchases you make through our links may earn us a commission. Our colleague Mike Snider from the USA TODAY Money team is here to share some news about how you can access Minecraft content for free. Microsoft wants to help students keep flexing their mental muscles even if they aren't in the classroom, with many schools closed during the coronavirus crisis. Kids and parents can explore "Minecraft" challenges, made available for free today through June 30 in the Minecraft Marketplace, found within the game, which is played by more than 90 million people each month.
Many cooperative multi-agent problems require agents to learn individual tasks while contributing to the collective success of the group. This is a challenging task for current state-of-the-art multi-agent reinforcement learning algorithms, which are designed to maximize either the global reward of the team or the individual local rewards. The problem is exacerbated when either of the rewards is sparse, leading to unstable learning. To address this problem, we present Decomposed Multi-Agent Deep Deterministic Policy Gradient (DE-MADDPG): a novel cooperative multi-agent reinforcement learning framework that simultaneously learns to maximize the global and local rewards. We evaluate our solution on the challenging defensive escort team problem and show that it achieves significantly better and more stable performance than a direct adaptation of the MADDPG algorithm.
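The core intuition of combining both reward streams can be shown with a toy return computation (this illustrates the decomposition idea only, not the DE-MADDPG update rule; the reward values and discount are made up):

```python
def decomposed_return(global_rewards, local_rewards, gamma=0.99):
    """Discounted return over one agent's trajectory when its learning
    signal is the sum of the shared global reward and its own local reward."""
    ret = 0.0
    # Walk the trajectory backwards, accumulating the discounted sum.
    for g, l in zip(reversed(global_rewards), reversed(local_rewards)):
        ret = (g + l) + gamma * ret
    return ret

# Global reward only at step 0, local reward only at step 1:
# combining them gives the agent a signal at every step.
print(decomposed_return([1.0, 0.0], [0.0, 0.5], gamma=0.5))  # 1.25
```

When one stream is sparse (all zeros for long stretches), the other can still provide gradient signal, which is the stability argument the abstract makes.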
Face and hand tracking in the browser with MediaPipe and TensorFlow.js - Today we're excited to release two new packages: facemesh and handpose, for tracking key landmarks on faces and hands respectively. This release has been a collaborative effort between the MediaPipe and TensorFlow.js teams. Originally published by Ann Yuan and Andrey Vakunov, Software Engineers at Google, at blog.tensorflow.org.
Federated Learning enables visual models to be trained on-device, bringing advantages for user privacy (data need never leave the device), but challenges in terms of data diversity and quality. Whilst typical models in the datacenter are trained using data that are independent and identically distributed (IID), data at source are typically far from IID. Furthermore, differing quantities of data are typically available at each device (imbalance). In this work, we characterize the effect these real-world data distributions have on distributed learning, using as a benchmark the standard Federated Averaging (FedAvg) algorithm. To do so, we introduce two new large-scale datasets for species and landmark classification, with realistic per-user data splits that simulate real-world edge learning scenarios. We also develop two new algorithms (FedVC, FedIR) that intelligently resample and reweight over the client pool, bringing large improvements in accuracy and stability in training.
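As a reference point for the benchmark, here is a minimal sketch of the Federated Averaging (FedAvg) aggregation step: the server averages client model weights, weighting each client by its number of training examples. Models are flat lists of floats here for simplicity; the client sizes are made-up example values.

```python
def fedavg(client_weights, client_sizes):
    """Size-weighted average of client model weights (the FedAvg
    server-side aggregation step)."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Two clients with imbalanced data (90 vs. 10 examples): the average is
# pulled toward the larger client, which is exactly the behaviour that
# per-user imbalance stresses.
avg = fedavg([[1.0, 2.0], [3.0, 4.0]], [90, 10])
print(avg)  # [1.2, 2.2]
```

Resampling and reweighting schemes like the FedVC and FedIR ideas described above intervene precisely in how clients and examples contribute to this weighted average.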