Ethics of artificial intelligence critical to its success - AI Forum


The ethics of artificial intelligence will be critical to the success of AI going forward, says a Microsoft leader and keynote speaker at the AI Day event in Auckland next week. Steve Guggenheimer, corporate vice president of Microsoft's AI Business, says AI has the potential to reshape not just industries and governments, but society as a whole. "Working on the ethics of the use of AI, from the beginning, in key areas like transparency, accountability, privacy and bias will be crucial to the success of AI going forward. There is a strong focus on the ethical implications of the AI systems that are being built and deployed." The European Commission's group on ethics in science and new technologies recently warned that existing efforts to develop solutions to the ethical, societal and legal challenges AI presents are a 'patchwork of disparate initiatives'.

Epic Games shows off amazing real-time digital human with Siren demo


Epic Games, Cubic Motion, 3Lateral, Tencent, and Vicon took a big step toward creating believable digital humans today with the debut of Siren, a demo of a woman rendered in real time using Epic's Unreal Engine 4 technology. The move is a step toward transforming both films and games with digital humans who look and act like the real thing. The tech, shown off at Epic's event at the Game Developers Conference in San Francisco, is available for licensing to game and film makers. Cubic Motion's computer vision technology let producers create digital facial animation quickly and automatically, saving the time and cost of animating it by hand. "Everything you saw was running in the Unreal Engine at 60 frames per second," said Epic Games chief technology officer Kim Libreri during a press briefing on Wednesday morning at GDC. "Creating believable digital characters that you can interact with and direct in real-time is one of the most exciting things that has happened in the computer graphics industry in recent years."

What Makes Good Synthetic Training Data for Learning Disparity and Optical Flow Estimation?

The finding that very large networks can be trained efficiently and reliably has led to a paradigm shift in computer vision from engineered solutions to learning formulations. As a result, the research challenge shifts from devising algorithms to creating suitable and abundant training data for supervised learning. How can such training data be created efficiently? The dominant data acquisition method in visual recognition is based on web data and manual annotation. Yet, for many computer vision problems, such as stereo or optical flow estimation, this approach is not feasible because humans cannot manually annotate a pixel-accurate flow field. In this paper, we promote the use of synthetically generated data for the purpose of training deep networks on such tasks. We suggest multiple ways to generate such data and evaluate the influence of dataset properties on the performance and generalization properties of the resulting networks. We also demonstrate the benefit of learning schedules that use different types of data at selected stages of the training process.
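The paper's point about learning schedules, that a network benefits from seeing different kinds of synthetic data at different stages of training, can be pictured with a short sketch. This is not the authors' code: the dataset generator, toy model and stage names below are hypothetical placeholders, and only the overall pattern of switching training data between stages reflects the idea described in the abstract.

```python
# Minimal sketch (assumed setup, not the paper's code): a staged training
# schedule that switches between synthetic datasets as training progresses.
import torch
from torch.utils.data import DataLoader, TensorDataset

def make_synthetic_dataset(n_samples: int, noise: float) -> TensorDataset:
    """Stand-in for a rendered synthetic dataset; here just random tensors."""
    images = torch.rand(n_samples, 6, 64, 64)        # stacked image pair (2 x RGB)
    flow = torch.rand(n_samples, 2, 64, 64) * noise  # dense 2-channel flow target
    return TensorDataset(images, flow)

# Toy stand-in for a flow-estimation network.
model = torch.nn.Sequential(
    torch.nn.Conv2d(6, 16, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(16, 2, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.L1Loss()

# Hypothetical schedule: simpler/abstract data first, more realistic data later.
schedule = [
    ("simple_shapes", make_synthetic_dataset(256, noise=0.5), 2),
    ("realistic_scenes", make_synthetic_dataset(256, noise=1.0), 2),
]

for stage_name, dataset, epochs in schedule:
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    for _ in range(epochs):
        for images, flow in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), flow)
            loss.backward()
            optimizer.step()
    print(f"finished stage: {stage_name}")
```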

'Face stealing' cap uses infrared to fool facial recognition systems

Daily Mail

A baseball cap that can fool facial recognition systems into thinking you're someone else has been developed by scientists. The face-stealing hat projects infrared light - which is invisible to the naked eye - onto your face to trick AI camera systems, which can see that part of the spectrum. Researchers said the technology can not only obscure your identity but also 'impersonate a different person to pass facial recognition-based authentication.' They added that the face-stealing lights could easily be 'hidden in an umbrella and possibly even hair or a wig.' Writing on the preprint server arXiv, the joint US and Chinese team, led by Dr Zhe Zhou of Fudan University in Shanghai, said: 'We propose a kind of brand new attack against face recognition systems, which is realised by illuminating the subject using infrared.'



This allows for more fine-grained information about the extent of the object within the box. To train an instance segmentation model, a groundtruth mask must be supplied for every groundtruth bounding box. In addition to the proto fields listed in the section titled Using your own dataset, one must also supply image/object/mask, which can either be a repeated list of single-channel encoded PNG strings, or a single dense 3D binary tensor where masks corresponding to each object are stacked along the first dimension. Each is described in more detail below.Instance segmentation masks can be supplied as serialized PNG images, as sketched in the example that follows.
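For concreteness, here is a minimal sketch of the PNG-encoded variant. It assumes the standard TensorFlow and Pillow libraries; the image/object/mask key comes from the text above, while the toy mask and the elided neighbouring fields are placeholders for illustration only.

```python
# Minimal sketch: serializing a single-channel instance mask as an encoded
# PNG string under the image/object/mask key of a tf.train.Example.
# The toy mask and omitted sibling fields are placeholders.
import io

import numpy as np
import tensorflow as tf
from PIL import Image

def encode_mask_png(mask: np.ndarray) -> bytes:
    """Encode a binary HxW mask (values 0/1) as a single-channel PNG."""
    buffer = io.BytesIO()
    Image.fromarray(mask.astype(np.uint8), mode="L").save(buffer, format="PNG")
    return buffer.getvalue()

# One 0/1 mask per groundtruth box (here: a single 100x100 square blob).
mask = np.zeros((100, 100), dtype=np.uint8)
mask[20:60, 30:70] = 1
encoded_masks = [encode_mask_png(mask)]

example = tf.train.Example(features=tf.train.Features(feature={
    # ...the usual image and bounding-box fields from the dataset docs go here...
    "image/object/mask": tf.train.Feature(
        bytes_list=tf.train.BytesList(value=encoded_masks)),
}))
serialized = example.SerializeToString()
```

The dense 3D tensor alternative mentioned above would instead stack all object masks along the first dimension and store them as a single feature.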

What Machine Learning Isn't


The biggest challenge to AI adoption is expectation. Approaching machine learning integration with the right set of expectations will lead to a much more successful outcome than being misled about what AI can do for you. I've been integrating it into businesses for over three years, and I've seen it save companies time and money in many different areas. But things can go south pretty quickly if you think you're getting one thing and are actually getting another. There are lots of great use cases for machine learning, and you can read more about some examples of those use cases here and here.

Google is Designing An Advanced Hand Gesture Recognition Sensor


The Soli sensor, being developed by Google's Advanced Technology and Projects group, is a low-power radar designed to use less energy and detect hand gestures at a sub-millimeter level. It operates in the 60-GHz ISM band using electromagnetic waves. The sensor detects a series of motions that make up Soli's virtual tool gestures: the virtual slider, virtual button and virtual dial. The pluses for the chip are that it requires less energy, has no moving parts, can function regardless of light conditions and, when developed further, could be used in a number of products: wearables, IoT devices, phones and cars. Ivan Poupyrev, Project Soli founder, said of the goal of the project: "The hand is the ultimate input device."

Text Recognition for Video in Microsoft Video Indexer


In Video Indexer, we have the capability to recognize text displayed in videos. This blog explains some of the techniques we used to extract the best-quality data. To start, take a look at the sequence of frames below. Did you manage to recognize the text in the images? Most likely you did, without even noticing.
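The teaser's premise, that the same piece of on-screen text appears across many consecutive frames, suggests one simple way video helps OCR: run recognition per frame and keep the reading that recurs most often. The sketch below is not Video Indexer's actual pipeline; the ocr_frame callable is a hypothetical stand-in for any per-frame text recognizer.

```python
# Minimal sketch (not Video Indexer's implementation): consolidate noisy
# per-frame OCR readings of the same on-screen text by majority vote.
from collections import Counter
from typing import Callable, Iterable, List

def consolidate_readings(
    frames: Iterable[object],
    ocr_frame: Callable[[object], str],
) -> str:
    """Run OCR on each frame and return the most frequent reading."""
    readings: List[str] = [ocr_frame(frame) for frame in frames]
    most_common, _count = Counter(readings).most_common(1)[0]
    return most_common

# Toy usage: three noisy readings of the same caption.
fake_frames = ["f0", "f1", "f2"]
fake_ocr = {"f0": "HELLO W0RLD", "f1": "HELLO WORLD", "f2": "HELLO WORLD"}
print(consolidate_readings(fake_frames, lambda f: fake_ocr[f]))  # HELLO WORLD
```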

Raspberry Pi with a side of AI: These powerful new boards come with NPUs


The Raspberry Pi Foundation just released the Raspberry Pi 3 Model B+ with a zippier CPU and faster network connections, but what the hugely popular $35 board hasn't yet gained is a neural processing unit (NPU). NPUs are helping manufacturers of lesser-known boards speed up computer-vision applications, such as image and object recognition, and offer enterprises and manufacturers a more powerful platform for building everything from smart-building applications to autonomous vehicles. The new boards have dedicated NPUs from Huawei and Rockchip. The Rock960, meanwhile, comes with Rockchip's souped-up RK3399Pro, a processor that targets Google's TensorFlow Lite framework for building AI services on iOS and Android devices.
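Since the article mentions boards targeting Google's TensorFlow Lite framework, a short, generic sketch of running a TensorFlow Lite model with the stock interpreter may help picture the workflow. The model filename and dummy input are placeholders, and this is plain CPU inference, not the vendor-specific NPU acceleration these boards provide.

```python
# Minimal sketch: inference with a TensorFlow Lite model via the stock
# interpreter. The model path is a placeholder; NPU acceleration on the
# boards mentioned in the article goes through vendor-specific tooling.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="image_classifier.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input matching whatever shape and dtype the model declares.
input_shape = input_details[0]["shape"]
dummy_input = np.random.rand(*input_shape).astype(input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()
predictions = interpreter.get_tensor(output_details[0]["index"])
print(predictions.shape)
```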

Most people can identify others' emotions based on their face color

Daily Mail

People can read your emotions even if your facial movements don't give them away, a new report has found. Researchers constructed computer algorithms, based on the new findings, that can recognize human emotions by analyzing facial color patterns. The research suggests that humans can read other people's moods from facial color alone - but that AI can do this more accurately than people can. Cognitive scientist and Ohio State professor Aleix Martinez explained how the report informs our understanding of the connection between our feelings and our anatomy. Professor Martinez said: 'We identified patterns of facial coloring that are unique to every emotion we studied.'