Image Understanding



[P] Hand Gesture Recognition with Python, OpenCV and Keras Demo • r/MachineLearning

@machinelearnbot

I'll be posting all code and relevant files soon; this demo is part of a tutorial series I'm doing at my university. I'll probably do a Twitch stream and eventually a YouTube playlist if people like it. Edit: to answer the question, I believe in this particular demo I used KCF to track. For gestures I used a convolutional neural net, which is both overkill and not the fastest solution, but part of the tutorial is machine learning.
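The post names two building blocks: an OpenCV KCF tracker to follow the hand and a Keras CNN to classify the gesture. The author's code isn't released yet, so the sketch below is only one plausible wiring of those two pieces; the model file, input size, and label set are assumptions.

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Hypothetical gesture classifier and label set, standing in for the
# author's unreleased model.
model = load_model("gesture_cnn.h5")        # assumed filename
LABELS = ["fist", "palm", "point"]          # assumed gesture classes

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
bbox = cv2.selectROI("select hand", frame)  # seed the tracker by hand

# KCF ships with opencv-contrib-python; in some builds it lives under
# cv2.legacy.TrackerKCF_create instead.
tracker = cv2.TrackerKCF_create()
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)     # follow the hand between frames
    if found:
        x, y, w, h = map(int, bbox)
        crop = cv2.resize(frame[y:y + h, x:x + w], (64, 64))  # assumed input size
        probs = model.predict(crop[None].astype("float32") / 255.0, verbose=0)[0]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, LABELS[int(np.argmax(probs))], (x, y - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    cv2.imshow("demo", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```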


How image recognition and AI are transforming the lives of blind people

#artificialintelligence

A demo of the Orcam MyEye 2.0 was one of the highlights at the AbilityNet/RNIB TechShare Pro event in November. This small device, an update to the MyEye released in 2013, clips onto any pair of glasses and provides discreet audio feedback about the world around the wearer. It uses state-of-the-art image recognition to read signs and documents as well as recognise people, and does not require an internet connection. It's just one of many apps and devices that are using the power of artificial intelligence (AI) to transform the lives of people who are blind or have sight loss. Last week, we took a look at Microsoft's updated free app Seeing AI and its amazing new features for people who are blind or have sight loss, including colour recognition and handwriting recognition.


Three-Dimensional Computer Vision

AI Magazine

The growing field of three-dimensional (3-D) computer vision, the study of programs that can interpret the world from sensor data, is the topic of Three-Dimensional Computer Vision by Yoshiaki Shirai (Springer-Verlag, Berlin, 1987, 297 pp., $95.00). The term "three-dimensional" is used to distinguish the field from two-dimensional (2-D) pattern recognition, such as character recognition or the recognition of silhouettes. The 3-D scene-understanding problem is made difficult by shadows, uneven lighting, texture, and objects that occlude other objects. The sensors used include those that obtain a grey-level or color-intensity image of a scene, methods that project a sheet of light on an object to reveal its 3-D structure, and distance-measuring devices that provide a "range image" in which the value of each picture element represents a distance from the sensor to a point in the scene. Such range sensors are important because they are not affected by lighting conditions and shadows. This book, not to be confused with Takeo Kanade's Three-Dimensional Machine Vision (Kluwer Academic Publishers, 1987), describes the fundamental technology of 3-D computer vision for various applications. The first four chapters are devoted to basic methods of computer vision. These are followed by chapters on image feature extraction (edge analysis, edge linking and following, and region methods) and image feature description (representing lines, segmenting a sequence of points, fitting line equations, and converting between lines and regions). Once these preliminaries are completed, the author concentrates on the 3-D world.
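The feature-extraction steps the review lists (edge analysis, edge linking and following, fitting line equations) remain standard today; as a point of reference, here is a minimal modern sketch of them using OpenCV, with the input filename and thresholds chosen arbitrarily.

```python
import cv2

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # assumed input image

# Edge analysis: Canny finds intensity edges after smoothing.
edges = cv2.Canny(img, threshold1=50, threshold2=150)

# Edge linking/following: chain edge pixels into contours (point sequences).
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

# Fitting line equations: approximate each chain by straight segments.
segments = [cv2.approxPolyDP(c, epsilon=2.0, closed=False) for c in contours]
print(f"{len(contours)} edge chains, "
      f"{sum(len(s) - 1 for s in segments)} fitted segments")
```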


RealScape--Metropolitan Fixed Assets Change Judgment by Pixel-by-Pixel Stereo Processing of Aerial Photographs

AI Magazine

Recently, Tokyo terminated its traditional visual-identification work, which had been used for 20 years, and shifted to a new automated system. This article introduces the Fixed Assets Change Judgment (FACJ) system and its core tool, RealScape. RealScape automatically detects changes in the height and color of buildings based on three-dimensional analysis of aerial photographs. The analysis employs a unique pixel-by-pixel stereo processing method that calculates the height of each pixel in the photographs, enabling foot-level precision for each building and precise difference detection between previous and current aerial photographs. Since the shift, the system has been used at Tokyo's tax bureau every year to calculate the municipality's fixed-asset tax, and after the success in Tokyo, other major city governments, including Osaka and Sapporo, have followed suit. The Japanese fixed-property tax is imposed by municipalities on the owners of land, buildings, and depreciation assets (all hereinafter referred to as "fixed assets") on January 1 of every year by calculating the tax sum according to current asset values.
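RealScape's pixel-by-pixel stereo processing is proprietary, but the underlying idea, recovering a per-pixel height from two overlapping aerial photographs and then differencing heights across years, can be sketched with a conventional stereo pipeline. The sketch below uses OpenCV's semi-global matcher; the focal length, baseline, filenames, and change threshold are made-up placeholders, not RealScape's calibration.

```python
import cv2
import numpy as np

def height_map(left_path, right_path, focal_px=12000.0, baseline_m=600.0):
    """Per-pixel height proxy from a rectified aerial stereo pair.

    Uses the standard relation depth = focal_length * baseline / disparity;
    the calibration numbers here are illustrative only.
    """
    left = cv2.imread(left_path, cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(right_path, cv2.IMREAD_GRAYSCALE)
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                 blockSize=5)
    # SGBM returns fixed-point disparities scaled by 16.
    disparity = sgbm.compute(left, right).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan        # unmatched pixels
    return focal_px * baseline_m / disparity  # larger disparity = closer

# Change judgment: difference this year's heights against last year's.
prev = height_map("2016_left.png", "2016_right.png")
curr = height_map("2017_left.png", "2017_right.png")
diff = curr - prev
changed = np.abs(diff) > 0.3                  # ~foot-level threshold, assumed
print("changed pixels:", np.count_nonzero(changed & ~np.isnan(diff)))
```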


[P] I made a tool to deploy Keras image recognition models to the web • r/MachineLearning

@machinelearnbot

I took a deep learning course last year, and found it was a pain to write a web app, stand up servers in the cloud, register domain names, etc. So I built something that "webapp-ifies" Keras image recognition models and deploys them to the web. All you need to do is upload a trained Keras model. Things I'll be improving next (I had to start somewhere):
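The post doesn't include the tool's code, but the pattern it automates, wrapping a trained Keras model in a small web endpoint, looks roughly like the Flask sketch below; the model file, input size, and route name are assumptions, not the author's implementation.

```python
import io

import numpy as np
from flask import Flask, jsonify, request
from PIL import Image
from tensorflow.keras.models import load_model

app = Flask(__name__)
model = load_model("model.h5")  # assumed: the uploaded, trained Keras model

@app.route("/predict", methods=["POST"])
def predict():
    # Expect an image file upload; resize to the model's assumed input shape.
    img = Image.open(io.BytesIO(request.files["image"].read())).convert("RGB")
    x = np.asarray(img.resize((224, 224)), dtype="float32")[None] / 255.0
    probs = model.predict(x, verbose=0)[0]
    return jsonify({"class": int(np.argmax(probs)),
                    "confidence": float(np.max(probs))})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```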


Fasttrack AI workshop

#artificialintelligence

The NHCE hosted an OpenPOWER workshop on PowerAI on the 19th of December 2017. The program, led and managed by Ganesan Narayanasamy, introduced a wide range of specialist topics, from IBM PowerAI, deep learning, and machine learning to TensorFlow frameworks and image classification with examples. The session opened with an introduction to the OpenPOWER Foundation. This included an overview of the cooperation of over 300 institutions ranging from academia to industry, as well as a more in-depth look at some of the successes and developments currently underway within the OpenPOWER framework. The Oak Ridge Leadership Computing Facility provides the open scientific community access to America's fastest, most powerful supercomputer and is a key member of the OpenPOWER Foundation. The session also included an outline of the conventionally used qubit technologies, as well as an indication of the current status of the quantum computer projects underway at some of the lead player institutions, including IBM, Microsoft and NASA.


Use Python to collect image tags using AWS' Reverse Image Search Engine, Rekognition

@machinelearnbot

This blog post discusses how to turn your images into text describing what is in them, so you can later perform analysis on their contents and topics, all right out of a Jupyter Notebook. An example of when this would be useful is if you are given thousands of tweets and want to know whether the image media has any effect on engagement. Lucky for us, instead of writing our own image recognition tool, the engineers at Amazon, Google, and Microsoft have completed this task and made their APIs accessible. Here we'll be using Rekognition, Amazon's deep learning-based image and video analysis tool. This blog serves as an example of how to extract information using different Rekognition operations and is not a replacement for reading the documentation.
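The post's core operation, tagging an image, comes down to a single Rekognition call. A minimal boto3 sketch, assuming your AWS credentials are already configured and using a placeholder filename:

```python
import boto3

# Assumes AWS credentials are set up (e.g. via `aws configure`).
rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("tweet_image.jpg", "rb") as f:  # placeholder filename
    response = rekognition.detect_labels(
        Image={"Bytes": f.read()},
        MaxLabels=10,
        MinConfidence=75,
    )

# Each label is a tag plus the model's confidence in it.
for label in response["Labels"]:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")
```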


Poly On Visual Recognition in Small Markets

#artificialintelligence

Timing is everything, as the saying goes. It's also the story of a startup named Poly and its visual identification software, says its CEO, Alberto Rizzoli. Two years ago, Rizzoli began developing an AI platform capable of seeing an object – or person – and "visually" identifying it, with a high degree of accuracy. The use case, then, was the ability to enable smartphones to be more accessible for the visually impaired. And while today that may seem like the most logical sort of development project to pursue, it was something that Rizzoli said was considered "an esoteric interest," given its very specific use and the way in which Poly approached its development.


Fooling Google's image-recognition AI 1000x faster

#artificialintelligence

By attacking even black-box systems whose internals are hidden, MIT CSAIL students show that hackers can break the most advanced AIs that may someday appear in TSA security lines and self-driving cars. Groups like the TSA are even considering using image-recognition systems to detect suspicious objects in security lines. But neural networks can easily be fooled into thinking that, say, a photo of a turtle is actually a gun. This can have major consequences: imagine if, simply by changing a few pixels, a bitter ex-boyfriend could put private photos up on Facebook, or a terrorist could disguise a bomb to evade detection. According to a team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), such hacks are even easier to pull off than we thought.
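The CSAIL team's specific speedup isn't reproduced here, but the black-box setting can be sketched: with no access to the model's gradients, an attacker estimates them from queries alone (here with a generic NES-style sampler, not the paper's exact method) and nudges the image toward a target label. `query_model`, the step sizes, and the sample count are all hypothetical placeholders.

```python
import numpy as np

def query_model(image):
    """Hypothetical black-box API returning class probabilities for an image.
    In a real attack this would be the remote classifier being queried."""
    raise NotImplementedError

def nes_attack(image, target_class, steps=100, n_samples=50,
               sigma=0.001, lr=0.01, eps=0.05):
    """NES-style black-box attack sketch: estimate the gradient of the
    target-class probability from paired queries, then take signed steps
    while staying within an L-infinity ball of radius eps of the original."""
    original = image.copy()
    adv = image.copy()
    for _ in range(steps):
        grad = np.zeros_like(adv)
        for _ in range(n_samples):
            noise = np.random.randn(*adv.shape)
            p_plus = query_model(np.clip(adv + sigma * noise, 0, 1))[target_class]
            p_minus = query_model(np.clip(adv - sigma * noise, 0, 1))[target_class]
            # Antithetic sampling: paired +/- queries reduce estimator variance.
            grad += (p_plus - p_minus) * noise
        grad /= 2 * sigma * n_samples
        adv = adv + lr * np.sign(grad)                      # ascend target probability
        adv = np.clip(adv, original - eps, original + eps)  # keep the change small
        adv = np.clip(adv, 0, 1)                            # stay a valid image
    return adv
```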