"Computers have been getting better and better at seeing movement on video. How is it that they read lips, follow a dancing girl or copy an actor making faces?"
– from Andrew Blake. Introduction to Active Contours and Visual Dynamics. Visual Dynamics Group, Department of Engineering Science, University of Oxford
American IT services provider Unisys has picked up a pair of Australian government contracts. The first is to design and implement the Enterprise Biometric Identification Services (EBIS) system that will be used by the Department of Home Affairs to conduct biometric matching on people entering Australia. "The new EBIS system will be used by the department to match face images and fingerprints of people wishing to travel to Australia, including visa and citizenship applicants, against biometric watch lists to identify people of security, law enforcement, or immigration interest, while simultaneously facilitating the processing of legitimate travellers," Unisys said in a statement. The company said the system will be designed for the next decade. For its part, Assistant Minister for Home Affairs Alex Hawke said the system would "vastly improve" Australia's biometric storage and processing capabilities, and consolidate the biometrics collected through visa and detention programs with data collected at SmartGates.
In my previous tutorial exploring OpenCV, we covered AUTOMATIC VISION OBJECT TRACKING. That project was built with the fantastic "Open Source Computer Vision Library", OpenCV. In this tutorial we will focus on the Raspberry Pi (and thus Raspbian as the OS) and Python, but I also tested the code on my Mac and it works fine there too. OpenCV was designed for computational efficiency, with a strong focus on real-time applications. I am using a Raspberry Pi 3 updated to the latest version of Raspbian (Stretch), so the best way to install OpenCV is to follow the excellent tutorial by Adrian Rosebrock: Raspbian Stretch: Install OpenCV 3 + Python on your Raspberry Pi.
What was once possible only via offline cloud-computing servers is now built into the cameras themselves. One should therefore expect future security cameras to provide not only the video feed but also metadata about the objects and people in the scene they are scanning. WHY IT MATTERS: Artificial intelligence is getting into devices everywhere. This is a review of the state of the art in AI-powered security cameras, as seen recently at the CES consumer electronics show in Las Vegas.
The current wave of generative machine learning models for image synthesis is impressively powerful. The Generative Adversarial Network (GAN) algorithm in particular has become popular because it can produce near photo-realistic images. While research into GANs is scientifically and aesthetically intriguing, it remains quite unclear which real-world tasks exist where powerful generative models could prove indispensable. One area often missed in research discussions, likely because to most of us it sounds like an obscure parascience, is reconstructing what is happening in a human visual system: what someone is seeing or imagining. This is a small but real neuroscience research area, and you can confidently call it a variant of brain reading.
According to Nathan Mondragon, finding the right employee is all about looking at the little things. Tens of thousands of little things, as it turns out. Mondragon is the head psychologist at Hirevue, a company that offers software that screens job candidates using algorithms and artificial intelligence (AI). Hirevue's flagship product, used by global giants such as Unilever and Goldman Sachs, asks candidates to answer standard interview questions in front of a camera. Meanwhile its software, like a team of hawk-eyed psychologists hiding behind a mirror, makes note of thousands of barely perceptible changes in posture, facial expression, vocal tone and word choice.
Trouble is brewing among the students of the Girls Domestic Science School, a well-known private institution at Hitotsubashi, Kanda, which enjoys a good reputation in educational circles and has contributed greatly to the advancement of female education, the courses including sewing, embroidery and foreign-style cooking. The school recently received a monetary donation amounting to ¥13,000 from Mr. Kamesaburo Yamashita, the well-known "narikin" of Kobe, who has amassed a big fortune through the sale of steamers. Several days ago the girls school referred to had a visit from an aged lady, who was alleged to have been sent by Mr. Yamashita, the patron of the school, on the mission of selecting a prospective bride for the son or nephew of the narikin. The old lady was treated by the school faculty with marked respect, and as though she came with the object of inspection, the true purpose of her visit being hidden as far as possible. Madame Haruko Hatoyama, the widow of the late Dr. Hatoyama, ex-minister of justice and dean of Waseda, who is the superintendent of the teaching staff of the school, ordered the class to stop the lesson and gave the visitor the privilege of leisurely examining the personal beauty of the girl students of the graduating class of a certain course.
In recent days, more and more Facebook users started seeing a notification about how the social network uses its facial recognition technology. When Facebook first implemented the tech in 2013, it limited its use to suggesting tags in photos. In December, though, the company announced that it would expand face recognition's scope to notify you when someone added a photo you were in, whether it was tagged or not. If that sounds like something you'd rather Facebook not do, it's easy enough to stop.
Today is the day Samsung will be unveiling its highly anticipated Galaxy S9, the company's latest flagship smartphone. As we discussed earlier this week, you can expect an improved, smarter camera on the handset, as well as an answer to Apple's Animojis, animated emojis that will use face recognition technology to make your phone more interactive when you message friends or family. We'll learn all about the Galaxy S9 in the next few hours, so stay tuned to this post to keep up with the action as it happens. The event kicks off at 12PM ET/6PM Barcelona time.