Venture capital-backed Waldo Photos is selling a service that identifies specific children in the flood of photos many sleep-away camps post daily for parents to view. Camps working with the Austin, Texas-based company give parents a private code to sign up. When the camp uploads photos taken during activities to its website, Waldo's facial recognition software scans them for matches against parent-provided headshots. Once it finds a match, the Waldo system (as in "Where's Waldo?") automatically texts the photo to the child's parents.
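In outline, the pipeline described above (enroll a headshot, encode it, then compare each newly uploaded photo against that encoding) is straightforward to sketch. Below is a minimal illustration using the open-source face_recognition library; it is not Waldo's actual implementation, and the file paths and the notify_parent helper are hypothetical stand-ins for the camp's uploads and the SMS step.

```python
# A minimal sketch of the matching pipeline described above, using the
# open-source face_recognition library. Illustrative only: the paths and
# notify_parent() are hypothetical, not Waldo Photos' actual system.
import face_recognition

def notify_parent(phone_number: str, photo_path: str) -> None:
    # Hypothetical stand-in for the SMS step (e.g., an SMS gateway call).
    print(f"Texting {phone_number}: your child appears in {photo_path}")

# Encode the parent-provided headshot once at signup.
# (Assumes the headshot contains exactly one clearly visible face.)
headshot = face_recognition.load_image_file("headshots/child_headshot.jpg")
known_encoding = face_recognition.face_encodings(headshot)[0]

# Scan each newly uploaded camp photo for a match.
for photo_path in ["uploads/campfire.jpg", "uploads/archery.jpg"]:
    photo = face_recognition.load_image_file(photo_path)
    for encoding in face_recognition.face_encodings(photo):
        # compare_faces returns True when two encodings fall within a
        # distance threshold (0.6 by default).
        if face_recognition.compare_faces([known_encoding], encoding)[0]:
            notify_parent("+1-555-0100", photo_path)
            break  # One notification per photo is enough.
```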
Artificial intelligence may help put an end to a long-running criminal trade: human trafficking. The average age at which a minor enters the sex trade in the U.S. is 12 to 14 years old, and many of the victims are runaway girls who were sexually abused. Thankfully, attorneys general in the U.S. and Mexico are planning to implement a new system that will help locate victims of human trafficking. Trust Stamp, an Atlanta-based startup, will be providing the 'meat and potatoes' of the life-saving technology. According to the company website, "[Trust Stamp] creates proprietary artificial intelligence solutions; researching and leveraging facial biometric science and wide-scale data mining to deliver insightful identity & trust predictions while identifying and defending against fraudulent identity attacks."
Orlando, Florida, has stopped testing Amazon's facial recognition program after rights groups raised concerns that the service could be used in ways that could violate civil liberties. Orlando ended the pilot program last week after its contract with Amazon.com Inc to use its Rekognition service expired. 'Partnering with innovative companies to test new technology - while also ensuring we uphold privacy laws and in no way violate the rights of others - is critical to us as we work to further keep our community safe,' the city and the Orlando Police Department said in a joint statement Monday. Orlando was one of several U.S. jurisdictions that Amazon has pitched its service to since unveiling it in late 2016 as a way to detect offensive content and secure public safety.
Google has unveiled a set of principles for ethical AI development and deployment, and announced that it will not allow its AI software to be used in weapons or for "unreasonable surveillance". In a detailed blog post, CEO Sundar Pichai said that Google would not develop technologies that cause, or are likely to cause, harm. "Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints," he explained. Google will not allow its technologies to be used in weapons or in "other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people", he said. Also on the no-go list are "technologies that gather or use information for surveillance, violating internationally accepted norms", and those "whose purpose contravenes widely accepted principles of international law and human rights".
Despite its widespread adoption, Artificial Intelligence still has a long way to go in terms of diversity and inclusion. It's a subject close to our hearts as a company, and, quite frankly, the field's progress is something that should be celebrated and shouted about given all the doom and gloom we're so often bombarded with in today's media. From healthcare and sustainable cities to climate change and industry, investment in AI is making an impact in many areas. Applications of machine learning and deep learning help shape the trajectories of our daily lives, so much so that we are barely even aware of it. However, all of this do-gooding aside, one of the biggest obstacles in AI development is the inherent bias that exists within it.
Residents of Shenzhen don't dare jaywalk. Since April 2017, this city in China's Guangdong province has deployed a rather intense technique to deter jaywalking: anyone who crosses against the light will find their face, name, and part of their government ID number displayed on a large LED screen above the intersection, thanks to facial recognition devices all over the city. If that feels invasive, you don't even know the half of it. Now, Motherboard reports that a Chinese artificial intelligence company is partnering with mobile carriers to tie the system to phone numbers, so that offenders receive a text message with a fine as soon as they are caught.
WASHINGTON – Apple will let you unlock the iPhone X with your face -- a move likely to bring facial recognition to the masses, along with concerns over how the technology may be used for nefarious purposes. Apple's newest device, set to go on sale on Friday, is designed to be unlocked with a facial scan, with a number of privacy safeguards: the data will be stored only on the phone, not in any databases. Unlocking one's phone with a face scan may offer added convenience and security for iPhone users, according to Apple, which claims its "neural engine" for Face ID cannot be tricked by a photo or hacker. While other devices have offered facial recognition, Apple is the first to pack the technology for a three-dimensional scan into a hand-held phone. But despite Apple's safeguards, privacy activists fear that widespread use of facial recognition will "normalize" the technology and open the door to broader use of a largely unregulated tool by law enforcement, marketers, and others.
About a week ago, Stanford University researchers posted online a study on the latest dystopian AI: a machine learning algorithm that essentially works as gaydar. After being trained on tens of thousands of photographs from a dating site, the algorithm could, for example, guess whether a white man in a photograph was gay with 81 percent accuracy. They wanted to protect gay people. "[Our] findings expose a threat to the privacy and safety of gay men and women," wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.