Results


Facial recognition helps mom and dad see kids' camp photos, raises privacy concerns for some

USA Today

Photos from summer camp are posted to the camp's website so parents can view them. Venture capital-backed Waldo Photos has been selling a service to identify specific children in the flood of photos provided daily to parents by many sleep-away camps. Camps working with the Austin, Texas-based company give parents a private code to sign up. When the camp uploads photos taken during activities to its website, Waldo's facial recognition software scans them for matches against the parent-provided headshots. Once it finds a match, the Waldo system (as in "Where's Waldo?") automatically texts the photo to the child's parents.
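Waldo's internal pipeline isn't described beyond this, but the standard approach to the task is embedding-based matching: reduce every detected face to a numeric vector and declare a match when a face in an uploaded photo lands close enough to an enrolled headshot. A minimal sketch in Python, with made-up low-dimensional vectors standing in for the output of a real face-embedding model (the embedding step itself is assumed, not shown):

```python
# Hypothetical sketch of embedding-based face matching; Waldo's actual
# pipeline is not public. Real systems use 128+ dimensional embeddings
# produced by a trained face-recognition model.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_photo(photo_faces, enrolled, threshold=0.8):
    """Return IDs of enrolled children whose headshot embedding is close
    enough to any face embedding detected in the uploaded photo."""
    return [child_id for child_id, headshot in enrolled.items()
            if any(cosine_similarity(face, headshot) >= threshold
                   for face in photo_faces)]

# Made-up 4-dimensional embeddings, for illustration only.
enrolled = {"child_17": np.array([0.9, 0.1, 0.3, 0.2])}
photo_faces = [np.array([0.88, 0.12, 0.31, 0.19]),  # close to child_17
               np.array([0.10, 0.90, 0.20, 0.70])]  # someone else
print(match_photo(photo_faces, enrolled))  # ['child_17']
```

The threshold is the key tuning knob: set it too low and parents get strangers' children texted to them, too high and real matches are missed.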


Finding the Vulnerable with Biometrics, Artificial Intelligence: Atlanta's Trust Stamp to aid in locating those lost to human trafficking - Swanson Reed - Specialist R&D Tax Advisors

#artificialintelligence

Artificial intelligence may put an end to a long-running industry: human trafficking. The average age at which a minor enters the sex trade in the U.S. is 12 to 14 years old, and many of the victims are runaway girls who were sexually abused. Thankfully, Attorneys General in the U.S. and Mexico are planning to implement a new system that will help locate victims of human trafficking. Trust Stamp, an Atlanta-based startup, will be providing the 'meat and potatoes' of the life-saving technology. According to the company website, "[Trust Stamp] creates proprietary artificial intelligence solutions; researching and leveraging facial biometric science and wide-scale data mining to deliver insightful identity & trust predictions while identifying and defending against fraudulent identity attacks."


Orlando ends Amazon facial recognition program over privacy concerns

Daily Mail

Orlando, Florida, has stopped testing Amazon's facial recognition program after rights groups raised concerns that the service could be used in ways that violate civil liberties. The city ended a pilot program last week after its contract with Amazon.com Inc to use the Rekognition service expired. 'Partnering with innovative companies to test new technology - while also ensuring we uphold privacy laws and in no way violate the rights of others - is critical to us as we work to further keep our community safe,' the city and the Orlando Police Department said in a joint statement Monday. Orlando was one of several U.S. jurisdictions to which Amazon has pitched the service since unveiling it in late 2016 as a way to detect offensive content and secure public safety.


Understanding Self-Narration of Personally Experienced Racism on Reddit

AAAI Conferences

We identify and classify users’ self-narration of racial discrimination and corresponding community support in social media. We developed natural language models first to distinguish self-narration of racial discrimination in Reddit threads, and then to identify which types of support are provided and valued in subsequent replies. Our classifiers can detect the self-narration of personally experienced racism in online textual accounts with 83% accuracy and can recognize four types of supportive actions in replies with up to 88% accuracy. Descriptively, our models identify types of racism experienced and the racist concepts (e.g., sexism, appearance or accent related) most experienced by people of different races. Finally, we show that commiseration is the most valued form of social support.
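The abstract doesn't spell out the model architecture, so as an illustration only, here is the kind of baseline such a text-classification task is often benchmarked against: TF-IDF features feeding a logistic-regression classifier in scikit-learn. The posts and labels below are invented stand-ins, not the paper's data:

```python
# Generic text-classification baseline (TF-IDF + logistic regression),
# not the paper's actual model. Training examples here are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labels: 1 = self-narration of experienced racism, 0 = other.
posts = [
    "A stranger shouted a slur at me on the train today",
    "My manager keeps mocking my accent in meetings",
    "What is the best pizza place downtown",
    "Looking for recommendations for a weekend hiking trail",
]
labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(posts, labels)

print(clf.predict(["Someone made fun of my accent at work today"]))
```

A real classifier reaching the reported 83% accuracy would need thousands of annotated threads and careful feature and model selection; the sketch only shows the shape of the task.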


Google won't develop AI weapons, announces new ethical strategy - Internet of Business

#artificialintelligence

Google has unveiled a set of principles for ethical AI development and deployment, and announced that it will not allow its AI software to be used in weapons or for "unreasonable surveillance". In a detailed blog post, CEO Sundar Pichai said that Google would not develop technologies that cause, or are likely to cause, harm. "Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints," he explained. Google will not allow its technologies to be used in weapons or in "other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people", he said. Also on the no-go list are "technologies that gather or use information for surveillance, violating internationally accepted norms", and those "whose purpose contravenes widely accepted principles of international law and human rights".


The Importance of Decoding Unconscious Bias in AI - Big Cloud Recruitment

#artificialintelligence

Despite its widespread adoption, Artificial Intelligence still has a long way to go in terms of diversity and inclusion. It's a subject close to our hearts as a company and, quite frankly, something that should be celebrated and shouted about given all the doom and gloom we're so often bombarded with in today's media. From healthcare and sustainable cities to climate change and industry, investment in AI is making an impact in many areas. Applications of machine learning and deep learning help shape the trajectories of our daily lives, so much so that we are barely even aware of it. However, all of this do-gooding aside, one of the biggest obstacles in AI programming is the inherent bias that exists within it.


If you jaywalk in China, facial recognition means you'll walk away with a fine

#artificialintelligence

Residents of Shenzhen don't dare jaywalk. Since April 2017, this city in China's Guangdong province has deployed a rather intense technique to deter jaywalking. Anyone who crosses against the light will find their face, name, and part of their government ID number displayed on a large LED screen above the intersection, thanks to facial recognition devices all over the city. If that feels invasive, you don't even know the half of it. Now, Motherboard reports that a Chinese artificial intelligence company is partnering with mobile carriers to tie the system to offenders' phones, so that jaywalkers receive a text message with a fine as soon as they are caught.


You weren't supposed to actually implement it, Google

#artificialintelligence

Last month, I wrote a blog post warning about how, if you follow popular trends in NLP, you can easily and accidentally make a classifier that is pretty racist. To demonstrate this, I included the very simple code as a "cautionary tutorial".
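The post's code isn't reproduced here, but the failure mode it demonstrates fits in a few lines: train a sentiment classifier on labeled words using their pretrained embeddings, score a sentence by averaging its word vectors, and any bias baked into the embedding geometry passes straight through into the sentence scores. A toy reconstruction, with hand-made two-dimensional vectors standing in for real pretrained embeddings:

```python
# Toy reconstruction of the mechanism, not the post's actual code.
# The tiny hand-made vectors below stand in for real pretrained
# embeddings, where some names end up closer to negative words
# purely as an artifact of the web text they were trained on.
import numpy as np
from sklearn.linear_model import LogisticRegression

emb = {
    "great":    np.array([ 1.0,  0.1]),
    "happy":    np.array([ 0.9,  0.2]),
    "terrible": np.array([-1.0, -0.1]),
    "awful":    np.array([-0.9, -0.2]),
    "alice":    np.array([ 0.6,  0.1]),   # skewed positive in this toy space
    "darnell":  np.array([-0.5,  0.0]),   # skewed negative in this toy space
    "i":        np.array([ 0.0,  0.2]),
    "met":      np.array([ 0.0,  0.3]),
}

# Train sentiment on labeled words: the shortcut the post cautions against.
X = np.stack([emb[w] for w in ["great", "happy", "terrible", "awful"]])
clf = LogisticRegression().fit(X, [1, 1, 0, 0])

def sentence_score(sentence: str) -> float:
    """Average the word vectors, then score with the word-level classifier."""
    vecs = [emb[w] for w in sentence.lower().split() if w in emb]
    return float(clf.predict_proba([np.mean(vecs, axis=0)])[0, 1])

# Same sentence, different name, different "sentiment": that's the bias.
print(sentence_score("i met alice"))    # noticeably higher
print(sentence_score("i met darnell"))  # noticeably lower
```

Nothing in the classifier refers to race or names; the skew comes entirely from where the embedding places each word, which is exactly why this pattern is so easy to ship by accident.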


Apple iPhone X's FaceID Technology: What It Could Mean For Civil Liberties

International Business Times

Apple's new facial recognition software for unlocking its new iPhone X has raised questions about privacy and the technology's susceptibility to hacking attacks. The iPhone X is set to go on sale on Nov. 3.


New iPhone brings face recognition -- and fears -- to the masses

The Japan Times

WASHINGTON – Apple will let you unlock the iPhone X with your face -- a move likely to bring facial recognition to the masses, along with concerns over how the technology may be used for nefarious purposes. Apple's newest device, set to go on sale on Friday, is designed to be unlocked with a facial scan with a number of privacy safeguards -- as the data will only be stored on the phone and not in any databases. Unlocking one's phone with a face scan may offer added convenience and security for iPhone users, according to Apple, which claims its "neural engine" for FaceID cannot be tricked by a photo or hacker. While other devices have offered facial recognition, Apple is the first to pack the technology allowing for a three-dimensional scan into a hand-held phone. But despite Apple's safeguards, privacy activists fear the widespread use of facial recognition would "normalize" the technology and open the door to broader use by law enforcement, marketers or others of a largely unregulated tool.