Results


Microsoft improves facial recognition software following backlash

Daily Mail

Microsoft has updated its facial recognition technology in an attempt to make it less 'racist'. It follows a study published in March that criticised the technology for recognising the gender of people with lighter skin tones more accurately than that of people with darker skin. The system was found to perform best on males with lighter skin and worst on females with darker skin. The problem largely comes down to the data used to train the AI system not containing enough images of people with darker skin tones. Experts from the computing firm say their tweaks have significantly reduced these errors, by up to 20 times for people with darker faces.


Facial recognition AI built into police body cameras could lead to FALSE ARRESTS, experts warn

Daily Mail

Body cameras worn by police in the US could soon have in-built facial recognition software, sparking 'serious concerns' among civil liberties groups. The controversial technology, branded 'categorically unethical', would automatically scan and identify every single person law enforcement interacts with. It is intended to help officers track down suspects more effectively, but experts are worried it could lead to false arrests and suffer from inbuilt racial and other biases. If developed, the equipment could become a regular sight on the streets of cities across the world. The manufacturer behind the move has now brought together a panel of experts to discuss the implications of the 'Minority Report'-style technology.


Facial recognition may be coming to a police body camera near you

Washington Post

The country's biggest seller of police body cameras on Thursday convened a corporate board devoted to the ethics and expansion of artificial intelligence, a major new step toward offering controversial facial-recognition technology to police forces nationwide. Axon, the maker of Taser electroshock weapons and the wearable body cameras now used by most major American city police departments, has voiced interest in pursuing face recognition for its body-worn cameras. The technology could allow officers to scan and recognize the faces of potentially everyone they see while on patrol. A growing number of surveillance firms and tech start-ups are racing to integrate face recognition and other AI capabilities into real-time video. The board's first meeting will likely presage an imminent showdown over the rapidly developing technology.


MIT Researcher: AI Has a Race Problem, and We Need to Fix It

#artificialintelligence

The next generation of AI is poisoned with bias against dark skin, Joy Buolamwini says. Artificial intelligence is increasingly affecting our lives in ways most of us haven't even thought about. Even if we don't have emotional androids plotting revenge on humankind (yet), we're surrounded more and more by computers trained to look us over and make life-changing decisions about us. Some of the brightest minds in technology, including a hive of them clustered around Boston, are tinkering with machines designed to decide what kinds of ads we see, whether we get flagged by the police, whether we get a job, or even how long we spend behind bars. But they have a very big problem: many of these systems don't work properly, or at all, for people with dark skin.


Generic Machine Learning Inference on Heterogeneous Treatment Effects in Randomized Experiments

arXiv.org Machine Learning

We propose strategies to estimate and make inference on key features of heterogeneous effects in randomized experiments. These key features include best linear predictors of the effects using machine learning proxies, average effects sorted by impact groups, and average characteristics of the most and least impacted units. The approach is valid in high-dimensional settings, where the effects are proxied by machine learning methods. We post-process these proxies into estimates of the key features. Our approach is generic; it can be used in conjunction with penalized methods, deep and shallow neural networks, canonical and new random forests, boosted trees, and ensemble methods. It is also agnostic and does not make unrealistic or hard-to-check assumptions; in particular, we do not require conditions for consistency of the ML methods. Estimation and inference rely on repeated data splitting to avoid overfitting and achieve validity. For inference, we take medians of p-values and medians of confidence intervals, resulting from many different data splits, and then adjust their nominal level to guarantee uniform validity. This variational inference method is shown to be uniformly valid and quantifies the uncertainty coming from both parameter estimation and data splitting. The inference method could be of substantial independent interest in many machine learning applications. An empirical application to the impact of micro-credit on economic development illustrates the use of the approach in randomized experiments. An additional application to the impact of gender discrimination on wages illustrates the potential use of the approach in observational studies, where machine learning methods can be used to condition flexibly on very high-dimensional controls.
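
The split-and-aggregate inference step the abstract describes is easier to see in code. Below is a minimal Python sketch of that idea only; it substitutes a simple difference-in-means estimator on simulated data for the paper's ML-proxy post-processing, and the function name, parameters, and data are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy import stats

def median_aggregated_inference(y, d, n_splits=100, alpha=0.05, seed=0):
    """Sketch of inference via repeated data splitting with median
    aggregation, in the spirit of the abstract above.

    y: outcome array; d: binary treatment indicator. For each random
    split we estimate a difference in means on one half of the data
    (a stand-in for the paper's ML-proxy post-processing, which would
    use the other half to train the proxies). Across splits we take
    medians of p-values and of CI endpoints, then adjust the nominal
    level: the median p-value is doubled and the per-split CIs are
    built at level 1 - alpha/2, so the aggregated results target an
    overall level of alpha despite the random splitting.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    z = stats.norm.ppf(1 - alpha / 4)  # two-sided CI at level 1 - alpha/2
    pvals, lowers, uppers = [], [], []
    for _ in range(n_splits):
        half = rng.permutation(n)[n // 2:]  # estimation half of this split
        y_h, d_h = y[half], d[half]
        y1, y0 = y_h[d_h == 1], y_h[d_h == 0]
        est = y1.mean() - y0.mean()
        se = np.sqrt(y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0))
        pvals.append(2 * stats.norm.sf(abs(est / se)))
        lowers.append(est - z * se)
        uppers.append(est + z * se)
    p_adj = min(1.0, 2 * np.median(pvals))  # doubled median p-value
    return p_adj, (np.median(lowers), np.median(uppers))

# Toy usage: a simulated experiment with a true average effect of 0.5.
rng = np.random.default_rng(1)
d = rng.integers(0, 2, size=1000)
y = 0.5 * d + rng.normal(size=1000)
p, ci = median_aggregated_inference(y, d)
print(f"adjusted p-value: {p:.4f}, median CI: ({ci[0]:.3f}, {ci[1]:.3f})")
```

The level adjustment is the price of aggregating over random splits: a median of valid p-values is not itself a valid p-value under arbitrary dependence, but twice the median is, which is what lets the procedure report a single uniformly valid result instead of depending on one "lucky" split.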


AI could help government agencies find the optimum places for refugees to relocate

#artificialintelligence

In 2016, an estimated 65.6 million people across the globe were forced from their homes by everything from war to human rights violations.


New iPhone brings face recognition (and fears) to masses

Daily Mail

Apple will let you unlock the iPhone X with your face - a move likely to bring facial recognition to the masses. But along with the roll-out of the technology come concerns over how it could be used. Despite Apple's safeguards, privacy activists fear the widespread use of facial recognition would 'normalise' the technology. This could open the door to broader use by law enforcement, marketers or others of a largely unregulated tool, creating a 'surveillance technology that is abused', experts have warned.


Apple iPhone X's FaceID Technology: What It Could Mean For Civil Liberties

International Business Times

Apple's new facial recognition software, used to unlock its new iPhone X, has raised questions about privacy and the technology's susceptibility to hacking attacks. The iPhone X is set to go on sale on Nov. 3.


New iPhone brings face recognition, and fears, to the masses

The Japan Times

WASHINGTON – Apple will let you unlock the iPhone X with your face, a move likely to bring facial recognition to the masses, along with concerns over how the technology may be used for nefarious purposes. Apple's newest device, set to go on sale on Friday, is designed to be unlocked with a facial scan, with a number of privacy safeguards: the data will be stored only on the phone and not in any databases. Unlocking one's phone with a face scan may offer added convenience and security for iPhone users, according to Apple, which claims its "neural engine" for FaceID cannot be tricked by a photo or hacker. While other devices have offered facial recognition, Apple is the first to pack the technology allowing for a three-dimensional scan into a hand-held phone. But despite Apple's safeguards, privacy activists fear the widespread use of facial recognition would "normalize" the technology and open the door to broader use by law enforcement, marketers or others of a largely unregulated tool.


AI Research Is in Desperate Need of an Ethical Watchdog

#artificialintelligence

About a week ago, Stanford University researchers posted online a study on the latest dystopian AI: they had made a machine learning algorithm that essentially works as gaydar. After training the algorithm with tens of thousands of photographs from a dating site, it could, for example, guess whether a white man in a photograph was gay with 81 percent accuracy. The researchers say they wanted to protect gay people. "[Our] findings expose a threat to the privacy and safety of gay men and women," wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.