Let's talk about the good things that happened this year. Yes, 2020 has been a relentless nightmare that's unspooled at rapidly shifting speeds -- and it's showing no signs of magically abating as the clock strikes 12 this New Year's Eve. But you, who by some combination of luck or fate are still thinking and breathing, know this already. What you may be less aware of, however, is that despite the undeniable pain and tragedy 2020 has wrought, there are developments worth celebrating. While each passing year seemingly brings with it news of further digital indignities thrust upon your life, 2020 witnessed genuine progress when it comes to protecting your privacy.
As we make tremendous advances in machine learning and artificial intelligence technosciences, there is a renewed understanding in the AI community that we must ensure that human beings are at the center of our deliberations so that we do not end up in technology-induced dystopias. As Green strongly argues in his book The Smart Enough City, the incorporation of technology into city environs does not automatically translate into prosperity, wellbeing, urban livability, or social justice. There is a great need to deliberate on the future of cities worth living in and worth designing. Philosophical and ethical questions are involved, along with various challenges relating to the security, safety, and interpretability of the AI algorithms that will form the technological bedrock of future cities. Several research institutes on human-centered AI have been established at top international universities. Globally, there are calls for technology to be made more humane and human-compatible; Stuart Russell, for example, makes this argument in his book Human Compatible. The Center for Humane Technology advocates for regulators and technology companies to avoid business models and product features that contribute to social problems such as extremism, polarization, misinformation, and Internet addiction. In this paper, we analyze and explore key challenges to a successful deployment of AI and ML in human-centric applications, including security, robustness, interpretability, and ethical challenges, with a particular emphasis on the convergence of these challenges. We provide a detailed review of the existing literature on these key challenges and analyze how one of these challenges may lead to others or help in solving them. The paper also discusses the current limitations, pitfalls, and future directions of research in these domains, and how this work can fill current gaps and lead to better solutions.
Facebook allegedly violated Illinois state law by using consumers' facial features to improve its photo-tagging software. Nearly one and a half million Illinois residents have filed claims for a share of a $650 million privacy settlement offered by Facebook. According to NBC Chicago, the law firm responsible for the social media lawsuit said that 1.42 million Illinois residents have already filed claims, and eligible claimants could receive awards ranging between $200 and $400. The lawsuit, says NBC, alleged that Facebook broke Illinois' "strict biometric privacy law."
Clearview says its software lets authorities plug in photos of people suspected of involvement in crimes and search for other images of their faces from the internet. The company has compiled a massive database of photos by scraping websites, including social-media platforms. Some of those platforms have accused Clearview's scraping efforts of violating their terms of service; Facebook Inc., Twitter Inc. and Microsoft Corp.'s LinkedIn are among those that have sent the startup cease-and-desist orders. Civil libertarians have raised concerns about law enforcement's use of facial recognition broadly, and about Clearview specifically.
Previously, facial recognition technology was a thing of fiction, reserved for the movies. However, much like other biometric solutions that have seen improvement and progress, facial recognition technology steadily became a reality. Over the past decade, it has not only been developed and refined; it is being deployed around the world as well, though not as rapidly as other biometric technologies such as fingerprint recognition, iris recognition, hand geometry, and DNA. Before we discuss the history and gradual evolution of facial recognition technology, it is worth understanding how this technology works and why it was needed in the first place.
Infer Genetic Disease From Your Face - DeepGestalt can accurately identify some rare genetic disorders using a photograph of a patient's face. This could lead to payers and employers potentially analyzing facial images and discriminating against individuals who have pre-existing conditions or who develop medical complications.
"Before we launched skin tone ranges in 2018, nearly sixty percent of the top 100 search terms for skin-related searches involved a tone, such as dark, pale, and olive tones, showing people wanted a way to customize their searching," wrote Nadia Fawaz, Pinterest's technical lead for fairness in AI, in an email to Fast Company. Now, the company has unveiled a new version of its technology designed to more accurately detect skin tones across a broader class of images, quadrupling the number of beauty and fashion Pins where its algorithms can spot a skin tone. Pinterest's skin-tone AI is used in a range of features that let people use the platform to find custom looks for themselves, including a recently unveiled augmented-reality Try On tool that lets people use their smartphone's camera to see how various lip colors look on them. That's good for Pinterest users who are using the platform to shop--and, of course, for Pinterest advertisers participating in the Try On program. The company claims millions of users come each month seeking beauty ideas.
The social media company had in July raised its settlement offer by $100 million to $650 million in relation to the lawsuit, in which Illinois users accused it of violating the U.S. state's Biometric Information Privacy Act. The revised settlement agreement resolved the court's concerns, leading to the preliminary approval of the class action settlement, Judge James Donato wrote in an order filed in the U.S. District Court for the Northern District of California. "Preliminary approval of the amended stipulation of class action settlement, Dkt. No. 468, is granted, and a final approval hearing is set for January 7, 2021," the judge said in the eight-page order. Facebook allegedly violated the state's law through its "Tag Suggestions" feature, which used facial recognition to identify users' Facebook friends in previously uploaded photos, according to the lawsuit, which began in 2015.
Do we give any thought to the privacy of our profile photos, publicly available across social media? Have we ever worried about privacy when sharing innumerable photos of friends and family members on Facebook or Instagram? And why should we care about the privacy of photos in the first place? We should because our publicly available photos can be used for unauthorized facial recognition, which can invade our private lives. There is little doubt that facial recognition poses a serious threat to privacy.