Microsoft starts testing 'Update Stack Packages' to help streamline Windows updating process


Microsoft has been expanding the number of different ways it can update Windows components for a while now. Today, October 14, officials announced they're beginning to test yet another vehicle for this, which they've christened "Update Stack Packages." Microsoft introduced Update Stack Packages as part of the announcement of a new Windows 11 build for the Dev Channel. That build, Windows 11 build 22478, includes a number of fixes plus new emoji built using the Fluent design language. Today's test build also allows users to log into their PCs using facial recognition via Windows Hello on an external monitor if that monitor has a camera attached that supports it.

Clearview scraped '10bn' selfies for facial recognition


In brief Clearview AI says it has scraped more than 10 billion photographs from people's public social media accounts for its controversial facial-recognition tool. The startup's CEO Hoan Ton-That also told Wired his engineers were working on new features to make blurry images sharper and to make it possible to recognize people even if they were wearing masks. Its software, often peddled to law enforcement agencies, provides face matching – you show it a still from CCTV, it finds the online profiles of that person – and the larger its database, the more faces it can identify. The latest steps show Clearview has ignored pressure from Facebook, Google, YouTube, and Twitter, which urged the upstart to stop downloading people's selfies last year. Clearview also said it only operates in the US.

The People Powering AI Decisions


While AI can perform incredibly well with tasks that have clear parameters, such as a game of chess, humans are still better at making the tough calls and dealing with unpredictable situations. Gray shares the example of Uber wanting to verify its drivers' identities with a current selfie matched against a photo on file. A machine trained in facial recognition can match faces fairly reliably, but it can't compare to a human eye when it comes to added variables -- a mask or a new beard, for instance. Humans, therefore, remain at the core of things like removing objectionable content from Facebook or interpreting special instructions in your GrubHub order. Gray suggests there are millions of people doing this "ghost work," but we don't actually have firm numbers.

Black women, AI, and overcoming historical patterns of abuse


The Transform Technology Summits start October 13th with Low-Code/No Code: Enabling Enterprise Agility. After a 2019 research paper demonstrated that commercially available facial analysis tools fail to work for women with dark skin, AWS executives went on the attack. Instead of offering up more equitable performance results or allowing the federal government to assess their algorithm like other companies with facial recognition tech have done, AWS executives attempted to discredit study coauthors Joy Buolamwini and Deb Raji in multiple blog posts. More than 70 respected AI researchers rebuked this attack, defended the study, and called on Amazon to stop selling the technology to police, a position the company temporarily adopted last year after the death of George Floyd. But according to the Abuse and Misogynoir Playbook, published earlier this year by a trio of MIT researchers, Amazon's attempt to smear two Black women AI researchers and discredit their work follows a set of tactics that have been used against Black women for centuries.

AI-at-Scale Hinges on Gaining a 'Social License'


In January 2020, an unknown American facial recognition software company, Clearview AI, was thrust into the limelight. It had quietly flown under the radar until The New York Times reported that businesses, law enforcement agencies, universities, and individuals had been purchasing its sophisticated facial recognition software, whose algorithm could match human faces to a database of over 3 billion images the company had collected from the internet. The article renewed the global debate about the use of AI-based facial recognition technology by governments and law enforcement agencies. Many people called for a ban on the use of the Clearview AI technology because the startup had created its database by mining social media websites and the internet for photographs but hadn't obtained permission to index individuals' faces. Twitter almost immediately sent the company a cease-and-desist letter, and YouTube and Facebook followed suit.

AI Tool Tracks the Time Politicians Spend on Their Phones


If you have been frustrated by the lack of interest your local representative shows during their work hours, here's a way to flag it. Belgium-based developer Dries Depoorter has created an artificial intelligence (AI) tool that calculates how much time politicians are distracted by their phones during meetings. Called the Flemish Scrollers, the tool is written in Python and uses machine learning and face recognition technologies. The law of the land requires that all meetings of the Flemish government be in the public domain, and the government broadcasts them live on its YouTube channel.
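The article doesn't describe Depoorter's actual pipeline beyond "Python, machine learning, and face recognition," but the core bookkeeping step — turning per-frame face and phone detections into a per-politician distraction tally — can be sketched as below. Everything here is an assumption for illustration: the function name `tally_distraction`, the detection format (face and phone center coordinates per frame), and the proximity heuristic are all hypothetical, with the real ML detectors swapped out for precomputed inputs.

```python
def tally_distraction(frames, fps=1.0, max_dist=150.0):
    """Tally per-politician 'distracted' seconds from per-frame detections.

    frames: list of dicts like
        {"faces": {"name": (x, y)}, "phones": [(x, y)]}
    where coordinates are detection centers in pixels (hypothetical format).
    A politician counts as distracted in a frame when any detected phone
    center lies within `max_dist` pixels of their face center.
    """
    totals = {}
    for frame in frames:
        for name, (fx, fy) in frame["faces"].items():
            # Euclidean distance from this face to each phone in the frame.
            near = any(((fx - px) ** 2 + (fy - py) ** 2) ** 0.5 <= max_dist
                       for px, py in frame["phones"])
            if near:
                # Each frame represents 1/fps seconds of stream time.
                totals[name] = totals.get(name, 0.0) + 1.0 / fps
    return totals


# Example with made-up coordinates: politician "A" sits near a detected
# phone in both one-second frames; "B" never does.
frames = [
    {"faces": {"A": (100, 100), "B": (500, 100)}, "phones": [(120, 110)]},
    {"faces": {"A": (100, 100), "B": (500, 100)}, "phones": [(110, 100)]},
]
print(tally_distraction(frames))  # → {'A': 2.0}
```

In a real system the `frames` input would come from running a face-recognition model (to name politicians) and a phone object detector on each sampled video frame; the proximity threshold is a crude stand-in for whatever association logic the actual tool uses.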

Clearview AI Has New Tools to Identify You in Photos


Clearview AI has stoked controversy by scraping the web for photos and applying facial recognition to give police and others an unprecedented ability to peer into our lives. Now the company's CEO wants to use artificial intelligence to make Clearview's surveillance tool even more powerful. It may make it more dangerous and error-prone as well. Clearview has collected billions of photos from across websites that include Facebook, Instagram, and Twitter and uses AI to identify a particular person in images. Police and government agents have used the company's face database to help identify suspects in photos by tying them to online profiles.

Gradient Update #9: Bias Bounties and Hierarchical Architectures for Computer Vision


Welcome to the ninth update from the Gradient! If you were referred by a friend, subscribe and follow us on Twitter! This edition's news story is "Sharing learnings from the first algorithmic bias bounty challenge." Summary: Twitter's algorithmic bias bounty challenge, the first of its kind, recently concluded. While users had previously found that the algorithm had a racial bias, the bounty uncovered a number of other biases and potential harms.

Andrea Rios Escudel on LinkedIn: #innovation #artificialintelligence


If we want decision-makers to understand the ethical issues of facial recognition, we need to turn the tables :) 'The Flemish Scrollers' is software that automatically tags distracted Belgian politicians when they use their phones during the daily live streams. Every meeting of the Flemish government in Belgium is live-streamed on a YouTube channel. When a livestream starts, the software searches for phones and tries to identify distracted politicians. This is done with the help of AI and face recognition. For ML/AI/data science learning materials, please check my previous posts.

How Companies Are Using Artificial Intelligence


Artificial Intelligence (AI) is boosting business efficiency and productivity by automating procedures and operations that previously required human intervention. AI can also make sense of data at a scale no human ever could, a capability with the potential to deliver significant business benefits. Every function, business, and sector may benefit from AI, with applications ranging from general-purpose to industry-specific.