In April 2021, the U.S. Federal Trade Commission -- in its "Aiming for truth, fairness, and equity in your company's use of AI" guidance -- issued a clear warning to tech industry players employing artificial intelligence: "Hold yourself accountable, or be ready for the FTC to do it for you." Likewise, the European Commission has proposed new AI rules to protect citizens from AI-based discrimination. These warnings, and impending regulations, are warranted. Machine learning (ML), a common type of AI, mimics patterns, attitudes and behaviors that exist in our imperfect world, and as a result, it often codifies inherent biases and systemic racism. Unconscious biases are particularly difficult to overcome because, by definition, they exist without human awareness.
In 2017, the National Bureau of Economic Research conducted a large study of age discrimination in hiring that confirmed the prevalence of gendered ageism. "Based on evidence from over 40,000 job applications, we find robust evidence of age discrimination in hiring against older women, especially those near retirement age." The callback rate for older women was significantly lower than for their younger female counterparts, despite the fact that the only difference between the resumes was the applicant's age. The evaluation of resumes, like many other business processes today, is increasingly managed by technology -- specifically artificial intelligence (AI), the simulation of human intelligence processes by machines, especially computer systems.
A. Lawfulness: AI applications will be developed and used in accordance with national and international law, including international humanitarian law and human rights law, as applicable.
B. Responsibility and Accountability: AI applications will be developed and used with appropriate levels of judgment and care; clear human responsibility shall apply in order to ensure accountability.
C. Explainability and Traceability: AI applications will be appropriately understandable and transparent, including through the use of review methodologies, sources, and procedures. This includes verification, assessment and validation mechanisms at a NATO and/or national level.
D. Reliability: AI applications will have explicit, well-defined use cases.
Home Depot and Best Buy have pulled the products of Chinese surveillance tech makers linked to human rights abuses from their shelves, according to TechCrunch. Both US retail giants have stopped selling products from Lorex and Ezviz, while Lowe's no longer carries Lorex products. Lorex is a subsidiary of Dahua Technology, whereas Ezviz is a surveillance tech brand owned by Hikvision. As TechCrunch explains, the US government added Dahua and Hikvision to its economic blacklist in 2019 for their role in the mass surveillance of Uighur Muslims in the province of Xinjiang. Earlier this year, the Los Angeles Times published a report detailing how facial recognition software developed by Lorex owner Dahua was being shopped to law enforcement as a way to identify Uighurs.
We've all been in situations where we had to make tough ethical decisions. Why not dodge that pesky responsibility by outsourcing the choice to a machine learning algorithm? That's the idea behind Ask Delphi, a machine-learning model from the Allen Institute for AI. You type in a situation (like "donating to charity") or a question ("is it okay to cheat on my spouse?"), and the model responds with a moral judgment.
Nearly two-thirds of Americans want the U.S. to regulate the development and use of artificial intelligence in the next year or sooner -- with half saying that regulation should have begun yesterday, according to a Morning Consult poll. Another 13% say that regulation should start in the next year. "You can thread this together," Austin Carson, founder of new nonprofit group SeedAI and former government relations lead for Nvidia, said in an email. "Half or more Americans want to address all of these things, split pretty evenly along ideological lines." The poll, which SeedAI commissioned, backs up earlier findings that while U.S. adults support investment in the development of AI, they want clear rules around that development.
Every time a dramatic, unforeseen political event happens, there follows a left-field fixation that some out-of-control technology created it. Whenever this fear about big tech comes around, we are told that something new, even more toxic, has infiltrated our public discourse, triggering hatred towards politicians and public figures, conspiracy theories about Covid, and even major political events like Brexit. The concern over anonymity online becomes a particular worry -- as if ending it will somehow, like throwing a blanket at a raging house fire, subdue our fevered state. You may remember that during the summer's onslaught of racist abuse towards black players on the England football team, instead of reckoning with the fact that racism still haunts this country, we busied ourselves with bluster about how "cowards" online would be silenced if we only just demanded they identify themselves. We resort to this explanation -- that shadowy social media somehow stimulate our worst impulses -- despite there being little evidence that most abuse comes from unidentifiable sources.