How AI can empower communities and strengthen democracy

#artificialintelligence

Each Fourth of July for the past five years I've written about AI with the potential to positively impact democratic societies. I return to this question with the hope of shining a light on technology that can strengthen communities, protect privacy and freedoms, or otherwise support the public good. This series is grounded in the principle that artificial intelligence is capable of not just value extraction, but individual and societal empowerment. While AI solutions often propagate bias, they can also be used to detect that bias. As Dr. Safiya Noble has pointed out, artificial intelligence is one of the critical human rights issues of our lifetimes.


Global Big Data Conference

#artificialintelligence

On Tuesday, a number of AI researchers, ethicists, data scientists, and social scientists released a blog post arguing that academic researchers should stop pursuing research that endeavors to predict the likelihood that an individual will commit a criminal act, based on variables like crime statistics and facial scans. The blog post was authored by the Coalition for Critical Technology, which argued that the use of such algorithms perpetuates a cycle of prejudice against minorities. Many studies of the efficacy of face recognition and predictive policing algorithms find that the algorithms tend to judge minorities more harshly, which the authors attribute to inequities in the criminal justice system. The justice system produces biased data, and algorithms trained on that data therefore propagate those biases, the coalition argues. It further contends that the very notion of "criminality" is often based on race, so research on these technologies assumes a neutrality in the algorithms that does not in fact exist.


The Impact of Artificial Intelligence on Human Rights

#artificialintelligence

Adopting AI can affect not just your workers but how you deal with privacy and discrimination issues. As humans become more reliant on machines to make processes more efficient and inform their decisions, the potential for a conflict between artificial intelligence and human rights has emerged. If left unchecked, artificial intelligence can create inequality and can even be used to actively deny human rights across the globe. However, if used optimally, AI can enhance human rights, increase shared prosperity, and create a better future for us all. It is ultimately up to businesses to carefully consider the opportunities new technologies provide and how they can best leverage these opportunities while being conscious of the impact on human rights.


Council Post: Building Ethical And Responsible AI Systems Does Not Start With Technology Teams

#artificialintelligence

The author is Chief Technology Officer at Integrity Management Services, Inc., where she leads cutting-edge AI solutions for clients. In his book Talking to Strangers, Malcolm Gladwell discusses an AI experiment that looked at 554,689 bail hearings conducted by New York City judges. As one online publication noted, "Of the more than 400,000 people released, over 40% either failed to appear at their subsequent trials or were arrested for another crime." However, decisions recommended by the machine learning algorithm on whom to detain or release would have resulted in 25% fewer crimes. This is an example of an AI system that is less biased than a human.


Yann LeCun Quits Twitter Amid Acrimonious Exchanges on AI Bias

#artificialintelligence

This is an updated version. Turing Award Winner and Facebook Chief AI Scientist Yann LeCun has announced his exit from popular social networking platform Twitter after getting involved in a long and often acrimonious dispute regarding racial biases in AI. Unlike most other artificial intelligence researchers, LeCun has often aired his political views on social media platforms, and has previously engaged in public feuds with colleagues such as Gary Marcus. This time however LeCun's penchant for debate saw him run afoul of what he termed "the linguistic codes of modern social justice." It all started on June 20 with a tweet regarding the new Duke University PULSE AI photo recreation model that had depixelated a low-resolution input image of Barack Obama into a photo of a white male.


Artificial Intelligence: The time for ethics is over

#artificialintelligence

Organising ethical debates has long been an efficient way for industry to delay and avoid hard regulation. Europe now needs strong, enforceable rights for its citizens, writes Green MEP Alexandra Geese. If the rules are too weak, there is too great a risk that our rights and freedoms will be undermined: this currently applies to all applications of artificial intelligence, which up to now have been based only on non-binding ethical principles and values. In this legislation, Europe has the chance to adopt a legal framework for AI with clear rules. We need strong instruments to protect our fundamental rights and democracy.


Montreal AI Ethics Institute suggests ways to counter bias in AI models

#artificialintelligence

The Montreal AI Ethics Institute, a nonprofit research organization dedicated to defining humanity's place in an algorithm-driven world, today published the inaugural edition of its State of AI Ethics report. The 128-page multidisciplinary paper, which covers a set of areas spanning agency and responsibility, security and risk, and jobs and labor, aims to bring attention to key developments in the field of AI this past quarter. The State of AI Ethics first addresses the problem of bias in ranking and recommendation algorithms, like those used by Amazon to match customers with products they're likely to purchase. The authors note that while there are efforts to apply the notion of diversity to these systems, they usually consider the problem from an algorithmic perspective and strip it of cultural and contextual social meanings. "Demographic parity and equalized odds are some examples of this approach that apply the notion of social choice to score the diversity of data," the report reads.
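To make the fairness metrics named in the report concrete, here is a minimal sketch of how demographic parity can be scored: it compares the rate of positive predictions across demographic groups. The data and function names are hypothetical illustrations, not taken from the report.

```python
# Demographic parity compares positive-prediction rates across groups.
# All data below is made up for illustration.

def positive_rate(preds):
    """Fraction of predictions that are positive (1)."""
    return sum(preds) / len(preds)

def demographic_parity_gap(preds_a, preds_b):
    """Absolute difference in positive-prediction rates between two groups.
    A gap of 0 means the groups receive positive predictions at equal rates."""
    return abs(positive_rate(preds_a) - positive_rate(preds_b))

# Toy recommendation outcomes (1 = item recommended) for two groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # positive rate 5/8 = 0.625
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # positive rate 2/8 = 0.25

gap = demographic_parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.3f}")  # 0.375
```

Equalized odds, the other metric mentioned, is stricter: it compares true-positive and false-positive rates across groups rather than raw prediction rates, so it requires ground-truth labels in addition to predictions.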


Artificial Intelligence Is Poised to Take More Than Unskilled Jobs

#artificialintelligence

Recently, Microsoft announced that it was terminating dozens of journalists and editorial workers at its Microsoft News and MSN organizations. Instead, the company said, it will rely on artificial intelligence to curate and edit news and content that is presented on MSN.com, inside Microsoft's Edge browser, and in the company's Microsoft News apps. Explaining the decision, Microsoft issued a statement to the Verge. The statement reads: "Like all companies, we evaluate our business on a regular basis. This can result in increased investment in some places and, from time to time, re-deployment in others. These decisions are not the result of the current pandemic."


AI decisions: Do we deserve an explanation? - Futurity

#artificialintelligence

First, the European Union's General Data Protection Regulation (GDPR) provides that people have a right to "meaningful information" about the logic behind automated decisions using their data. This law, in an interesting and potentially radical way, seems to mandate that any automated decision-making that people are subject to should be explainable to the person affected. That got me wondering: What does that mean? How do we implement that? And what does explanation really mean here?