
The Ethics of AI and Emotional Intelligence - The Partnership on AI


The experimental use of AI spread across sectors and moved beyond the internet into the physical world. Stores used AI perceptions of shoppers' moods and interest to display personalized public ads. Schools used AI to quantify student joy and engagement in the classroom. Employers used AI to evaluate job applicants' moods and emotional reactions in automated video interviews and to monitor employees' facial expressions in customer service positions. It was a year notable for increasing criticism and governance of AI related to emotion and affect.

Australia will use robot boats to find asylum seekers at sea

New Scientist

Australia is deploying a fleet of uncrewed robot boats to patrol its waters and monitor weather and wildlife. They will also flag boats potentially transporting asylum seekers, a plan that has concerned human rights groups. The 5-metre-long vessels, known as Bluebottles after an Australian jellyfish, look like miniature sailing yachts. They use a combination of wind, wave and solar power to maintain a steady 5-knot speed in all conditions. Sydney-based Ocius Technology delivered the prototype in 2017, and Australia's Department of Defence has now awarded an AU$5.5 million (£3m) …

The problems AI has today go back centuries


In March of 2015, protests broke out at the University of Cape Town in South Africa over the campus statue of British colonialist Cecil Rhodes. Rhodes, a mining magnate who had gifted the land on which the university was built, had committed genocide against Africans and laid the foundations for apartheid. Under the rallying banner of "Rhodes Must Fall," students demanded that the statue be removed. Their protests sparked a global movement to eradicate the colonial legacies that endure in education. The events also provoked Shakir Mohamed, a South African AI researcher at DeepMind, to reflect on what colonial legacies might exist in his research as well.

AI-powered tool aims to help reduce bias and racially charged language on websites


Website accessibility tech provider UserWay has released an AI-powered tool designed to help organizations ensure their websites are free of discriminatory, biased, and racially charged language. The tool, Content Moderator, flags content for review; nothing is deleted or removed without approval from site administrators, according to UserWay. The company said its customers already use its AI-powered accessibility widget, a compliance-as-a-service (CaaS) technology intended to help brands provide a digital experience that meets government accessibility requirements, including the ADA. "Focusing on digital racism and bias is long past due, and our team is eager to contribute to the conversation positively," UserWay founder and CEO Allon Mason said in a statement. In June, Google announced that it would be reevaluating what it considers acceptable language, Mason noted. So far, Google has changed terms including "blacklist" to "blocked list," "whitelist" to "allowed list," and "master-slave" to "primary/secondary," among others, he said.
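The flag-for-review workflow the article describes can be sketched in a few lines. This is an illustrative sketch only, not UserWay's actual Content Moderator implementation; the term-to-suggestion mappings are the ones the article attributes to Google, and the `flag_terms` function is a hypothetical helper.

```python
import re

# Term changes mentioned in the article (Google's terminology updates).
SUGGESTIONS = {
    "blacklist": "blocked list",
    "whitelist": "allowed list",
    "master-slave": "primary/secondary",
}

def flag_terms(text):
    """Return (term, suggestion, position) tuples for human review.

    Nothing is rewritten automatically -- mirroring the article's point
    that content is only flagged until an administrator approves a change.
    """
    findings = []
    for term, suggestion in SUGGESTIONS.items():
        for match in re.finditer(re.escape(term), text, re.IGNORECASE):
            findings.append((match.group(0), suggestion, match.start()))
    return sorted(findings, key=lambda f: f[2])

flags = flag_terms("Add the IP to the blacklist, not the whitelist.")
```

A real moderation pipeline would need context awareness (e.g. quoted material, code identifiers) that simple pattern matching cannot provide, which is presumably where the AI component comes in.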

How Machine Learning is Influencing Diversity & Inclusion - InformationWeek


Our society is in a technological paradox. Life events for many people are increasingly influenced by algorithmic decisions, yet we are discovering how those essential algorithms discriminate. Because of that paradox, IT management is in an unparalleled position to pair human intervention that addresses diversity and inclusion within a team with equitable algorithms that are accountable to a diverse society. IT managers face this paradox today due to the increased application of machine learning operations (MLOps), which rely on IT teams to help manage the pipelines created.

A new AI claims it can help remove racism on the web. So I put it to work


Commentary: I tend to believe technology can't solve every problem. Why, it hasn't even managed to solve the vast problems caused by technology. Yet when I received an email headlined "AI to remove racism," how could I not open it? After all, AI has already removed so many things.

DeepMind and Oxford University researchers on how to 'decolonize' AI


Sometimes it's tempting to think of every technological advancement as the brave first step on new shores, a fresh chance to shape the future rationally. In reality, every new tool enters the same old world with its same unresolved issues. In a moment where society is collectively reckoning with just how deep the roots of racism reach, a new paper from researchers at DeepMind -- the AI lab and sister company to Google -- and the University of Oxford presents a vision to "decolonize" artificial intelligence. The aim is to keep society's ugly prejudices from being reproduced and amplified by today's powerful machine learning systems. The paper, published this month in the journal Philosophy & Technology, has at heart the idea that you have to understand historical context to understand why technology can be biased.

Systemic Racism is Strengthened by Data Science.


Left alone, algorithms will count a black defendant's race as a strike against them; yet several data scientists in the community are supporting calls to turn off the safeguards and unleash the hells of computerized prejudice. Put yourself in the computer's "shoes" for a second: imagine yourself sitting across from a person being evaluated for a loan or a job. When they ask how you made your decision, you inform them, "Well, for one, we docked you because you're black." In what logical sense should this sort of comment be tolerated? If humans are reprimanded for making such ignorant comments, why should a computer be allowed to make them? This simple understanding does not exist amongst a significant percentage of the larger data science, machine learning, and even political community.

GPT-3 Creative Fiction


"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn't constrained it enough by imitating a correct output, and one needs to go further; writing the first few words or sentence of the target output may be necessary.
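The constraint technique described above can be sketched as plain prompt construction. This is an illustrative sketch, not the author's exact prompt: the framing text after the passage and the `primer` seed are assumptions, and the resulting string would be handed to whatever text-completion API is in use.

```python
def build_summary_prompt(passage, primer="It means that"):
    """Frame a passage with a 'second grader' summarization prompt,
    then seed the answer with `primer` so the model continues it
    rather than pivoting into some other mode of completion."""
    return (
        "My second grader asked me what this passage means:\n"
        f'"""{passage}"""\n'
        "I rephrased it for him, in plain language a second grader "
        "can understand:\n"
        f'"""{primer}'
    )

prompt = build_summary_prompt(
    "Photosynthesis converts sunlight into chemical energy."
)
```

Leaving the final quote block open and ending on the primer's first words is the point: the model's most likely continuation is then the rest of the target output, not a pivot into another genre.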

In a GPT-3 World, Anonymity Prevents Free Speech


What does it mean to have freedom of speech? Naively, it means that you have the right to express ideas without fear of governmental retaliation or censorship. Free speech is valuable when you are communicating with others: abstractly, freedom of speech means the right to distribute information to an audience. If you frame freedom of speech not in terms of what comes out of your mouth, but in terms of the interaction between yourself and another party, then edge cases rapidly emerge. For example, suppose that you are on the street, lawfully raising a protest sign supporting X.