We Don't Actually Know If AI Is Taking Over Everything

The Atlantic - Technology

Since the release of ChatGPT last year, I've heard some version of the same thing over and over again: What is going on? The rush of chatbots and endless "AI-powered" apps has made starkly clear that this technology is poised to upend everything--or, at least, something. Yet even the AI experts are struggling with a dizzying feeling that for all the talk of its transformative potential, so much about this technology is veiled in secrecy. More and more of this technology, once developed through open research, has become almost completely hidden within corporations that are opaque about what their AI models are capable of and how they are made. Transparency isn't legally required, and the secrecy is causing problems: Earlier this year, The Atlantic revealed that Meta and others had used nearly 200,000 books to train their AI models without the compensation or consent of the authors.


An inside look at Congress's first AI regulation forum

MIT Technology Review

The AI Insight Forums were announced a few months ago by Senate Majority Leader Chuck Schumer as part of his "SAFE Innovation" initiative, which is essentially a set of principles for AI legislation in the United States. The invite list was heavily skewed toward Big Tech execs, including CEOs of AI companies, though a few civil society and AI ethics researchers were included too. Coverage of the meeting thus far has put a particular emphasis on the reportedly unanimous agreement about the need for AI regulation, and on issues raised by Elon Musk and others about the "civilizational risks" created by AI. (This tracker from Tech Policy Press is pretty handy if you want to know more.) But to really dig below the surface, I caught up with one of the other attendees, Inioluwa Deborah Raji, who gave me an inside look at how the first meeting went, the pernicious myths she needed to debunk, and where disagreements could be felt in the room. Raji is a researcher at the University of California, Berkeley, and a fellow at Mozilla.


Meet the Humans Trying to Keep Us Safe From AI

WIRED

A year ago, the idea of holding a meaningful conversation with a computer was the stuff of science fiction. But since OpenAI's ChatGPT launched last November, life has started to feel more like a techno-thriller with a fast-moving plot. Chatbots and other generative AI tools are beginning to profoundly change how people live and work. But whether this plot turns out to be uplifting or dystopian will depend on who helps write it. Thankfully, just as artificial intelligence is evolving, so is the cast of people who are building and studying it.


Google is poisoning its reputation with AI researchers

#artificialintelligence

Google has worked for years to position itself as a responsible steward of AI. Its research lab hires respected academics, publishes groundbreaking papers, and steers the agenda at the field's biggest conferences. But now its reputation has been badly, perhaps irreversibly, damaged, just as the company is struggling to put a politically palatable face on its empire of data. The company's decision to fire Timnit Gebru and Margaret Mitchell -- two of its top AI ethics researchers, who happened to be examining the downsides of technology integral to Google's search products -- has triggered waves of protest. Academics have registered their discontent in various ways.


Bias in facial recognition isn't hard to discover, but it's hard to get rid of

#artificialintelligence

Joy Buolamwini is a researcher at the MIT Media Lab who pioneered research into bias that's built into artificial intelligence and facial recognition. And the way she came to this work is almost a little too on the nose. As a graduate student at MIT, she created a mirror that would project aspirational images onto her face, like a lion or tennis star Serena Williams. But the facial-recognition software she installed wouldn't work on her Black face, until she literally put on a white mask. Buolamwini is featured in a documentary called "Coded Bias," airing tonight on PBS.


The new weapon in the fight against biased algorithms: Bug bounties

#artificialintelligence

When it comes to detecting bias in algorithms, researchers are trying to learn from the information security field – and particularly, from the bug bounty-hunting hackers who comb through software code to identify potential security vulnerabilities. The parallels between the work of these security researchers and the hunt for possible flaws in AI models are, in fact, at the heart of the work carried out by Deborah Raji, a research fellow in algorithmic harms for the Mozilla Foundation. Presenting the research she has been carrying out with advocacy group the Algorithmic Justice League (AJL) during the annual Mozilla Festival, Raji explained how, along with her team, she has been studying bug bounty programs to see how they could be applied to the detection of a different type of flaw: algorithmic bias. Bug bounties, which reward hackers for discovering vulnerabilities in software code before malicious actors exploit them, have become an integral part of the information security field. Major companies such as Google, Facebook, and Microsoft now all run bug bounty programs; the number of these hackers is multiplying, and so are the financial rewards that corporations are ready to pay to fix software problems before malicious hackers find them.


This is how we lost control of our faces

MIT Technology Review

Deborah Raji, a fellow at nonprofit Mozilla, and Genevieve Fried, who advises members of the US Congress on algorithmic accountability, examined over 130 facial-recognition data sets compiled over 43 years. They found that researchers, driven by the exploding data requirements of deep learning, gradually abandoned asking for people's consent. This has led more and more of people's personal photos to be incorporated into systems of surveillance without their knowledge. It has also led to far messier data sets: they may unintentionally include photos of minors, use racist and sexist labels, or have inconsistent quality and lighting. The trend could help explain the growing number of cases in which facial-recognition systems have failed with troubling consequences, such as the false arrests of two Black men in the Detroit area last year.


Google AI ethics co-lead Timnit Gebru says she was fired over an email

#artificialintelligence

Timnit Gebru, one of the best-known AI researchers today and co-lead of an AI ethics team at Google, no longer works at the company. Details are still being gathered, but according to Gebru, she was fired Wednesday for sending an email to "non-management employees that is inconsistent with the expectations of a Google manager." VentureBeat reached out to Gebru, a Google spokesperson, and Google AI chief Jeff Dean for comment. This story will be updated if we hear back. I was fired by @JeffDean for my email to Brain women and Allies.


Facial recognition needs auditing and ethics standards to be safe, AI Now bias critic argues

#artificialintelligence

The artificial intelligence community needs to begin developing the vocabulary to define and clearly explain the harms the technology can cause, in order to rein in abuses with facial biometrics, AI Now Institute Technology Fellow Deb Raji argues in a TWIML AI podcast. In the episode, "How External Auditing is Changing the Facial Recognition Landscape with Deb Raji," host Sam Charrington asks about the genesis of the audits Raji and colleagues have performed of biometric facial recognition systems, industry response, and the ethical way forward. Raji describes her journey through academia and an internship with Clarifai to taking up the cause of algorithmic bias and connecting with Joy Buolamwini after watching her TED Talk. The work Raji did with others in the community gained prominence with Gender Shades, and concepts that emerged from that and similar projects have been built into engineering practices at Google. Facial recognition is characterized as "very immature technology," which was exposed as not working by the Gender Shades study. "It really sort of stemmed from this desire to…identify the problem in a consistent way and communicate it in a consistent way," Raji says of the early work delineating the problem of demographic differentials in facial recognition.


Study by U of T alumna sheds light on gender gap in AI field

#artificialintelligence

A study led by University of Toronto alumna Kimberly Ren is among the first to quantify predictors that could lead women towards, or away from, pursuing careers in machine learning and artificial intelligence, or AI. Women currently make up 22 per cent of global AI professionals, with that proportion oscillating between 21 per cent and 23 per cent over a four-year trend, according to a 2018 report by the World Economic Forum. "The talent gap isn't closing," says Ren, who recently graduated from the Faculty of Applied Science & Engineering and was awarded the Best Paper Award at the American Society for Engineering Education Conference for her fourth-year thesis project. She led the study under the supervision of Alison Olechowski, an assistant professor in the department of mechanical and industrial engineering. "What I hope this research does is find some reasoning behind this gap, so that we can increase the persistence of women in the field going forward."