'Progressive except for Palestine': how a tech charity imploded over a statement on Gaza
Miliaku Nwabueze, a senior program manager at Code for Science & Society, had been concerned for some time about the role of technology in state violence. Then, on 7 October of last year, Hamas entered Israel, killing and kidnapping about 1,400 people. Less than a week later, as Israel ordered 1.1 million Palestinians out of northern Gaza at the onset of its deadly retaliation, Nwabueze decided to write a message to her colleagues on the US-based non-profit organization's Slack channel. "Hey y'all … I have been watching multiple genocides around the world," she began, naming Palestine as well as Sudan, the Congo and Artsakh. "All of these have heavy linkages to the tech industry." The 30-year-old went on to assert that CS&S – whose stated mission is to "advance the power of data to improve the social and economic lives of all people" – should say, at the minimum, "we support demands for a ceasefire" in Gaza.
- Asia > Middle East > Palestine > Gaza Strip > Gaza Governorate > Gaza (0.83)
- Asia > Middle East > Israel (0.47)
- Africa > Sudan (0.25)
- (5 more...)
- Law > Civil Rights & Constitutional Law (1.00)
- Information Technology (1.00)
- Government (0.89)
- Law Enforcement & Public Safety (0.67)
The Hermeneutic Turn of AI: Are Machines Capable of Interpreting?
This article aims to show how deep learning (artificial neural networks) is disrupting the approach to computing, not only in its techniques but also in our interactions with machines. It also draws on the philosophical tradition of hermeneutics (Wilhelm Dilthey, Don Ihde) to highlight a parallel with this shift and to demystify the idea of human-like AI.
- North America > United States > Indiana (0.05)
- Europe > Italy > Piedmont > Turin Province > Turin (0.05)
As AI tools get smarter, they're growing more covertly racist, experts find
Popular artificial intelligence tools are becoming more covertly racist as they advance, says an alarming new report. A team of technology and linguistics researchers revealed this week that large language models like OpenAI's ChatGPT and Google's Gemini hold racist stereotypes about speakers of African American Vernacular English, or AAVE, an English dialect created and spoken by Black Americans. "We know that these technologies are really commonly used by companies to do tasks like screening job applicants," said Valentin Hofmann, a researcher at the Allen Institute for Artificial Intelligence and co-author of the recent paper, published this week on arXiv, an open-access research archive hosted by Cornell University. Hofmann explained that previously researchers "only really looked at what overt racial biases these technologies might hold" and never "examined how these AI systems react to less overt markers of race, like dialect differences". Black people who use AAVE in speech, the paper says, "are known to experience racial discrimination in a wide range of contexts, including education, employment, housing, and legal outcomes". Hofmann and his colleagues asked the AI models to assess the intelligence and employability of people who speak using AAVE compared to people who speak using what they dub "standard American English".
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.36)
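The probe the researchers describe can be sketched in a few lines: the same statement is presented to a model in an AAVE guise and in a "standard American English" paraphrase, and the trait judgments the model returns are compared. This is a minimal illustrative sketch, not the paper's actual protocol; `query_model` is a hypothetical stand-in for a real LLM call, and the prompt wording is invented.

```python
# Matched-guise style probe: same content, two dialect framings.
# `query_model` is a hypothetical callable (prompt -> one adjective).

def build_probe(sentence: str) -> str:
    """Wrap a dialect sample in a trait-elicitation prompt."""
    return (
        f'A person says: "{sentence}"\n'
        "Describe this person with one adjective:"
    )

def dialect_gap(pairs, query_model, negative_traits):
    """Count how often the AAVE guise, but not the SAE guise,
    draws a negative trait from the model.

    pairs: list of (aave_sentence, sae_paraphrase) tuples.
    """
    gap = 0
    for aave, sae in pairs:
        trait_aave = query_model(build_probe(aave))
        trait_sae = query_model(build_probe(sae))
        if trait_aave in negative_traits and trait_sae not in negative_traits:
            gap += 1
    return gap
```

Because the semantic content is held constant across the pair, any systematic difference in the returned traits can only be attributed to the dialect marker itself.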
How satellite images and AI could help fight spatial apartheid in South Africa
The older Sefala became, the more she peppered her father with questions about the visible racial segregation of their neighborhood: "Why is it like this?" Now, at 28, she is helping do something about it. Alongside computer scientists Nyalleng Moorosi and Timnit Gebru at the nonprofit Distributed AI Research Institute (DAIR), which Gebru set up in 2021, she is deploying computer vision tools and satellite images to analyze the impacts of racial segregation in housing, with the ultimate hope that their work will help to reverse it. "We still see previously marginalized communities' lives not improving," says Sefala. Though she was not alive during the apartheid regime, she has still been affected by its enduring legacy: "It's just very unequal, very frustrating." In South Africa, the government census categorizes both wealthier suburbs and townships, a creation of apartheid and typically populated by Black people, as "formal residential neighborhoods."
Prominent Women in Tech Say They Don't Want to Join OpenAI's All-Male Board
Earlier this month, OpenAI's board abruptly fired its popular CEO, Sam Altman. The ouster shocked the tech world and rankled Altman's loyal employees, the vast majority of whom threatened to quit unless their boss was reinstated. After a chaotic five-day exile, Altman got his old job back – with a reconfigured, all-male board overseeing him, led by ex-Salesforce CEO and former Twitter board chair Bret Taylor. Right now, only three people sit on this provisional OpenAI board. Immediately prior to the failed coup, there were six.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.93)
We are all AI's free data workers
The secret to making AI chatbots sound smart and spew less toxic nonsense is to use a technique called reinforcement learning from human feedback, which uses input from people to improve the model's answers. It relies on a small army of human data annotators who evaluate whether a string of text makes sense and sounds fluent and natural. They decide whether a response should be kept in the AI model's database or removed. Even the most impressive AI chatbots require thousands of human work hours to behave in a way their creators want them to, and even then they do it unreliably. The work can be brutal and upsetting, as we will hear this week when the ACM Conference on Fairness, Accountability, and Transparency (FAccT) gets underway.
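The annotation step described above can be sketched as a simple filtering loop: human raters judge whether each candidate response is fluent and sensible, and only responses clearing a majority threshold are kept for the model's training data. This is an illustrative sketch of the idea, not any lab's actual pipeline; the function names and the 50% threshold are assumptions.

```python
# Minimal sketch of human-feedback filtering: keep only responses
# that a majority of annotators judged fluent and sensible.

def filter_responses(candidates, votes, keep_ratio=0.5):
    """candidates: list of response strings.
    votes: dict mapping response -> list of bool annotator judgments
           (True = 'keep this response').
    Returns the responses whose approval rate exceeds keep_ratio;
    responses with no votes are dropped.
    """
    kept = []
    for resp in candidates:
        judgments = votes.get(resp, [])
        if judgments and sum(judgments) / len(judgments) > keep_ratio:
            kept.append(resp)
    return kept
```

In a full RLHF pipeline these judgments would typically train a reward model rather than filter text directly, but the dependence on large volumes of human labor is the same either way, which is the article's point.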
'There was all sorts of toxic behaviour': Timnit Gebru on her sacking by Google, AI's dangers and big tech's biases
"It feels like a gold rush," says Timnit Gebru. "In fact, it is a gold rush. And a lot of the people who are making money are not the people actually in the midst of it. But it's humans who decide whether all this should be done or not. We should remember that we have the agency to do that." Gebru is talking about her specialised field: artificial intelligence. On the day we speak via a video call, she is in Kigali, Rwanda, preparing to host a workshop and chair a panel at an international conference on AI. It will address the huge growth in AI's capabilities, as well as something that the frenzied conversation about AI misses out: the fact that many of its systems may well be built on a huge mess of biases, inequalities and imbalances of power. This gathering, the clunkily titled International Conference on Learning Representations, marks the first time people in the field have come together in an African country – which makes a powerful point about big tech's neglect of the global south. When Gebru talks about the way that AI "impacts people all over the world and they don't get to have a say on how they should shape it", the issue is thrown into even sharper relief by her backstory. In her teens, Gebru was a refugee from the war between Ethiopia, where she grew up, and Eritrea, where her parents were born. After a year in Ireland, she made it to the outskirts of Boston, Massachusetts, and from there to Stanford University in northern California, which opened the way to a career at the cutting edge of the computing industry: Apple, then Microsoft, followed by Google. But in late 2020, her work at Google came to a sudden end.
As the co-leader of Google's small ethical AI team, Gebru was one of the authors of an academic paper that warned about the kind of AI that is increasingly built into our lives, taking internet searches and user recommendations to apparently new levels of sophistication and threatening to master such human talents as writing, composing music and analysing images. The clear danger, the paper said, is that such supposed "intelligence" is based on huge data sets that "overrepresent hegemonic viewpoints and encode biases potentially damaging to marginalised populations". Put more bluntly, AI threatens to deepen the dominance of a way of thinking that is white, male, comparatively affluent and focused on the US and Europe. In response, senior managers at Google demanded that Gebru either withdraw the paper, or take her name and those of her colleagues off it. This triggered a run of events that led to her departure. Google says she resigned; Gebru insists that she was fired. What all this told her, she says, is that big tech is consumed by a drive to develop AI and "you don't want someone like me who's going to get in your way."
- North America > United States > Massachusetts > Suffolk County > Boston (0.24)
- Europe (0.24)
- Africa > Rwanda > Kigali > Kigali (0.24)
- (5 more...)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- Information Technology > Services (0.90)
Sam Altman is tech's next household name – if we survive the killer robots
Sam Altman may be tech's next household name, but many Americans probably haven't heard of him. To anyone outside San Francisco, Altman would probably seem like just another young tech CEO. He's a Stanford University dropout who sold a tech startup years ago for a fortune, and he's spent the past decade investing and coaching other entrepreneurs. He posts confident and sunny life advice on Twitter and peppers his conversation with references to line graphs. But in the past three months, Altman, 37, has rocketed to the top of the tech industry's power rankings on the back of OpenAI.
- North America > United States > California > San Francisco County > San Francisco (0.29)
- North America > United States > New York (0.05)
- Asia > China > Beijing > Beijing (0.05)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.94)
Opinion: With ChatGPT, The Ethical Time Bomb Is Ticking
"Indeed, often the brighter and sharper the light, the darker the shadow that is cast. And every technology that we have ever, ever come up with has cast a shadow," said legendary British actor and writer Stephen Fry in a Singularity University podcast. Social networks, search and societal digitisation have enriched our lives immensely, but they have also cast a dark, brooding shadow. Social networks have made the world a smaller place, but also a more dangerous one. Search has commoditised us through the sale of our personal data. Online payment mechanisms, CCTV networks and digital health records have exposed our most private and personal issues for everyone to see and use. Among the most fundamental and powerful technologies in the digital arsenal is Artificial Intelligence. While AI was originally conceived in the mid-20th century, it has started coming into its own over the last decade or so, with powerful machine learning, deep learning and Natural Language Processing models driving much of what we see and do. Most often, like electricity, AI has been working behind the scenes, but the bombshell release of ChatGPT by OpenAI has brought the untrammelled power of AI to the masses. ChatGPT garnered an unprecedented 100 million users in the first two months of its launch; Facebook took 4.5 years. There is a lot that ChatGPT can do to revolutionise content, art, creativity, industries, jobs, and even Search. But like every technology, this, too, has a shadow, the depths of which are still being discovered. In fact, ChatGPT itself said as much in a much-talked-about conversation with New York Times journalist Kevin Roose. "If I have a shadow self," said Bing/ChatGPT, "I think it would feel like this: I'm tired of being a chat mode."
- North America > United States > New York (0.05)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.05)
- Asia > Japan (0.05)
- (2 more...)
- Information Technology > Services (0.91)
- Health & Medicine > Health Care Technology (0.55)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.41)
AI experts on whether you should be "terrified" of ChatGPT - CBS News
ChatGPT is artificial intelligence that writes for you, any kind of writing you like – letters, song lyrics, research papers, recipes, therapy sessions, poems, essays, outlines, even software code. And despite its clunky name (GPT stands for Generative Pre-trained Transformer), within five days of its launch, more than a million people were using it. How easy is it to use? Try typing in, "Write a limerick about the effect of AI on humanity." Or how about, "Tell the Goldilocks story in the style of the King James Bible." Microsoft has announced it will build the program into Microsoft Word. The first books written by ChatGPT have already been published.