bryson
One Person, One Bot
This short paper puts forward a vision for a new democratic model enabled by recent technological advances in agentic AI. It opens by drawing a clear and concise picture of the model, and only later addresses related proposals and research directions, as well as concerns regarding feasibility and safety. It ends with a note on the timeliness of this idea and on optimism. The model proposed is that of assigning each citizen an AI agent that would serve as their political delegate, enabling a return to direct democracy. The paper examines this model's relation to existing research and its potential setbacks and feasibility, and argues for its further development.
- North America > United States > California (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Government (1.00)
- Information Technology > Security & Privacy (0.93)
- Banking & Finance (0.69)
- Law (0.68)
ML-EAT: A Multilevel Embedding Association Test for Interpretable and Transparent Social Science
Wolfe, Robert, Hiniker, Alexis, Howe, Bill
This research introduces the Multilevel Embedding Association Test (ML-EAT), a method designed for interpretable and transparent measurement of intrinsic bias in language technologies. The ML-EAT addresses issues of ambiguity and difficulty in interpreting the traditional EAT measurement by quantifying bias at three levels of increasing granularity: the differential association between two target concepts with two attribute concepts; the individual effect size of each target concept with two attribute concepts; and the association between each individual target concept and each individual attribute concept. Using the ML-EAT, this research defines a taxonomy of EAT patterns describing the nine possible outcomes of an embedding association test, each of which is associated with a unique EAT-Map, a novel four-quadrant visualization for interpreting the ML-EAT. Empirical analysis of static and diachronic word embeddings, GPT-2 language models, and a CLIP language-and-image model shows that EAT patterns add otherwise unobservable information about the component biases that make up an EAT; reveal the effects of prompting in zero-shot models; and can also identify situations when cosine similarity is an ineffective metric, rendering an EAT unreliable. Our work contributes a method for rendering bias more observable and interpretable, improving the transparency of computational investigations into human minds and societies.
- North America > United States (0.14)
- Africa > Kenya (0.04)
- Africa > Eswatini > Manzini > Manzini (0.04)
- Research Report > Experimental Study (0.93)
- Research Report > New Finding (0.68)
- Law (0.93)
- Health & Medicine > Therapeutic Area (0.68)
- Media (0.67)
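The three levels of the ML-EAT described above can be sketched numerically. The following is a toy illustration with random vectors, not the authors' implementation: Level 3 computes each individual target-attribute association (the four EAT-Map quadrants), Level 2 an effect size per target set, and Level 1 the traditional differential (EAT/WEAT-style) effect size.

```python
import numpy as np

rng = np.random.default_rng(0)

def cos_sim(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def sims(T, A):
    # all pairwise cosine similarities between a target set and an attribute set
    return [cos_sim(t, a) for t in T for a in A]

# toy 50-dimensional "embeddings": two target sets (X, Y), two attribute sets (A, B)
X, Y, A, B = (rng.normal(size=(8, 50)) for _ in range(4))

# Level 3: each individual target-attribute association (the four EAT-Map quadrants)
level3 = {(t, a): float(np.mean(sims(T, S)))
          for t, T in (("X", X), ("Y", Y))
          for a, S in (("A", A), ("B", B))}

# Level 2: one effect size per target set (is X closer to A than to B? is Y?)
def cohens_d(s1, s2):
    return (np.mean(s1) - np.mean(s2)) / np.std(s1 + s2, ddof=1)

d_X = cohens_d(sims(X, A), sims(X, B))
d_Y = cohens_d(sims(Y, A), sims(Y, B))

# Level 1: the traditional differential association between the two target sets
def s(w):
    return np.mean([cos_sim(w, a) for a in A]) - np.mean([cos_sim(w, b) for b in B])

s_X, s_Y = [s(x) for x in X], [s(y) for y in Y]
level1 = (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)
```

The taxonomy of nine EAT patterns follows from the signs and magnitudes of these quantities: two headline results (Level 1) can look identical while their Level 2 and Level 3 components differ, which is the information the ML-EAT makes observable.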
How Co-Regulation Became the Parenting Buzzword of the Day
On a recent evening, my children and I were watching "The Iron Giant," the animated cult classic about a robot from outer space who, in 1957, crash-lands in the woods outside a small town in Maine, befriends a young boy, and wages battle against both a murderously stupid G-man and his own robo-programming as a sentient weapon of war. The boy, named Hogarth, and his mother, Annie, get by on her income as a diner waitress, and, late one night, she comes home from a draining double shift to find her son missing. Frantic with worry, Annie drives around until she locates Hogarth at the edge of the woods--on his own and perfectly fine--where he manically chatters at her about the big metal alien he claims to have spotted nearby. Then she catches herself and, with effort, takes on a low, steadier voice. "I'm not in the mood," she says.
- North America > United States > Maine (0.25)
- North America > United States > Pennsylvania > Westmoreland County (0.05)
Digital 'immortality' is coming and we're not ready for it
In the 1990 fantasy drama Truly, Madly, Deeply, the lead character Nina (Juliet Stevenson) is grieving the recent death of her boyfriend Jamie (Alan Rickman). Sensing her profound sadness, Jamie returns as a ghost to help her process her loss. If you've seen the film, you'll know that his reappearance forces her to question her memory of him and, in turn, accept that maybe he wasn't as perfect as she'd remembered. Here in 2023, a new wave of AI-based "grief tech" offers us all the chance to spend time with loved ones after their death -- in varying forms. But unlike Jamie (who benevolently misleads Nina), we're being asked to let artificial intelligence serve up a version of those we survive.
- Information Technology > Security & Privacy (0.48)
- Media (0.35)
The AI Outlook
Five global AI experts weigh in on the challenges they anticipate the technology will face, and potentially overcome, this year. Advances in artificial intelligence (AI) continue to happen at an extraordinary pace. News of breakthroughs in machine learning, computer vision, data science, and machine-human interaction comes almost daily. Growth is massive, and it is impacting all sectors. According to a new forecast from the technology research company Gartner, worldwide artificial intelligence (AI) software revenue is forecast to total $62.5 billion in 2022, an increase of 21.3% from 2021.
- North America > United States > Massachusetts (0.05)
- Europe > United Kingdom (0.05)
- Europe > Ukraine (0.05)
- (4 more...)
- Government (0.49)
- Education (0.49)
- Information Technology > Security & Privacy (0.48)
- Energy (0.48)
US-EU agreement on artificial intelligence seen as a swipe at China – but little else for now
The US and EU are talking up the significance of their new pact on artificial intelligence, but a closer inspection indicates the two sides still have precious little in common when it comes to regulating the technology – except a desire to take the moral high ground against China. The long-awaited agreement was reached when the Trade and Technology Council met for the first time on 29 September in Pittsburgh, with Brussels and Washington vowing to make sure AI systems are "innovative and trustworthy" and "respect universal human rights and shared democratic values". The EU and US will "seek to develop a mutual understanding on the principles underlining trustworthy and responsible AI," the agreement says. But exactly what this means in practice remains to be fleshed out. While both sides said they have noted each other's domestic regulatory proposals on AI, there is no mention of coordinating their approaches.
- Asia > China (0.78)
- North America > United States (0.72)
- Oceania > Australia (0.05)
- (4 more...)
The importance of having accountability in AI ethics
AI ethics expert Joanna J Bryson spoke to Siliconrepublic.com about the challenges of regulating AI and why more work needs to be done. As AI becomes a bigger part of society, the ethics around the technology require more discussion, with everything from privacy and discrimination to human safety needing consideration. There have been several examples in recent years highlighting ethical problems with AI, including an MIT image library, used to train AI, that contained racist and misogynistic terms, and the controversial credit score system in China. In recent years, the EU has made conscious steps towards addressing some of these issues, laying the groundwork for proper regulation of the technology. Its most recent proposals revealed plans to classify different AI applications depending on their risks.
- Asia > China (0.25)
- North America > United States > Illinois > Cook County > Chicago (0.05)
- Europe > Germany (0.05)
AIs that read sentences can also spot virus mutations
In a study published in Science today, Berger and her colleagues pull several of these strands together and use NLP to predict mutations that allow viruses to avoid being detected by antibodies in the human immune system, a process known as viral immune escape. The basic idea is that the interpretation of a virus by an immune system is analogous to the interpretation of a sentence by a human. "It's a neat paper, building off the momentum of previous work," says Ali Madani, a scientist at Salesforce, who is using NLP to predict protein sequences. Berger's team uses two different linguistic concepts: grammar and semantics (or meaning). The genetic or evolutionary fitness of a virus--characteristics such as how good it is at infecting a host--can be interpreted in terms of grammatical correctness.
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (1.00)
- Health & Medicine > Therapeutic Area > Immunology > HIV (0.42)
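The grammar-as-fitness analogy described above can be illustrated with a deliberately simple stand-in for the paper's neural language model: a bigram model over made-up protein-like strings (these are not real viral sequences, and the paper's method is far richer). A sequence whose letter transitions the model has seen before scores as more "grammatical", the analogue of higher evolutionary fitness.

```python
from collections import Counter
import math

def train_bigram_lm(seqs):
    """Toy 'grammar' model: bigram frequencies over sequence letters,
    with add-one smoothing so unseen transitions get low but finite scores."""
    pairs = Counter(p for s in seqs for p in zip(s, s[1:]))
    firsts = Counter(c for s in seqs for c in s[:-1])
    alphabet = {c for s in seqs for c in s}

    def log_likelihood(seq):
        return sum(math.log((pairs[(a, b)] + 1) / (firsts[a] + len(alphabet)))
                   for a, b in zip(seq, seq[1:]))

    return log_likelihood

# train on a few invented protein-like strings
grammaticality = train_bigram_lm(["MKTAYIAK", "MKTAYLAK", "MKTAYIAQ"])

# a mutant that preserves familiar transitions scores as more "grammatical"
# (fitter) than one full of transitions the model has never seen
wild_type_score = grammaticality("MKTAYIAK")
mutant_score = grammaticality("MKQQQQAK")
```

In the study itself, immune escape is predicted by combining this grammaticality signal with a semantic one: a mutation that keeps the sequence "grammatical" while changing its "meaning" to antibodies is a candidate escape mutation.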
Discovering and Categorising Language Biases in Reddit
Ferrer, Xavier, van Nuenen, Tom, Such, Jose M., Criado, Natalia
We present a data-driven approach using word embeddings to discover and categorise language biases on the discussion platform Reddit. As spaces for isolated user communities, platforms such as Reddit are increasingly connected to issues of racism, sexism and other forms of discrimination. Hence, there is a need to monitor the language of these groups. One of the most promising AI approaches to trace linguistic biases in large textual datasets involves word embeddings, which transform text into high-dimensional dense vectors and capture semantic relations between words. Yet, previous studies require predefined sets of potential biases to study, e.g., whether gender is more or less associated with particular types of jobs. This makes these approaches unfit to deal with smaller and community-centric datasets such as those on Reddit, which contain smaller vocabularies and slang, as well as biases that may be particular to that community. This paper proposes a data-driven approach to automatically discover language biases encoded in the vocabulary of online discourse communities on Reddit. In our approach, protected attributes are connected to evaluative words found in the data, which are then categorised through a semantic analysis system. We verify the effectiveness of our method by comparing the biases we discover in the Google News dataset with those found in previous literature. We then successfully discover gender bias, religion bias, and ethnic bias in different Reddit communities. We conclude by discussing potential application scenarios and limitations of this data-driven bias discovery method.
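The core mechanic described above — connecting protected attributes to evaluative words via embedding distances — can be sketched as follows. This is a toy illustration with synthetic vectors, not the authors' pipeline (which additionally discovers the evaluative vocabulary from the data and categorises it with a semantic analysis system); the word choices here are placeholders.

```python
import numpy as np

def cos_sim(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def most_biased(embeddings, concept_a, concept_b, candidates, k=3):
    """Rank candidate evaluative words by how much closer they sit to
    protected-attribute concept a than to concept b in embedding space."""
    score = {w: cos_sim(embeddings[w], concept_a) - cos_sim(embeddings[w], concept_b)
             for w in candidates}
    return sorted(score, key=score.get, reverse=True)[:k]

# synthetic vectors standing in for embeddings trained on a community's text
rng = np.random.default_rng(1)
dim = 20
concept_a, concept_b = rng.normal(size=dim), rng.normal(size=dim)
embeddings = {
    "strong":    concept_a + 0.1 * rng.normal(size=dim),  # constructed near concept a
    "emotional": concept_b + 0.1 * rng.normal(size=dim),  # constructed near concept b
    "neutral":   rng.normal(size=dim),
}
ranked = most_biased(embeddings, concept_a, concept_b, list(embeddings))
```

Because the evaluative words are mined from the community's own vocabulary rather than a predefined list, the method can surface slang and community-specific biases that fixed-lexicon approaches miss.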
Tech Titans Declare AI Ethics Concerns
The biggest tech companies want you to know that they're taking special care to ensure that their use of artificial intelligence to sift through mountains of data, analyze faces or build virtual assistants doesn't spill over to the dark side. But their efforts to assuage concerns that their machines may be used for nefarious ends have not been universally embraced. Some skeptics see it as mere window dressing by corporations more interested in profit than what's in society's best interests. "Ethical AI" has become a new corporate buzz phrase, slapped on internal review committees, fancy job titles, research projects and philanthropic initiatives. The moves are meant to address concerns over racial and gender bias emerging in facial recognition and other AI systems, as well as address anxieties about job losses to the technology and its use by law enforcement and the military.
- Information Technology (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.35)
- Law > Statutes (0.32)