Impartiality


The BBC breached editorial guidelines over 1,500 times in Israel-Hamas conflict, report claims

FOX News

A new report accuses the British Broadcasting Corporation (BBC) of violating its own editorial guidelines more than 1,500 times in its coverage of the Israel-Hamas war. According to The Telegraph, the report analyzed four months of BBC output across television, radio, online articles, podcasts and social media during the height of the conflict and found a "deeply worrying pattern of bias" against Israel. British lawyer Trevor Asserson and a team of about 20 lawyers and 20 data scientists used artificial intelligence to analyze nine million words of the broadcaster's output, starting from the day of the October 7, 2023, terror attack. The researchers identified 1,553 instances in which the BBC allegedly breached its own editorial guidelines on impartiality, accuracy, editorial values and public interest.

Image caption: Hundreds attend a protest called by the National Jewish Assembly, the Campaign Against Antisemitism and UK Lawyers for Israel at BBC Broadcasting House on October 16, 2023, in London, England.


Manipulation and Peer Mechanisms: A Survey

Olckers, Matthew, Walsh, Toby

arXiv.org Artificial Intelligence

In peer mechanisms, the competitors for a prize also determine who wins. Each competitor may be asked to rank, grade, or nominate peers for the prize. Since the prize can be valuable, such as financial aid, course grades, or an award at a conference, competitors may be tempted to manipulate the mechanism. We survey approaches to prevent or discourage the manipulation of peer mechanisms. We conclude our survey by identifying several important research challenges.
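
To make the survey's subject concrete, here is a minimal Python sketch of the classic two-group partition idea, one family of manipulation-resistant designs in this space; the function name and the nomination format are illustrative assumptions, not an algorithm taken verbatim from the survey.

import random

def impartial_winner(players, nominations):
    # Sketch of the two-group partition idea: players are split into two
    # random groups, and each group's votes can only elect a winner from
    # the *other* group, so no player can influence whether they
    # themselves win. `nominations` maps each player to one nominated peer.
    shuffled = list(players)
    random.shuffle(shuffled)
    mid = len(shuffled) // 2
    groups = (set(shuffled[:mid]), set(shuffled[mid:]))

    finalists = []
    for voters, candidates in ((groups[0], groups[1]), (groups[1], groups[0])):
        # Only cross-group nominations count; votes cast for a member of
        # the voter's own group (including the voter) are discarded.
        counts = {c: 0 for c in candidates}
        for v in voters:
            if nominations.get(v) in counts:
                counts[nominations[v]] += 1
        # Random tie-breaking keeps the outcome independent of the
        # candidates' own ballots.
        finalists.append(max(counts, key=lambda c: (counts[c], random.random())))

    # A fair coin between the two group finalists depends on no one's
    # vote, so impartiality is preserved end to end.
    return random.choice(finalists)

The price of impartiality in this sketch is accuracy: the most-nominated player overall can lose if their supporters land in their own group, which illustrates the kind of trade-off such mechanisms must manage.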


Learning with Impartiality to Walk on the Pareto Frontier of Fairness, Privacy, and Utility

Yaghini, Mohammad, Liu, Patty, Boenisch, Franziska, Papernot, Nicolas

arXiv.org Artificial Intelligence

Deploying machine learning (ML) models often requires both fairness and privacy guarantees. Both of these objectives present unique trade-offs with the utility (e.g., accuracy) of the model. However, the mutual interactions between fairness, privacy, and utility are less well-understood. As a result, often only one objective is optimized, while the others are tuned as hyper-parameters. Because they implicitly prioritize certain objectives, such designs bias the model in pernicious, undetectable ways. To address this, we adopt impartiality as a principle: the design of ML pipelines should not favor one objective over another. We propose impartially-specified models, which provide us with accurate Pareto frontiers that show the inherent trade-offs between the objectives. Extending two canonical ML frameworks for privacy-preserving learning, we provide two methods (FairDP-SGD and FairPATE) to train impartially-specified models and recover the Pareto frontier. Through theoretical privacy analysis and a comprehensive empirical study, we provide an answer to the question of where fairness mitigation should be integrated within a privacy-aware ML pipeline.
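
As a small illustration of the frontier-recovery step only (this sketch does not implement the paper's FairDP-SGD or FairPATE procedures, and the numbers below are made up), the following Python snippet filters a hypothetical sweep of trained models, each summarized as an (accuracy, privacy epsilon, fairness gap) triple, down to its non-dominated Pareto points:

def pareto_frontier(points):
    # Keep the non-dominated (accuracy, epsilon, fairness_gap) triples:
    # higher accuracy is better; lower epsilon and lower gap are better.
    def dominates(a, b):
        no_worse = a[0] >= b[0] and a[1] <= b[1] and a[2] <= b[2]
        strictly = a[0] > b[0] or a[1] < b[1] or a[2] < b[2]
        return no_worse and strictly

    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical sweep results: (accuracy, epsilon, demographic-parity gap).
runs = [(0.91, 8.0, 0.12), (0.88, 3.0, 0.09),
        (0.85, 1.0, 0.05), (0.84, 3.0, 0.10)]
print(pareto_frontier(runs))  # the last run is dominated and drops out

Plotting the surviving triples yields the kind of Pareto frontier the paper advocates for exposing trade-offs explicitly, rather than silently fixing two objectives as hyper-parameters.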


Zafar

AAAI Conferences

Discourse on social media platforms is often plagued by acute polarization, with different camps promoting different perspectives on the issue at hand--compare, for example, the differences in the liberal and conservative discourse on the U.S. immigration debate. A large body of research has studied this phenomenon by focusing on the affiliation of groups and individuals. We propose a new, finer-grained perspective: studying the impartiality of individual messages. While the notion of message impartiality is quite intuitive, the lack of an objective definition and of a way to measure it directly has largely obstructed scientific examination. In this work we operationalize message impartiality in terms of how discernible the affiliation of its author is, and introduce a methodology for quantifying it automatically. Unlike a supervised machine learning approach, our method can be used in the context of emerging events where impartiality labels are not immediately available. Our framework enables us to study the effects of (im)partiality on social media discussions at scale. We show that this phenomenon is highly consequential, with partial messages being twice as likely to spread as impartial ones, even after controlling for author and topic. By taking this fine-grained approach to polarization, we also provide new insights into the temporal evolution of online discussions centered around major political and sporting events.
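
One simple way to make the discernibility idea concrete is sketched below in Python; note it is an assumed simplification, not the paper's method (which avoids relying on a standard supervised pipeline): an affiliation classifier is fit on labeled messages, and a message counts as more partial the further the classifier's out-of-fold prediction is from chance.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def partiality_scores(messages, affiliations):
    # Toy discernibility score: how confidently can a text classifier
    # guess the author's (binary) affiliation? `affiliations` is assumed
    # known, e.g. inferred from hashtags or follow networks.
    X = TfidfVectorizer().fit_transform(messages)
    # Out-of-fold probabilities, so each message is scored by a model
    # that never saw it during training.
    proba = cross_val_predict(LogisticRegression(max_iter=1000), X,
                              affiliations, cv=2, method="predict_proba")
    # 0 = classifier at chance (impartial wording); 1 = affiliation obvious.
    return 2 * np.abs(proba[:, 1] - 0.5)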


Council Post: How To Build Responsible AI, Step 2: Impartiality

#artificialintelligence

VP of Data & AI at ECS; previous roles include co-founder of a data analytics startup, VP of AI at Booz Allen and Global Analytics Lead at Accenture. As the influence of artificial intelligence grows, it is increasingly vital to design processes and systems that harness AI while counterbalancing its risks. Our charge is to eliminate bias, codify objectives and represent values. Responsible AI ensures alignment with our standards spanning data, algorithms, operations, technology and human-computer interaction. I am examining the importance of each of these elements in a series of articles.


Can artificial intelligence help reduce disparities in medical care?

#artificialintelligence

This piece accompanies the cover story, "Pandemic spurs paradigm shift in artificial intelligence." Any question about the utility of a tool is best answered by giving it a go. Try the tool, compare it with others, change the design to improve it. One might indeed be able to drive a nail with a rock, pistol or hoe, but it should not take long to figure out that a hammer, a blob of metal on a stick, is better suited to driving nails. Computers and software are tools.


The Vatican and Big Tech, Pentagon release overlapping AI commandments

#artificialintelligence

It is hard to know what it means when a global religious figure, two iconic technology giants and the Pentagon all find themselves on the same side of an argument. The U.S. Department of Defense issued five principles Feb. 24 for its own use of artificial intelligence, including biometric systems like facial recognition. Systems need to be responsible, equitable, traceable, governable and reliable. Four days later, at the end of a Vatican workshop examining artificial intelligence ethics and law, Pope Francis, Microsoft Corp., IBM Corp. and other invited organizations called for "new forms of regulation" and six principles that overlap with the Defense Department's list. The document, titled Rome Call for AI Ethics and backed by the Pope, says every stage and aspect of artificial intelligence must adhere to ideals of transparency, inclusion, responsibility, impartiality, reliability, security and privacy.


Morning Wire: filing week, I-940, artificial intelligence - Washington State Wire

#artificialintelligence

Sometimes our Morning Wire gets out in the afternoon… Today is one of those days! But, we have lots for you today, including independent reporting and commentary from Olympia, Seattle, and Longview. As always, you can sign up here for our Daily Wire email that is out M-F at 7:00 am. On Friday, Governor Inslee announced his five-year plan to transform the state's mental health system. Beginning in 2023, Western and Eastern State Hospitals will become forensic-only facilities, housing patients who enter the mental health system through the courts.


A new AI "journalist" is rewriting the news to remove bias

#artificialintelligence

Want your news delivered with the icy indifference of a literal robot? You might want to bookmark the newly launched site Knowhere News. Knowhere is a startup that combines machine learning technologies and human journalists to deliver the facts on popular news stories. First, the site's artificial intelligence (AI) chooses a story based on what's popular on the internet right now. Once it picks a topic, it looks at more than a thousand news sources to gather details.


Ranking Wily People Who Rank Each Other

Kahng, Anson (Carnegie Mellon University) | Kotturi, Yasmine (Carnegie Mellon University) | Kulkarni, Chinmay (Carnegie Mellon University) | Kurokawa, David (Carnegie Mellon University) | Procaccia, Ariel D. (Carnegie Mellon University)

AAAI Conferences

We study rank aggregation algorithms that take as input the opinions of players over their peers, represented as rankings, and output a social ordering of the players (which reflects, e.g., relative contribution to a project or fit for a job). To prevent strategic behavior, these algorithms must be impartial, i.e., players should not be able to influence their own position in the output ranking. We design several randomized algorithms that are impartial and closely emulate given (non-impartial) rank aggregation rules in a rigorous sense. Experimental results further support the efficacy and practicability of our algorithms.
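
The flavor of such algorithms can be sketched in Python as follows; this is a bipartite construction in the same spirit, with assumed names and a Borda sub-rule, not necessarily one of the paper's mechanisms.

import random

def impartial_ranking(players, ballots):
    # Sketch of a randomized bipartite construction: split the players
    # into two random halves, order each half by Borda scores computed
    # from the *other* half's ballots only, then merge with a fixed
    # alternation. `ballots` maps each player to their ranking (best
    # first) of all other players.
    shuffled = list(players)
    random.shuffle(shuffled)
    mid = len(shuffled) // 2
    half_a, half_b = shuffled[:mid], shuffled[mid:]

    def order(candidates, voters):
        # Borda-style scores restricted to cross-group ballots.
        score = {c: 0 for c in candidates}
        for v in voters:
            ballot = [p for p in ballots[v] if p in score]
            for rank, p in enumerate(ballot):
                score[p] += len(ballot) - rank
        return sorted(candidates, key=lambda c: -score[c])

    ordered_a = order(half_a, half_b)
    ordered_b = order(half_b, half_a)

    # Fixed alternating merge: final positions depend only on each
    # player's within-half rank, which their own ballot cannot affect.
    merged = []
    for a, b in zip(ordered_a, ordered_b):
        merged.extend([a, b])
    merged.extend(ordered_a[len(ordered_b):] or ordered_b[len(ordered_a):])
    return merged

Because each half is ordered using only the other half's ballots and the merge pattern is fixed, no player's ballot can move their own slot; the cost is some fidelity to the underlying (non-impartial) Borda order, which is exactly the emulation gap the paper's algorithms aim to keep small.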