Survey XII: What Is the Future of Ethical AI Design? – Imagining the Internet


Results released June 16, 2021 – Pew Research Center and Elon University's Imagining the Internet Center asked experts where they thought efforts aimed at ethical artificial intelligence design would stand in the year 2030. Some 602 technology innovators, developers, business and policy leaders, researchers and activists responded to this specific question.

The Question – Regarding the application of AI Ethics by 2030: In recent years, there have been scores of convenings and even more papers generated proposing ethical frameworks for the application of artificial intelligence (AI). They cover a host of issues including transparency, justice and fairness, privacy, freedom and human autonomy, beneficence and non-maleficence, trust, sustainability and dignity. Our questions here seek your predictions about the possibilities for such efforts. By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public ...

Nicholas Goldberg: I oppose the gubernatorial recall. Does that make me a hypocrite?

Los Angeles Times

When I wrote recently that California's recall election process was terribly flawed and in need of serious reform, the angry messages came flowing in, calling me a hypocrite. The writers didn't believe for a second that I objected to the recall on principle -- they assumed that as a loyal Democrat, I was just shilling for Gov. Gavin Newsom. "You're in his pocket," said one dismissive tweet. Would I still be vehemently opposed to the recall if, instead of being used against a Democratic governor, it was targeting a Trump-supporting right-wing governor -- someone who, say, was unleashing the fossil fuel industry, hoping to do away with the minimum wage and fighting mask and vaccine mandates? Would I still feel the recall was a troubling, badly structured, overused, undemocratic tool that should be reformed or abolished?

AI Could Solve Partisan Gerrymandering, if Humans Can Agree on What's Fair - AI Trends


With the 2020 US Census results having been delivered to the states, the process now begins for using the population results to draw new Congressional districts. Gerrymandering, a practice intended to establish a political advantage by manipulating the boundaries of electoral districts, is expected to be practiced on a wide scale, with Democrats holding a slight margin of seats in the House of Representatives and Republicans seeking to close the gap in states where they hold a majority in the legislature. Today, more powerful redistricting software incorporating AI and machine learning is available, and it represents a double-edged sword. The pessimistic view is that the gerrymandering software will enable legislators to gerrymander with more precision than ever before, to ensure maximum advantages. This was called "political laser surgery" by David Thornburgh, president of the Committee of Seventy, an anti-corruption organization that considers the 2010 redistricting one of the worst in the country's history, according to an account in the Columbia Political Review.
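The debate over "what's fair" often comes down to a concrete metric. One widely cited measure (not named in the excerpt above, added here for illustration) is the efficiency gap, which compares how many votes each party "wastes" under a given district map: all votes cast for losing candidates, plus votes for winners beyond the simple-majority threshold. A minimal sketch:

```python
def efficiency_gap(districts):
    """Compute the efficiency gap for a two-party district map.

    districts: list of (votes_a, votes_b) tuples, one per district.
    Returns (wasted_a - wasted_b) / total votes. Values near 0 suggest
    a balanced map; values far from 0 suggest one party's votes are
    being systematically wasted (packed or cracked).
    """
    wasted_a = wasted_b = total = 0
    for votes_a, votes_b in districts:
        n = votes_a + votes_b
        need = n // 2 + 1  # simple-majority threshold
        if votes_a > votes_b:
            wasted_a += votes_a - need  # surplus beyond what was needed
            wasted_b += votes_b         # all losing votes are wasted
        else:
            wasted_b += votes_b - need
            wasted_a += votes_a
        total += n
    return (wasted_a - wasted_b) / total
```

A perfectly symmetric map yields a gap of zero, while a classic pack-and-crack map produces a large imbalance; redistricting software could use such a score as one optimization target among many.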

Leveraging AI and Big Data for Psychometric Profiling


Perhaps the biggest beneficiary of big data is the field of AI. Together, these two technologies can take psychometric profiling to the next level. Studying the impact of AI and big data in psychometrics is crucial to making future improvements in the field. The sheer number of areas in which psychometric evaluation can make a difference is truly mind-boggling. From assessing job-seeking candidates during recruitment to contesting a nationwide election, from marketing to law enforcement, psychometric assessments play a big part in understanding the pulse of a large crowd or the defining character traits of an individual.

Knowledge Graphs: Powerful Structures Making Sense Of Data - AI Summary


And in both cases, the end goal of their knowledge graphs is similar--to add value to the vast amount of data out there such that it can be utilised more meaningfully and intelligently in a real-world context, ultimately producing much smarter user experiences. "The need to fit products into tabular structures limits their ability to flex to real-world needs," Capco noted in its June 2020 publication "Knowledge Graphs: Building Smarter Financial Services". And by enabling linkages between data items that would have otherwise remained disparate and siloed off from each other, moreover, knowledge graphs could represent crucial technology for helping to solve some of the world's most pressing and complex data-related challenges. The singular, centralised nature of such control can also elicit many serious privacy concerns for users, as was the case with Facebook and its notorious data-harvesting activities with Cambridge Analytica prior to the 2016 US presidential election. The knowledge graph also allows supply-chain entities to "granularly define who has access to what data--i.e., data can be made fully public, shared with specific supply chain partners, or completely private".
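The two ideas in the excerpt above can be made concrete: a knowledge graph links otherwise siloed data items as subject-predicate-object triples, and each triple can carry its own access policy (public, shared with named partners, or private). A minimal sketch, with illustrative names only:

```python
class KnowledgeGraph:
    """Toy triple store with per-triple access control."""

    def __init__(self):
        # Each entry: (subject, predicate, object, allowed)
        # where allowed is "public" or a set of viewer names.
        self.triples = []

    def add(self, subject, predicate, obj, allowed="public"):
        self.triples.append((subject, predicate, obj, allowed))

    def visible_to(self, viewer, subject=None):
        """Return the triples this viewer may see, optionally filtered by subject."""
        return [
            (s, p, o)
            for s, p, o, allowed in self.triples
            if (allowed == "public" or viewer in allowed)
            and (subject is None or s == subject)
        ]


kg = KnowledgeGraph()
kg.add("Batch42", "producedBy", "FarmA")                       # fully public
kg.add("Batch42", "unitCost", "3.10", allowed={"RetailerB"})   # shared with one partner
kg.add("Batch42", "margin", "0.40", allowed=set())             # completely private
```

Here a query such as `kg.visible_to("RetailerB", subject="Batch42")` returns the public provenance fact plus the cost shared with that partner, while any other viewer sees only the public triple, mirroring the "fully public / shared with specific partners / completely private" distinction quoted above.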

Big Tech is fueling an AI "arms race": It could be terrifying -- or just a giant scam


Early in the 2020 presidential campaign, Democratic candidates Pete Buttigieg and Andrew Yang tried to build political momentum around the claim that the United States is losing ground in a new arms race with China -- not over nuclear missiles or conventional arms but artificial intelligence, or AI. Around the same time, then-President Trump launched the American AI Initiative, which sought to marshal AI technologies against "adversarial nations for the security of our economy and our nation," as Trump's top technology adviser put it. Buttigieg, Yang and Trump may have agreed about little else, but they appeared to go along with the nonpartisan think tanks and public policy organizations –– many of them funded by weapons contractors –– that have worked to promote the supposedly alarming possibility that China and Russia may be "beating" the U.S. in defense applications for AI. Hawkish or "centrist" research organizations like the Center for New American Security (CNAS), the Brookings Institution and the Heritage Foundation, despite their policy and ideological differences in many areas, have argued that America must ratchet up spending on AI research and development, lest it lose its place as No. 1. Just last week, the National Security Commission on Artificial Intelligence (NSCAI) published a sweeping 756-page report, the culmination of two years of work following the 2019 National Defense Authorization Act, asking Congress to authorize a $40 billion federal investment in AI research and development, which the NSCAI calls "a modest down payment."

The Contestation of Tech Ethics: A Sociotechnical Approach to Ethics and Technology in Action Artificial Intelligence

Recent controversies related to topics such as fake news, privacy, and algorithmic bias have prompted increased public scrutiny of digital technologies and soul-searching among many of the people associated with their development. In response, the tech industry, academia, civil society, and governments have rapidly increased their attention to "ethics" in the design and use of digital technologies ("tech ethics"). Yet almost as quickly as ethics discourse has proliferated across the world of digital technologies, the limitations of these approaches have also become apparent: tech ethics is vague and toothless, is subsumed into corporate logics and incentives, and has a myopic focus on individual engineers and technology design rather than on the structures and cultures of technology production. As a result of these limitations, many have grown skeptical of tech ethics and its proponents, charging them with "ethics-washing": promoting ethics research and discourse to defuse criticism and government regulation without committing to ethical behavior. By looking at how ethics has been taken up in both science and business in superficial and depoliticizing ways, I recast tech ethics as a terrain of contestation where the central fault line is not whether it is desirable to be ethical, but what "ethics" entails and who gets to define it. This framing highlights the significant limits of current approaches to tech ethics and the importance of studying the formulation and real-world effects of tech ethics. In order to identify and develop more rigorous strategies for reforming digital technologies and the social relations that they mediate, I describe a sociotechnical approach to tech ethics, one that reflexively applies many of tech ethics' own lessons regarding digital technologies to tech ethics itself.

Correcting public opinion trends through Bayesian data assimilation Artificial Intelligence

Measuring public opinion is a key focus during democratic elections, enabling candidates to gauge their popularity and alter their campaign strategies accordingly. Traditional survey polling remains the most popular estimation technique, despite its cost and time intensity, measurement errors, lack of real-time capabilities and lagged representation of public opinion. In recent years, Twitter opinion mining has attempted to combat these issues. Despite achieving promising results, it experiences its own set of shortcomings such as an unrepresentative sample population and a lack of long term stability. This paper aims to merge data from both these techniques using Bayesian data assimilation to arrive at a more accurate estimate of true public opinion for the Brexit referendum. This paper demonstrates the effectiveness of the proposed approach using Twitter opinion data and survey data from trusted pollsters. Firstly, the possible existence of a time gap of 16 days between the two data sets is identified. This gap is subsequently incorporated into a proposed assimilation architecture. This method was found to adequately incorporate information from both sources and measure a strong upward trend in Leave support leading up to the Brexit referendum. The proposed technique provides useful estimates of true opinion, which is essential to future opinion measurement and forecasting research.
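The assimilation idea in the abstract above can be sketched with a one-dimensional Kalman filter: treat true public opinion as a slowly drifting latent state, shift the Twitter series by the identified 16-day gap, and update the state with whichever source reports on a given day, weighting each by its assumed noise. This is an illustrative sketch, not the paper's actual architecture; the variances, the drift model, and the assumption that the Twitter signal leads the polls are all placeholders.

```python
import numpy as np

def assimilate(poll_obs, twitter_obs, lag=16,
               poll_var=4.0, twitter_var=9.0, process_var=0.5):
    """Fuse two noisy daily opinion series (e.g. % Leave) into one estimate.

    poll_obs, twitter_obs: float arrays, np.nan where no data that day.
    lag: days by which the Twitter signal is assumed to lead the polls;
         the Twitter series is shifted back by this amount before fusion.
    """
    poll_obs = np.asarray(poll_obs, dtype=float)
    twitter_obs = np.asarray(twitter_obs, dtype=float)

    # Align the Twitter series with the polls by removing the assumed lag.
    twitter_aligned = np.full_like(twitter_obs, np.nan)
    twitter_aligned[lag:] = twitter_obs[:-lag] if lag else twitter_obs

    n = len(poll_obs)
    est = np.empty(n)
    x, p = 50.0, 100.0  # initial state (percent) and its variance
    for t in range(n):
        p += process_var  # predict: opinion drifts a little each day
        for obs, var in ((poll_obs[t], poll_var),
                         (twitter_aligned[t], twitter_var)):
            if not np.isnan(obs):  # update with each source available today
                k = p / (p + var)       # Kalman gain
                x += k * (obs - x)
                p *= (1 - k)
        est[t] = x
    return est
```

Because polls are sparse and Twitter is daily, the filter naturally leans on Twitter between polls and snaps back toward each poll when one arrives, which is the qualitative behavior the paper's assimilation aims for.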

AI's Future Doesn't Have to Be Dystopian


The direction of AI development is not preordained. It can be altered to increase human productivity, create jobs and shared prosperity, and protect and bolster democratic freedoms--if we modify our approach. Artificial Intelligence (AI) is not likely to make humans redundant. Nor will it create superintelligence anytime soon. But like it or not, AI technologies and intelligent systems will make huge advances in the next two decades--revolutionizing medicine, entertainment, and transport; transforming jobs and markets; enabling many new products and tools; and vastly increasing the amount of information that governments and companies have about individuals. Should we cherish and look forward to these developments, or fear them? There are reasons to be concerned. Current AI research is too narrowly focused on making advances in a limited set of domains and pays insufficient attention to its disruptive effects on the very fabric of society. If AI technology continues to develop along its current path, it is likely to create social upheaval for at least two reasons. For one, AI will affect the future of jobs. Our current trajectory automates work to an excessive degree while refusing to invest in human productivity; further advances will displace workers and fail to create new opportunities (and, in the process, miss out on AI's full potential to enhance productivity). For another, AI may undermine democracy and individual freedoms. Each of these directions is alarming, and the two together are ominous. Shared prosperity and democratic political participation do not just critically reinforce each other: they are the two backbones of our modern society.

The State of AI Ethics Report (January 2021) Artificial Intelligence

The 3rd edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in AI Ethics since October 2020. It aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the field's ever-changing developments. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, including: algorithmic injustice, discrimination, ethical AI, labor impacts, misinformation, privacy, risk and security, social media, and more. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. Unique to this report is "The Abuse and Misogynoir Playbook," written by Dr. Katlyn Turner (Research Scientist, Space Enabled Research Group, MIT), Dr. Danielle Wood (Assistant Professor, Program in Media Arts and Sciences; Assistant Professor, Aeronautics and Astronautics; Lead, Space Enabled Research Group, MIT) and Dr. Catherine D'Ignazio (Assistant Professor, Urban Science and Planning; Director, Data + Feminism Lab, MIT). The piece (and accompanying infographic) is a deep dive into the historical and systematic silencing, erasure, and revision of Black women's contributions to knowledge and scholarship in the United States, and globally. Exposing and countering this Playbook has become increasingly important following the firing of AI Ethics expert Dr. Timnit Gebru (and several of her supporters) at Google. This report should serve not only as a point of reference and insight on the latest thinking in the field of AI Ethics, but also as a tool for introspection as we aim to foster a more nuanced conversation regarding the impacts of AI on the world.