Machine Learning, Ethics, and Open Source Licensing (Part I/II)

#artificialintelligence

The unprecedented interest, investment, and deployment of machine learning across many aspects of our lives in the past decade has come with a cost. Although there has been some movement towards moderating machine learning where it has been genuinely harmful, it's becoming increasingly clear that existing approaches suffer significant shortcomings. Nevertheless, there still exist new directions that hold potential for meaningfully addressing the harms of machine learning. In particular, new approaches to licensing the code and models that underlie these systems have the potential to create a meaningful impact on how they affect our world. This is Part I of a two-part essay.


Attack of the drones: the mystery of disappearing swarms in the US midwest

The Guardian

At twilight on New Year's Eve, 2020, Placido Montoya, 35, a plumber from Fort Morgan, Colorado, was driving to work. Ahead of him he noticed blinking lights in the sky. He'd heard rumours of mysterious drones, whispers in his local community, but now he was seeing them with his own eyes. In the early morning gloom, it was hard to make out how big the lights were and how many were hovering above him. But one thing was clear to Montoya: he needed to give chase.


Artificial Intelligence And Online Privacy: Blessing And A Curse - Liwaiwai

#artificialintelligence

Artificial Intelligence (AI) is a beautiful piece of technology made to seamlessly augment our everyday experience. It is widely utilized in everything from marketing to traffic light moderation in cities like Pittsburgh. However, every sword has two edges, and AI is no different: such technological advancements bring a fair number of downsides along with the upsides. One way or another, the technology is moving quickly while, for the vast majority of the population, education about the risks and the safeguards in place is falling behind. The whole situation is as much of a blessing for humankind as it is a curse.


The AI Index 2021 Annual Report

arXiv.org Artificial Intelligence

Welcome to the fourth edition of the AI Index Report. This year we significantly expanded the amount of data available in the report, worked with a broader set of external organizations to calibrate our data, and deepened our connections with the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Its mission is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. The report aims to be the most credible and authoritative source for data and insights about AI in the world.


A dystopian robo-dog now patrols New York City. That's the last thing we need

#artificialintelligence

The New York police department has acquired a robotic police dog, known as Digidog, and has deployed it on the streets of Brooklyn, Queens and, most recently, the Bronx. At a time when activists in New York and beyond are calling for the defunding of police departments – for the sake of funding more vital services that address the root causes of crime and poverty – the NYPD's decision to pour money into a robot dog seems tone-deaf if not an outright provocation. As Congresswoman Alexandria Ocasio-Cortez, who represents parts of Queens and the Bronx, put it on Twitter: "Shout out to everyone who fought against community advocates who demanded these resources go to investments like school counseling instead. Now robotic surveillance ground drones are being deployed for testing on low-income communities of color with underresourced schools." There is more than enough evidence that law enforcement is lethally racially biased, and adding an intimidating non-human layer to it seems cruel.


Patterns, predictions, and actions: A story about machine learning

arXiv.org Machine Learning

This graduate textbook on machine learning tells a story of how patterns in data support predictions and consequential actions. Starting with the foundations of decision making, we cover representation, optimization, and generalization as the constituents of supervised learning. A chapter on datasets as benchmarks examines their histories and scientific bases. Self-contained introductions to causality, the practice of causal inference, sequential decision making, and reinforcement learning equip the reader with concepts and tools to reason about actions and their consequences. Throughout, the text discusses historical context and societal impact. We invite readers from all backgrounds; some experience with probability, calculus, and linear algebra suffices.


Outlining Traceability: A Principle for Operationalizing Accountability in Computing Systems

arXiv.org Artificial Intelligence

Accountability is widely understood as a goal for well governed computer systems, and is a sought-after value in many governance contexts. But how can it be achieved? Recent work on standards for governable artificial intelligence systems offers a related principle: traceability. Traceability requires establishing not only how a system worked but how it was created and for what purpose, in a way that explains why a system has particular dynamics or behaviors. It connects records of how the system was constructed and what the system did mechanically to the broader goals of governance, in a way that highlights human understanding of that mechanical operation and the decision processes underlying it. We examine the various ways in which the principle of traceability has been articulated in AI principles and other policy documents from around the world, distill from these a set of requirements on software systems driven by the principle, and systematize the technologies available to meet those requirements. From our map of requirements to supporting tools, techniques, and procedures, we identify gaps and needs separating what traceability requires from the toolbox available for practitioners. This map reframes existing discussions around accountability and transparency, using the principle of traceability to show how, when, and why transparency can be deployed to serve accountability goals and thereby improve the normative fidelity of systems and their development processes.


A Distributional Approach to Controlled Text Generation

arXiv.org Artificial Intelligence

We propose a Distributional Approach to address Controlled Text Generation from pre-trained Language Models (LMs). This view makes it possible to define, in a single formal framework, "pointwise" and "distributional" constraints over the target LM -- to our knowledge, this is the first approach with such generality -- while minimizing KL divergence from the initial LM distribution. The optimal target distribution is then uniquely determined as an explicit EBM (Energy-Based Model) representation. From that optimal representation we then train the target controlled autoregressive LM through an adaptive distributional variant of Policy Gradient. We conduct a first set of experiments over pointwise constraints showing the advantages of our approach over a set of baselines, in terms of obtaining a controlled LM balancing constraint satisfaction with divergence from the initial LM (GPT-2). We then perform experiments over distributional constraints, a unique feature of our approach, demonstrating its potential as a remedy to the problem of bias in language models. Through an ablation study we show the effectiveness of our adaptive technique for obtaining faster convergence.
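The constrained-optimization view sketched in the abstract can be written down compactly. A minimal sketch, using standard maximum-entropy notation of our own choosing (a base LM a, constraint features phi_i with target moments mu_i; not the paper's exact notation): minimizing KL divergence from the initial LM subject to moment constraints yields an exponential-family (EBM) tilt of the base model.

```latex
% Find the target distribution p closest (in KL) to the base LM a,
% subject to constraint features \phi_i matching target moments \bar\mu_i:
\min_{p}\; D_{\mathrm{KL}}\!\left(p \,\|\, a\right)
\quad \text{s.t.} \quad
\mathbb{E}_{x \sim p}\!\left[\phi_i(x)\right] = \bar\mu_i \;\; \forall i.

% The solution is an explicit Energy-Based Model: the base LM
% reweighted by an exponential of the constraint features,
p(x) \;=\; \frac{1}{Z}\, a(x)\, \exp\!\Big(\sum_i \lambda_i\, \phi_i(x)\Big),
\qquad
Z \;=\; \sum_{x} a(x)\, \exp\!\Big(\sum_i \lambda_i\, \phi_i(x)\Big),

% where the multipliers \lambda_i are fit so the moment constraints hold.
% For a pointwise (binary, must-always-hold) constraint \phi, this
% reduces to filtering: p(x) \propto a(x)\,\phi(x).
```

Since this unnormalized EBM cannot be sampled from directly, the paper then distills it into an autoregressive LM via an adaptive policy-gradient variant.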


Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges

arXiv.org Artificial Intelligence

As we make tremendous advances in machine learning and artificial intelligence technosciences, there is a renewed understanding in the AI community that we must ensure that human beings are at the center of our deliberations so that we don't end up in technology-induced dystopias. As strongly argued by Green in his book Smart Enough City, the incorporation of technology in city environs does not automatically translate into prosperity, wellbeing, urban livability, or social justice. There is a great need to deliberate on the future of cities worth living in and worth designing. There are philosophical and ethical questions involved, along with various challenges that relate to the security, safety, and interpretability of AI algorithms that will form the technological bedrock of future cities. Several research institutes on human-centered AI have been established at top international universities. Globally there are calls for technology to be made more humane and human-compatible. For example, Stuart Russell has written a book called Human Compatible. The Center for Humane Technology advocates for regulators and technology companies to avoid business models and product features that contribute to social problems such as extremism, polarization, misinformation, and Internet addiction. In this paper, we analyze and explore key challenges, including security, robustness, interpretability, and ethical challenges, to a successful deployment of AI or ML in human-centric applications, with a particular emphasis on the convergence of these challenges. We provide a detailed review of existing literature on these key challenges and analyze how one of these challenges may lead to others or help in solving other challenges. The paper also advises on the current limitations, pitfalls, and future directions of research in these domains, and how it can fill the current gaps and lead to better solutions.


Why addressing bias in AI algorithms matters (Includes interview)

#artificialintelligence

To gain insight into these and other essential 2021 trends for businesses, Digital Journal caught up with Robert Prigge, CEO of Jumio. Addressing bias in AI algorithms will be a top priority, prompting guidelines to be rolled out for how machine learning handles ethnicity in facial recognition. Prigge explains: "Enterprises are becoming increasingly concerned about demographic bias in AI algorithms (race, age, gender) and its effect on their brand and potential to raise legal issues. Evaluating how vendors address demographic bias will become a top priority when selecting identity proofing solutions in 2021." Prigge adds: "According to Gartner, more than 95 percent of RFPs for document-centric identity proofing (comparing a government-issued ID to a selfie) will contain clear requirements regarding minimizing demographic bias by 2022, an increase from fewer than 15 percent today. Organizations will increasingly need to have clear answers to organizations who want to know how a vendor's AI "black box" was built, where the data originated from and how representative the training data is to the broader population being served."