AI can help trace language to violence

#artificialintelligence

Every day, journalists and political actors alike use militaristic and violent metaphors to communicate and mobilize action. These word choices may seem effective, yet metaphors imbued with violent imagery can be dangerous. From a policy standpoint, they are also ineffective and potentially harmful. One example is how the global "war on drugs" terminology victimized, stigmatized, and misplaced blame. And as others have noted, as with any war, there are always civil rights abuses.


The pandemic and gender inequality: How AI is helping companies hire women

#artificialintelligence

Before COVID hit, women in the U.S. had made significant progress towards overcoming gender inequality. Representation was on the rise in male-dominated industries, and women outnumbered men in the workforce for the first time since 2010. Unfortunately, 2020 would undo that short-lived victory. In December, the U.S. economy lost around 140,000 jobs -- all of which belonged to women. Beyond the U.S., women accounted for 54% of job losses worldwide, even though they made up only 39% of the global workforce. Before women suffer even greater setbacks, we have to pick up the pace in achieving workplace equality and inclusivity.


The AI Index 2021 Annual Report

arXiv.org Artificial Intelligence

Welcome to the fourth edition of the AI Index Report. This year we significantly expanded the amount of data available in the report, worked with a broader set of external organizations to calibrate our data, and deepened our connections with the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Its mission is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. The report aims to be the most credible and authoritative source for data and insights about AI in the world.


Why Talking about ethics is not enough: a proposal for Fintech's AI ethics

arXiv.org Artificial Intelligence

As the potential applications of Artificial Intelligence (AI) in the financial sector increase, ethical issues become increasingly salient. Distrust among individuals, social groups, and governments about the risks arising from Fintechs' activities is growing. In this scenario, recommendations and ethics guidelines are proliferating, and there is a high risk that companies will adopt only the principles and ethical values most convenient to them. This exploratory research therefore analyses the benefits of applying stakeholder theory and the idea of a Social License to build an environment of trust and to realize ethical principles in Fintech. Forming a Fintech association to create a Social License would allow early-stage Fintechs to participate, from the beginning of their activities, in the elaboration of a dynamic code of ethics together with their stakeholders.


Patterns, predictions, and actions: A story about machine learning

arXiv.org Machine Learning

This graduate textbook on machine learning tells a story of how patterns in data support predictions and consequential actions. Starting with the foundations of decision making, we cover representation, optimization, and generalization as the constituents of supervised learning. A chapter on datasets as benchmarks examines their histories and scientific bases. Self-contained introductions to causality, the practice of causal inference, sequential decision making, and reinforcement learning equip the reader with concepts and tools to reason about actions and their consequences. Throughout, the text discusses historical context and societal impact. We invite readers from all backgrounds; some experience with probability, calculus, and linear algebra suffices.


Variational Bayes survival analysis for unemployment modelling

arXiv.org Artificial Intelligence

Mathematical modelling of unemployment dynamics attempts to predict the probability of a job seeker finding a job as a function of time, typically using the information in unemployment records. Because these records are right censored, survival analysis is a suitable approach for parameter estimation. The proposed model uses a deep artificial neural network (ANN) as a non-linear hazard function, and embeddings allow high-cardinality categorical features to be analysed efficiently. The posterior distribution of the ANN parameters is estimated using a variational Bayes method. The model is evaluated on a time-to-employment data set spanning 2011 to 2020 provided by the Slovenian public employment service, and is used to determine the employment probability over time for each individual on record. Similar models could be applied to other questions involving multi-dimensional, high-cardinality categorical data with censored records; such data is often encountered in personal records, for example medical records.
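
To make the ingredients concrete, here is a minimal sketch in PyTorch of a discrete-time neural hazard model with an embedding for one high-cardinality categorical feature and a right-censored likelihood. The class and feature names are illustrative, and plain maximum likelihood stands in for the paper's variational Bayes estimation of the parameter posterior.

```python
# Minimal sketch: neural hazard function with an embedding and a
# right-censored discrete-time survival likelihood. Illustrative only.
import torch
import torch.nn as nn

class HazardNet(nn.Module):
    def __init__(self, n_categories: int, embed_dim: int = 8,
                 n_numeric: int = 3, hidden: int = 32):
        super().__init__()
        # Embedding compresses a high-cardinality categorical feature.
        self.embed = nn.Embedding(n_categories, embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim + n_numeric + 1, hidden),  # +1 for the time index
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, cat_id, numeric, t):
        # Hazard h(t | x): probability of finding a job in period t,
        # given unemployment through period t - 1.
        z = torch.cat([self.embed(cat_id), numeric, t.unsqueeze(-1)], dim=-1)
        return torch.sigmoid(self.mlp(z)).squeeze(-1)

def censored_nll(model, cat_ids, numeric, durations, observed):
    """Negative log-likelihood with right censoring: each record
    contributes log(1 - h) for every period survived and, if the event
    (employment) was observed, log(h) at the final period."""
    loss = torch.zeros(())
    for i in range(len(durations)):
        T = int(durations[i])
        t = torch.arange(1, T + 1, dtype=torch.float32)
        h = model(cat_ids[i].expand(T), numeric[i].expand(T, -1), t)
        loss = loss - torch.log1p(-h[:-1]).sum()
        loss = loss - (torch.log(h[-1]) if observed[i] else torch.log1p(-h[-1]))
    return loss
```

A variational Bayes treatment, as in the paper, would replace the point estimate of the network weights with an approximate posterior (for example, a factorized Gaussian trained with the reparameterization trick); the censored likelihood above is the part specific to survival analysis.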


Beyond traditional assumptions in fair machine learning

arXiv.org Artificial Intelligence

Research on fair machine learning traditionally rests on several simplifying assumptions: that fairness can be defined from observed statistics alone, that sensitive data is readily available, and that outcome labels are always recorded. After challenging the validity of these assumptions in real-world applications, we propose ways to move forward when they are violated. First, we show that group fairness criteria purely based on statistical properties of observed data are fundamentally limited (a minimal example of such a criterion is sketched below). Revisiting this limitation from a causal viewpoint, we develop a more versatile conceptual framework, causal fairness criteria, and the first algorithms to achieve them. We also provide tools to analyze how sensitive a believed-to-be causally fair algorithm is to misspecifications of the causal graph. Second, we overcome the assumption that sensitive data is readily available in practice. To this end, we devise protocols based on secure multi-party computation to train, validate, and contest fair decision algorithms without requiring users to disclose their sensitive data or decision makers to disclose their models. Finally, we accommodate the fact that outcome labels are often observed only when a certain decision has been made. To relax the traditional assumption that labels can always be recorded, we suggest a paradigm shift away from training predictive models and towards directly learning decisions. The main contribution of this thesis is the development of theoretically substantiated and practically feasible methods to move research on fair machine learning closer to real-world applications.
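
To make "purely based on statistical properties of observed data" concrete, the sketch below (Python with NumPy; array names are hypothetical) computes demographic parity, a common group fairness criterion of exactly this observational kind:

```python
# Demographic parity: a group fairness criterion computed purely from
# observed predictions and a sensitive attribute, with no model of how
# the data came about. Array names are hypothetical.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, sensitive: np.ndarray) -> float:
    """Estimate |P(Yhat = 1 | A = 0) - P(Yhat = 1 | A = 1)| from data."""
    rate_a0 = y_pred[sensitive == 0].mean()
    rate_a1 = y_pred[sensitive == 1].mean()
    return float(abs(rate_a0 - rate_a1))
```

Because two data-generating processes with very different causal structure can yield the same gap, a criterion like this cannot distinguish discrimination from legitimate effects, which is the limitation that motivates the causal fairness criteria.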


Re-imagining Algorithmic Fairness in India and Beyond

arXiv.org Artificial Intelligence

Conventional algorithmic fairness is West-centric, as seen in its sub-groups, values, and methods. In this paper, we de-center algorithmic fairness and analyse AI power in India. Based on 36 qualitative interviews and a discourse analysis of algorithmic deployments in India, we find that several assumptions of algorithmic fairness are challenged. We find that in India, data is not always reliable due to socio-economic factors, ML makers appear to follow double standards, and AI evokes unquestioning aspiration. We contend that localising model fairness alone can be window dressing in India, where the distance between models and oppressed communities is large. Instead, we re-imagine algorithmic fairness in India and provide a roadmap to re-contextualise data and models, empower oppressed communities, and enable Fair-ML ecosystems.


A Distributional Approach to Controlled Text Generation

arXiv.org Artificial Intelligence

We propose a Distributional Approach to the problem of Controlled Text Generation from pre-trained Language Models (LMs). This view permits defining, in a single formal framework, both "pointwise" and "distributional" constraints over the target LM -- to our knowledge, this is the first approach with such generality -- while minimizing KL divergence from the initial LM distribution. The optimal target distribution is then uniquely determined as an explicit EBM (Energy-Based Model) representation. From that optimal representation, we train the target controlled autoregressive LM through an adaptive distributional variant of Policy Gradient. A first set of experiments over pointwise constraints shows the advantages of our approach over a set of baselines at obtaining a controlled LM that balances constraint satisfaction against divergence from the initial LM (GPT-2). We then experiment with distributional constraints, a unique feature of our approach, demonstrating its potential as a remedy to the problem of bias in language models. Finally, an ablation study shows the effectiveness of our adaptive technique in obtaining faster convergence.
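
In symbols (notation assumed here, since the abstract does not spell it out), the optimal target distribution described above is an exponential-family tilt of the initial LM:

```latex
% a(x): initial LM (e.g. GPT-2); phi(x): vector of constraint features;
% lambda: coefficients chosen so that the constraints hold under p.
p(x) = \frac{1}{Z}\, a(x)\, \exp\big(\lambda \cdot \phi(x)\big),
\qquad Z = \sum_{x} a(x)\, \exp\big(\lambda \cdot \phi(x)\big)
```

Pointwise constraints correspond to a binary feature phi_i(x) that must hold with probability one under p, while distributional constraints fix expectations E_p[phi_i(x)] to target moments; among all distributions satisfying the constraints, this p is the one minimizing KL(p || a), and the adaptive policy-gradient step then distills p into an autoregressive LM.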


Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges

arXiv.org Artificial Intelligence

As we make tremendous advances in machine learning and artificial intelligence technosciences, there is a renewed understanding in the AI community that we must ensure that human beings are at the center of our deliberations, so that we do not end up in technology-induced dystopias. As Ben Green strongly argues in his book The Smart Enough City, the incorporation of technology in city environs does not automatically translate into prosperity, wellbeing, urban livability, or social justice. There is a great need to deliberate on the future of cities worth living in and worth designing. Philosophical and ethical questions are involved, along with various challenges relating to the security, safety, and interpretability of the AI algorithms that will form the technological bedrock of future cities. Several research institutes on human-centered AI have been established at top international universities, and there are global calls for technology to be made more humane and human-compatible: Stuart Russell, for example, has written a book called Human Compatible, and the Center for Humane Technology advocates for regulators and technology companies to avoid business models and product features that contribute to social problems such as extremism, polarization, misinformation, and Internet addiction. In this paper, we analyze and explore key challenges, including security, robustness, interpretability, and ethical challenges, to a successful deployment of AI or ML in human-centric applications, with a particular emphasis on the convergence of these challenges. We provide a detailed review of the existing literature on these key challenges and analyze how one of them may lead to others or help in solving them. The paper also discusses the current limitations, pitfalls, and future directions of research in these domains, and how such research can fill current gaps and lead to better solutions.