End-of-life decisions are difficult and distressing. Could AI help?

MIT Technology Review

End-of-life decisions can be extremely upsetting for surrogates, the people who have to make those calls on behalf of another person, says David Wendler, a bioethicist at the US National Institutes of Health. Wendler and his colleagues have been working on an idea for something that could make things easier: an artificial-intelligence-based tool that can help surrogates predict what patients themselves would want in any given situation. The tool hasn't been built yet. But Wendler plans to train it on a person's own medical data, personal messages, and social media posts. He hopes it could not only be more accurate at working out what the patient would want, but also alleviate the stress and emotional burden of difficult decision-making for family members.


Everything, Everywhere All in One Evaluation: Using Multiverse Analysis to Evaluate the Influence of Model Design Decisions on Algorithmic Fairness

Simson, Jan, Pfisterer, Florian, Kern, Christoph

arXiv.org Machine Learning

A vast number of systems across the world use algorithmic decision making (ADM) to (partially) automate decisions that have previously been made by humans. When designed well, these systems promise more objective decisions while saving large amounts of resources and freeing up human time. However, when ADM systems are not designed well, they can lead to unfair decisions which discriminate against societal groups. The downstream effects of ADMs critically depend on the decisions made during the systems' design and implementation, as biases in data can be mitigated or reinforced along the modeling pipeline. Many of these design decisions are made implicitly, without knowing exactly how they will influence the final system. It is therefore important to make explicit the decisions made during the design of ADM systems and understand how these decisions affect the fairness of the resulting system. To study this issue, we draw on insights from the field of psychology and introduce the method of multiverse analysis for algorithmic fairness. In our proposed method, we turn implicit design decisions into explicit ones and demonstrate their fairness implications. By combining decisions, we create a grid of all possible "universes" of decision combinations. For each of these universes, we compute metrics of fairness and performance. Using the resulting dataset, one can see how and which decisions impact fairness. We demonstrate how multiverse analyses can be used to better understand variability and robustness of algorithmic fairness using an exemplary case study of predicting public health coverage of vulnerable populations for potential interventions. Our results illustrate how decisions during the design of a machine learning system can have surprising effects on its fairness and how to detect these effects using multiverse analysis.
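The core mechanic described above — turning each design decision into an explicit set of options, enumerating every combination as a "universe," and scoring each one — can be sketched in a few lines. This is an illustrative sketch only, not the authors' implementation; the decision names and the placeholder evaluation function are hypothetical.

```python
# Minimal multiverse-analysis sketch: enumerate all combinations of
# design decisions and evaluate each resulting "universe".
from itertools import product

# Hypothetical design decisions a modeling pipeline might make implicitly.
decisions = {
    "scaling": ["none", "standardize"],
    "imputation": ["mean", "median"],
    "threshold": [0.4, 0.5, 0.6],
}

def evaluate_universe(universe):
    # Placeholder: a real analysis would train a model under these choices
    # and return performance plus fairness metrics for this universe.
    return {"universe": universe, "accuracy": None, "fairness_gap": None}

# The grid of all possible universes: one dict per decision combination.
universes = [dict(zip(decisions, combo)) for combo in product(*decisions.values())]
results = [evaluate_universe(u) for u in universes]
print(len(universes))  # 2 * 2 * 3 = 12 universes
```

Scanning the resulting table of metrics across all twelve universes is what reveals which individual decisions drive fairness differences.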


The Future of Human Agency

#artificialintelligence

This report covers results from the 15th "Future of the Internet" canvassing that Pew Research Center and Elon University's Imagining the Internet Center have conducted together to gather expert views about important digital issues. This is a nonscientific canvassing based on a nonrandom sample; this broad array of opinions about where the potential influence of current trends may lead between 2022 and 2035 represents only the points of view of the individuals who responded to the queries. Pew Research Center and Elon's Imagining the Internet Center sampled from a database of experts to canvass from a wide range of fields, inviting entrepreneurs, professionals and policy people based in government bodies, nonprofits and foundations, technology businesses and think tanks, as well as interested academics and technology innovators. The predictions reported here came in response to a set of questions in an online canvassing conducted between June 29 and Aug. 8, 2022. In all, 540 technology innovators and developers, business and policy leaders, researchers and activists responded in some way to the question covered in this report. More on the methodology underlying this canvassing and the participants can be found in the section titled "About this canvassing of experts." Advances in the internet, artificial intelligence (AI) and online applications have allowed humans to vastly expand their capabilities and increase their capacity to tackle complex problems. These advances have given people the ability to instantly access and share knowledge and amplified their personal and collective power to understand and shape their surroundings. Today there is general agreement that smart machines, bots and systems powered mostly by machine learning and artificial intelligence will quickly increase in speed and sophistication between now and 2035.


Senior industry leaders need to learn about AI

#artificialintelligence

October 22, 2021 - Imagine this. You are President of the United States.


IT Outsourcing, Consulting, Digital Transformation, Technology Business Solutions

#artificialintelligence

In the last blog, I reflected on how we promote and nurture an open innovation culture at Jade, including the organizational and business benefits that have resulted from this initiative. Now, I'd like to share with you one of the ideas that was born from open innovation. Let me give you some brief background first. It's no news that digital transformation has taken center stage in enterprises these days. Nearly eight out of 10 companies in the US are pursuing digital transformation, but fail to scale and sustain their initiatives.


The future of AI is here. Regulations? Not so much

#artificialintelligence

Imagine a world without environmental regulations or traffic laws, where unlicensed motorists drive as they please and factories pollute with impunity. Those were the facts of life in cities around the world as the industrial revolution took hold. And a few decades from now, we may look back on the emergence of AI as a similarly lawless era. With that in mind, governments in Canada and the European Union, among others, have been active in proposing regulations to protect consumers while the U.S. has largely remained silent -- until now. "Computers are increasingly involved in the most important decisions affecting Americans' lives -- whether or not someone can buy a home, get a job or even go to jail." This spring, Democratic senators Cory Booker and Ron Wyden proposed the first national AI ethics bill in the form of the Algorithmic Accountability Act. The bill aims to give regulators, and the public, greater insights into how AI systems make the decisions they do -- and what data is ...


Why Model Explainability is The Next Data Science Superpower

#artificialintelligence

I've interviewed many data scientists in the last 10 years, and model explainability techniques are my favorite topic for distinguishing the very best data scientists from the average. Some people think machine learning models are black boxes, useful for making predictions but otherwise unintelligible; but the best data scientists know techniques to extract real-world insights from any model. Answering questions like these is more broadly useful than many people realize. This inspired me to create Kaggle's model explainability micro-course. Whether you learn the techniques from Kaggle or from a comprehensive resource like Elements of Statistical Learning, these techniques will totally change how you build, validate and deploy machine learning models.
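One of the simplest techniques for extracting insight from any model, black box or not, is permutation importance: shuffle one feature's values and measure how much the model's error grows. The sketch below uses a toy linear "model" rather than a trained one, so the feature names and weights are purely illustrative.

```python
# Permutation importance on a toy black-box model: shuffle one feature's
# column and see how much prediction error increases.
import random

random.seed(0)

def model(x):
    # Toy "black box": relies heavily on feature 0, ignores feature 2.
    return 3.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]

X = [[random.random() for _ in range(3)] for _ in range(200)]
y = [model(row) for row in X]  # baseline error is exactly zero

def mse(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

def permutation_importance(X, y, feature):
    # Copy the data, shuffle the chosen feature's column, re-score.
    shuffled = [row[:] for row in X]
    col = [row[feature] for row in shuffled]
    random.shuffle(col)
    for row, v in zip(shuffled, col):
        row[feature] = v
    return mse([model(r) for r in shuffled], y)

imps = [permutation_importance(X, y, f) for f in range(3)]
print(imps)  # feature 0 matters most; feature 2 not at all
```

Because shuffling an ignored feature changes nothing, its importance comes out as zero, which is exactly the kind of real-world insight the technique surfaces.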


The "Biometric Mirror" judges you the way we've taught it to: with bias

#artificialintelligence

When we see someone for the first time, we make internal snap judgements about them. After looking at the person for just a few seconds, we might note their gender, race, and age or decide whether or not we think they're attractive, trustworthy, or kind. After actually getting to know the person, we might find out that our initial perception of them was wrong. That's a very big deal when you consider how our assumptions could shape how the artificial intelligence (AI) systems of the future make increasingly important decisions. In an effort to illustrate this issue to the public, researchers from the University of Melbourne created Biometric Mirror.


Microsoft is creating an oracle for catching biased AI algorithms

#artificialintelligence

Microsoft is building a tool to automatically identify bias in a range of different AI algorithms. It is the boldest effort yet to automate the detection of unfairness that may creep into machine learning--and it could help businesses make use of AI without inadvertently discriminating against certain people. Big tech companies are racing to sell off-the-shelf machine-learning technology that can be accessed via the cloud. As more customers make use of these algorithms to automate important judgements and decisions, the issue of bias will become crucial. And since bias can easily creep into machine-learning models, ways to automate the detection of unfairness could become a valuable part of the AI toolkit.
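The article does not describe how Microsoft's tool works internally, but one common automated fairness check such a tool could run is the demographic parity gap: the difference in positive-prediction rates between groups. The function names and data below are hypothetical, purely to illustrate the idea.

```python
# Sketch of a simple automated unfairness check: the demographic parity
# gap, i.e. the spread in positive-prediction rates across groups.

def positive_rate(predictions):
    return sum(predictions) / len(predictions)

def demographic_parity_gap(predictions, groups):
    # Bucket predictions by group membership, then compare rates.
    by_group = {}
    for p, g in zip(predictions, groups):
        by_group.setdefault(g, []).append(p)
    rates = {g: positive_rate(ps) for g, ps in by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

preds = [1, 0, 1, 1, 0, 1, 0, 0]   # hypothetical model decisions
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap, rates = demographic_parity_gap(preds, groups)
print(gap, rates)  # group "a" favored at 0.75 vs 0.25, gap of 0.5
```

A monitoring dashboard could flag any model whose gap exceeds a chosen threshold, which is roughly the kind of automated detection the article describes.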


Microsoft Says Building Tool to Spot Bias in AI Algorithms

#artificialintelligence

After Facebook announced its own tool to detect bias in an algorithm earlier this month, a new report suggests that Microsoft is also building a tool to automate the identification of bias in a range of different Artificial Intelligence (AI) algorithms. The Microsoft tool has the potential to help businesses make use of AI without inadvertently discriminating against certain groups of people, MIT Technology Review reported on Friday. Although Microsoft's new tool may not eliminate the problem of bias that may creep into Machine-Learning models altogether, it will help AI researchers catch more instances of unfairness, Rich Caruana, a senior researcher at Microsoft who is working on the bias-detection dashboard, was quoted as saying. "Of course, we can't expect perfection -- there's always going to be some bias undetected or that can't be eliminated -- the goal is to do as well as we can," he said. The issue of bias will become crucial as more customers use these algorithms to make important decisions.