mittelstadt
Can AI chatbots be reined in by a legal duty to tell the truth?
Can artificial intelligence be made to tell the truth? Probably not, but the developers of large language model (LLM) chatbots should be legally required to reduce the risk of errors, says a team of ethicists. "What we're just trying to do is create an incentive structure to get the companies to put a greater emphasis on truth or accuracy when they are creating the systems," says Brent Mittelstadt at the University of Oxford. LLM chatbots, such as ChatGPT, generate human-like responses to users' questions, based on statistical analysis of vast amounts of text. But although their answers usually appear convincing, they are also prone to errors – a flaw referred to as "hallucination".
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.25)
- Europe > Netherlands (0.05)
CEnt: An Entropy-based Model-agnostic Explainability Framework to Contrast Classifiers' Decisions
Zini, Julia El, Mansour, Mohammad, Awad, Mariette
Current interpretability methods focus on explaining a particular model's decision through the input features that are present. Such methods do not inform the user of the conditions sufficient to alter these decisions when they are not desirable. Contrastive explanations circumvent this problem by providing explanations of the form "If the feature $X>x$, the output $Y$ would be different". While different approaches have been developed to find contrasts, these methods do not all deal with mutability and attainability constraints. In this work, we present a novel approach to locally contrast the prediction of any classifier. Our Contrastive Entropy-based explanation method, CEnt, approximates a model locally by a decision tree to compute entropy information of different feature splits. A graph, G, is then built, in which contrast nodes are found through a one-to-many shortest-path search. Contrastive examples are generated from the shortest path to reflect feature splits that alter model decisions while maintaining lower entropy. We perform local sampling on manifold-like distances computed by variational auto-encoders to reflect data density. CEnt is the first non-gradient-based contrastive method that generates diverse counterfactuals which do not necessarily exist in the training data, while satisfying immutability (e.g. race) and semi-immutability (e.g. age can only increase). Empirical evaluation on four real-world numerical datasets demonstrates the ability of CEnt to generate counterfactuals that achieve better proximity rates than existing methods without compromising latency, feasibility, or attainability. We further extend CEnt to imagery data to derive visually appealing and useful contrasts between class labels on the MNIST and Fashion-MNIST datasets. Finally, we show how CEnt can serve as a tool to detect vulnerabilities in textual classifiers.
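As an informal illustration of the local-surrogate step this abstract describes, here is a minimal Python sketch, assuming a scikit-learn decision tree as the local approximator and plain Gaussian perturbation in place of CEnt's VAE-based, manifold-aware sampling; the function name `local_split_candidates` and the `black_box_predict` callable are illustrative, not from the paper.

```python
# Illustrative sketch only, not the authors' code: approximate a black-box
# classifier around one instance with a shallow decision tree and rank its
# feature splits by entropy, as the CEnt abstract describes.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def local_split_candidates(black_box_predict, x, n_samples=500, radius=0.5, seed=0):
    """Return (feature_index, threshold, entropy) triples for the surrogate's splits."""
    rng = np.random.default_rng(seed)
    # CEnt samples on manifold-like distances from a VAE; Gaussian noise around x
    # is a simplifying assumption here.
    Z = x + rng.normal(scale=radius, size=(n_samples, x.shape[0]))
    y = black_box_predict(Z)  # label the neighbourhood with the black box
    tree = DecisionTreeClassifier(criterion="entropy", max_depth=3,
                                  random_state=seed).fit(Z, y)
    t = tree.tree_
    splits = [(t.feature[i], t.threshold[i], t.impurity[i])
              for i in range(t.node_count) if t.children_left[i] != -1]
    # Lower-entropy splits mark cleaner decision boundaries to contrast against.
    return sorted(splits, key=lambda s: s[2])
```

In this simplified view, the lowest-entropy split that changes the surrogate's prediction is the natural place to read off a contrast of the form "if $X > x$, the output would be different".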
- Asia > Middle East > Lebanon > Beirut Governorate > Beirut (0.04)
- Asia > China (0.04)
AI ethics – how do we make "good" AI, and use AI ethically?
How can we make "good" artificial intelligence? What does it mean for a machine to be ethical, and how can we use AI ethically? Good in the Machine – 2019's SCINEMA International Science Film Festival entry – delves into these questions, the origins of our morality, and the interplay between artificial agency and our own moral compass. Read on to learn more about AI ethics. Given a swell of dire warnings about the future of artificial intelligence over the last few years, the field of AI ethics has become a hive of activity. These warnings come from a variety of experts such as Oxford University's Nick Bostrom, but also from more public figures such as Elon Musk and the late Stephen Hawking.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.25)
- Europe > Greece (0.05)
- Health & Medicine (0.70)
- Media > Film (0.49)
- Leisure & Entertainment (0.49)
RStudio AI Blog: Starting to think about AI Fairness
The topic of AI fairness metrics is as important to society as it is confusing. It is confusing for a number of reasons: terminological proliferation, an abundance of formulae, and, last but not least, the impression that everyone else seems to know what they're talking about. This text hopes to counteract some of that confusion by starting from a common-sense approach that contrasts two basic positions: on the one hand, the assumption that dataset features may be taken as reflecting the underlying concepts ML practitioners are interested in; on the other, that there is inevitably a gap between concept and measurement, a gap that may be bigger or smaller depending on what is being measured. In contrasting these fundamental views, we bring together concepts from ML, legal science, and political philosophy.
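As one concrete example of the kind of formula the post alludes to, here is a minimal sketch of the demographic parity difference, i.e. the gap in positive-prediction rates between two groups; the function name and the data are made up for illustration and are not from the blog post.

```python
# Illustrative sketch: demographic parity difference, one of the many fairness
# formulae such posts discuss. Group labels and predictions are made-up data.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between group 1 and group 0."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

# Example: model predictions (1 = favourable outcome) for members of two groups.
print(demographic_parity_difference([1, 0, 1, 1, 0, 0, 1, 0],
                                     [1, 1, 1, 1, 0, 0, 0, 0]))  # prints 0.5
```

Whether a metric like this is the right one to optimise is exactly the concept-versus-measurement question the post raises.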
Facial-recognition research needs an ethical reckoning
Cameras using facial-recognition technology in King's Cross, London, were taken down in 2019 after concerns were raised that they had been installed without appropriate consent or involvement of the data regulator. Credit: James Veysey/Shutterstock
Over the past 18 months, a number of universities and companies have been removing online data sets containing thousands, or even millions, of photographs of faces used to improve facial-recognition algorithms. The pictures are classified as public data, and their collection didn't seem to alarm institutional review boards (IRBs) and other research-ethics bodies. But none of the people in the photos had been asked for permission, and some were unhappy about the way their faces had been used. This problem has been brought to prominence by the work of Berlin-based artist and researcher Adam Harvey, who highlighted how public data sets are used by companies to hone surveillance-linked technology, and by the journalists who reported on Harvey's work. Many researchers in the fields of computer science and artificial intelligence (AI), and those responsible for the relevant institutional ethical review processes, did not see any harm in using public data without consent.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.15)
- Asia > China (0.05)
On Generating Plausible Counterfactual and Semi-Factual Explanations for Deep Learning
Kenny, Eoin M., Keane, Mark T.
There is a growing concern that the recent progress made in AI, especially regarding the predictive competence of deep learning models, will be undermined by a failure to properly explain their operation and outputs. In response to this disquiet, counterfactual explanations have become massively popular in eXplainable AI (XAI) due to their proposed computational, psychological, and legal benefits. In contrast, however, semi-factuals, which are another way humans commonly explain their reasoning, have surprisingly received no attention. Most counterfactual methods address tabular rather than image data, partly because the non-discrete nature of the latter makes good counterfactuals difficult to define. Additionally, generating plausible-looking explanations that lie on the data manifold is another issue that hampers progress. This paper advances a novel method for generating plausible counterfactuals (and semi-factuals) for black-box CNN classifiers in computer vision. The method, called PlausIble Exceptionality-based Contrastive Explanations (PIECE), modifies all exceptional features in a test image to be normal from the perspective of the counterfactual class (hence concretely defining a counterfactual). Two controlled experiments compare this method to others in the literature, showing that PIECE not only generates the most plausible counterfactuals on several measures, but also the best semi-factuals.
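As a rough illustration of the idea described above, the sketch below flags features of a test instance that are statistically "exceptional" for the counterfactual class and replaces them with that class's typical value; the per-class means and standard deviations, the z-score test, and all names and values are simplifying assumptions, and PIECE's generative decoding of the edited features back into an image is omitted.

```python
# Illustrative sketch only, not the PIECE implementation: flag features that
# are "exceptional" for the target (counterfactual) class and shift them to
# that class's typical value. PIECE models feature distributions and decodes
# the result back to an image with a generative model; both are omitted here.
import numpy as np

def exceptional_to_normal(x_feat, cf_mean, cf_std, z_thresh=2.0):
    """Replace features whose |z-score| under the counterfactual class exceeds
    z_thresh with that class's mean; return the edited features and a mask."""
    z = (x_feat - cf_mean) / (cf_std + 1e-8)
    exceptional = np.abs(z) > z_thresh
    x_cf = np.where(exceptional, cf_mean, x_feat)
    return x_cf, exceptional

# Hypothetical latent features of a test image vs. counterfactual-class statistics.
x_feat  = np.array([0.2, 3.5, -1.0, 0.9])
cf_mean = np.array([0.3, 0.4, -0.8, 1.0])
cf_std  = np.array([0.2, 0.3,  0.5, 0.4])
x_cf, mask = exceptional_to_normal(x_feat, cf_mean, cf_std)  # only feature 1 is edited
```

Editing only the exceptional features, rather than all of them, is what lets the same machinery produce semi-factuals as well as counterfactuals.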
- Europe > Ireland > Leinster > County Dublin > Dublin (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > California > Alameda County > Livermore (0.04)
- (3 more...)
- Government (0.93)
- Health & Medicine > Therapeutic Area (0.46)
- Health & Medicine > Diagnostic Medicine (0.46)
AI's ethics problem: Abstractions everywhere but where are the rules?
Machines that make decisions about us: what could possibly go wrong? Essays, speeches and seminars pose that question year after year as artificial intelligence research makes stunning advances. Baked-in biases in algorithms are only one of the many resulting issues. Jonathan Shaw, managing editor, Harvard Magazine, wrote earlier this year: "Artificial intelligence can aggregate and assess vast quantities of data that are sometimes beyond human capacity to analyze unaided, thereby enabling AI to make hiring recommendations, determine in seconds the creditworthiness of loan applicants, and predict the chances that criminals will re-offend." Again, what could possibly go wrong?
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.16)
- Europe > United Kingdom > England > Greater London > London (0.05)
- Law (0.35)
- Government (0.33)
Where AI and ethics meet
Given a swell of dire warnings about the future of artificial intelligence over the last few years, the field of AI ethics has become a hive of activity. These warnings come from a variety of experts such as Oxford University's Nick Bostrom, but also from more public figures such as Elon Musk and the late Stephen Hawking. The picture they paint is bleak. In response, many have dreamed up sets of principles to guide AI researchers and help them negotiate the maze of human morality and ethics. Now, a paper in Nature Machine Intelligence throws a spanner in the works by claiming that such high principles, while laudable, will not give us the ethical AI society we need.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.25)
- Europe > Greece (0.05)
Where Should AI Ethics Come From? Not Medicine, New Study Says
As fears about AI's disruptive potential have grown, AI ethics has come to the fore in recent years. Concerns around privacy, transparency and the ability of algorithms to warp social and political discourse in unexpected ways have resulted in a flurry of pronouncements from companies, governments, and even supranational organizations on how to conduct ethical AI development. The majority have focused on outlining high-level principles that should guide those building these systems. Whether by chance or by design, the principles they have coalesced around closely resemble those at the heart of medical ethics. But writing in Nature Machine Intelligence, Brent Mittelstadt from the University of Oxford points out that AI development is a very different beast to medicine, and a simple copy and paste won't work. The four core principles of medical ethics are respect for autonomy (patients should have control over how they are treated), beneficence (doctors should act in the best interest of patients), non-maleficence (doctors should avoid causing harm) and justice (healthcare resources should be distributed fairly).
Principles are no guarantee of ethical AI, says Oxford ethicist
Dr Brent Mittelstadt's paper 'Principles alone cannot guarantee ethical AI', published in the journal Nature Machine Intelligence, argues that a principled approach may not be the best route to the ethical development and governance of AI. Consensus has seemingly emerged around a set of ethical principles for AI that closely resemble the classic ethical principles of medicine, but there are several reasons to doubt that a principled approach will have a comparable impact on AI development to the one it has historically had in medicine. The vast complexity of AI and a lack of common aims between developers and users suggest the principles may be too vague and high-level to be workable. To address these shortcomings, Dr Mittelstadt calls for increased support for 'bottom-up' work on ethical AI. Companies must be prepared to disclose more about how they develop and audit AI systems, and to work more openly with researchers and the public.