We argue that the phenomena of distributed responsibility, induced acceptance, and acceptance through ignorance constitute instances of imperfect delegation when tasks are delegated to computationally driven systems. Imperfect delegation challenges human accountability. We hold that both direct public accountability via public transparency and indirect public accountability via transparency to auditors in public organizations can be instrumentally valuable on ethical grounds and required, as a matter of deontology, by the principle of democratic self-government. We analyze the regulatory content of 16 guideline documents on the use of AI in the public sector by mapping their requirements onto those of our philosophical account of accountability. We conclude that while some guidelines refer to processes that amount to auditing, the debate would benefit from greater clarity about the nature of auditors' entitlement and the goals of auditing, not least in order to develop ethically meaningful standards against which different forms of auditing can be evaluated and compared.
Justin Lane is an Oxford University-trained artificial intelligence (AI) expert and entrepreneur with no patience for fluffy theories. That led to some fascinating fieldwork in Northern Ireland, where he studied Irish Republican Army and Ulster Defence Association extremists up close. Ultimately, he applied his humanities research to AI programming and agent-based computer simulations. Somehow, he managed to enter undergrad in Baltimore, Md. as a Green Party liberal and emerge from England's ivory towers as a Second Amendment advocate. He now describes himself as a political moderate with "a libertarian flavor." When I first met him, Lane was working at the Center for Mind and Culture in Boston.
Daniel Zhang, Saurabh Mishra, Erik Brynjolfsson, John Etchemendy, Deep Ganguli, Barbara Grosz, Terah Lyons, James Manyika, Juan Carlos Niebles, Michael Sellitto, Yoav Shoham, Jack Clark, Raymond Perrault
Welcome to the fourth edition of the AI Index Report. This year we significantly expanded the amount of data available in the report, worked with a broader set of external organizations to calibrate our data, and deepened our connections with the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Its mission is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. The report aims to be the most credible and authoritative source for data and insights about AI in the world.
As politicians play whack-a-mole with COVID-19 infection rates and try to balance the economic damage caused by lockdowns, stay-at-home orders have also impacted those out there in the dating scene. No longer able to meet up for a drink, a coffee, or now even a walk in the park, organizing an encounter with anyone outside your household or support bubble is banned and can result in a fine in the United Kingdom -- and this includes both dates and overnight stays. Therefore, the only feasible option available is online connection, by way of social networks or dating apps. Dating is hard enough at the best of times, but sexual desire doesn't disappear just because you are cooped up at home. Realizing this, a number of healthcare organizations worldwide have urged us not to contribute to the spread of COVID-19 by meeting up with others for discreet sex outside of our social bubbles, bringing new meaning to the phrase, "You are your safest sex partner."
Initiating something new, particularly in the midst of change, at the local, national and global levels, takes courage. I would also argue that truly sustainable change happens when we bring multiple perspectives, disciplines and sectors together around a challenge or opportunity. That's why UK Research and Innovation (UKRI) was formed, and Innovate UK is part of it. UKRI invests over £7 billion a year in research and innovation, partnering with academia, industry and government to make the impossible possible, and it will ensure the UK's research and innovation system is fit for the future and able to respond to environmental, social and economic change on a global scale. It brings together researchers and innovators across disciplines and sectors, including engineering and physical sciences, arts, humanities and social sciences, the natural environment, and biological sciences, among many others.
Concerns about bias or unfair results in AI systems have come to the fore in recent years as the technology has infiltrated hiring, insurance, law enforcement, advertising, and other aspects of society. Prejudiced code may be a source of indignation on social media, but it affects people's access to opportunities and resources in the real world, and it needs to be dealt with at a national and international level. A variety of factors can produce insufficiently neutral systems: unrepresentative training data, lack of testing on diverse subjects at scale, lack of diversity among research teams, and so on. But among those who developed Twitter's cropping algorithm, several expressed frustration about the assumptions being made about their work. Ferenc Huszár, a former Twitter employee, a co-author of Twitter's image cropping research, and now a senior lecturer in machine learning at the University of Cambridge, acknowledged there is reason to look into the results people have been reporting, though he cautioned against jumping to conclusions about negligence or lack of oversight: some of the outrage was based on a small number of reported failure cases, and while these failures look very bad, there is work to be done to determine the degree to which they are associated with race or gender.
Insurance fraud occurs when policyholders file claims that are exaggerated or based on intentional damages. This contribution develops a fraud detection strategy by extracting insightful information from the social network of a claim. First, we construct a network by linking claims with all their involved parties, including the policyholders, brokers, experts, and garages. Next, we establish fraud as a social phenomenon in the network and use the BiRank algorithm with a fraud-specific query vector to compute a fraud score for each claim. From the network, we extract features related to the fraud scores as well as the claims' neighborhood structure. Finally, we combine these network features with the claim-specific features and build a supervised model with fraud in motor insurance as the target variable. Although we build a model for only motor insurance, the network includes claims from all available lines of business. Our results show that models with features derived from the network perform well when detecting fraud and even outperform the models using only the classical claim-specific features. Combining network and claim-specific features further improves the performance of supervised learning models to detect fraud. The resulting model flags highly suspicious claims that need to be further investigated. Our approach provides a guided and intelligent selection of claims and contributes to a more effective fraud investigation process.
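The propagation step described above can be illustrated with a minimal sketch. This is a toy example with invented data, not the paper's dataset or exact formulation: a small claims-by-parties bipartite matrix is degree-normalized in the symmetric style of BiRank, and scores are iterated with a query vector that anchors a known fraudulent claim, so that claims sharing parties with it receive elevated fraud scores.

```python
import numpy as np

# Toy bipartite network (assumed data): rows = claims, columns = involved
# parties (policyholders, brokers, experts, garages).
# W[i, j] = 1 if claim i involves party j.
W = np.array([
    [1, 1, 0, 0],   # claim 0: parties 0 and 1
    [1, 0, 1, 0],   # claim 1: shares party 0 with claim 0
    [0, 1, 1, 0],   # claim 2: shares party 1 with claim 0
    [0, 0, 1, 1],   # claim 3: shares no party with claim 0
], dtype=float)

# Symmetric degree normalization, as in BiRank: S = Du^{-1/2} W Dv^{-1/2}
S = W / np.sqrt(np.outer(W.sum(axis=1), W.sum(axis=0)))

# Fraud-specific query vector: claim 0 is a known fraudulent claim.
q = np.array([1.0, 0.0, 0.0, 0.0])

alpha = 0.85              # weight on network propagation vs. the query
c = np.full(4, 0.25)      # claim fraud scores, uniform start
for _ in range(100):      # iterate until (approximate) convergence
    p = S.T @ c                              # claims -> parties
    c = alpha * (S @ p) + (1 - alpha) * q    # parties -> claims, anchored on q

print(np.round(c, 3))     # per-claim fraud scores
```

As expected, the known fraudulent claim scores highest, its direct neighbors (claims sharing a party with it) score next, and the claim with no shared parties scores lowest. In the paper's pipeline these scores would then become features in a supervised model, alongside neighborhood-structure and claim-specific features.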
TV makes meeting people look much too easy. No one expects to live across the hall from their soulmate like Monica and Chandler, or to find love at their small-town office job like Jim and Pam. Between success stories from Love Island and going on first dates via video calls to get around a pandemic, the rules for finding love have officially gone right out the window. Online dating is hardly a novel way to meet people and is an increasingly popular topic of study. If you're still doubting the possibility of finding love online, consider this study cited in the MIT Technology Review that found that compatibility was greater in partners who had met online.
Thousands of students in England are angry about the controversial use of an algorithm to determine this year's GCSE and A-level results. They were unable to sit exams because of lockdown, so the algorithm used data about schools' results in previous years to determine grades. It meant about 40% of this year's A-level results came out lower than predicted, which has a huge impact on what students are able to do next. GCSE results are due out on Thursday. There are many examples of algorithms making big decisions about our lives, without us necessarily knowing how or when they do it.
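To make the mechanism concrete, here is a toy sketch of one simplified form of grade standardization: each student's teacher-assessed rank within their school is mapped onto the school's historical grade distribution. This is an illustrative assumption, not the actual Ofqual model, and the function name `standardize` is hypothetical; it shows how capping a cohort at its school's past results can pull grades below teacher predictions.

```python
def standardize(rankings, historical_shares, grades):
    """Toy grade standardization (simplified assumption, not the real model).

    rankings: students in one school, ordered best-first by teacher ranking.
    historical_shares: fraction of the school's past students per grade,
        best grade first (shares sum to 1).
    grades: grade labels, best first.
    """
    n = len(rankings)
    results = {}
    cutoff = 0.0
    idx = 0
    for share, grade in zip(historical_shares, grades):
        cutoff += share
        # assign this grade to students whose rank falls within the
        # school's historical share for it
        while idx < n and (idx + 1) / n <= cutoff + 1e-9:
            results[rankings[idx]] = grade
            idx += 1
    for student in rankings[idx:]:   # any remainder gets the lowest grade
        results[student] = grades[-1]
    return results

students = ["Ann", "Ben", "Cem", "Dee", "Eve"]  # ranked best-first
shares = [0.2, 0.4, 0.4]                        # historical A / B / C shares
print(standardize(students, shares, ["A", "B", "C"]))
```

Under this sketch, even if teachers predicted As for the top two students, only one A is available because the school historically awarded As to 20% of its cohort, which is the kind of effect that left about 40% of results lower than predicted.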
Online dating is great, but there's a slight shudder factor attached to the practice now that everyone and their mother (literally) has some sort of profile. The biggest advantage, obviously, is the potential to meet thousands of eligible singles who you likely wouldn't have known existed otherwise. But whether those singles use their profile regularly or are even on it for the right reasons is another question -- thus, the terrifying edge that can cause singles genuinely searching for the real thing to shy away from such a valuable tool. When the dating pool is so deep, it's important to narrow down your options to dating sites that are most likely to attract a very specific type of person and introduce you to people who have the same intentions that you do.