Artificial intelligence (AI) has an increasing say in the range of opportunities we are offered in life. Artificial neural networks may decide whether you get a loan, an apartment, or your next job, based on datasets collected from around the globe. Generative adversarial networks (GANs) are used to produce realistic-looking but fake content online that can affect our political opinion formation and electoral freedom. In some cases, our only point of contact with a service provider is an AI system that collects and analyzes customer input and provides solutions through natural language processing.

In the context of Western democracies, the opportunities and threats these tools present are frequently debated. On the one hand, AI technologies have been shown to help include more people in collective decision-making and potentially reduce the cognitive biases that arise when humans make decisions, leading to fairer outcomes. On the other hand, studies indicate that certain AI technologies can produce biased decisions and diminish human autonomy in ways that threaten our fundamental human rights.

While recognizing individual cases where rights and freedoms are violated, we can easily overlook rapid and in some cases alarming changes in the big picture: people seem to have ever less control over their own lives and the decisions that affect them. This point has been raised by several authors and academics, such as James Muldoon in Platform Socialism, Shoshana Zuboff in The Age of Surveillance Capitalism, and Mark Coeckelbergh in The Political Philosophy of AI. Control over one's life and collective decision-making are both essential building blocks of the fundamental structure of most Western societies: democracy.
Whereas some attempts have already been made to better understand the relationship between AI and democracy (see, e.g., Nemitz 2018, Manheim & Kaplan 2019, and Mark Coeckelbergh's above-mentioned book), the discussion remains limited.
Artificial intelligence (AI) is everywhere, powering applications such as smart assistants, spam filters and search engines. The technology offers multiple advantages to businesses, such as the ability to provide a more personalised experience for customers. AI can also boost business efficiency and improve security by helping to predict and mitigate cyber-attacks. But while AI offers benefits, the technology poses significant risks to privacy, including the potential to de-anonymise data. Recent research revealed that AI-based deep learning models can determine the race of patients from radiologic images such as chest X-rays or mammograms, and with "significantly better" accuracy than human experts.
In September 2022, the United Nations System Chief Executives Board for Coordination endorsed the Principles for the Ethical Use of Artificial Intelligence in the United Nations System, developed through the High-level Committee on Programmes (HLCP) which approved the Principles at an intersessional meeting in July 2022. These Principles were developed by a workstream co-led by United Nations Educational, Scientific and Cultural Organization (UNESCO) and the Office of Information and Communications Technology of the United Nations Secretariat (OICT), in the HLCP Inter-Agency Working Group on Artificial Intelligence. The Principles are based on the Recommendation on the Ethics of Artificial Intelligence adopted by UNESCO's General Conference at its 41st session in November 2021. This set of ten principles, grounded in ethics and human rights, aims to guide the use of artificial intelligence (AI) across all stages of an AI system lifecycle across United Nations system entities. It is intended to be read with other related policies and international law, and includes the following principles: do no harm; defined purpose, necessity and proportionality; safety and security; fairness and non-discrimination; sustainability; right to privacy, data protection and data governance; human autonomy and oversight; transparency and explainability; responsibility and accountability; and inclusion and participation.
Are workers indeed quiet quitting, and if so, where does AI fit into this rising trend? You have almost certainly heard or seen news reports proclaiming that quiet quitting is here and among us all. Quiet quitting is enjoying its banner-headline moment, a seemingly pronounced fifteen minutes of fame. Will the spotlight last longer than a short-lived fad? Will the term endure and become part of our permanent lexicon? Vital questions abound. I am going to unpack the quiet quitting phenomenon and see what makes the whole matter so notably significant right now. On top of that, I'll introduce a facet that I'm betting most have not realized is getting dragged into the quiet quitting mania: the inclusion of Artificial Intelligence (AI) in the quiet quitting arena. AI is being added to the quiet quitting bandwagon, though not everyone is especially pleased to see AI become inexorably entangled in it. This raises all sorts of AI Ethics concerns. We will examine how quiet quitting and Ethical AI are going to be at times partners and at other times foes.
Abakar Saidov is co-founder and CEO of Beamery, a leader in talent lifecycle management. In the wake of the "Great Reshuffle," companies continue to reevaluate their approach to recruitment and retention. To drive efficiency and remain effective at scale, business leaders are increasingly turning to new technologies for support. One of the most valuable technologies supporting talent management strategies today is artificial intelligence (AI). It has the potential to revolutionize the way businesses interact with the wider talent landscape, helping HR teams and recruiters fill much-needed positions and identify the skill sets most in demand.
Rapid advancements in AI require maintaining high ethical standards, for legal reasons as much as moral ones. During a session at this year's AI & Big Data Expo Europe, a panel of experts shared their views on what businesses need to consider before deploying artificial intelligence. The first question asked for thoughts on current and upcoming regulations affecting AI deployments. As a lawyer, De Boel kicked things off with her take, highlighting the EU's upcoming AI Act, which builds upon the foundations of similar legislation such as GDPR but extends it to artificial intelligence.
But the requirement has posed some compliance challenges. Unlike familiar financial audits, refined over decades of accounting experience, the AI audit process is new and without clearly established guidelines. "There is a major concern, which is it's not clear exactly what constitutes an AI audit," said Andrew Burt, managing partner at AI-focused law firm BNH. "If you are an organization that's using some type of these tools…it can be pretty confusing."
According to study results from Fivetran, 86% of companies struggle to trust AI to make all business decisions without human participation, and 90% of enterprises still rely on manual data procedures. The accompanying report, "Achieving AI: A Study of AI Opportunities and Obstacles," examines the problems businesses confront in today's AI ecosystem. It investigates how, even though 87% of businesses identify AI as the future of business and aim to expand their investment in it, a lack of trust in machine-led decision-making remains a significant obstacle, driven by technical challenges and a lack of education. Only 14% of respondents believe their companies are "advanced" in AI maturity.
For best-in-class artificial intelligence solutions to actually earn that designation, Sindhu Joseph warns that the tools can't be treated as "set it and forget it." Joseph, the co-founder and CEO of CogniCor, a California-based developer of an AI-powered business automation platform, reminded those attending her panel on day two of the inaugural Future Proof festival of the massive failure that was Microsoft's Tay. Launched in spring 2016, the AI chatbot, named for the phrase "thinking about you," was pulled within a day of operation: its machine-learning capabilities had caused it to spew racist, misogynistic and antisemitic statements across Twitter, in a spectacular public display of garbage in, garbage out. Just "letting the machine run" without proper human guidance or care is a huge pitfall, said Joseph, who holds a PhD in artificial intelligence and is the inventor of six patents related to the technology. "There's a lot of applications where that works really well."
A study published in AI and Ethics found that people in Japan, Germany, and the United States have different concerns about the use of artificial intelligence technology in everyday life. In Japan, respondents reported more concern about AI being used to fight crime, whereas Germans and Americans tended to report more concern about the ethical and social aspects of using AI in entertainment. Around 1,000 respondents from each country were surveyed. Older respondents were found to be the most concerned about the ethical and social issues of AI, while those more familiar with AI were more worried about legal implications.