Wondering where AI regulation stands in your state? Today, the Electronic Privacy Information Center (EPIC) released The State of State AI Policy, a roundup of AI-related bills at the state and local level that were passed, introduced or failed in the 2021-2022 legislative session (EPIC gave VentureBeat permission to reprint the full roundup below). Within the past year, according to the document (which was compiled by summer clerk Caroline Kraczon), states and localities have passed or introduced bills "regulating artificial intelligence or establishing commissions or task forces to seek transparency about the use of AI in their state or locality."
AI has the potential to deliver enormous business value for organizations, and its adoption has been sped up by the data-related challenges of the pandemic. Forrester estimates that almost 100% of organizations will be using AI by 2025, and that the artificial intelligence software market will reach $37 billion by the same year. But there is growing concern around AI bias: situations where AI makes decisions that are systematically unfair to particular groups of people. Researchers have found that AI bias has the potential to cause real harm. I recently had the chance to speak with Ted Kwartler, VP of Trusted AI at DataRobot, to get his thoughts on how AI bias occurs and what companies can do to make sure their models are fair.
Amazon might face some political opposition in its bid to acquire iRobot. Democrats including Senator Elizabeth Warren and House Representatives Jesus Garcia, Pramila Jayapal, Mondaire Jones, Katie Porter and Mark Pocan have asked the Federal Trade Commission (FTC) to oppose the purchase of the Roomba creator. The members of Congress pointed to Amazon's history of technology buyouts to support their case, arguing that the company snaps up competitors to eliminate them. Amazon killed sales of Kiva Systems' robots after the 2012 acquisition and used them exclusively in its warehouses, for instance. The 2017 and 2018 acquisitions of Blink and Ring reportedly helped Amazon dominate US video doorbell sales, while the internet retailer has also faced multiple accusations of abusing third-party seller data to launch rival products and promote them above others.
Location: Our offices are in London (Farringdon), with the ability to work from home for part of the week. The Ada Lovelace Institute is recruiting to the newly created position of Associate Director, Data & AI Law and Policy to join our senior leadership team and develop a comprehensive strategy for informing and influencing public policy, regulatory initiatives and legislative debates on data and AI policy and regulation, in the UK and beyond. In the past five years, AI and other tech regulation has become politically palatable, practically achievable and even commercially desirable in jurisdictions around the world. The year 2022 alone has seen a significant global uptick in proposals for the regulation of AI technologies, online markets, social media platforms and other digital technologies, such as the European Union Directive on AI liability, a forthcoming AI regulation whitepaper in the UK, and similar initiatives in jurisdictions such as Canada and Brazil. At the same time, data regulation is being reformed and iterated in the UK, EU and beyond.
We have all begun to realize that the rapid development of AI is going to change the world we live in. AI is no longer just a branch of computer science; it has escaped from research labs with the development of "AI systems," defined by the European Union as "software that, for human-defined purposes, generates content, predictions, recommendations or decisions influencing the environments with which they interact." The issues of governing these AI systems, with all the nuances of ethics, control and regulation, have become crucial, as their development today lies in the hands of a few digital empires, the GAFA-NATU-BATX companies, which have become the masters of real societal choices about automation and the "rationalization" of the world. The complex fabric intersecting AI, ethics and law is thus woven through power relations, and connivance, between states and tech giants. But citizen engagement has become necessary in order to assert imperatives other than a technological solutionism in which "everything that can be connected will be connected and streamlined."
Artificial Intelligence (AI) has an increasing say in the range of opportunities we are offered in life. Artificial neural networks might be used to decide whether you will get a loan, an apartment, or your next job, based on datasets collected from around the globe. Generative adversarial networks (GANs) are used to produce real-looking but fake content online that can affect our political opinion-formation and electoral freedom. In some cases, our only contact with a service provider is an AI system, which collects and analyzes customer input and provides solutions using natural language processing. In the context of Western democracies, the threats and issues these tools raise are the subject of frequent debate. On the one hand, AI technologies have been shown to help include more people in collective decision-making and to potentially reduce the cognitive biases that occur when humans make decisions, leading to fairer outcomes. On the other hand, studies indicate that certain AI technologies can lead to biased decisions and diminish human autonomy in ways that threaten our fundamental human rights. While recognizing individual cases where rights and freedoms are being violated, we can easily neglect rapid and in some cases alarming changes occurring in the big picture: people seem to have ever less control over their own lives and the decisions that affect them. This has been brought forward by several authors and academics, such as James Muldoon in Platform Socialism, Shoshana Zuboff in Surveillance Capitalism and Mark Coeckelbergh in The Political Philosophy of AI. Control over one's life and collective decision-making are both essential building blocks of the fundamental structure of most Western societies: democracy.
Whereas some attempts have already been made to better understand the relationship between AI and democracy (see, e.g., Nemitz 2018, Manheim & Kaplan 2019, and Mark Coeckelbergh's above-mentioned book), the discussion remains limited.
Artificial intelligence (AI) is everywhere, powering applications such as smart assistants, spam filters and search engines. The technology offers multiple advantages to businesses – such as the ability to provide a more personalised experience for customers. AI can also boost business efficiency and improve security by helping to predict and mitigate cyber-attacks. But while AI offers benefits, the technology poses significant risks to privacy, including the potential to de-anonymise data. Recent research revealed AI-based deep learning models are able to determine the race of patients based on radiologic images such as chest x-rays or mammograms – and with "significantly better" accuracy than human experts.
In September 2022, the United Nations System Chief Executives Board for Coordination endorsed the Principles for the Ethical Use of Artificial Intelligence in the United Nations System, developed through the High-level Committee on Programmes (HLCP) which approved the Principles at an intersessional meeting in July 2022. These Principles were developed by a workstream co-led by United Nations Educational, Scientific and Cultural Organization (UNESCO) and the Office of Information and Communications Technology of the United Nations Secretariat (OICT), in the HLCP Inter-Agency Working Group on Artificial Intelligence. The Principles are based on the Recommendation on the Ethics of Artificial Intelligence adopted by UNESCO's General Conference at its 41st session in November 2021. This set of ten principles, grounded in ethics and human rights, aims to guide the use of artificial intelligence (AI) across all stages of an AI system lifecycle across United Nations system entities. It is intended to be read with other related policies and international law, and includes the following principles: do no harm; defined purpose, necessity and proportionality; safety and security; fairness and non-discrimination; sustainability; right to privacy, data protection and data governance; human autonomy and oversight; transparency and explainability; responsibility and accountability; and inclusion and participation.
Are workers indeed quiet quitting, and if so, where does AI fit into this rising trend? You have almost certainly heard about or seen news reports exclaiming that quiet quitting is here and amongst us all. Yes, indeed, quiet quitting is enjoying its banner-headline fifteen minutes of fame. Will the spotlight last longer than a short-lived fad? Will it have endurance and become part of our permanent lexicon? Lots of vital questions abound. I am going to unpack the quiet quitting phenomenon and see what makes the whole matter so notably significant right now. On top of that, I'll introduce a facet that I'm betting most have not realized is getting dragged into the quiet quitting mania. Make sure you are sitting down. The latest dovetailing consideration is the inclusion of Artificial Intelligence (AI) in the quiet quitting arena. AI is being added to the quiet quitting bandwagon, though not everyone is especially pleased with having AI become inexorably entangled therein. This raises all sorts of AI Ethics concerns. We will examine how quiet quitting and Ethical AI are going to be at times partners and at other times foes. For my overall ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few.
Abakar Saidov is co-founder and CEO of Beamery, a leader in talent lifecycle management. In the wake of the "Great Reshuffle," companies continue to reevaluate their approach to recruitment and retention. To drive efficiency and remain effective at scale, business leaders are increasingly turning to new technologies for support. One of the most valuable technologies supporting talent management strategies today is artificial intelligence (AI). It has the potential to revolutionize the way businesses interact with the wider talent landscape, helping HR teams and recruiters fill much-needed positions and identify the skill sets in greatest demand.