Artificial intelligence (AI) technology has the potential to transform society, and the legal issues it raises touch on diverse areas of law, including privacy and data security, commercial contracts, intellectual property, antitrust, employee benefits, and products liability. AI is broadly defined as computer technology that can simulate human intelligence. Through algorithms, such software can aggregate data, detect patterns, optimize behaviors, and make predictions. Examples of AI applications include machine learning, natural language processing, artificial neural networks, machine perception, and motion manipulation.
Whether due to a lack of funding, a lack of know-how, or censorship, some governments and entities are shrinking the amount of data they incorporate into their AI systems. Does this compromise the integrity of AI results? Intentional data shrinking is occurring as a matter of both policy and expediency. Roya Ensafi, assistant professor of computer science and engineering at the University of Michigan, found that censorship was increasing in 103 countries. Most censorship actions "were driven by organizations or internet service providers filtering content," Ensafi reported.
In 2023, a new law regulating AI-enabled recruiting will take effect in New York City, with more legislatures inevitably to follow. This comes nearly a decade after Amazon built its infamous AI recruiting tool that exhibited bias against female candidates. Emerging technologies are often left unchecked as industries take shape around them. Given rapid innovation and sluggish regulation, first-to-market companies tend to ask the public's forgiveness rather than seek institutional permission. Nearly 20 years after its founding, Facebook (now Meta) remains largely self-regulated.
Oversight of AI is the board's job, regardless of the subject matter's complexity. One of the most consequential near-term challenges confronting corporate governance will be exercising informed oversight of the application of artificial intelligence ("AI") within the organization. The challenge will arise regardless of the industry sector in which a company operates, and regardless of how it applies AI in that operation. At its essence is the rapidly emerging conflict between the perceived societal and commercial benefits of AI implementation and the perceived societal and institutional risks of its use. The need to address the challenge is urgent; the competing interests of benefit and risk are hurtling at each other at hypersonic speed.
Many governments around the world have rightly put ethical development and deployment at the heart of their AI thinking. Core to this complex issue is a set of interconnected problems: AI systems may automate and amplify societal problems, whether through a systemic lack of diversity in development teams, the use of training data that contains historic or structural biases, or the design of the systems themselves. The result may be the algorithmic exclusion of individuals or groups on the basis of ethnicity, gender, sexuality, religion, or socioeconomic background. For example, facial recognition systems may misidentify Black or Asian people because of a lack of relevant training data, and CV-scanning applications may reject applicants from certain postcodes/zip codes because, historically, human employers actively excluded those jobseekers.
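To make the CV-scanning example concrete, here is a deliberately minimal sketch of how a naive screener trained on historically biased hiring records can reproduce past exclusion as seemingly objective scores. All data, names, and the scoring rule are invented for illustration; real screening systems are far more complex, but the proxy-discrimination mechanism is the same.

```python
# Toy historic hiring records: (postcode, hired?). Applicants from "E1"
# were actively excluded by past human employers.
historic = [
    ("E1", False), ("E1", False), ("E1", False),
    ("W1", True), ("W1", True), ("W1", False),
]

def train(records):
    """'Learn' an acceptance rate per postcode from biased history."""
    counts = {}
    for postcode, hired in records:
        n, k = counts.get(postcode, (0, 0))
        counts[postcode] = (n + 1, k + int(hired))
    return {p: k / n for p, (n, k) in counts.items()}

def screen(model, postcode, threshold=0.5):
    """Accept an applicant only if their postcode's historic rate clears
    the threshold -- merit never enters the decision."""
    return model.get(postcode, 0.0) >= threshold

model = train(historic)
```

Because every historic "E1" applicant was rejected, the model's learned rate for that postcode is zero, and every future "E1" applicant is rejected regardless of qualifications. The bias in the labels, not the code, drives the exclusion.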
AI startups are increasingly embracing AI ethics, though this is trickier than it might seem at first glance. Whatever you are thinking, think bigger. Fake it until you make it. These are the typical startup lines you hear or see all the time; they have become a kind of advisory lore among budding entrepreneurs. Wander around Silicon Valley and you'll probably see bumper stickers with those slogans and high-tech founders wearing hoodies emblazoned with such tropes. AI-related startups are assuredly included in the bunch. We might, though, add one more piece of startup success advice for nascent AI firms: energetically embrace AI ethics. That is a bumper-sticker-worthy notion and sage wisdom for any AI founder trying to figure out how to be a proper leader and a winning entrepreneur. The first impulse of many AI startups is likely the exact opposite of embracing AI ethics. The focus of an AI startup is usually getting some tangible AI system out the door as quickly as possible. There is tremendous pressure to produce an MVP (minimum viable product). Investors are skittish about putting money into a newfangled AI contrivance that might not be buildable, so the urgency to craft an AI pilot or prototype is paramount.
Let's play a little game. Imagine that you're a computer scientist. Your company wants you to design a search engine that will show users a bunch of pictures corresponding to their keywords -- something akin to Google Images. You're a great computer scientist, and this is basic stuff! But say you live in a world where 90 percent of CEOs are male. Should you design your search engine so that it accurately mirrors that reality, yielding images of man after man after man when a user types in "CEO"? Or, since that risks reinforcing gender stereotypes that help keep women out of the C-suite, should you create a search engine that deliberately shows a more balanced mix, even if it's not a mix that reflects reality as it is today?
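The second option above -- deliberately surfacing a more balanced mix -- can be sketched as a simple greedy re-ranker over the retrieved results. Everything here (the field names, the attribute, the target shares) is an illustrative assumption, not a description of how any real search engine works:

```python
from collections import deque

def rebalance(results, attr, targets):
    """Greedily reorder `results` (a list of dicts) so the running
    proportion of each value of `attr` tracks the `targets` shares.

    A toy sketch of the 'deliberately balanced' design choice; it never
    drops a result, it only changes the order of presentation.
    """
    pools = {}
    for r in results:
        pools.setdefault(r[attr], deque()).append(r)
    out = []
    counts = {k: 0 for k in pools}
    while any(pools.values()):
        # Pick the non-empty group furthest below its target share so far.
        def deficit(k):
            share = counts[k] / max(len(out), 1)
            return targets.get(k, 0.0) - share
        k = max((g for g in pools if pools[g]), key=deficit)
        out.append(pools[k].popleft())
        counts[k] += 1
    return out
```

With nine male-coded results, one female-coded result, and 50/50 targets, the single female-coded result is pulled to the top of the ranking instead of appearing last. The design question in the text is precisely whether such a re-ranking is the right thing to do.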
The emergence of artificial intelligence (AI) has not only produced new developments in technology and science but also prompted concerns about its impact on commerce and privacy, among other issues. For that reason, the United States is exploring what actions it can take, including an AI advisory committee that will keep the government informed of new developments as the technology evolves. AI can address a wide range of problems, from uncovering the identities of anonymous internet users to predicting the weather with striking accuracy. The question arises: what should artificial intelligence be used for, and how should the United States regulate it? For private technology companies, the answer to these questions is simple.
As AI adoption ramps up exponentially, so does the discussion around -- and concern for -- accountable AI. While tech leaders and field researchers understand the importance of developing AI that is ethical, safe, and inclusive, they still grapple with gaps in regulatory frameworks and with "ethics washing" or "ethics shirking" that diminish accountability. Perhaps most importantly, the concept itself is not yet clearly defined. Many sets of suggested guidelines and tools exist -- from the U.S. National Institute of Standards and Technology's Artificial Intelligence Risk Management Framework to the work of the European Commission's High-Level Expert Group on AI -- but they are not cohesive and are often vague or overly complex.