
6 business risks of shortchanging AI ethics and governance


Depending on which Terminator movies you watch, the evil artificial intelligence Skynet has either already taken over humanity or is about to do so. But it's not just science fiction writers who are worried about the dangers of uncontrolled AI. In a 2019 survey by Emerj, an AI research and advisory company, 14% of AI researchers said that AI was an "existential threat" to humanity. Even if the AI apocalypse doesn't come to pass, shortchanging AI ethics poses big risks to society -- and to the enterprises that deploy those AI systems. Central to these risks are factors inherent to the technology -- for example, how a particular AI system arrives at a given conclusion, known as its "explainability" -- and those endemic to an enterprise's use of AI, including reliance on biased data sets or deploying AI without adequate governance in place.

Artificial Intelligence and Automated Systems Legal Update (1Q22)


Secretary shall support a program of fundamental research, development, and demonstration of energy efficient computing and data center technologies relevant to advanced computing applications, including high performance computing, artificial intelligence, and scientific machine learning.").

GPT-3 and GPT-4 Could Ruin the Future Internet


This is an op-ed about the future of the internet and, while speculative, it is an attempt to demonstrate how artificial intelligence deployed at scale in human society could have disastrous impacts without AI regulation and AI ethics to protect us. GPT-3 stands for Generative Pre-trained Transformer 3. As you likely already know, GPT-3 is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2) created by Microsoft-funded OpenAI (which was originally founded as a not-for-profit firm). In terms of artificial intelligence activity, 2021 was an NLP-explosion year.
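The "autoregressive" generation the excerpt mentions simply means the model predicts one token at a time, feeding each prediction back in as context for the next. A minimal sketch of that loop, using a toy bigram table in place of GPT-3's neural network (the corpus and function names here are illustrative, not anything from OpenAI's API):

```python
import random

# Toy corpus standing in for training data; a real model learns from billions of tokens.
CORPUS = "the cat sat on the mat and the cat ran".split()

def build_bigram_model(tokens):
    """Map each token to the list of tokens observed to follow it."""
    model = {}
    for prev, nxt in zip(tokens, tokens[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def generate(model, start, max_tokens=5, seed=0):
    """Autoregressive loop: sample the next token conditioned on the last one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_tokens):
        candidates = model.get(out[-1])
        if not candidates:
            break  # dead end: no observed continuation
        out.append(rng.choice(candidates))  # sample from the conditional distribution
    return " ".join(out)

model = build_bigram_model(CORPUS)
print(generate(model, "the"))
```

GPT-3 replaces the bigram lookup with a transformer conditioned on the entire preceding context, but the generation loop is the same shape: predict, append, repeat.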

AI Startups Finally Getting Onboard With AI Ethics And Loving It, Including Those Newbie Autonomous Self-Driving Car Tech Firms Too


AI startups are increasingly embracing AI ethics, though this is trickier than it might seem at first glance. Whatever you are thinking, think bigger. Fake it until you make it. These are the typical startup lines that you hear or see all the time. They have become a kind of advisory lore amongst budding entrepreneurs. If you wander around Silicon Valley, you'll probably see bumper stickers with those slogans and likely witness high-tech founders wearing hoodies emblazoned with such tropes. AI-related startups are assuredly included in the bunch. Perhaps we might add an additional piece of startup success advice for nascent AI-aiming firms, namely that they should energetically embrace AI ethics. That is a bumper-sticker-worthy notion and assuredly a useful piece of sage wisdom for any AI founder trying to figure out how to be a proper leader and a winning entrepreneur. For my ongoing and extensive coverage of AI ethics and Ethical AI, see the link here and the link here, just to name a few. The first impulse of many AI startups is likely the exact opposite of wanting to embrace AI ethics. Often, the focus of an AI startup is primarily on getting some tangible AI system out the door as quickly as possible. There is usually tremendous pressure to produce an MVP (minimum viable product). Investors are skittish about putting money into some newfangled AI contrivance that might not be buildable, and therefore the urgency to craft an AI pilot or prototype is paramount.

The best way to regulate artificial intelligence? The EU's AI Act


With the Artificial Intelligence Act (AI Act), we have – again – crossed the Rubicon. The die has been cast; there is no way back. We are setting standards for another industry that until now has been left mostly on its own, that has important social functions, and that is of central importance in the global tech rivalry. The European electorate was and still is quite united in demanding rules for digital players while maintaining easy digital access and competitiveness for all things digital. With the AI Act and other legislation currently under way in such fields as cybersecurity, data, crypto and chips, the European Union is finalizing what it began with the General Data Protection Regulation (GDPR), the Digital Services Act (DSA) and the Digital Markets Act (DMA). It will surely not be the last time digital policy is undertaken in Brussels, and some updates to these regulations are already necessary.

Why We Need Ethical AI: 5 Initiatives to Ensure Ethics in AI


Artificial intelligence (AI) has already had a profound impact on business and society. Applied AI and machine learning (ML) are creating safer workplaces, more accurate health diagnoses and better access to information for global citizens. The Fourth Industrial Revolution will represent a new era of partnership between humans and AI, with potentially positive global impact. AI advancements can help society solve problems such as income inequality and food insecurity, creating a more "inclusive, human-centred future," according to the World Economic Forum (WEF). There is nearly limitless potential in AI innovation, which is both positive and frightening.

What Stanford's recent AI conference reveals about the state of AI accountability


As AI adoption continues to ramp up exponentially, so does the discussion around -- and concern for -- accountable AI. While tech leaders and field researchers understand the importance of developing AI that is ethical, safe and inclusive, they still grapple with issues around regulatory frameworks and concepts of "ethics washing" or "ethics shirking" that diminish accountability. Perhaps most importantly, the concept is not yet clearly defined. While many sets of suggested guidelines and tools exist -- from the U.S. National Institute of Standards and Technology's Artificial Intelligence Risk Management Framework to the European Commission's Expert Group on AI, for example -- they are not cohesive and are very often vague and overly complex.

Regulating AI Through Data Privacy

Stanford HAI

In the absence of a national data privacy law in the U.S., California has been more active than any other state in efforts to fill the gap on a state level. The state enacted one of the nation's first data privacy laws, the California Privacy Rights Act (Proposition 24) in 2020, and an additional law will take effect in 2023. A new state agency created by the law, the California Privacy Protection Agency, recently issued an invitation for public comment on the many open questions surrounding the law's implementation. Our team of Stanford researchers, graduate students, and undergraduates examined the proposed law and have concluded that data privacy can be a useful tool in regulating AI, but California's new law must be more narrowly tailored to prevent overreach, focus more on AI model transparency, and ensure people's rights to delete their personal information are not usurped by the use of AI. Additionally, we suggest that the regulation's proposed transparency provision requiring companies to explain to consumers the logic underlying their "automated decision making" processes could be more powerful if it instead focused on providing greater transparency about the data used to enable such processes. Finally, we argue that the data embedded in machine-learning models must be explicitly included when considering consumers' rights to delete, know, and correct their data.

US Companies Must Deal with EU AI law, Like It or Not


Don't look now, but using Google Analytics to track your website's audience might be illegal. That's the view of a court in Austria, which in January found that Google's data product was in breach of the European Union's General Data Protection Regulation (GDPR), as it was not doing enough to make sure data transferred from the EU to the company's servers in the US was protected (from, say, US intelligence agencies). For those working in AI and biotech, though, it matters, especially to those working outside of Europe with a view to expansion there. For a start, this is a major precedent that threatens to upend the way many tech companies work, since the tech sector relies heavily on the safe use and transfer of large quantities of data. Whether you use Google Analytics is neither here nor there; the case has shown that Privacy Shield -- the EU-US framework that governs the transfer of personal information in compliance with GDPR -- may not be compliant with European law after all.

Podcast transcript: Do we need AI regulation?


This automatically generated transcript is taken from the IT Pro Podcast episode 'Do we need AI regulation?'. To listen to the full episode, click here. The AI industry has been going from strength to strength over the past several years, with machine learning technology becoming increasingly widely available to businesses, along with a stream of breakthroughs in research and development. However, this explosion of AI capabilities has also brought its share of problems. Questions of model transparency, implicit bias and ethical deployment have frequently been levelled at efforts in this space. And numerous campaigners have called for governments to introduce legislation that would place greater controls on the development and implementation of AI systems. Joining us this week to discuss the issue of AI regulation -- whether it's necessary, and how it might be implemented without stifling innovation -- is Cindi Howson, chief data strategy officer for analytics software vendor ThoughtSpot. Cindi, great to have you on the show. Great to be here, Sabina, and Adam.