If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Hosted by Dylan Doyle-Burke and Jessie J Smith, Radical AI is a podcast featuring the voices of the future in the field of artificial intelligence ethics. In this episode Jess and Dylan chat with Ryan Calo about robot regulation. What is robot regulation, and why does it matter? To answer this question we welcome to the show Ryan Calo, a professor at the University of Washington School of Law.
There is a severe knowledge gap. Business leaders and HR practitioners ground their quantitative reasoning in the descriptive or inferential statistics we all learned, and machine learning is entirely different. Understanding it well enough to evaluate its potential risks, let alone to audit algorithms, requires a systematic approach and a practical methodology. As part of my continuous learning, collaboration, and contribution, which I hope will lead toward a workable approach to evaluating the ethics of workforce AI, I maintain a comprehensive resource list that is updated monthly.
Artificial intelligence is no longer a buzz phrase: it's doing real work for real companies. Even in the early stages of implementation, AI is providing enterprise organizations with benefits: efficiency in operations, cybersecurity protections, digital innovation, and stronger customer relationships. Next up for AI in the enterprise is the ability to scale, with more apps serving more departments. However, the race to implement AI and machine learning also raises citizen privacy concerns, and there have been revelations about the potential for algorithmic bias reflected in data sources.
Kay Firth-Butterfield was teaching AI, ethics, law, and international relations when a chance meeting on an airplane landed her a job as chief AI ethics officer. In 2017, Kay became head of AI and machine learning at the World Economic Forum, where her team develops tools and on-the-ground programs to improve AI understanding and governance across the globe. Your reviews are essential to the success of Me, Myself, and AI. For a limited time, we're offering a free download of MIT SMR's best articles on artificial intelligence to listeners who review the show. Send a screenshot of your review to firstname.lastname@example.org to receive the download. Kay Firth-Butterfield is head of AI and machine learning and a member of the executive committee of the World Economic Forum. In the United Kingdom, she is a barrister with Doughty Street Chambers and has worked as a mediator, arbitrator, part-time judge, business owner, and professor. She is vice chair of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems and serves on the Polaris Council of the U.S. Government Accountability Office advising on AI. In the final episode of the first season of the Me, Myself, and AI podcast, Kay joins cohosts Sam Ransbotham and Shervin Khodabandeh to discuss the democratization of AI, the values of good governance and ethics in technology, and the importance of having people understand the technology across their organizations -- and society.
Technology continues to advance at an impressive rate. And while the novelty of technologies such as self-parking cars and robotic vacuums has worn off, we are still many years away from the age of computers capable of human-level thought. Artificial neural networks (ANNs) are among the few techniques currently available for training machines to learn in ways loosely inspired by human cognition, and they are the core tool of deep learning. Artificial intelligence, defined broadly, is the field of training machines to autonomously perform tasks normally thought to require intelligence. Beneath that umbrella sits machine learning, in which machines learn tasks from data rather than explicit programming, and deep learning is a further subcategory of machine learning.
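To make the ANN idea concrete, here is a minimal sketch of the smallest possible artificial neural network: a single artificial neuron with a sigmoid activation, trained by gradient descent to learn the logical AND function. All names and the training setup here are illustrative assumptions, not drawn from any particular library or from the text above.

```python
import math

def sigmoid(z):
    """Squash a weighted sum into the (0, 1) range."""
    return 1.0 / (1.0 + math.exp(-z))

# Training data: inputs and targets for logical AND.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

w = [0.0, 0.0]  # connection weights
b = 0.0         # bias term
lr = 0.5        # learning rate

for _ in range(5000):
    for x, t in data:
        y = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        # Gradient of squared error with respect to the pre-activation sum.
        grad = (y - t) * y * (1 - y)
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b -= lr * grad

predictions = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data]
print(predictions)  # learned approximation of AND: [0, 0, 0, 1]
```

Deep learning stacks many layers of such neurons, but the core loop is the same: compute an output, measure the error, and nudge the weights downhill.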
Now, more than ever, our everyday technology provides brands and advertisers with a unique window into consumer psychology. Questions around ethics, consumer rights, transparency, and data privacy deserve careful thought and deliberation. These issues are central to the work of Fiona McEvoy, an AI ethics writer, researcher, and speaker based in San Francisco, California. She was named one of the 30 Women Influencing AI in San Francisco by RE•WORK and one of the 100 Brilliant Women in AI Ethics (2019 & 2020). Compared with other business sectors, why does the technology industry pose unique ethical challenges?
After two or three years of researching and consulting in this burgeoning field, it seems appropriate to compile what I've found so far. This article (actually, it's in two parts) is more of an op-ed than a typical article. Over time, I de-emphasized ethics and moral philosophy as subjects. They aren't necessary for creating practical frameworks for producing ethical AI, and they crowded out the prescriptive work that is necessary (and unfortunately still do across the industry). Reviewing my early contributions, and those of others, three things are missing: first, there have been substantial developments in AI in the past two or three years; second, those developments have raised new ethical issues; and third, I alluded to practices and remedies but did not provide a prescriptive framework for resolving these pressing issues.
The age of pervasive AI is here. Since 2017, Deloitte's annual State of AI in the Enterprise report has measured the rapid advancement of AI technology globally and across industries. In the most recent edition, published in July 2020, a majority of those surveyed reported significant increases in AI investments, with more than three-quarters believing that AI will substantially transform their organization in the next three years. In addition, AI investments are increasingly leading to measurable organizational benefits: improved process efficiency, better decision-making, increased worker productivity, and enhanced products and services. These benefits have likely driven the growth in AI's perceived value to organizations: nearly three-quarters of respondents report that AI is strategically important, an increase of 10 percentage points from the previous survey.
The first edition of an event series to be held every three months with global outreach. Experts with diverse backgrounds and perspectives share their insights and experience on how to put theoretical AI ethics frameworks into practice. Join in and put your questions to the stellar speakers in Q&A sessions. Jean-Matthieu Schertzer - Senior Data Scientist, H2O.ai