If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Referring to my screen, I'm going to finish the presentation first. So, I think this is good: it confirms what I have in my slide, coming from our consultation, which came in at 30%. I would say this is very interesting and promising, and perhaps sets the stage for the discussion on how we should achieve it. We will come out early next year with the revised plan and a proposed regulatory framework that will apply to Europe, but of course we are interested in discussing and exchanging ideas with other countries, in particular with the UA, and I look forward to this.
The products and services we use in our daily lives have to abide by safety and security standards, from car airbags to construction materials. But no such broad, internationally agreed-upon standards exist for artificial intelligence. And yet, AI tools and technologies are steadily being integrated into all aspects of our lives. AI's potential benefits to humanity, such as improving health-care delivery or tackling climate change, are immense. But potential harms caused by AI tools – from algorithmic bias and labour displacement to risks associated with autonomous vehicles and weapons – threaten to erode trust in AI technologies. To tackle these problems, a new partnership between AI Global, a nonprofit organization focused on advancing responsible and ethical adoption of artificial intelligence, and the Schwartz Reisman Institute for Technology and Society (SRI) at the University of Toronto will create a globally recognized certification mark for the responsible and trusted use of AI systems.
Kay Firth-Butterfield was teaching AI, ethics, law, and international relations when a chance meeting on an airplane landed her a job as chief AI ethics officer. In 2017, Kay became head of AI and machine learning at the World Economic Forum, where her team develops tools and on-the-ground programs to improve AI understanding and governance across the globe. Your reviews are essential to the success of Me, Myself, and AI. For a limited time, we're offering a free download of MIT SMR's best articles on artificial intelligence to listeners who review the show. Send a screenshot of your review to firstname.lastname@example.org to receive the download. Kay Firth-Butterfield is head of AI and machine learning and a member of the executive committee of the World Economic Forum. In the United Kingdom, she is a barrister with Doughty Street Chambers and has worked as a mediator, arbitrator, part-time judge, business owner, and professor. She is vice chair of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems and serves on the Polaris Council of the U.S. Government Accountability Office advising on AI. In the final episode of the first season of the Me, Myself, and AI podcast, Kay joins cohosts Sam Ransbotham and Shervin Khodabandeh to discuss the democratization of AI, the values of good governance and ethics in technology, and the importance of having people understand the technology across their organizations -- and society.
Few would dispute the idea that artificial intelligence will be a transformative technology for financial services. Yet the view of how that transformation will shake out may be evolving significantly. A report from Deloitte and the World Economic Forum contends that in the near future, technology expertise will grow so commonly available that raw AI and multiple technologies built around that hub will not be what separates the winners from the other players. Instead, as envisioned by the report, the transformative technologies that excite so many today will become as basic to the industry as the longstanding payments rails they all share today. What institutions do with that transformative technology will mean much more and that will hinge on some surprisingly basic ideas.
Artificial intelligence (AI) continues to transform businesses and society, putting pressure on companies in nearly every industry to invest in the rapidly evolving space. As AI exerts an ever-increasing effect on our lives, the need for responsible AI grows. Responsible AI may already be a widely discussed topic, but how many companies are actually putting its principles into practice? In part one of our five-part series on 2021 predictions, we focus on the future of responsible AI. Responsible AI can mean many things: reducing model bias, enhancing data privacy, ensuring fair pay for members of the AI supply chain, and more.
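One of those principles can be made concrete in a few lines of code. The sketch below measures demographic parity – whether a model issues positive predictions at similar rates across groups – as one simple bias check; the predictions, group labels, and choice of metric are illustrative, not drawn from the article.

```python
# Illustrative responsible-AI check: demographic parity.
# A large gap means the model favours one group's members
# with positive outcomes far more often than another's.

def positive_rate(predictions, groups, group):
    """Fraction of positive (1) predictions among members of `group`."""
    selected = [p for p, g in zip(predictions, groups) if g == group]
    return sum(selected) / len(selected)

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [positive_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy example: 1 = approved, 0 = denied, for two demographic groups.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)  # 0.75 - 0.25 = 0.5
```

In practice a team would compute such metrics on held-out data and set an acceptable threshold; libraries such as Fairlearn offer richer, audited implementations.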
AI is receiving a push from the race to find a vaccine, diagnostics, and effective treatments for COVID-19, and the push has also heightened awareness of the need to implement AI that is transparent and free of bias – AI that can be trusted. The World Economic Forum is one organization that has responded. With ethics in mind, the organization's AI and Machine Learning team recently announced its Procurement in a Box toolkit, with concrete advice on purchasing, risk assessment, proposal drafting, and evaluation. To produce the toolkit, the Forum worked over the past year with many organizations, including the United Kingdom's Office for AI in the Department for Digital, Culture, Media & Sport, Deloitte, Salesforce, and Splunk, as well as 15 other countries and more than 150 members of government, academia, civil society and the private sector. The development process incorporated workshops and interviews with government procurement officials and private sector procurement professionals, according to a recent account in Modern Diplomacy.
Kay Firth-Butterfield is Head of AI & ML at the World Economic Forum and a humanitarian with a strong sense of social justice. Read the full transcript below and watch the video here. It's really great to be with you, and thanks to RE.WORK for making it happen. My title is "Does AI Ethics Matter?" Well, I'm going to give you two reasons for why it does.
There are many more trees in the West African Sahara than previously thought, according to a recent study based on AI and satellite imagery and published in the journal Nature. Researchers counted more than 1.8 billion trees and shrubs across a 501,933 square-mile (1.3 million square-kilometer) area encompassing the western-most region of the Sahara Desert, the Sahel, and the sub-humid zones of West Africa, reports The World Economic Forum. "We were very surprised to see that quite a few trees actually grow in the Sahara Desert, because up until now, most people thought that virtually none existed," said Professor Martin Brandt from the geosciences and natural resource management department of the University of Copenhagen and lead author of the recent study. "We counted hundreds of millions of trees in the desert alone. Doing so wouldn't have been possible without this technology," explained Brandt, according to a blog post on the University of Copenhagen's website.
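As a rough illustration of the counting step in such a pipeline: once a model has segmented satellite imagery into a binary canopy mask, individual crowns can be counted as connected components. The tiny mask below is invented for illustration; the actual study used deep learning on sub-metre resolution imagery at a vastly larger scale.

```python
# Count distinct "tree" regions in a binary canopy mask by flood-filling
# each 4-connected component of 1s exactly once.

def count_components(mask):
    """Count 4-connected regions of 1s in a 2-D binary mask."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1              # found a new, unvisited region
                stack = [(r, c)]        # flood-fill everything attached to it
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

mask = [
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 1, 0, 1],
]
n_trees = count_components(mask)  # three separate regions
```

Production pipelines would use an optimized labelling routine such as `scipy.ndimage.label` rather than a hand-rolled flood fill, but the counting logic is the same.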
The COVID-19 pandemic has accelerated technological advances and the automation of many routine tasks – from contactless cashiers to robots delivering packages. In this environment, many are concerned that artificial intelligence (AI) will drive significant automation and destroy jobs in the coming decades. Just a few decades ago, the internet created similar concerns as it grew. Despite skepticism, the technology created millions of jobs and now comprises 10% of US GDP. Today, AI is poised to create even greater growth in the US and global economies.
Human-computer image generation using Generative Adversarial Networks (GANs) is becoming a well-established methodology for casual entertainment and open artistic exploration. Here, we take the interaction a step further by weaving in carefully structured design elements to transform the activity of ML-assisted image generation into a catalyst for large-scale popular dialogue on complex socioscientific problems, such as the United Nations Sustainable Development Goals (SDGs), and as a gateway for public participation in research.
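For readers unfamiliar with the adversarial setup behind GANs, the sketch below shows the core training loop on 1-D data: a generator maps noise to samples, a discriminator scores real versus generated samples, and the two are updated in alternation. The linear models, learning rate, and target distribution here are illustrative stand-ins; real image GANs use deep networks and batched optimizers.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    """Numerically stable logistic function."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)

# Generator G(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0   # generator parameters (initially outputs samples near 0)
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.05

def real_sample():
    return random.gauss(4.0, 0.5)  # "real" data: Gaussian centred at 4

for step in range(2000):
    z = random.gauss(0.0, 1.0)
    x_real, x_fake = real_sample(), a * z + b

    # Discriminator step: push D(x_real) toward 1 and D(x_fake) toward 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator step (non-saturating loss): push D(G(z)) toward 1.
    d_fake = sigmoid(w * x_fake + c)
    grad_x = (1 - d_fake) * w   # gradient of log D(x_fake) w.r.t. x_fake
    a += lr * grad_x * z
    b += lr * grad_x

# After training, the generator's offset b should have drifted from 0
# toward the real data's mean, as neither player can easily improve further.
```

The same alternating-update structure scales up to the image GANs described above, with convolutional networks in place of the two linear models.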