If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
You browse an e-commerce site on your mobile device, looking for a pair of shoes. Then, with every swipe on your phone, you see ads from other retailers offering you shoes, shoes and more shoes. Are you flattered that the retailer shared your session cookie with third parties? Or do you shake your head, annoyed that these ads are following you everywhere?
We think of AI as an arbiter of neutrality, but when fed biased data it churns out biased results. At the beginning of 2017, Amazon's machine learning division shuttered an artificial intelligence (AI) project it had been working on for the past three years. The team had been building computer programmes designed to review job applicants' resumes, giving them star ratings from one to five – not unlike the way shoppers can rate products purchased from Amazon online. However, within a year of the project beginning, the company realised its system was biased against female applicants. The software was trained to vet applicants by observing patterns in resumes submitted to the company over a ten-year period, the majority of which – owing to the male dominance of the tech industry – came from men.
Dr. Ansgar Koene is Global AI Ethics and Regulatory Leader at EY, where he supports the AI Lab's policy activities on Trusted AI. He is also a Senior Research Fellow at the RCUK-funded Horizon Digital Economy Research institute (University of Nottingham), where he contributes to the policy impact activities of the institute and leads the policy-related stakeholder engagement activities of the ReEnTrust project. As part of this work Ansgar has provided evidence to twelve UK parliamentary inquiries, co-authored a report on Bias in Algorithmic Decision-Making for the Centre for Data Ethics and Innovation, and was lead author of a Science and Technology Options Assessment report on a Governance Framework for Algorithmic Accountability and Transparency for the European Parliament. Ansgar chairs the IEEE P7003 Standard for Algorithmic Bias Considerations working group, is the Bias Focus Group leader for the IEEE Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS), and a trustee for the 5Rights Foundation for the Rights of Young People Online. Ansgar has a multi-disciplinary research background, having worked and published on topics ranging from policy and governance of algorithmic systems (AI), data privacy, AI ethics, AI standards, bio-inspired robotics, AI and computational neuroscience to experimental human behaviour/perception studies.
Artificial Intelligence (AI) is acquiring increasing importance in many applications that support decision-making in various areas, including healthcare, consumption, and risk classification of individuals. The growing impact of AI on people's lives naturally raises questions about its ethical and moral components. Are AI decisions ethically acceptable? How can we ensure that AI remains ethical over time? Should we dominate AI and impose specific behavioural rules, possibly limiting its enormous potential, or should we allow AI to develop its own ethics, at the risk of ultimately subjugating us to intellectual slavery?
If recent television shows are anything to go by, we're a little concerned about the consequences of technological development. Black Mirror projects the negative consequences of social media, while artificial intelligence turns rogue in The 100 and Better Than Us. The potential extinction of the human race is up for grabs in Travelers, and Altered Carbon frets over the separation of human consciousness from the body. Humans and Westworld both see trouble ahead for human-android relations. Narratives like these have a long lineage.
With Artificial Intelligence playing a central role in the era of digital transformation, sharing information and knowledge about the advancements and the opportunities brought by AI is becoming crucial. The influencers presented in this list are actively sharing information and knowledge about AI and disruptive technologies. They are the co-founders of AI ventures, machine learning professors, data scientists, and tech and digital transformation leaders. We want to thank the influencers who took part as evaluation committee members in the TOP 25 Initiative 2018, including Spiros Margaris, Alvin Foo, and Vinod Sharma, and we can officially announce that applications are open to join the evaluation committee of the TOP 25 Initiative 2019.
Exactly 10 years ago, Professor Andreas Kaplan and Professor Michael Haenlein wrote the seminal article 'Users of the world, unite! The challenges and opportunities of Social Media' without knowing that it would become the world's most-cited article on social media and a key reading on any literature list of digital business and management. For its anniversary, Kaplan and Haenlein decided to write a similar article, this time titled 'Rulers of the world, unite!' "When we wrote our article back in 2009, we saw the potential of social media to give power back to the people, i.e. users of the world, unite. As we know, social media powered by AI is now used to manipulate people, influence elections, and threaten democracies. Smart regulation and intervention are urgently needed, i.e. the title Rulers of the world, unite!" state Professors Kaplan and Haenlein.
In June, TechCrunch Ethicist in Residence Greg M. Epstein attended EmTech Next, a conference organized by the MIT Technology Review. The conference, which took place at MIT's famous Media Lab, examined how AI and robotics are changing the future of work. Greg's essay, "Will the Future of Work Be Ethical?", reflects on his experiences at the conference, which produced what he calls "a religious crisis, despite the fact that I am not just a confirmed atheist but a professional one as well." In it, Greg explores themes of inequality, inclusion and what it means to work in technology ethically, within a capitalist system and market economy. Accompanying the story for Extra Crunch are a series of in-depth interviews Greg conducted around the conference, with scholars, journalists, founders and attendees.
Meili Gupta is about to ask another question. A poised and eloquent rising senior at the elite boarding school Phillips Exeter Academy, Gupta, 17, is anything but the introverted, soft-spoken techie stereotype. She does, however, know as much about computer science as any high school student you'd ever meet. She even grew up faithfully reading the MIT Technology Review, the university's flagship publication, and it shows: Gupta is the most ubiquitous student attendee at EmTech Next, a conference the publication held on campus this past summer on AI, machine learning, and "the future of work." Ostensibly, the conference is an opportunity for executives and tech professionals to rub elbows while determining how next-generation technologies will shape our jobs and economy in the coming decades. For me, the gathering feels more like an opportunity to have an existential crisis; I could even say a religious crisis, though I'm not just a confirmed atheist but a professional one as well.
It feels like this man needs no introduction, but for anyone who doesn't know who Demis Hassabis is, here's the lowdown. He's the cofounder and chief executive of the London-headquartered DeepMind AI lab, which was acquired by Google in 2014 for £400m. Prior to DeepMind, Hassabis had his own computer games company called Elixir Studios, but his passion for games goes way back. He was a chess master at the age of 13 and, at one time, the second-highest-rated under-14 player in the world. Catherine Breslin is a machine learning scientist and consultant based in Cambridge.