You browse an e-commerce site on your mobile device, looking for a pair of shoes. Then, with every swipe on your phone, you see ads from other retailers offering you shoes, shoes, and more shoes. Are you flattered that the retailer shared your session cookie with third parties? Or do you shake your head, annoyed that these ads are following you everywhere? Now consider the opposite scenario: you visit an online retailer and can't find what you're looking for.
These days it seems that nearly every product and startup boasts some kind of A.I. capability, but when it comes to advancing the field beyond simplistic machine learning, technologists at MIT Technology Review's Future Compute conference say A.I. will need to become more human. Discussing A.I. during the conference's first day on December 2nd, speakers focused on two distinct paths for the technology: more human-like A.I.s as well as more computer-like humans. This dual approach was presented as a potential future for human-machine symbiosis. But what exactly does that mean, and is it even a good thing? Catherine Schuman, a research scientist at Oak Ridge National Laboratory, began the conversation by presenting her work on neuromorphic computing.
Artificial intelligence (AI): the hype is real. But is the impact of AI real? By one dictionary definition, AI is "…the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages". In essence, AI is intelligence developed in machines, as opposed to the natural intelligence developed in humans. And if the hype is to be believed, AI is here to make your life easier: less complex, less burdened by decision-making, less stressful. Artificial intelligence now plays a major role in how your home works: from your sound system, your toaster, and your security to the temperature of your lounge room.
As of January 1, 2020, organizations that employ individuals based in Illinois will need to keep in mind the Artificial Intelligence Video Interview Act. The Act sets forth new requirements for video-recorded interviews that use AI to analyze the recordings. The law is not limited to Illinois residents; it applies to applicants for positions based in Illinois. While brief, and without any definitions, the Act requires employers to do three things before using AI technology in video interviews.
We think of AI as an arbiter of neutrality, but when fed biased data it churns out biased results. At the beginning of 2017, Amazon's machine learning division shuttered an artificial intelligence (AI) project it had been working on for the previous three years. A team in that wing had been building computer programmes designed to review job applicants' resumes, giving them star-ratings from one to five – not unlike the way shoppers can rate products purchased from Amazon online. However, within a year of the project beginning, the company realised its system was biased against female applicants. The software was trained to vet applicants by observing patterns in resumes submitted to the company over a ten-year period, the majority of which – due to the male dominance of the tech industry – came from men.
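The mechanism behind "biased data in, biased results out" can be made concrete with a toy sketch. The data, tokens, and scoring rule below are entirely synthetic and hypothetical – this is not Amazon's actual system – but they show how a scorer trained on historically skewed hiring outcomes learns to penalize features that merely correlate with gender:

```python
# Hypothetical sketch: a naive resume scorer trained on skewed historical
# hiring data. All resumes and tokens are invented for illustration.
from collections import defaultdict

# Historical outcomes: the hires in this synthetic training set are skewed,
# so a gender-correlated token appears only on rejected resumes.
history = [
    ({"python", "chess_club"}, 1),
    ({"python", "golf"}, 1),
    ({"java", "golf"}, 1),
    ({"python", "womens_chess_club"}, 0),  # qualified, but historically not hired
    ({"java", "womens_chess_club"}, 0),
]

# "Training": score each token by the hire rate of resumes containing it.
counts = defaultdict(lambda: [0, 0])  # token -> [hires, total]
for tokens, hired in history:
    for t in tokens:
        counts[t][0] += hired
        counts[t][1] += 1

def score(tokens):
    # Average per-token hire rate; unseen tokens get a neutral 0.5.
    rates = [counts[t][0] / counts[t][1] if t in counts else 0.5
             for t in tokens]
    return sum(rates) / len(rates)

# Two candidates with the same skill token: the scorer downgrades one
# purely because a token on their resume correlates with past rejections.
print(score({"python", "chess_club"}))         # higher score
print(score({"python", "womens_chess_club"}))  # penalized by correlation alone
```

Nothing in the scoring rule mentions gender; the skew enters entirely through the historical labels, which is why retraining on the same data cannot fix the bias.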
What does AI mean for businesses big and small? What key opportunities and challenges does it present? Two experts on the topic weigh in: Rotman School Dean Tiff Macklem and Scotiabank CTO Michael Zerbs. Can you talk a bit about how you're leveraging third-party datasets as part of your AI strategy? MZ: If you work for a large organization, never underestimate the challenge of just getting at the data that you think you've already got.
Dr. Ansgar Koene is Global AI Ethics and Regulatory Leader at EY, where he supports the AI Lab's policy activities on Trusted AI. He is also a Senior Research Fellow at the RCUK-funded Horizon Digital Economy Research institute (University of Nottingham), where he contributes to the policy impact activities of the institute and leads the policy-related stakeholder engagement activities of the ReEnTrust project. As part of this work, Ansgar has provided evidence to twelve UK parliamentary inquiries, co-authored a report on Bias in Algorithmic Decision-Making for the Centre for Data Ethics and Innovation, and was lead author of a Science and Technology Options Assessment report on a Governance Framework for Algorithmic Accountability and Transparency for the European Parliament. Ansgar chairs the IEEE P7003 Standard for Algorithmic Bias Considerations working group, is the Bias Focus Group leader for the IEEE Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS), and is a trustee of the 5Rights Foundation for the rights of young people online. Ansgar has a multi-disciplinary research background, having worked and published on topics ranging from policy and governance of algorithmic systems (AI), data privacy, AI ethics, AI standards, bio-inspired robotics, and AI and computational neuroscience to experimental studies of human behaviour and perception.
Artificial Intelligence (AI) is acquiring increasing importance in many applications that support decision-making in various areas, including healthcare, consumption, and risk classification of individuals. The growing impact of AI on people's lives naturally raises the question about its ethical and moral components. Are AI decisions ethically acceptable? How can we ensure that AI remains ethical over time? Should we dominate AI and impose specific behavioural rules, possibly limiting its enormous potential, or should we allow AI to develop its own ethics, possibly ultimately subjugating us to intellectual slavery?
The Information Commissioner's Office (ICO) is putting forward guidance under which businesses and other organizations are required to explain decisions made by artificial intelligence (AI) or face multimillion-dollar fines if they are unable to. The guidance will provide advice on how to explain to affected individuals the procedures, services, and outcomes delivered or assisted by AI, and would detail the documentation of the decision-making process and the data used to arrive at a decision. In extreme cases, organizations that fail to comply may face a fine of up to 4 percent of global turnover under the EU's data protection law. The new guidance is crucial because many firms in the UK already use some form of AI to make critical business decisions, such as shortlisting and hiring candidates for roles.
In January 2019, Smart Dubai launched the city's official principles and guidelines for the ethical implementation of AI. What truly makes Dubai's approach to AI unique is our city government's AI Ethics Self-Assessment Toolkit, which allows anyone implementing AI to self-assess their performance against a set of criteria that, taken together, assure an ethical approach. The process uses data from the toolkit to create a positive feedback loop with those using and developing AI. Express Computer spoke to H.E. Younus Al Nasser, Assistant Director General, Smart Dubai, and CEO, Smart Dubai Data. What potential do you see in AI for governance and happiness?