Late last year, China's Ministry of Science and Technology issued guidelines on artificial intelligence ethics. The rules stress user rights and data control while aligning with Beijing's goal of reining in big tech. China is now trailblazing the regulation of AI technologies, and the rest of the world needs to pay attention to what it's doing and why. The European Union issued a preliminary draft of AI-related rules in April 2021, but nothing final has emerged. In the United States, the notion of ethical AI has gained some traction, but there are no overarching regulations or universally accepted best practices.
Keeping up with artificial intelligence (AI) and data privacy can be overwhelming. While there is plenty of promise and opportunity, there are also concerns about data misuse and risks to personal privacy. As the Fourth Industrial Revolution unfolds, questions arise about the promise and peril of AI, and about how organizations can put steps in place to better realize its value. Integrating "ethics" into technology products can feel abstract to engineers and developers. While many technology companies are independently working on initiatives to do this in concrete and tangible ways, it is imperative that we break out of those silos and share best practices.
The idea of artificial intelligence, a term first coined in 1956, has dominated popular film (think The Matrix trilogy or Stanley Kubrick's 2001: A Space Odyssey) and ethical debate, which in the UK is addressed by the National Centre for Data Ethics and Innovation, a body that aims to position the UK as a world-leading force for the future of AI. This public body cannot address the challenge of ethical AI alone; ensuring that AI develops as a force for good requires industry collaboration. To that end, Digital Catapult has released its first Ethics Framework as a means to integrate ethical practice into the development of artificial intelligence and machine learning technologies, and the organisation has invited AI companies to test the framework.
A new report from FICO and Corinium has found that many companies are deploying various forms of AI throughout their businesses with little consideration for the ethical implications of potential problems. The increasing scale of AI is raising the stakes for major ethical questions. There have been hundreds of examples over the last decade of disastrous ways companies have used AI, from facial recognition systems unable to discern darker-skinned faces, to healthcare apps that discriminate against African American patients, to recidivism calculators used by courts that skew against certain races. Despite these examples, FICO's State of Responsible AI report shows business leaders are putting little effort into ensuring that the AI systems they use are both fair and safe for widespread use. The survey, conducted in February and March, features the insights of 100 AI-focused leaders from the financial services sector, with 20 executives each from the US, Latin America, Europe, the Middle East and Africa, and the Asia Pacific region.
As nations across the world slowly reopen their economies after extended lockdowns, businesses will need to hit the ground running to operate in a new abnormal. One way companies can meet that acceleration safely is by adopting smart technology, especially tools and platforms enabled by artificial intelligence. However, because these tools and platforms are built on algorithms, there is concern that the use of AI might inadvertently introduce and perpetuate biases. Here, a business's commitment to ethical operation is a must in a more transparent world where consumers are keenly aware of a company's track record and conduct. What can businesses do to tackle this challenge effectively? How can organizations safely deploy AI-enabled platforms to do more with less while ensuring that they are always doing the right thing?