In the race to adopt rapidly developing technologies, organisations run the risk of overlooking potential ethical implications. And that could produce unwelcome results, especially in artificial intelligence (AI) systems that employ machine learning. Machine learning is a subset of AI in which computer systems learn from data rather than following fixed instructions: algorithms analyse data to detect patterns and acquire knowledge or abilities without being explicitly programmed for each task. It is this type of technology that powers voice-enabled assistants such as Apple's Siri or the Google Assistant, among myriad other uses.
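The "learning from data rather than explicit rules" idea can be sketched in a few lines of Python. This is a toy illustration only, not any production system: instead of a programmer hard-coding a decision rule, the program derives one from labelled examples (all names and data here are invented for the sketch).

```python
# Toy illustration of machine learning: the program infers its own
# decision rule (a threshold) from labelled examples, rather than
# having the rule written in by a programmer.

def learn_threshold(examples):
    """examples: list of (value, label) pairs, labels 0 or 1.
    Returns the midpoint between the mean of each class."""
    zeros = [v for v, y in examples if y == 0]
    ones = [v for v, y in examples if y == 1]
    return (sum(zeros) / len(zeros) + sum(ones) / len(ones)) / 2

# Hypothetical training data, e.g. "spam scores" labelled by humans.
training = [(1, 0), (2, 0), (3, 0), (7, 1), (8, 1), (9, 1)]
threshold = learn_threshold(training)  # the rule emerges from the data

def classify(value):
    """Apply the learned rule to a new, unseen value."""
    return 1 if value > threshold else 0
```

Nobody wrote the rule "flag anything above 5"; the program found it in the data. Real machine-learning systems work on the same principle, just with far more data and far more expressive models, which is precisely why their behaviour, and its ethical implications, can be hard to anticipate.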
The issue of corporate ethics is never far from the business media headlines. Take the troubles embroiling former Nissan chair Carlos Ghosn, or the accounting problems at Patisserie Valerie in the UK, to name just two recent examples. Despite the best intentions and efforts of policymakers, legislators, boards and professional consultants, the corporate scandals keep coming. Now, to further complicate matters, the latest developments in the digital revolution are adding a new dimension to the challenge of ensuring companies and their executives behave responsibly. Ioannis Ioannou, Associate Professor of Strategy and Entrepreneurship at London Business School, and Sam Baker, Monitor Deloitte Partner, suggest that, while the widespread introduction of AI and machine learning technologies can be a force for good, without the right approach there is a risk that the corporate ethics waters become even murkier.
For us and many other companies, trust has become a competitive advantage. As businesses race to adopt artificial intelligence (AI), their ability to use it ethically, in ways that generate trust from customers, partners, and the public, will become a competitive differentiator. This means companies need to make ethics and values a focus of AI development. Some of the reasons are obvious: three-quarters of consumers today say they won't buy from unethical companies, while 86% say they're more loyal to ethical companies, according to the 2019 Edelman Trust Barometer. And in Salesforce's recent Ethical Leadership and Business survey, 93% of consumers said companies have a responsibility to positively impact society.
In the past 18 months, we have seen a huge rise in interest in AI development and deployment. Countries are developing national strategies, and companies are positioning themselves for the fourth industrial revolution. With this pervasive push towards AI comes an increased awareness that AI systems should act in the interests of humans, and this is not as trivial as one might think. This article provides an overview of several key initiatives that propose ways of approaching AI ethics, regulation and sustainability. As this is a fast-evolving field, I aim to update this article regularly.
Artificial intelligence (AI) relies on big data and machine learning for myriad applications, from autonomous vehicles to algorithmic trading, and from clinical decision support systems to data mining. The availability of large amounts of data is essential to the development of AI. Given China's large population and business sector, both of which use digitized platforms and tools to an unparalleled extent, it may enjoy an advantage in AI. In addition, it has fewer constraints on the use of information gathered through the digital footprint left by people and companies. India has also taken a series of similar steps to digitize its economy, including biometric identity tokens, demonetization and an integrated goods and services tax.