Artificial intelligence made in Europe

#artificialintelligence

Positive, reliable and human-centric artificial intelligence (AI) relies on the willingness of Europe as a whole to design a balanced and inclusive governance framework that would allow it to become a global leader in the development of trustworthy AI technologies. That was the main conclusion of the high-level workshop organised by the Panel for the Future of Science and Technology (STOA) on 29 January 2020 at the European Parliament in Brussels. The first STOA event of this parliamentary term (2019-2024) drew a full house, with Members of the European Parliament, European Commission leaders, academic experts and representatives of international organisations debating how to strike the right balance on AI. Harnessing the many benefits that the transformative power of AI can bring must also take account of the need to mitigate a number of potential risks, from infringing people's fundamental rights, such as privacy or non-discrimination, to undermining European values such as democracy, human dignity and the right to assemble. The event proved a timely occasion to discuss how Europe could maximise the benefits and address the challenges of AI in a human-centric way, coming only a few days before the publication of the European Commission's legislative plans on AI in the form of a White Paper on 19 February 2020.


Grilling the answers: How businesses need to show how AI decides

#artificialintelligence

Show your working: generations of mathematics students have grown up with this mantra. Getting the right answer is not enough; to get top marks, students must demonstrate how they got there. Now, machines need to do the same. As artificial intelligence (AI) is used to make decisions affecting employment, finance or justice, rather than merely suggesting which film a consumer might want to watch next, the public will insist that it show its working.


Artificial Intelligence: Possibilities of transforming the workplace

#artificialintelligence

As the Artificial Intelligence (AI) market in India matures, firms are facing unavoidable changes in their workplaces. Technology leaders and large firms are investing in this space, while a number of AI start-ups have mushroomed in India in recent years. To stay competitive and relevant, organisations would do well to rethink existing practices and develop new business models and offerings. This involves harnessing technologies such as machine learning, deep learning, computer vision and natural language processing, among others, to power intelligent systems.


Why Government Agencies Need to Incorporate Explainable AI in 2021

#artificialintelligence

In a world fueled by digital data, the use of artificial intelligence is pervasive, from the automation of human processes to the discovery of hidden insights at scale and speed. Machines can perform many tasks far more efficiently and reliably than humans, with the result that everyday life increasingly resembles science fiction. This inevitably sparks concern about the controls, or lack thereof, for inspecting these advanced technologies and ensuring they are used responsibly. Consumers want reassurance about the ethical use and fairness of AI, and businesses need to mitigate the risk of unintended consequences when deploying these advanced, complex solutions. Enter explainable AI, or XAI: an attempt to create transparency in the "black box" of artificial intelligence. Can you confidently answer the simple questions below about your current AI solutions?


Explainable AI - How humans can trust AI

#artificialintelligence

Artificial intelligence (AI) has gained growing momentum across many fields as a means of handling increased complexity, scale, and automation, and it now permeates digital networks as well. The complexity and sophistication of AI-powered systems have grown to such an extent that humans no longer understand the mechanisms by which these systems work or how they reach certain decisions, a challenge that is particularly acute when AI-based systems produce outputs that are unexpected or seemingly unpredictable. This especially holds true for opaque decision-making systems, such as those built on deep neural networks (DNNs), which are considered complex black-box models. The inability of humans to see inside these black boxes can hinder AI adoption, and even its further development, which is why growing levels of autonomy, complexity, and ambiguity in AI methods continue to increase the need for interpretability, transparency, understandability, and explainability of AI outputs (such as predictions, decisions, actions, and recommendations). These elements are crucial to ensuring that humans can understand, and consequently trust, AI-based systems (Mujumdar et al., 2020). Explainable artificial intelligence (XAI) refers to methods and techniques that produce accurate, explainable accounts of why and how an AI algorithm arrives at a specific decision, so that the results of AI solutions can be understood by humans (Barredo Arrieta et al., 2020).
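
To make this concrete, below is a minimal sketch of one common model-agnostic XAI technique, permutation feature importance, written in Python with scikit-learn. It illustrates the general idea only and is not a method taken from the articles above; the dataset, model choice, and parameters are assumptions made for the example.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a standard tabular dataset (an illustrative stand-in for real data).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque "black box" classifier.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test accuracy drops:
# the features whose permutation hurts the score most are the ones the model
# relies on most, giving a human-readable account of how it decides.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:<25s} mean importance = {result.importances_mean[idx]:.3f}")

Permutation importance is only one of many XAI approaches; perturbation-based explainers such as LIME, Shapley-value methods such as SHAP, and counterfactual explanations address the same trust problem from different angles.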