The Promise & Peril of Brain Machine Interfaces, with Ricardo Chavarriaga


ANJA KASPERSEN: Today's podcast will focus on artificial intelligence (AI), neuroscience, and neurotechnologies. My guest today is Ricardo Chavarriaga. Ricardo is an electrical engineer and a doctor of computational neuroscience. He is currently the head of the Swiss office of the Confederation of Laboratories for AI Research in Europe (CLAIRE) and a senior researcher at Zurich University of Applied Sciences. Ricardo, it is an honor and a delight to share the virtual stage with you today.

RICARDO CHAVARRIAGA: I am really happy and looking forward to a nice discussion today.

ANJA KASPERSEN: Neuroscience is a vast and fast-developing field. Maybe you could start by providing our listeners with some background.

RICARDO CHAVARRIAGA: The brain is something that has fascinated humanity for a long time. The question of how this organ we have inside our heads can rule our behavior and can store and develop knowledge has indeed been one of the central questions of science for many, many years. Neurotechnologies, computational neuroscience, and brain-machine interfaces are tools that we have developed to approach the understanding of this fabulous organ.

Computational neuroscience is the use of computational tools to create models of the brain. These can be mathematical models or algorithms that try to reproduce our observations about the brain. It can also involve experiments on humans and on animals: these experiments can be behavioral, and they can involve measurements of brain activity. By looking at how the brains of organisms react and how their activity changes, we then try to apply that knowledge to create models. These models come in different flavors. We can, for instance, have very detailed models of the electrochemical processes inside a neuron, in which case we are looking at just a small part of the brain. We can have large-scale models, with fewer details, of how different brain structures interact among themselves, or even less detailed models that try to reproduce behavior we observe in animals and in humans as a result of certain mental disorders. We can even test these models using probes to tap into how our brain constructs representations of the world based on visual, tactile, and auditory information.
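As a rough illustration of the detailed single-neuron models described above, here is a minimal leaky integrate-and-fire sketch. The function name and all parameter values are illustrative assumptions, not taken from the episode or fitted to data:

```python
import numpy as np

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.07,
                 v_reset=-0.07, v_thresh=-0.05, resistance=1e8):
    """Leaky integrate-and-fire neuron: integrate membrane voltage over
    time and emit a spike whenever the threshold is crossed."""
    v = v_rest
    spike_times = []
    voltages = []
    for step, i_in in enumerate(input_current):
        # Leaky integration: voltage decays toward rest, driven by input.
        dv = (-(v - v_rest) + resistance * i_in) * (dt / tau)
        v += dv
        if v >= v_thresh:           # threshold crossing -> spike
            spike_times.append(step * dt)
            v = v_reset             # reset membrane voltage after the spike
        voltages.append(v)
    return np.array(voltages), spike_times

# A constant 0.3 nA input for 100 ms drives the neuron to spike regularly.
current = np.full(1000, 0.3e-9)
voltages, spike_times = simulate_lif(current)
```

Even a toy model like this captures the qualitative behavior, regular spiking under sustained input, that more detailed electrochemical models reproduce with far greater fidelity.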

Survey XII: What Is the Future of Ethical AI Design? – Imagining the Internet


Results released June 16, 2021 – Pew Research Center and Elon University's Imagining the Internet Center asked experts where they thought efforts aimed at ethical artificial intelligence design would stand in the year 2030. Some 602 technology innovators, developers, business and policy leaders, researchers and activists responded to this specific question. The Question – Regarding the application of AI Ethics by 2030: In recent years, there have been scores of convenings and even more papers proposing ethical frameworks for the application of artificial intelligence (AI). They cover a host of issues including transparency, justice and fairness, privacy, freedom and human autonomy, beneficence and non-maleficence, trust, sustainability and dignity. Our questions here seek your predictions about the possibilities for such efforts. By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public ...

AI: Interrogating questions – Idees


It can no longer be denied that Artificial Intelligence is having a growing impact in many areas of human activity. It is helping humans communicate with each other, even across linguistic boundaries; finding relevant information in the vast resources available on the web; solving challenging problems that go beyond the competence of a single expert; enabling the deployment of autonomous systems, such as self-driving cars and other devices that handle complex interactions with the real world with little or no human intervention; and doing many other useful things. These applications are perhaps not like the fully autonomous, conscious and intelligent robots that science fiction stories have been predicting, but they are nevertheless important and useful, and most importantly they are real and here today. The growing impact of AI has triggered a kind of 'gold rush': we see new research laboratories springing up, new AI start-up companies, and very significant investments, particularly by big digital tech companies, but also by transportation, manufacturing, financial, and many other industries. Management consulting companies are competing in their predictions of how big the economic impact of AI is going to be, and governments are responding with strategic planning to see how their countries can avoid falling behind. Although all of this is good news, it cannot be denied that the application of AI comes with certain risks. Several initiatives have been taken in recent years to better understand the risks of AI deployment and to come up with legal frameworks, codes of conduct, and value-based design methodologies.

Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade


This is the 12th "Future of the Internet" canvassing Pew Research Center and Elon University's Imagining the Internet Center have conducted together to get expert views about important digital issues. In this case, the questions focused on the prospects for ethical artificial intelligence (AI) by the year 2030. This is a nonscientific canvassing based on a nonrandom sample; this broad array of opinions about where current trends may lead in the next decade represents only the points of view of the individuals who responded to the queries. Pew Research and Elon's Imagining the Internet Center built a database of experts to canvass from a wide range of fields, choosing to invite people from several sectors, including professionals and policy people based in government bodies, nonprofits and foundations, technology businesses, think tanks and in networks of interested academics and technology innovators. The predictions reported here came in response to a set of questions in an online canvassing conducted between June 30 and July 27, 2020. In all, 602 technology innovators and developers, business and policy leaders, researchers and activists responded to at least one of the questions covered in this report. More on the methodology underlying this canvassing and the participants can be found in the final section.

Artificial intelligence systems "understand" and shape a lot of what happens in people's lives. AI applications "speak" to people and answer questions when the name of a digital voice assistant is called out. They run the chatbots that handle customer-service issues people have with companies. They help diagnose cancer and other medical conditions. They scour the use of credit cards for signs of fraud, and they determine who could be a credit risk. They help people drive from point A to point B and update traffic information to shorten travel times. They are the operating system of driverless vehicles. They sift applications to make recommendations about job candidates. They determine the material that is offered up in people's newsfeeds and video choices. They recognize people's faces, translate languages and suggest how to complete people's sentences or search queries. They can "read" people's emotions. They beat people at sophisticated games.

Confucius, Cyberpunk and Mr. Science: Comparing AI ethics between China and the EU

The exponential development and application of artificial intelligence has triggered unprecedented global concern about potential social and ethical issues. Stakeholders from different industries, international foundations, governmental organisations and standards institutions quickly improvised and created various codes of ethics attempting to regulate AI. A major concern is the large homogeneity of, and presumed consensus around, these principles. While it is true that some ethical doctrines, such as the famous Kantian deontology, aspire to universalism, they are not universal in practice. In fact, ethical pluralism is more about differences in which questions are considered relevant to ask than about different answers to a common question. When people abide by different moral doctrines, they tend to disagree on the very approach to an issue. Even when people from different cultures happen to agree on a set of common principles, it does not necessarily mean that they share the same understanding of these concepts and what they entail. In order to better understand the philosophical roots and cultural context underlying ethical principles in AI, we propose to analyse and compare the ethical principles endorsed by the Chinese National New Generation Artificial Intelligence Governance Professional Committee (CNNGAIGPC) and those elaborated by the European High-level Expert Group on AI (HLEGAI). China and the EU have very different political systems and diverge in their cultural heritages. In our analysis, we wish to highlight that principles that seem similar a priori may actually have different meanings, derive from different approaches and reflect distinct goals.