Neural Natural Language Generation: A Survey on Multilinguality, Multimodality, Controllability and Learning

Journal of Artificial Intelligence Research

Developing artificial learning systems that can understand and generate natural language has been one of the long-standing goals of artificial intelligence. Recent decades have witnessed impressive progress on both of these problems, giving rise to a new family of approaches. In particular, advances in deep learning over the past few years have led to neural approaches to natural language generation (NLG). These methods combine generative language learning techniques with neural network-based frameworks. With a wide range of applications in natural language processing, neural NLG (NNLG) is a new and fast-growing field of research. In this state-of-the-art report, we investigate recent developments and applications of NNLG from a multidimensional view, covering critical perspectives such as multimodality, multilinguality, controllability, and learning strategies. We summarize the fundamental building blocks of NNLG approaches from these aspects and provide detailed reviews of commonly used preprocessing steps and basic neural architectures. The report also covers seminal applications of NNLG models such as machine translation, description generation, automatic speech recognition, abstractive summarization, text simplification, question answering and generation, and dialogue generation. Finally, we conclude with a thorough discussion of the described frameworks, pointing out open research directions.
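To make the family of methods the survey covers concrete, here is a minimal sketch of the autoregressive decoding loop shared by most NNLG models. The toy recurrent decoder, five-word vocabulary, and random (untrained) weights are all illustrative assumptions, not the survey's own formulation:

```python
# Minimal sketch of the decoding loop at the heart of most neural NLG
# systems (hypothetical toy model; weights are random, not trained).
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<bos>", "<eos>", "the", "cat", "sat"]
V, H = len(vocab), 8

# Toy "encoder" state and decoder parameters (stand-ins for a trained model).
enc_state = rng.normal(size=H)
W_in = rng.normal(size=(V, H))   # token embeddings
W_rec = rng.normal(size=(H, H))  # recurrent weights
W_out = rng.normal(size=(H, V))  # projection to vocabulary logits

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def greedy_decode(max_len=10):
    """Autoregressive generation: feed each predicted token back in."""
    h, tok, out = enc_state.copy(), vocab.index("<bos>"), []
    for _ in range(max_len):
        h = np.tanh(W_in[tok] + h @ W_rec)  # update decoder state
        probs = softmax(h @ W_out)          # distribution over vocab
        tok = int(probs.argmax())           # greedy choice
        if vocab[tok] == "<eos>":
            break
        out.append(vocab[tok])
    return out

print(greedy_decode())
```

The same loop underlies modern transformer decoders; only the state-update rule changes.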


Trust, Regulation, and Human-in-the-Loop AI

Communications of the ACM

Artificial intelligence (AI) systems employ learning algorithms that adapt to their users and environment, with learning either completed before deployment or allowed to continue afterwards. Because an AI system can optimize its behavior, an individual unit's behavior can diverge from the original factory model after release, often at the perceived expense of safety, reliability, and human controllability. Since the Industrial Revolution, trust has ultimately resided in regulatory systems set up by governments and standards bodies. Research into human interactions with autonomous machines demonstrates a shift in the locus of trust: we must trust non-deterministic systems such as AI to self-regulate, albeit within boundaries. This radical shift is one of the biggest issues facing the deployment of AI in the European region.


DuckDuckGo 'down-ranks' Russian disinformation. The search engine's users are not happy.

Mashable

Tech companies are continuing to take action as Russia's war in Ukraine rages on. Search engine DuckDuckGo is the latest platform to take measures in the information war being waged online. According to DuckDuckGo's founder and CEO, Gabriel Weinberg, the privacy-focused search engine has "down-ranked" websites in its search results that are "associated with Russian disinformation." "Like so many others I am sickened by Russia's invasion of Ukraine and the gigantic humanitarian crisis it continues to create," Weinberg tweeted. To those unfamiliar with DuckDuckGo, the move may not feel out of the ordinary. Social media platforms like Facebook and Twitter have updated their policies to deal with disinformation about Russia's war, and search engines like Google and even Microsoft's Bing have taken action against disinformation too. However, the overwhelming response to Weinberg's tweets about the "down-rankings" has been outrage from DuckDuckGo's user base; some users even claim to have already changed their default search engine because of the decision. "Can you see how swiftly most of your user base has been put off by this announcement?" one user asked. "Loyal long time supporters are talking about abandoning the service."
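DuckDuckGo has not disclosed how the down-ranking is implemented, but conceptually it amounts to penalizing the relevance scores of results from flagged sources. A purely hypothetical sketch, with invented domain names and numbers:

```python
# Purely hypothetical sketch of source-level down-ranking: DuckDuckGo has
# not published its implementation; all names and values here are invented.
FLAGGED_DOMAINS = {"example-disinfo.ru"}  # hypothetical flag list
PENALTY = 0.5                             # hypothetical score multiplier

def rerank(results):
    """results: list of (url, domain, relevance_score) tuples."""
    def adjusted(result):
        url, domain, score = result
        return score * (PENALTY if domain in FLAGGED_DOMAINS else 1.0)
    return sorted(results, key=adjusted, reverse=True)

results = [
    ("https://example-disinfo.ru/a", "example-disinfo.ru", 0.9),
    ("https://example-news.org/b", "example-news.org", 0.7),
]
print(rerank(results))  # the flagged source drops below the other hit
```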


AI Weekly: The Russia-Ukraine conflict is a test case for AI in warfare

#artificialintelligence

As Russia's invasion of Ukraine continues unabated, it's becoming a test case for the role of technology in modern warfare. Destructive software -- presumed to be the work of Russian intelligence -- has compromised hundreds of computers at Ukrainian government agencies. On the other side, a loose group of hackers has targeted key Russian websites, appearing to bring down webpages for Russia's largest stock exchange as well as the Russian Foreign Ministry. AI, too, has been proposed -- and is being used -- as a way to help decisively turn the tide. As Fortune writes, Ukraine has been using autonomous Turkish-made TB2 drones to drop laser-guided bombs and direct artillery strikes.


Human rights, democracy, and the rule of law assurance framework for AI systems: A proposal

arXiv.org Artificial Intelligence

Following on from the publication of its Feasibility Study in December 2020, the Council of Europe's Ad Hoc Committee on Artificial Intelligence (CAHAI) and its subgroups initiated efforts to formulate and draft its Possible Elements of a Legal Framework on Artificial Intelligence, based on the Council of Europe's standards on human rights, democracy, and the rule of law. This document was ultimately adopted by the CAHAI plenary in December 2021. To support this effort, the Alan Turing Institute undertook a programme of research that explored the governance processes and practical tools needed to operationalise the integration of human rights due diligence with the assurance of trustworthy AI innovation practices. The resulting framework was completed and submitted to the Council of Europe in September 2021. It presents an end-to-end approach to the assurance of AI project lifecycles that integrates context-based risk analysis and appropriate stakeholder engagement with comprehensive impact assessment, and transparent risk management, impact mitigation, and innovation assurance practices. Taken together, these interlocking processes constitute a Human Rights, Democracy and the Rule of Law Assurance Framework (HUDERAF). The HUDERAF combines the procedural requirements for principles-based human rights due diligence with the governance mechanisms needed to set up technical and socio-technical guardrails for responsible and trustworthy AI innovation practices. Its purpose is to provide an accessible and user-friendly set of mechanisms for facilitating compliance with a binding legal framework on artificial intelligence, based on the Council of Europe's standards on human rights, democracy, and the rule of law, and to ensure that AI innovation projects are carried out with appropriate levels of public accountability, transparency, and democratic governance.


Three things that could propel the UK towards AI superpower-status in 2022

#artificialintelligence

In 2021, the UK found itself under the bright lights of the world stage many times. A global audience has watched our pandemic response, the fruition of Brexit and, most recently, the UK's COP26 presidency. So why has the UK's AI and wider tech scene still not come close to the global superpower status we see from China, Russia and the US? The results of the government's recent National AI Strategy are yet to be seen, but I believe deeper change is needed. A thriving AI industry needs a combination of education, ambition, and nurtured innovation.


Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts

arXiv.org Artificial Intelligence

Existing and planned legislation stipulates various obligations to provide information about machine learning algorithms and their functioning, often interpreted as obligations to "explain". Many researchers suggest using post-hoc explanation algorithms for this purpose. In this paper, we combine legal, philosophical and technical arguments to show that post-hoc explanation algorithms are unsuitable to achieve the law's objectives. Indeed, most situations where explanations are requested are adversarial, meaning that the explanation provider and receiver have opposing interests and incentives, so that the provider might manipulate the explanation for her own ends. We show that this fundamental conflict cannot be resolved because of the high degree of ambiguity of post-hoc explanations in realistic application scenarios. As a consequence, post-hoc explanation algorithms are unsuitable to achieve the transparency objectives inherent to the legal norms. Instead, there is a need to more explicitly discuss the objectives underlying "explainability" obligations as these can often be better achieved through other mechanisms. There is an urgent need for a more open and honest discussion regarding the potential and limitations of post-hoc explanations in adversarial contexts, in particular in light of the current negotiations about the European Union's draft Artificial Intelligence Act.
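The ambiguity argument can be made concrete with a toy example: for a single model and a single input, additive attributions computed against two equally defensible reference points tell different stories about which feature mattered. The linear model and numbers below are illustrative, not taken from the paper:

```python
# Sketch of the ambiguity the paper highlights: one model, one prediction,
# two defensible baselines, two different explanations. (Toy example.)
import numpy as np

w = np.array([2.0, -1.0, 0.5])   # model weights: f(x) = w @ x
x = np.array([1.0, 1.0, 1.0])    # the instance being explained

def attributions(baseline):
    """Additive attributions relative to a reference point:
    f(x) - f(b) decomposes exactly as sum_i w_i * (x_i - b_i)."""
    return w * (x - baseline)

zero_base = attributions(np.zeros(3))                # "absence" baseline
mean_base = attributions(np.array([1.0, 0.0, 2.0]))  # "population mean"

print(zero_base)  # [ 2.  -1.   0.5] -> feature 1 looks most important
print(mean_base)  # [ 0.  -1.  -0.5] -> feature 1 looks irrelevant
```

Because both decompositions are exact, a provider facing an adversarial request can select whichever baseline tells the story that serves its interests.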


Content metadata: why keyword extraction requires automated labelling -- EDIA

#artificialintelligence

Keywords are not a science but an art. There is no such thing as 'the right keyword,' since we are talking about a core concept incorporated into a piece of content in the broadest form. Texts do not necessarily need to contain an exact keyword: for example, if the term 'European Union' is used several times, 'European Commission' may be a suitable keyword even though the writer never uses that term. Despite this fluid definition, keywords should be understandable to those who try to find the right ones.
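A minimal sketch of how such labelling can go beyond exact matching: a label scores highly when terms related to it appear in the text, even if the label itself never does. The relatedness table below is a hypothetical stand-in for an embedding model or trained classifier:

```python
# Label assignment beyond exact matching: the text never contains the label
# "European Commission", yet it scores highly because related terms appear.
# The similarity scores are invented stand-ins for a learned model.
RELATEDNESS = {
    ("european union", "European Commission"): 0.8,
    ("brussels", "European Commission"): 0.6,
    ("european union", "Machine Learning"): 0.0,
    ("brussels", "Machine Learning"): 0.0,
}

def score_labels(text, labels):
    text = text.lower()
    return {
        label: sum(
            sim for (term, lab), sim in RELATEDNESS.items()
            if lab == label and term in text
        )
        for label in labels
    }

doc = "The European Union met in Brussels to discuss new regulation."
print(score_labels(doc, ["European Commission", "Machine Learning"]))
# {'European Commission': 1.4, 'Machine Learning': 0.0}
```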


Systems Challenges for Trustworthy Embodied Systems

arXiv.org Artificial Intelligence

A new generation of increasingly autonomous and self-learning systems, which we call embodied systems, is about to be developed. When deploying these systems in a real-life context, we face various engineering challenges: it is crucial to coordinate the behavior of embodied systems in a beneficial manner, ensure their compatibility with our human-centered social values, and design verifiably safe and reliable human-machine interaction. We argue that traditional systems engineering is reaching a climacteric in the shift from embedded to embodied systems, and in assuring the trustworthiness of dynamic federations of situationally aware, intent-driven, explorative, ever-evolving, largely non-predictable, and increasingly autonomous embodied systems in uncertain, complex, and unpredictable real-world contexts. We also identify a number of urgent systems challenges for trustworthy embodied systems, including robust and human-centric AI, cognitive architectures, uncertainty quantification, trustworthy self-integration, and continual analysis and assurance.
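Of the challenges listed, uncertainty quantification is the most readily illustrated in code. Here is a minimal sketch of the deep-ensemble approach, in which disagreement among independently trained models serves as the uncertainty signal; the toy linear "models" below are illustrative assumptions only:

```python
# Minimal uncertainty-quantification sketch: a deep ensemble, where the
# spread of predictions across independently trained models signals how
# much to trust a prediction. (Toy linear models, illustrative only.)
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for K independently trained models: slightly different fits.
params = [(rng.normal(2.0, 0.1), rng.normal(0.0, 0.1)) for _ in range(10)]

def predict_with_uncertainty(x):
    preds = np.array([a * x + b for a, b in params])
    return preds.mean(), preds.std()  # mean prediction, epistemic spread

mean, std = predict_with_uncertainty(5.0)
print(f"prediction={mean:.2f} +/- {std:.2f}")
```

A large spread flags inputs where the embodied system should defer, slow down, or hand control back to a human.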


Forecasting: theory and practice

arXiv.org Machine Learning

Forecasting has always been at the forefront of decision making and planning. The uncertainty that surrounds the future is both exciting and challenging, with individuals and organisations seeking to minimise risks and maximise utilities. The large number of forecasting applications calls for a diverse set of forecasting methods to tackle real-life challenges. This article provides a non-systematic review of the theory and the practice of forecasting. We provide an overview of a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organise, and evaluate forecasts. We then demonstrate how such theoretical concepts are applied in a variety of real-life contexts. We do not claim that this review is an exhaustive list of methods and applications. However, we hope that our encyclopedic presentation will offer a point of reference for the rich work that has been undertaken over the last decades, with some key insights for the future of forecasting theory and practice. Given its encyclopedic nature, the intended mode of reading is non-linear. We offer cross-references to allow readers to navigate through the various topics. We complement the theoretical concepts and applications covered with large lists of free or open-source software implementations and publicly available databases.
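As a taste of the classical toolbox such a review catalogues, here is simple exponential smoothing, one of the oldest forecasting techniques, in pure Python; the demand series is invented for illustration:

```python
# Simple exponential smoothing: the one-step-ahead forecast is a weighted
# average of the latest observation and the previous forecast,
# f_{t+1} = alpha * y_t + (1 - alpha) * f_t.
def ses_forecast(series, alpha=0.3):
    forecast = series[0]  # initialise with the first observation
    for y in series[1:]:
        forecast = alpha * y + (1 - alpha) * forecast
    return forecast

demand = [112, 118, 132, 129, 121, 135, 148, 148]  # invented example data
print(f"next-period forecast: {ses_forecast(demand):.1f}")
```

The smoothing parameter alpha trades responsiveness to recent observations against stability, a choice that in practice is fitted from historical data.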