One year on: How AI can supercharge the healthcare of the future


As we approach one year since the first national lockdown in the UK, it is clear that Covid-19 is still putting enormous pressure on our healthcare system. Indeed, the NHS reported in January that a record 4.46 million people were on the waiting list for routine treatments and operations, and a recent study by the British Medical Association found that almost 60% of doctors are suffering from some form of anxiety or depression. The path to recovering from this healthcare fallout will not be easy. However, when thinking about how we could alleviate this pressure in the future, emerging artificial intelligence (AI) technologies may be the answer. The World Health Organisation (WHO) predicts a shortfall of around 9.9 million healthcare professionals worldwide by 2030, even though the economy is expected to create 40 million new health-sector jobs by the same year. With larger, ageing populations and increasingly complex healthcare demands, health workers will remain under strain for the foreseeable future – so how can AI alleviate this?

The Future of Surgery: How AR and VR Will Upend Modern Medicine


Technology is reshaping every aspect of our lives. Once a week in The Future Of, we examine innovations in important fields, from farming to transportation, and what they will mean in the years and decades to come. The case was complicated: shoulder arthroplasty, to deal with an advanced case of arthritis affecting the patient's glenoid -- the socket part of the ball-and-socket joint in the shoulder. To handle the case most effectively, the surgeon wanted assistance from the best. But the best was physically half a world away.

Rethinking Eye-blink: Assessing Task Difficulty through Physiological Representation of Spontaneous Blinking Artificial Intelligence

Continuous assessment of task difficulty and mental workload is essential in improving the usability and accessibility of interactive systems. Eye-tracking data has often been investigated to achieve this ability, with reports on the limited role of standard blink metrics. Here, we propose a new approach to the analysis of eye-blink responses for automated estimation of task difficulty. The core module is a time-frequency representation of eye-blink, which aims to capture the richness of information reflected in blinking. In our first study, we show that this method significantly improves sensitivity to task difficulty. We then demonstrate how to form a framework in which the represented patterns are analyzed with multi-dimensional Long Short-Term Memory recurrent neural networks for their non-linear mapping onto difficulty-related parameters. This framework outperformed other methods that used hand-engineered features. The approach works with any built-in camera, without requiring specialized devices. We conclude by discussing how Rethinking Eye-blink can benefit real-world applications.
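The pipeline the abstract describes, a time-frequency representation of the blink signal that is then fed to a recurrent network, can be illustrated with a minimal sketch. The frame rate, window length and synthetic blink traces below are assumptions for illustration, not the paper's actual parameters:

```python
import numpy as np
from scipy import signal

def blink_spectrogram(blink_signal, fs=30.0, window_sec=2.0):
    """Short-time spectral magnitude of a binary eye-blink trace.

    blink_signal: 1-D array, 1.0 at frames where the eye is closed.
    fs: assumed camera frame rate in Hz (30 fps webcam).
    Returns (freqs, times, magnitude) of the spectrogram.
    """
    nperseg = int(window_sec * fs)
    freqs, times, Sxx = signal.spectrogram(
        blink_signal, fs=fs, nperseg=nperseg,
        noverlap=nperseg // 2, scaling="spectrum")
    return freqs, times, Sxx

# Simulate 60 s of blinking: frequent blinks (low load) vs sparse blinks
# (high load) -- purely synthetic data for the sketch.
fs = 30.0
t = np.arange(0, 60, 1 / fs)
low_load = (np.sin(2 * np.pi * 0.3 * t) > 0.95).astype(float)
high_load = (np.sin(2 * np.pi * 0.1 * t) > 0.95).astype(float)

_, _, S_low = blink_spectrogram(low_load, fs)
_, _, S_high = blink_spectrogram(high_load, fs)
# S_low and S_high are 2-D (frequency x time) patterns; differences in
# their low-frequency bands are the kind of structure a downstream LSTM
# could map onto difficulty-related parameters.
```

This is only a sketch of the representation step; the paper's actual feature extraction and multi-dimensional LSTM architecture are not reproduced here.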

What Do We See in Them? Identifying Dimensions of Partner Models for Speech Interfaces Using a Psycholexical Approach Artificial Intelligence

Perceptions of system competence and communicative ability, termed partner models, play a significant role in speech interface interaction. Yet we do not know what the core dimensions of this concept are. Taking a psycholexical approach, our paper is the first to identify the key dimensions that define partner models in speech agent interaction. Through a repertory grid study (N=21), a review of key subjective questionnaires, an expert review of resulting word pairs and an online study of 356 users of speech interfaces, we identify three key dimensions that make up a user's partner model: 1) perceptions of competence and capability; 2) assessment of human-likeness; and 3) a system's perceived cognitive flexibility. We discuss the implications for partner modelling as a concept, emphasising the importance of salience and the dynamic nature of these perceptions.

James Bruton focus series #3: Virtual Reality combat with a real robot


It's Saturday, which means it's time for another post in the James Bruton focus series – and it also happens to be Boxing Day in the UK and most Commonwealth countries. Even though this holiday has nothing to do with boxing, I didn't want to miss the opportunity to take it literally and bring you a project in which James teamed up with final-year Computer Games Technology students at Portsmouth University to build a robot that fights a human in a Virtual Reality (VR) game. For this project, the students Michael (Coding & VR Hardware), Stephen (Character Design & Animation), George (Environment Art) and Boyan (Character Design & Animation) designed a VR combat game in which you fight another character. James' addition was to design a real robot that fights the player, so that when they get hit in the game, they also get hit in real life by the robot. The robot and the player's costume are tracked using Vive trackers, so the VR system knows where to position each of them in the 3D virtual environment.

My Coach, the Artificial Intelligence


Artificial intelligence (AI) is conquering sports, and the coronavirus pandemic is accelerating this trend. Whether it's clever fitness apps, strategic statistical analyses, the grouping of spectators or even the fight against Covid-19 - AI has become indispensable in sports. In 2013, three young men founded a fitness-app start-up in Munich - with a YouTube video, a newsletter and three PDFs. Today Freeletics, the leading company in the so-called fit-tech scene, has over 50 million users in 175 countries worldwide. This spectacular growth shows how rapidly AI has found its way into sports.

Former NHS surgeon creates AI 'virtual patient' for remote training


A former NHS surgeon has created an AI-powered "virtual patient" which helps to keep skills sharp during a time when most in-person training is on hold. Dr Alex Young is a trained orthopaedic and trauma surgeon who founded Virti and set out to use emerging technologies to provide immersive training for both new healthcare professionals and experienced ones looking to hone their skills. COVID-19 has put most in-person training on hold to minimise transmission risks. Hospitals and universities across the UK and US are now using the virtual patient as a replacement--including our fantastic local medics and surgeons at the Bristol NHS Foundation Trust. The virtual patient uses Natural Language Processing (NLP) and 'narrative branching' to allow medics to roleplay lifelike clinical scenarios.

State of European Tech: Investment in 'deep tech' like AI drops 13%


The latest State of European Tech report highlights that investment in "deep tech" like AI has dropped 13 percent this year. Data from Dealroom was used for the report. Dealroom defines deep tech as 16 fields: Artificial Intelligence, Machine Learning, Big Data, Augmented Reality, Virtual Reality, Drones, Autonomous Driving, Blockchain, Nanotech, Robotics, Internet of Things, 3D Technology, Computer Vision, Connected Devices, Sensors Technology, and Recognition Technology (NLP, image, video, text, speech recognition). In 2019, $10.2 billion of capital was invested in European deep tech. In 2020, that dropped to $8.9 billion. I think it's fair to say that 2020 has been a tough year for most people and businesses.

Europe's funding for deep tech like AI and VR fell 13% in 2020


As European startups try to gain a competitive edge over the U.S. and China, there has been a big push to promote "deep tech." And recent years have indeed seen a surge in European startups developing products based on scientific breakthroughs. But it looks like the pandemic has put a dent in that momentum, at least for now. According to the latest State of European Tech report, funding for deep tech in Europe fell from $10.2 billion in 2019 to $8.9 billion in 2020. The report is produced annually by venture capital firm Atomico in partnership with Slush, Orrick, and Silicon Valley Bank.
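The 13 percent figure quoted in both pieces follows directly from the Dealroom numbers; a quick check:

```python
# Dealroom figures cited in the State of European Tech report:
invested_2019 = 10.2  # $ billions
invested_2020 = 8.9   # $ billions

drop_pct = (invested_2019 - invested_2020) / invested_2019 * 100
print(f"Deep tech funding fell {drop_pct:.0f}%")  # prints "Deep tech funding fell 13%"
```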

Transdisciplinary AI Observatory -- Retrospective Analyses and Future-Oriented Contradistinctions Artificial Intelligence

In recent years, AI safety has gained international recognition in the light of heterogeneous safety-critical and ethical issues that risk overshadowing the broad beneficial impacts of AI. In this context, the implementation of AI observatory endeavors represents one key research direction. This paper motivates the need for an inherently transdisciplinary AI observatory approach integrating diverse retrospective and counterfactual views. We delineate aims and limitations while providing hands-on advice utilizing concrete practical examples. Distinguishing between unintentionally and intentionally triggered AI risks with diverse socio-psycho-technological impacts, we exemplify a retrospective descriptive analysis followed by a retrospective counterfactual risk analysis. Building on these AI observatory tools, we present near-term transdisciplinary guidelines for AI safety. As a further contribution, we discuss differentiated and tailored long-term directions through the lens of two disparate modern AI safety paradigms. For simplicity, we refer to these two paradigms with the terms artificial stupidity (AS) and eternal creativity (EC), respectively. While both AS and EC acknowledge the need for a hybrid cognitive-affective approach to AI safety and overlap with regard to many short-term considerations, they differ fundamentally in the nature of multiple envisaged long-term solution patterns. By compiling relevant underlying contradistinctions, we aim to provide future-oriented incentives for constructive dialectics in practical and theoretical AI safety research.