

GPT-3 Creative Fiction


"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean that one hasn't constrained it enough by imitating a correct output, and one needs to go further: writing the first few words or sentence of the target output may be necessary.
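The tactic described above can be sketched as ordinary prompt assembly: an instruction, a few worked examples imitating correct output, and the first words of the desired completion left open for the model to continue. This is a minimal illustration only; the Q/A layout, function name, and primer are assumptions, not a fixed API.

```python
def build_constrained_prompt(task, examples, query, primer):
    """Assemble a few-shot prompt that pins the model to the target mode:
    a task framing, worked Q/A examples that imitate correct output, and
    the first words of the desired completion as an unfinished primer."""
    lines = [task, ""]
    for question, answer in examples:
        lines += [f"Q: {question}", f"A: {answer}", ""]
    # Leave the final answer open so the completion continues the primer.
    lines += [f"Q: {query}", f"A: {primer}"]
    return "\n".join(lines)
```

Ending the prompt mid-answer is the strongest constraint: the model's most natural continuation is to finish the sentence in the established format rather than pivot into another mode.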

Prince William, Prince Harry are keeping Zoom chats formal due to security concerns, source claims

FOX News

Prince William and his younger brother Prince Harry are reconnecting after the "Megxit" bombshell that rocked Kensington Palace -- but the royal brothers may have one new obstacle to tackle. "The biggest problem now is security and not just outside security but within the boundaries of calls, Zooms and Skypes," U.K.-based royal correspondent Neil Sean told Fox News. "You have to think that while Harry and Meghan were here in the U.K. there were security measures in place to make sure that private chats over Zoom and so forth remained that -- private," a palace insider told Sean. "Harry is [now] living in [a new house] and exposed to all kinds of mishaps security-wise." The palace insider alleged conversations between William and Harry have been formal out of caution that private chats could be leaked to the press.

Europe and AI: Leading, Lagging Behind, or Carving Its Own Way?


For its AI ecosystem to thrive, Europe needs to find a way to protect its research base, encourage governments to be early adopters, foster its startup ecosystem, expand international links, and develop AI technologies as well as leverage their use efficiently.

Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI

This article identifies a critical incompatibility between European notions of discrimination and existing statistical measures of fairness. First, we review the evidential requirements to bring a claim under EU non-discrimination law. Due to the disparate nature of algorithmic and human discrimination, the EU's current requirements are too contextual, reliant on intuition, and open to judicial interpretation to be automated. Second, we show how the legal protection offered by non-discrimination law is challenged when AI, not humans, discriminate. Humans discriminate due to negative attitudes (e.g. stereotypes, prejudice) and unintentional biases (e.g. organisational practices or internalised stereotypes) which can act as a signal to victims that discrimination has occurred. Finally, we examine how existing work on fairness in machine learning lines up with procedures for assessing cases under EU non-discrimination law. We propose "conditional demographic disparity" (CDD) as a standard baseline statistical measurement that aligns with the European Court of Justice's "gold standard." Establishing a standard set of statistical evidence for automated discrimination cases can help ensure consistent procedures for assessment, but not judicial interpretation, of cases involving AI and automated systems. Through this proposal for procedural regularity in the identification and assessment of automated discrimination, we clarify how to build considerations of fairness into automated systems as far as possible while still respecting and enabling the contextual approach to judicial interpretation practiced under EU non-discrimination law. N.B. Abridged abstract
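To make the proposed baseline concrete, here is a minimal sketch of how a conditional demographic disparity summary might be computed: the gap in positive-outcome rates between two groups is measured within each stratum of a legitimate conditioning attribute (e.g. job grade), then averaged weighted by stratum size. The record layout and the size-weighted aggregation are illustrative assumptions, not the paper's exact formulation.

```python
from collections import defaultdict

def conditional_demographic_disparity(records, group_key, outcome_key, stratum_key):
    """Size-weighted average, across strata, of the absolute gap in
    positive-outcome rates between two groups (illustrative formulation)."""
    strata = defaultdict(list)
    for r in records:
        strata[r[stratum_key]].append(r)
    total = len(records)
    disparity = 0.0
    for rows in strata.values():
        # Positive-outcome rate for each group within this stratum.
        rates = {}
        for g in {r[group_key] for r in rows}:
            members = [r for r in rows if r[group_key] == g]
            rates[g] = sum(r[outcome_key] for r in members) / len(members)
        if len(rates) == 2:
            rate_a, rate_b = rates.values()
            disparity += (len(rows) / total) * abs(rate_a - rate_b)
    return disparity
```

Conditioning on a stratum variable is what distinguishes this from plain demographic disparity: a raw outcome gap that disappears within every stratum would yield zero here, mirroring the contextual comparisons EU courts make.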

Curing the KYC compliance challenge with AI


Jokingly dubbed "deal prevention units" by some front-office staff, compliance teams now have the third most-stressful City jobs after those of an investment banker and a trader. Pre-crisis, pre-Brexit and pre-cybercrime, compliance used to be an (almost!) stress-free job with regular hours. As regulatory pressure intensifies and personal liability mounts, compliance officers are under increased pressure to do the right thing every time, personally and professionally. Our latest research, The Cost of Compliance and How to Reduce It, shows that a typical European bank, serving 10 million customers, could save up to €10 million annually and avoid growing regulatory fines by implementing technology to improve its "Know Your Customer" (KYC) processes. Following new EU Anti-Money Laundering (AML4/5) and Counter-Terrorist Financing (CTF) rules extending the scope of KYC requirements, the cost each year of punitive non-compliance fines is now €3.5 million.

Algorithmic decision-making in AVs: Understanding ethical and technical concerns for smart cities

Autonomous Vehicles (AVs) are increasingly embraced around the world to advance smart mobility and, more broadly, smart and sustainable cities. Algorithms form the basis of decision-making in AVs, allowing them to perform driving tasks autonomously, efficiently, and more safely than human drivers and offering various economic, social, and environmental benefits. However, algorithmic decision-making in AVs can also introduce new issues that create new safety risks and perpetuate discrimination. We identify bias, ethics, and perverse incentives as key ethical issues in AV algorithms' decision-making that can create new safety risks and discriminatory outcomes. Technical issues in AVs' perception, decision-making and control algorithms, limitations of existing AV testing and verification methods, and cybersecurity vulnerabilities can also undermine the performance of the AV system. This article investigates the ethical and technical concerns surrounding algorithmic decision-making in AVs by exploring how driving decisions can perpetuate discrimination and create new safety risks for the public. We discuss steps taken to address these issues, highlight the existing research gaps and the need to mitigate these issues through the design of AVs' algorithms and of policies and regulations to fully realise AVs' benefits for smart and sustainable cities.

Inside the urgent battle to stop UK police using facial recognition


The last day of January 2019 was sunny, yet bitterly cold in Romford, east London. Shoppers scurrying from retailer to retailer wrapped themselves in winter coats, scarves and hats. The temperature never rose above three degrees Celsius. For police officers positioned next to an inconspicuous blue van, just metres from Romford's Overground station, one man stood out among the thin winter crowds. The man, wearing a beige jacket and blue cap, had pulled his jacket over his face as he moved in the direction of the police officers.

Data Privacy Clashing with Demand for Data to Power AI Applications - UrIoTNews


Your data has value, but unlocking it for your own benefit is challenging. Understanding how valuable data are collected and approved for use can help you to get there. Two primary means for differentiating audiences by their data collection methods are site-authenticated data collection and people-based data collection, suggested a recent piece in BulletinHealthcare written by Justin Fadgen, chief corporate development officer for the firm. Site-authenticated data are sourced from individual authentication events, such as when a user completes an online form and generally agrees to a privacy policy that includes a data use agreement. User data are then combined with other data sources that add meaning, becoming the basis of advertising targeting, for instance.

What does it mean to solve the problem of discrimination in hiring? Social, technical and legal perspectives from the UK on automated hiring systems

The ability to get and keep a job is a key aspect of participating in society and sustaining livelihoods. Yet the way decisions are made on who is eligible for jobs, and why, is rapidly changing with the advent and growth in uptake of automated hiring systems (AHSs) powered by data-driven tools. Key concerns about such AHSs include the lack of transparency and potential limitation of access to jobs for specific profiles. In relation to the latter, however, several of these AHSs claim to detect and mitigate discriminatory practices against protected groups and promote diversity and inclusion at work. Yet whilst these tools have a growing user base around the world, such claims of bias mitigation are rarely scrutinised and evaluated, and when they are, it has almost exclusively been from a US socio-legal perspective. In this paper, we introduce a perspective from outside the US by critically examining how three prominent AHSs in regular use in the UK, HireVue, Pymetrics and Applied, understand and attempt to mitigate bias and discrimination. Using publicly available documents, we describe how their tools are designed, validated and audited for bias, highlighting assumptions and limitations, before situating these in the socio-legal context of the UK. The UK has a very different legal background to the US in terms not only of hiring and equality law, but also of data protection (DP) law. We argue that this is important for addressing concerns about transparency, and could pose a challenge to building bias mitigation into AHSs that can definitively meet EU legal standards. This is significant as these AHSs, especially those developed in the US, may obscure rather than improve systemic discrimination in the workplace.

Details emerge of King's Cross facial-ID tech


King's Cross Central's developers said they wanted facial-recognition software to spot people on the site who had previously committed an offence there. The detail emerged in a letter one of its managers sent to the London mayor on 14 August. Sadiq Khan had sought reassurance that using facial recognition on the site was legal. Two days earlier, Argent had indicated it was using the technology to "ensure public safety". On Monday, it said it had now scrapped work on new uses of the technology.