The UK government is launching an investigation to determine the levels of bias in algorithms that could affect people's lives. A browse through our 'ethics' category here on AI News will highlight the serious problem of bias in today's algorithms. With AIs being increasingly used for decision-making, parts of society could be left behind. Conducted by the Centre for Data Ethics and Innovation (CDEI), the investigation will focus on areas where AI has tremendous potential – such as policing, recruitment, and financial services – but would have a serious negative impact on lives if not implemented correctly.

"Technology is a force for good which has improved people's lives, but we must make sure it is developed in a safe and secure way. Our Centre for Data Ethics and Innovation has been set up to help us achieve this aim and keep Britain at the forefront of technological development. I'm pleased its team of experts is undertaking an investigation into the potential for bias in algorithmic decision-making in areas including crime, justice and financial services. I look forward to seeing the Centre's recommendations to Government on any action we need to take to help make sure we maximise the benefits of these powerful technologies for society."
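To make concrete what "levels of bias" can mean in practice, here is a toy illustration (not the CDEI's methodology, and with hypothetical data) of one common check: comparing selection rates across demographic groups in automated decisions, the so-called "80% rule".

```python
# Toy bias check: compare approval rates across groups and compute the
# disparate impact ratio (lowest group rate divided by highest group rate).
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of lowest to highest selection rate; below 0.8 is often flagged."""
    return min(rates.values()) / max(rates.values())

# Hypothetical decisions for two groups, A and B.
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(decisions)   # {'A': 0.75, 'B': 0.25}
print(disparate_impact(rates))       # well below 0.8, so this would be flagged
```

This is only one of many fairness metrics; which measure is appropriate depends on the decision context, which is precisely the kind of question the investigation is meant to address.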
Oliver Letwin's strange and somewhat alarming new book begins at midnight on Thursday 31 December 2037. In Swindon – stay with me! – a man called Aameen Patel is working the graveyard shift at Highways England's traffic HQ when his computer screen goes blank, and the room is plunged into darkness. He tries to report these things to his superiors, but can get no signal on his mobile. Looking at the motorway from the viewing window by his desk, he observes, not an orderly stream of traffic, but a dramatic pile-up of crashed cars and lorries – at which point he realises something is seriously amiss. In the Britain of 2037, everything, or almost everything, is controlled by 7G wireless technology, from the national grid to the traffic (not only are cars driverless; a vehicle cannot even join a motorway without logging into an "on-route guidance system"). There is, then, only one possible explanation: the entire 7G network must have gone down. It sounds like I'm describing a novel – and it's true that Aameen Patel will soon be joined by another fictional creation in the form of Bill Donoghue, who works at the Bank of England, and whose job it will be to tell the prime minister that the country is about to pay a heavy price for its cashless economy, given that even essential purchases will not be possible until the network is back up (Bill's mother-in-law is also one of thousands of vulnerable people whose carers will soon be unable to get to them, the batteries in their electric cars having gone flat).
A report into the use of artificial intelligence by the U.K.'s public sector has warned that the government is failing to be open about automated decision-making technologies which have the potential to significantly impact citizens' lives. Ministers have been especially bullish on injecting new technologies into the delivery of taxpayer-funded healthcare -- with health minister Matt Hancock setting out a tech-fueled vision of "preventative, predictive and personalised care" in 2018, calling for a root and branch digital transformation of the National Health Service (NHS) to support piping patient data to a new generation of "healthtech" apps and services. He has also personally championed a chatbot startup, Babylon Health, that's using AI for healthcare triage -- and which is now selling a service into the NHS. Policing is another area where AI is being accelerated into U.K. public service delivery, with a number of police forces trialing facial recognition technology -- and London's Met Police switching over to a live deployment of the AI technology just last month. However, the rush by cash-strapped public services to tap AI "efficiencies" risks glossing over a range of ethical concerns about the design and implementation of such automated systems: fears about embedding bias and discrimination into service delivery and scaling harmful outcomes; questions of consent around access to the data sets being used to build AI models; and human agency over automated outcomes, to name a few. All of these concerns require transparency into AIs if there's to be accountability over automated outcomes.
The Committee has today published its report and recommendations to government to ensure that high standards of conduct are upheld as technologically assisted decision making is adopted more widely across the public sector. Artificial intelligence – and in particular, machine learning – will transform the way public sector organisations make decisions and deliver public services. Adherence to high public standards will help fully realise the benefits of AI in public service delivery. By ensuring that AI is subject to appropriate safeguards and regulations, the public can have confidence that new technologies will be used in a way that upholds the Seven Principles of Public Life. We concluded that the Principles remain a valid guide for public sector practice as AI is deployed across government.
The Committee on Standards in Public Life today published its report and recommendations to the Prime Minister to ensure that high standards of conduct are upheld as technologically assisted decision making is adopted more widely across the public sector. The Committee also published new polling on public attitudes to AI. "Honesty, integrity, objectivity, openness, leadership, selflessness and accountability were first outlined by Lord Nolan as the standards expected of all those who act on the public's behalf. Artificial intelligence – and in particular, machine learning – will transform the way public sector organisations make decisions and deliver public services. Demonstrating high standards will help realise the huge potential benefits of AI in public service delivery. However, it is clear that the public need greater reassurance about the use of AI in the public sector."
The use of facial recognition by police and other law enforcement is proving divisive, with Verdict readers split over its use. In a poll on Verdict that saw responses from 644 readers between 24 January and 7 February, the majority said they were not happy with the use of facial recognition by police, but only by a slim margin. The response comes as the EU is considering a ban on the use of facial recognition until the technology reaches a greater stage of maturity. A draft white paper, which was first published by the news website EURACTIV in January, showed that a temporary ban was being considered by the European Commission. It proposed that "use of facial recognition technology by private or public actors in public spaces would be prohibited for a definite period".
Jokingly dubbed "deal prevention units" by some front-office staff, compliance teams now have the third most-stressful City jobs, after those of an investment banker and a trader. Pre-crisis, pre-Brexit and pre-cybercrime, compliance used to be an (almost!) stress-free job with regular hours. As regulatory pressure intensifies and personal liability mounts, compliance officers are under increased pressure to do the right thing every time, personally and professionally. Our latest research, The Cost of Compliance and How to Reduce It, shows that a typical European bank, serving 10 million customers, could save up to €10 million annually, and avoid growing regulatory fines, by implementing technology to improve its "Know Your Customer" (KYC) processes. Following new EU Anti-Money Laundering (AML4/5) and Counter-Terrorist Financing (CTF) rules extending the scope of KYC requirements, the cost each year of punitive non-compliance fines is now €3.5 million.
If you ever have some spare moments in which you really don't know what to do, well: what about chatting with a bot for a few minutes? There is also a reasonable motivation for doing so. I think that we human beings should talk from time to time with these programs, just to verify whether it is really true that they are becoming more "intelligent" as time goes by. One of the world's best chatbots, or maybe even the best of all (and I believe it actually is!), is Mitsuku. Its creators present this conversational AI as an 18-year-old girl on their website https://www.pandorabots.com/mitsuku/.
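Mitsuku is built on AIML, a pattern-matching language for chatbots hosted on Pandorabots. A minimal sketch of the underlying idea (a hypothetical toy, not real AIML or Mitsuku's actual rule base): match the input against wildcard patterns and splice the captured text into a templated reply.

```python
# Toy AIML-style chatbot: first matching pattern wins; '.*' is the catch-all.
import re

RULES = [
    (r"my name is (.+)", "Nice to meet you, {0}."),
    (r"i like (.+)", "What do you like about {0}?"),
    (r".*", "Tell me more."),
]

def reply(text):
    # Normalise the input: lowercase and strip trailing punctuation,
    # so captured names come back lowercased too.
    text = text.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.fullmatch(pattern, text)
        if m:
            return template.format(*m.groups())
    return "Tell me more."

print(reply("My name is Ada"))   # Nice to meet you, ada.
print(reply("I like tea!"))      # What do you like about tea?
```

A real AIML bot like Mitsuku layers tens of thousands of such categories, plus context tracking, on top of this same match-and-template loop.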
DeepMind Technologies, a Google subsidiary and Artificial Intelligence (AI) firm, disclosed that it will adopt blockchain technology and make use of Distributed Ledger Technology (DLT). This move will help the company secure patient data more efficiently. DeepMind creates algorithms designed for applications, gaming protocols and simulation. It earned fame for developing a machine-learning program capable of playing video games. Likewise, DeepMind developed the so-called "Neural Turing Machine", which mimics the short-term memory of human beings. It recently signed a five-year contract with the Royal Free London NHS Trust so it can apply the technology to healthcare.
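DeepMind publicly described this plan as a "Verifiable Data Audit": an append-only, cryptographically linked log of every use of patient data. A minimal sketch of the core idea, a hash chain, follows (a hypothetical structure for illustration, not DeepMind's actual implementation):

```python
# Append-only audit log: each entry embeds the previous entry's hash,
# so altering any historical record invalidates the whole chain.
import hashlib
import json

class AuditLog:
    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []

    def append(self, record):
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        payload = json.dumps({"record": record, "prev": prev}, sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev, "hash": digest})

    def verify(self):
        """Recompute every hash in order; any tampering breaks the chain."""
        prev = self.GENESIS
        for e in self.entries:
            payload = json.dumps({"record": e["record"], "prev": prev}, sort_keys=True)
            if e["prev"] != prev or hashlib.sha256(payload.encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.append({"patient": "p1", "action": "read", "by": "clinician-42"})
log.append({"patient": "p1", "action": "update", "by": "triage-service"})
print(log.verify())                            # True: chain is intact
log.entries[0]["record"]["action"] = "delete"  # tamper with history
print(log.verify())                            # False: tampering detected
```

Unlike a full blockchain, there is no decentralised consensus here; the value is tamper-evidence, which is the property DeepMind emphasised for patient-data audits.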
The AI service will help Encompass customers to "quickly and accurately" find risk-relevant information on their own customers, organisations and investments. Sturgeon said: "Encompass is one of a number of international companies that has chosen to locate and steadily expand its operations here, finding Scotland an attractive place to grow its business. From its Glasgow base, the company has access to markets, a supportive business environment and has been able to identify local talent from Scottish professionals in the engineering and software development sector. Backed by almost £2 million of R&D investment from Scottish Enterprise, Encompass will be able to develop artificial intelligence software tools that will assist companies in the financial sector to reduce operational risks associated with meeting compliance and regulatory standards." Encompass's work in Glasgow has primarily been focused on its Know Your Customer product, where it uses data analytics to ensure clients do not unknowingly trade with organised criminals or handle the proceeds of crime.
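A core piece of any Know Your Customer workflow is screening customer names against sanctions and watch lists, including near-miss spellings. A toy sketch of that idea (hypothetical watchlist and threshold, not Encompass's product) using fuzzy string matching:

```python
# Toy KYC name screening: flag watchlist entries whose string similarity
# to the candidate name exceeds a threshold, catching spelling variants.
from difflib import SequenceMatcher

WATCHLIST = ["Ivan Petrov", "Acme Shell Holdings Ltd"]  # hypothetical entries

def screen(name, threshold=0.85):
    """Return (entry, score) pairs for watchlist entries similar to `name`."""
    hits = []
    for entry in WATCHLIST:
        score = SequenceMatcher(None, name.lower(), entry.lower()).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits

print(screen("Ivan Petrof"))  # spelling variant still matches "Ivan Petrov"
print(screen("Jane Doe"))     # no hits: []
```

Production screening systems add phonetic matching, transliteration handling and entity resolution on top of this, but the threshold-based similarity check is the basic building block.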