A new way of training medical artificial intelligence (AI) systems has proven significantly more accurate at diagnosing illnesses than previous efforts. The AI system, developed by researchers at University College London and Babylon Health, a medical service provider in the UK, relies on causation rather than correlation to pinpoint what could be wrong with people. It is more accurate than pre-existing AI systems and even outperformed real-life doctors in a small, controlled trial. Unlike traditional AI systems, which identify the most probable disease based on the symptoms a patient presents, the causal AI system more closely mimics the way a doctor diagnoses patients: by asking counterfactual questions to narrow the range of possible conditions. A patient could, for example, present at a hospital with shortness of breath, a symptom consistent with many different conditions.
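The article does not publish the researchers' model, but the contrast between associative and causal ranking can be sketched with a toy noisy-OR network. Everything below is invented for illustration: the diseases, priors, and strengths are made-up numbers, and the "causal" score is a simplified interventional proxy (how much forcing a disease off would reduce the symptom's probability), not the actual counterfactual measure used in the research.

```python
from itertools import product

# Toy model: three hypothetical diseases, one symptom ("shortness of breath").
# All numbers are invented for illustration only.
PRIORS = {"asthma": 0.10, "anaemia": 0.05, "heart_failure": 0.01}
STRENGTHS = {"asthma": 0.7, "anaemia": 0.3, "heart_failure": 0.9}
LEAK = 0.02  # background chance of the symptom with no disease present

def p_symptom(active):
    """Noisy-OR: the symptom occurs unless the leak and every active cause fail."""
    q = 1.0 - LEAK
    for d in active:
        q *= 1.0 - STRENGTHS[d]
    return 1.0 - q

def joint(config):
    """P(this disease configuration AND symptom present), by enumeration."""
    p = 1.0
    active = []
    for d, on in config.items():
        p *= PRIORS[d] if on else 1.0 - PRIORS[d]
        if on:
            active.append(d)
    return p * p_symptom(active)

diseases = list(PRIORS)
configs = [dict(zip(diseases, bits)) for bits in product([0, 1], repeat=len(diseases))]
evidence = sum(joint(c) for c in configs)  # P(symptom present)

# Associative ranking: posterior P(disease | symptom).
posterior = {d: sum(joint(c) for c in configs if c[d]) / evidence for d in diseases}

# Interventional proxy for the causal ranking: how much would do(D=0),
# i.e. forcing the disease off, reduce the probability of the symptom?
def p_symptom_do_off(d_off):
    return sum(joint(c) for c in configs if not c[d_off]) / (1.0 - PRIORS[d_off])

effect = {d: evidence - p_symptom_do_off(d) for d in diseases}

print("associative ranking:", sorted(posterior, key=posterior.get, reverse=True))
print("causal ranking:     ", sorted(effect, key=effect.get, reverse=True))
```

In this toy the two rankings can agree, but in models where a common disease weakly explains a symptom and a rare disease strongly causes it, the causal score reorders the list, which is the behaviour the counterfactual approach is designed to capture.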
The UK Court of Appeal has unanimously ruled against a face-recognition system used by South Wales Police. The judgment, which called the use of automated face recognition (AFR) "unlawful", could have ramifications for the widespread use of such technology across the UK. But there is disagreement about exactly what the consequences will be. Ed Bridges, who launched the original case after police cameras digitally analysed his face in the street, had appealed against the police use of face recognition with the support of the civil liberties campaign group Liberty. The police force claimed in court that the technology was similar to the use of closed-circuit television (CCTV) cameras in cities.
The use of facial recognition technology by South Wales police broke race and sex equalities law and breached privacy rights because the force did not apply proper safeguards, the court of appeal has ruled. The critical judgment came in a case brought by Ed Bridges, a civil liberties campaigner, who was scanned by the police software in Cardiff in 2017 and 2018. He argued that the capturing of thousands of faces was indiscriminate. Bridges' case had previously been rejected by the high court, but the court of appeal ruled in his favour on three counts, in a significant test case for how the controversial technology is applied in practice by police. The appeal court held that Bridges' right to privacy, under article 8 of the European convention on human rights, was breached because there was "too broad a discretion" left to police officers as to who to put on the watchlist of suspects.
The UK government has been funneling millions of dollars into a prediction tool for violent crime that uses artificial intelligence. Now, officials are finally ready to admit that it has one big flaw: It's completely unusable. Police have already stopped developing the system, called "Most Serious Violence" (MSV) and part of the UK's National Data Analytics Solution (NDAS) project; luckily, it was never actually put to use -- yet plenty of questions about the system remain. The tool worked by assigning people scores based on how likely they were to commit a gun or knife crime within the next two years. Two databases from two different UK police departments were used to train the system, including crime and custody records.
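The MSV model itself has not been published, but the scoring mechanism the article describes can be sketched in its simplest form: a logistic score interpreted as "probability of serious violence within two years". The feature names and weights below are entirely invented for illustration; the sketch only shows why flaws in the underlying records would propagate directly into the scores.

```python
import math

# Purely illustrative: the real MSV features and coefficients are not public.
# These feature names and weights are invented stand-ins.
WEIGHTS = {
    "prior_violent_offences": 0.8,
    "prior_weapon_offences": 1.1,
    "months_since_last_arrest": -0.05,
}
BIAS = -3.0

def risk_score(record):
    """Logistic score in (0, 1): a stand-in for 'P(serious violence within 2 years)'.

    Note that the score is computed directly from the record: any error in the
    crime or custody data (a wrongly attributed offence, a missing date)
    changes the score with no human check in between.
    """
    z = BIAS + sum(WEIGHTS[k] * record.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

record = {"prior_violent_offences": 2, "prior_weapon_offences": 1,
          "months_since_last_arrest": 6}
print(round(risk_score(record), 3))
```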
Human rights organization Liberty is claiming a win in its native Britain after a court ruled that police trials of facial recognition technology violated privacy laws. The Court of Appeal ruled that the use of automatic facial recognition systems unfairly impacted claimant Ed Bridges' right to a private life. Judges added that there were issues around how people's personal data was being processed, and said that the trials should be halted for now. The court also found that the South Wales Police (SWP) had not done enough to satisfy itself that the facial recognition technology was not biased. A spokesperson for SWP told the BBC that it would not be appealing the judgment, but Chief Constable Matt Jukes said that the force will find a way to "work with" the judgment.
The use of automatic facial recognition (AFR) technology by South Wales Police is unlawful, the Court of Appeal has ruled. It follows a legal challenge brought by civil rights group Liberty and Ed Bridges, 37, from Cardiff. But the court also found its use was a proportionate interference with human rights, as the benefits outweighed the impact on Mr Bridges. South Wales Police said it would not be appealing the findings. Mr Bridges had said being identified by AFR caused him distress.
Online dating is great, but there's a slight shudder factor attached to the practice now that everyone and their mother (literally) has some sort of profile. The biggest advantage, obviously, is the potential to meet thousands of eligible singles who you likely wouldn't have known existed otherwise. But whether those singles use their profiles regularly, or are even on the site for the right reasons, is another question; that uncertainty can make singles genuinely searching for the real thing shy away from such a valuable tool. When the dating pool is so deep, it's important to narrow down your options to dating sites that are most likely to attract a very specific type of person and introduce you to people who have the same intentions that you do.
Privacy campaigners have expressed alarm after the government revealed it had hired an artificial intelligence firm to collect and analyse the tweets of UK citizens as part of a coronavirus-related contract. Faculty, which was hired by Dominic Cummings to work for the Vote Leave campaign and counts two current and former Conservative ministers among its shareholders, was paid £400,000 by the Ministry of Housing, Communities and Local Government for the work, according to a copy of the contract published online. In June the Guardian reported Faculty had been awarded the contract, but that key passages in the published version of the document describing the work that the company would carry out had been redacted. In response to questions about the contract in the House of Lords, the government published an unredacted version of the contract, which describes the company's work as "topic analysis of social media to understand public perception and emerging issues of concern to HMG arising from the Covid-19 crisis". A further paragraph describes how machine learning will be applied to social media data.
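The contract describes "topic analysis of social media" without specifying Faculty's method, so the following is only the simplest conceivable form of the technique: tagging posts against hand-picked keyword lists and counting topic frequency. The topics, keywords, and sample posts are all invented for illustration.

```python
from collections import Counter

# Illustrative only: the actual contract does not describe the method used.
# These topic labels and keyword lists are invented stand-ins.
TOPICS = {
    "health": {"symptoms", "vaccine", "hospital", "nhs"},
    "economy": {"furlough", "jobs", "rent", "business"},
}

def tag_topics(post):
    """Return every topic whose keyword set overlaps the post's words."""
    words = set(post.lower().split())
    return [topic for topic, keywords in TOPICS.items() if words & keywords]

posts = [
    "Worried about rent and jobs after furlough ends",
    "NHS hospital staff need more support",
]
counts = Counter(topic for post in posts for topic in tag_topics(post))
print(counts)
```

A production system would use a trained topic model rather than keyword lists, but the output shape is the same: per-topic volumes over time, which is what "emerging issues of concern" monitoring consumes.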
Max Tegmark, the MIT physicist who is president of the Future of Life Institute, made this rather grandiose statement: "In creating AI [artificial intelligence], we're birthing a new form of life with unlimited potential for good or ill." A book by Sir Nigel Shadbolt and Roger Hampson entitled The Digital Ape carries the subtitle How to Live (in Peace) with Smart Machines. They are optimistic that humans will still be in charge, provided we approach the process sensibly. But is this optimism justified? The director of Cambridge University's Centre for the Study of Existential Risk said: "We live in a world that could become fraught with . . .
Sweeping changes to England's planning system will "cut red tape, but not standards," Housing Secretary Robert Jenrick has said. Under draft new laws, first revealed on Sunday, developers will be granted "automatic" permission to build homes and schools on sites for "growth". It follows Boris Johnson's pledge to "build back better" after coronavirus. But critics warn it could lead to "bad-quality housing" and loss of local control over development. Mr Johnson promised to speed up investment into homes and infrastructure in June to help the UK recover from the economic impact of coronavirus.