An employer in Spain may not be able to fire a worker caught on a surveillance camera doing something prohibited if the company hasn't informed workers about the video system and its purpose, according to a recent trial court decision. In a case involving an employee fired after a security camera captured him in a parking-lot fight after work hours, a Pamplona labor court ruled that the video evidence was inadmissible under the European Union's General Data Protection Regulation (GDPR) and case law from the European Court of Human Rights (ECHR). "The judgment is of great interest since it is the first ruling by a Spanish court on the validity that can be given to the evidence of video recordings after the publication of the new Spanish Data Protection Law and also an interpretation of the new European Data Protection Regulation," according to a blog post from Manuel Vargas of Barcelona's Marti & Associats law firm. Under Spain's own data-protection law, employers who record a worker doing something illegal are considered to have fulfilled their duty to inform so long as they have posted a sign identifying a video surveillance zone, Vargas wrote. He also noted that recent case law from the Spanish Supreme Court endorses the idea that employers aren't obligated to notify workers that they plan to use video cameras to monitor their activity for possible disciplinary purposes.
Evidence is being gathered on U.S. big tech companies as the European Commission prepares for new leadership, The Wall Street Journal reported on Tuesday. European Commission President-elect Ursula von der Leyen and her team have indicated that within 100 days of taking office on Nov. 1, there will be new laws governing artificial intelligence (AI) and how tech companies like Facebook use big data. Commissioner Margrethe Vestager has already initiated investigations into big tech that could end with multimillion-dollar fines. Facebook and Amazon deny wrongdoing. Alphabet's Google has already been hit with $9.4 billion in fines resulting from three separate EU investigations, and a fourth is underway.
The Guidelines put forward a set of seven key requirements that AI systems should meet in order to be deemed trustworthy. The AI HLEG has also prepared a document that elaborates on a Definition of Artificial Intelligence used for the purpose of the Guidelines. The document also provides an assessment list that operationalises the key requirements and offers guidance on implementing them in practice. Starting on 26 June, this assessment list is undergoing a piloting process, in which all stakeholders are invited to test it and provide practical feedback on how it can be improved.
The UK's success in the field of robotic surgery could be hampered if the country loses its research partnerships with Europe after Brexit, according to a new study. The study's authors said there was a "consensus" that Brexit was likely to "undermine the UK's status as a global leader in science and innovation". Robotic surgery has been touted as one of the technologies that is key to future growth in the UK, according to the Imperial College London study, and international collaboration is key to that success. Dr George Garas, lead author of the study from the department of surgery and cancer at Imperial College London, said: "There is a consensus within the scientific and healthcare communities that Brexit is likely to undermine the UK's status as a global leader in science and innovation. "We need to understand what the impact of losing the existing valuable EU links would be so as to tactically plan the UK's research and innovation strategy after Brexit." The UK currently ranks third in the world for robotic surgery innovation, behind Italy and the US. The best scenario following Brexit would be for the UK to continue its research partnerships with the EU, the study's authors suggested. If this isn't possible, the UK should look to collaborate with the US. But under this scenario the UK's research impact would ultimately suffer unless its new US partners were the top-performing ones in the field, the study found. "Our research shows that in the field of robotic surgery research, replacing EU partners with top US collaborators might maintain or even improve the UK's position," Dr Garas said. "Unfortunately, in the short term this could be difficult and costly."
Artificial intelligence (AI) technologies are forecast to add US$15 trillion to the global economy by 2030. According to the findings of our Index and as might be expected, the governments of countries in the Global North are better placed to take advantage of these gains than those in the Global South. There is a risk, therefore, that countries in the Global South could be left behind by the so-called fourth industrial revolution. Not only will they not reap the potential benefits of AI, but there is also the danger that unequal implementation widens global inequalities. AI has the power to transform the way that governments around the world deliver public services. In turn, this could greatly improve citizens' experiences of government. Governments are already implementing AI in their operations and service delivery, to improve efficiency, save time and money, and deliver better quality public services. In 2017, Oxford Insights created the world's first Government AI Readiness Index, to answer the question: how well placed are national governments to take advantage of the benefits of AI in their operations and delivery of public services? The results sought to capture the current capacity of governments to exploit the innovative potential of AI. The 2019 Government AI Readiness Index, produced with the support of the International Development Research Centre (IDRC), develops our methodology and expands its scope to cover all UN countries (beyond our previous group of OECD members). It scores the governments of 194 countries and territories according to their preparedness to use AI in the delivery of public services. The overall score comprises 11 input metrics, grouped under four high-level clusters: governance; infrastructure and data; skills and education; and government and public services.
The data is derived from a variety of resources, ranging from our own desk research into AI strategies, to databases such as the number of registered AI startups on Crunchbase, to indices such as the UN eGovernment Development Index. We divided the countries by region, principally following UN groupings, with the chief exception of the Western European and Others Group, which we separated to allow more in-depth analysis of higher scoring governments.
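The Index's exact weighting scheme isn't described here, but the general shape of a composite index like this — normalised metric scores aggregated within clusters, then aggregated across clusters — can be sketched briefly. The cluster names below follow the article; the metric names, scores, and equal-weighting assumption are purely illustrative, not Oxford Insights' actual methodology.

```python
# Illustrative sketch of a composite index score: average normalised
# metrics within each cluster, then average the cluster scores.
# Metric names, values, and equal weighting are assumptions for
# illustration, not the Index's actual methodology.

def readiness_score(metrics):
    """metrics maps cluster name -> {metric name: score in [0, 1]}."""
    cluster_scores = [sum(v.values()) / len(v) for v in metrics.values()]
    return sum(cluster_scores) / len(cluster_scores)

example = {
    "governance": {"ai_strategy": 0.8, "data_protection": 0.9},
    "infrastructure_and_data": {"open_data": 0.7},
    "skills_and_education": {"stem_graduates": 0.6},
    "government_and_public_services": {"egov_index": 0.75, "innovation": 0.65},
}
print(readiness_score(example))  # cluster means 0.85, 0.7, 0.6, 0.7 -> 0.7125
```

Equal weighting is the simplest choice; a real index might weight clusters or metrics differently, which would only change the two averaging steps.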
Artificial intelligence technology used by police forces in the UK to predict future crimes replicates - and in some cases amplifies - human prejudices, according to a new report. While "predictive policing" tools have been used in the UK since at least 2004, advances in machine learning and AI have enabled the development of more sophisticated systems. These are now used for a wide range of functions including facial recognition and video analysis, mobile phone data extraction, social media intelligence analysis, predictive crime mapping and individual risk assessment. However, the report by the Royal United Services Institute (RUSI) warns that human biases are being built into these machine learning algorithms, resulting in people being unfairly discriminated against due to their race, sexuality and age. One police officer who was interviewed for the report commented that: "Young black men are more likely to be stop and searched than young white men, and that's purely down to human bias. "That human bias is then introduced into the datasets, and bias is then generated in the outcomes of the application of those datasets." In addition to these inherent biases, the report points out that individuals from disadvantaged sociodemographic backgrounds are likely to engage with public services more frequently. As a result, police often have access to more data relating to these individuals, which "may in turn lead to them being calculated as posing a greater risk". Matters could worsen over time, another officer said, when software is used to predict future crime hotspots. "We pile loads of resources into a certain area and it becomes a self-fulfilling prophecy, purely because there's more policing going into that area, not necessarily because of discrimination on the part of officers," the officer said. The report also warns that police forces could become over-reliant on the AI to predict future crimes, and discount other relevant information. 
"Officers often disagree with the algorithm."
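The "self-fulfilling prophecy" the officer describes can be made concrete with a toy simulation: two areas have identical underlying crime, but patrols follow previously recorded crime, and recorded crime in turn scales with patrol presence. Every number here (the true rates, the initial imbalance, the 1.5 over-weighting exponent) is an invented illustration of the dynamic, not any force's actual model.

```python
# Toy model of the hotspot feedback loop described in the RUSI report.
# Areas A and B have identical true crime; patrols are allocated toward
# the apparent hotspot, and more patrols mean more crime gets recorded.
# All parameters are invented for illustration.

def simulate(rounds=10, overweight=1.5):
    true_crime = [100.0, 100.0]   # identical underlying crime in A and B
    recorded = [55.0, 45.0]       # small initial imbalance in the data
    for _ in range(rounds):
        # Patrols follow the data, over-weighting the apparent hotspot.
        weights = [r ** overweight for r in recorded]
        shares = [w / sum(weights) for w in weights]
        # Recorded crime is proportional to patrol presence: more officers
        # in an area means more of its (identical) crime gets recorded.
        recorded = [true_crime[i] * shares[i] for i in range(2)]
    return recorded

a, b = simulate()
print(f"Area A: {a:.1f}, Area B: {b:.1f}")  # the imbalance compounds each round
```

With any over-weighting above 1.0 the initial 55/45 imbalance snowballs until nearly all recorded crime sits in one area, despite both areas being identical by construction; with exactly proportional allocation (exponent 1.0) the imbalance never corrects, which is already a mild form of the same prophecy.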
Artificial intelligence could be used to help catch paedophiles operating on the dark web. The technology would target the most dangerous and sophisticated offenders in efforts to tackle child sexual abuse, the Home Office said. Earlier this month Chancellor Sajid Javid announced £30 million would be set aside to tackle online child sexual exploitation. The Government has pledged to spend more money on the Child Abuse Image Database (CAID), which since 2014 has allowed police and other law enforcement agencies to quickly search seized computers and other devices for indecent images of children against a record of 14 million images, helping to identify victims. The investment will be used to consider whether adding aspects of artificial intelligence (AI) to the system, such as voice analysis and age estimation, would help in tracking down child abusers.
Artificial intelligence could be used to help catch paedophiles operating on the dark web, the Home Office has announced. The government has pledged to spend more money on the child abuse image database, which since 2014 has allowed police and other law enforcement agencies to quickly search seized computers and other devices for indecent images of children, against a record of 14m images, to help identify victims. The investment will be used to trial aspects of AI including voice analysis and age estimation to see whether they would help track down child abusers. Earlier this month, the chancellor, Sajid Javid, announced £30m would be set aside to tackle online child sexual exploitation, with the Home Office releasing more information on Tuesday about how it would be spent. There has been debate over the use of machine learning algorithms, part of the broad field of AI, with the government's Centre for Data Ethics and Innovation developing a code of practice for the trialling of predictive analytical technology in policing.
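Neither article describes how CAID actually matches images, but databases of this kind typically work by comparing digests (hashes) of files found on a seized device against a stored index of digests of known images, rather than comparing image files directly. The sketch below uses a plain SHA-256 digest as a simplified, assumed stand-in; real systems also rely on perceptual hashes that survive resizing and re-encoding.

```python
# Simplified sketch of digest-based matching: files from a seized device
# are hashed and looked up in an index of known-image digests. Exact
# SHA-256 matching is an assumption for illustration; production systems
# also use perceptual hashing to catch altered copies.
import hashlib

def digest(data):
    return hashlib.sha256(data).hexdigest()

# Index of digests of known images (tiny stand-in for a 14m-record database).
known_index = {digest(b"known-image-1"), digest(b"known-image-2")}

def scan(files):
    """Return the names of files whose digest appears in the index."""
    return [name for name, data in files.items()
            if digest(data) in known_index]

seized = {"a.jpg": b"known-image-2", "b.jpg": b"holiday-photo"}
print(scan(seized))  # -> ['a.jpg']
```

Because lookup against a hash set is constant-time per file, this is why matching a device against millions of known records can be fast: the cost scales with the number of files on the device, not the size of the database.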
Police officers have raised concerns about using "biased" artificial-intelligence tools, a report commissioned by one of the UK government's advisory bodies reveals. The study warns such software may "amplify" prejudices, meaning some groups could become more likely to be stopped in the street and searched. It says officers also worry they could become over-reliant on automation. And it says clearer guidelines are needed for facial recognition's use. "The police are concerned that the lack of clear guidance could lead to uncertainty over acceptable uses of this technology," the Royal United Services Institute (Rusi)'s Alexander Babuta told BBC News.
One morning a few weeks ago Stephen Foot, a warehouseman from Enfield, woke up in a London hospital to discover the unlikely harbinger of a coming medical revolution. This Ghost of Healthcare to Come took the form of a nephrologist at the end of his bed. "That was the last thing I was expecting," he tells me. "Somebody from the renal department to come and say, 'Oh, by the way, there's something going on that has sparked an alert on your kidney.'" Foot had entered hospital because of his foot.