The ban comes after civil liberties groups highlighted what they described as faults in facial recognition algorithms, after NIST found that most facial recognition software was more likely to misidentify people of colour than white people. The Boston ban follows a similar ban imposed by San Francisco last year. It prevents any city employee from using facial recognition, or from asking a third party to use the technology on the city's behalf. Boston's police department said it had not used the technology because of what it called reliability fears, though the best systems are reasonably accurate under average working conditions. Critics also opposed the technology on the grounds that it might discourage citizens from exercising their right to protest.
In 1963, Martin Luther King gave his "I have a dream" speech, words that reflected the thoughts and attitudes of civil rights activists at the time and lit a torch that lives on in the hearts and minds of those who fight for civil liberties and equality in the western hemisphere. While the world has advanced since Dr. King uttered those words, it is hard to deny that discrimination still rears its ugly head in modern society. Racial discrimination in the workplace is illegal in most of America and Europe, and yet US statistics show that hiring practices for Black and Hispanic candidates do not seem to have improved in the last 25 years. In theory, AI-assisted hiring is built on an underlying model that makes unbiased decisions, as long as the data itself isn't biased.
Each Fourth of July for the past five years I've written about AI with the potential to positively impact democratic societies. I return to this question with the hope of shining a light on technology that can strengthen communities, protect privacy and freedoms, or otherwise support the public good. This series is grounded in the principle that artificial intelligence is capable of not just value extraction, but individual and societal empowerment. While AI solutions often propagate bias, they can also be used to detect that bias. As Dr. Safiya Noble has pointed out, artificial intelligence is one of the critical human rights issues of our lifetimes.
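One concrete way algorithms can be turned on bias itself is an audit of a model's decisions against a fairness metric. The sketch below uses the disparate-impact ratio (the "four-fifths rule" applied in US employment law); the group names and numbers are purely illustrative, not drawn from any real system.

```python
# Minimal sketch of a bias audit on model outputs: the disparate-impact
# ratio compares selection rates across groups. A ratio below 0.8
# (the "four-fifths rule") is a conventional flag for possible bias.
# All group names and counts here are hypothetical.

def disparate_impact(decisions):
    """decisions: dict mapping group name -> (num_selected, num_applicants).
    Returns the ratio of the lowest selection rate to the highest."""
    rates = {g: sel / total for g, (sel, total) in decisions.items()}
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes for two applicant groups
ratio = disparate_impact({"group_a": (45, 100), "group_b": (27, 100)})
print(round(ratio, 2))  # 0.27 / 0.45 = 0.6 -> below 0.8, flags possible bias
```

The point is not that one number settles the question, but that the same pipeline that produces decisions can cheaply report on their distribution across groups.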
Decision-making on numerous aspects of our daily lives is being outsourced to machine-learning (ML) algorithms and artificial intelligence (AI), motivated by speed and efficiency in the decision process. ML approaches, one of the typologies of algorithms underpinning artificial intelligence, are typically developed as black boxes. The implication is that ML code scripts are rarely scrutinised; interpretability is usually sacrificed in favour of usability and effectiveness. Room for improvement in practices associated with programme development has also been flagged along other dimensions, including, inter alia, fairness, accuracy, accountability, and transparency. In this contribution, the production of guidelines and dedicated documents around these themes is discussed. The following applications of AI-driven decision-making are outlined: (a) risk assessment in the criminal justice system, and (b) autonomous vehicles, highlighting points of friction across ethical principles. Possible ways forward towards the implementation of governance on AI are finally examined.
Bottom Line: Barclays Transact, a new product co-developed by Barclays and Kount, reflects the future of how companies will innovate together to apply AI-based fraud prevention to the many payment challenges merchants face today. Merchant payment providers have seen the severity, scope, and speed of fraud attacks increase exponentially this year. Account takeovers, card-not-present fraud, SMS spoofing, and phishing are just a few of the many techniques cybercriminals are using to defraud merchants out of millions of dollars. But it doesn't have to be a choice between security and a frictionless transaction. Frustrated by the limitations of existing fraud prevention systems, many payment providers are working as fast as they can to pilot AI- and machine-learning-based applications and platforms.
Three companies – Samsung, IBM and Tencent – have dominated the global AI patent race over the past 10 years, while fierce competition between the U.S. and China overshadows other countries and regions, including the EU. These are the key findings of OxFirst, a specialist in IP law and economics (and a spin-out of Oxford University), which also reported that multiple neural nets, machine learning and speech recognition are driving the market. "Patents are mainly filed in the area of interconnectivity and system architecture, suggesting that top players focus primarily on protecting technologies covering multiple neural nets," OxFirst said in its announcement today. "Other areas of crucial importance are ML and bootstrap methods, alongside procedures used during speech recognition processes; e.g. the further establishment of human-machine dialogue." OxFirst said its sector-specific analysis suggests that major companies have focused on AI in the medical space, particularly medical diagnosis, medical simulation and data mining.
The human brain is an incredibly efficient source of intelligence. Earlier this month, OpenAI announced it had built the biggest AI model in history. This astonishingly large model, known as GPT-3, is an impressive technical achievement. Yet it highlights a troubling and harmful trend in the field of artificial intelligence, one that has not gotten enough mainstream attention: modern AI models consume a massive amount of energy, and these energy requirements are growing at a breathtaking rate.
Would you let a machine learning model with a failure rate of 98% and a false positive rate of 81% into production? Well, these claimed performance figures come from a facial recognition system in use by the police force in South Wales and other parts of the United Kingdom. Dave Gershgorn's article opens with a description akin to the setting of a dystopian future in which an all-seeing governing system monitors everyone; alarmingly, it reads as a foreshadowing of a foreseeable future. South Wales Police have been using facial recognition systems since 2017, have done so in no secrecy from the public, and have made arrests as a result of the system.
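To see why an 81% false positive rate matters at scale, it helps to run the arithmetic. The sketch below reads "false positive rate" as the share of alerts that turn out to be wrong (strictly, a false discovery rate); the alert volume is hypothetical, not the actual South Wales deployment data.

```python
# Rough illustration of what an 81% false-positive share means in practice.
# Here "false positive share" = fraction of system alerts that are wrong
# matches; the alert count below is hypothetical, not real deployment data.

def genuine_matches(alerts, false_positive_share):
    """Of `alerts` flagged faces, estimate how many are real matches."""
    return round(alerts * (1 - false_positive_share))

print(genuine_matches(1000, 0.81))  # ~190 real matches buried in 1000 alerts
```

In other words, officers acting on such a system would be chasing roughly four wrong leads for every genuine one, which is the practical core of the reliability objection.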
The development and deployment of artificial intelligence (AI) tools should take place in a socio-technical framework where individual interests and the social good are preserved, but also where opportunities for social knowledge and better governance are enhanced, without leading to the extremes of 'surveillance capitalism' and 'surveillance state'. This was one of the main conclusions of the study 'The impact of the General Data Protection Regulation on Artificial Intelligence', which was carried out by Professor Giovanni Sartor and Dr Francesca Lagioia of the European University Institute of Florence at the request of the STOA Panel, following a proposal from Eva Kaili (S&D, Greece), STOA Chair. Data protection is at the forefront of the relationship between AI and the law, as many AI applications involve the massive processing of personal data, including the targeting and personalised treatment of individuals on the basis of such data. This explains why data protection has been the area of the law that has most engaged with AI. Despite the fact that AI is not explicitly mentioned in the General Data Protection Regulation (GDPR), many provisions of the GDPR are not only relevant to AI, but are also challenged by the new ways of processing personal data that are enabled by AI. This new STOA study addresses the relation between the GDPR and AI and analyses how EU data protection rules will apply in this technological domain and thus impact both its development and deployment.
Since widespread protests over racial inequality began, IBM announced it would cancel its facial recognition programs to advance racial equity in law enforcement. Amazon suspended police use of its Rekognition software for one year to "put in place stronger regulations to govern the ethical use of facial recognition technology." But we need more than regulatory change; the entire field of artificial intelligence (AI) must mature out of the computer science lab and embrace the wider community. We can develop amazing AI that works in the world in largely unbiased ways. But to accomplish this, AI can't remain just a subfield of computer science (CS) and computer engineering (CE), as it is right now.