Adopt Artificial Intelligence to improve operational efficiency in financial services sector

#artificialintelligence

The explosion of emerging technologies such as artificial intelligence (AI) is dramatically changing the way businesses operate today. As businesses collect more and more data, the need for solutions that derive true value from that data grows in importance. AI, in conjunction with big data and analytics, can deliver that baseline value and go beyond traditional solutions to find deeper insights. In India, banks are moving fast in this direction, deploying AI-powered chatbots to gain better insights into their customers' usage patterns, offer customised products, detect fraudulent transactions and improve operational efficiency, among other benefits. There is no denying that AI helps banks nurture their customer relationships through better interactions; however, this does not come without challenges.


The 'inner pickpocket' trait inside all of us lets us tell what an object is by touch alone

Daily Mail - Science & tech

Researchers have identified how the human brain is able to determine the properties of a particular object from touch alone – a so-called 'inner pickpocket' trait. This trait is inherent in all of us, they say, and is the reason a thief can pilfer a handbag and instantly pull out the most valuable item. It relies on the brain's ability to break up a continuous stream of information into smaller chunks. For professional pickpockets, this manifests as being able to interpret the sequence of small depressions on their fingers as separate, well-defined objects. 'Notably, the participants in our study were not selected for being professional pickpockets - so these results also suggest there is a secret, statistically savvy pickpocket in all of us,' said Professor Máté Lengyel from the University of Cambridge, who co-led the research.


AI and machine learning will throw bigger punches at ad fraud

#artificialintelligence

In a poll conducted by Integral Ad Science (IAS), 69.0% of agency executives said that ad fraud was the biggest hindrance to ad budget growth, compared with just over half (52.6%) of brand professionals who said the same. How much is ad fraud costing advertisers? Nobody knows, but with estimates ranging from $6.5 billion to $19 billion, there's a lot at stake. Marketers are becoming more assertive in their demands for better fraud prevention measures; they are seeking to deepen their knowledge of different fraud types – from bots to unauthorised domain reselling – and to adopt technology more widely to drive their overall marketing strategies. Ad tech providers will need to adapt their technology and techniques to meet this demand.


What the ban on facial recognition tech will – and will not – do WeLiveSecurity

#artificialintelligence

As San Francisco moves to regulate the use of facial recognition systems, we reflect on some of the many 'faces' of the fast-growing technology. Last week, San Francisco became the first city in the United States to ban the use of facial recognition technology, at least by law enforcement, local agencies, and the city's transport authority. My immediate reaction to the headlines was that this was great for individuals' privacy – a truly bold decision by the San Francisco board of supervisors. The ordinance actually covers more than just facial recognition, as it states the following: "'Surveillance Technology' means any software, electronic device, system utilizing an electronic device, or similar device used, designed, or primarily intended to collect, retain, process, or share audio, electronic, visual, location, thermal, biometric, olfactory or similar information specifically associated with, or capable of being associated with, any individual or group." The ban excludes San Francisco's airport and sea port, as these are operated by federal agencies. Nor does it stop individuals, companies or other organizations from installing surveillance systems that include facial recognition, or prevent the agencies covered by the ban from cooperating with those still allowed to use the technology.


AI Weekly: Facial recognition policy makers debate temporary moratorium vs. permanent ban

#artificialintelligence

On Tuesday, in an 8-1 tally, the San Francisco Board of Supervisors voted to ban the use of facial recognition software by city departments, including police. Supporters of the ban cited racial inequality in audits of facial recognition software from companies like Amazon and Microsoft, as well as dystopian surveillance happening now in China. At the core of arguments around the regulation of facial recognition software is the question of whether a temporary moratorium should be put in place until police and governments adopt policies and standards, or whether the technology should be permanently banned. Some believe facial recognition software can be used to exonerate the innocent and that more time is needed to gather information. Others, like San Francisco Supervisor Aaron Peskin, believe that even if AI systems achieve racial parity, facial recognition is a "uniquely dangerous and oppressive technology."


The U.S. military wants your opinion on AI ethics

#artificialintelligence

The U.S. Department of Defense (DoD) visited Silicon Valley Thursday to ask for ethical guidance on how the military should develop or acquire autonomous systems. The public comment meeting was held as part of a Defense Innovation Board effort to create AI ethics guidelines and recommendations for the DoD. A draft copy of the report is due out this summer. Microsoft director of ethics and society Mira Lane posed a series of questions at the event, which was held at Stanford University. She argued that AI doesn't need to be implemented the way Hollywood has envisioned it and said it is imperative to consider the impact of AI on soldiers' lives, responsible use of the technology, and the consequences of an international AI arms race.


Police Are Feeding Celebrity Photos into Facial Recognition Software to Solve Crimes

#artificialintelligence

Police departments across the nation are generating leads and making arrests by feeding celebrity photos, CGI renderings, and manipulated images into facial recognition software. Often unbeknownst to the public, law enforcement is identifying suspects based on "all manner of 'probe photos,' photos of unknown individuals submitted for search against a police or driver license database," a study published on Thursday by the Georgetown Law Center on Privacy and Technology reported. The new research comes on the heels of a landmark privacy vote on Tuesday in San Francisco, which is now the first US city to ban the use of facial recognition technology by police and government agencies. A recent groundswell of opposition has led to the passage of legislation that aims to protect marginalized communities from spy technology. These systems "threaten to fundamentally change the nature of our public spaces," said Clare Garvie, author of the study and senior associate at the Georgetown Law Center on Privacy and Technology.


Britain Has More Surveillance Cameras Per Person Than Any Country Except China. That's a Massive Risk to Our Free Society

TIME - Tech

How would you feel being watched, tracked and identified by facial recognition cameras everywhere you go? Facial recognition cameras are now creeping onto the streets of Britain and the U.S., yet most people aren't even aware. As we walk around, our faces could be scanned and subjected to a digital police line-up we don't even know about. There are over 6 million surveillance cameras in the U.K. – more per citizen than any other country in the world, except China. In the U.K., biometric photos of people whose faces match those of criminals are taken and stored – even if the match is incorrect. As director of the U.K. civil liberties group Big Brother Watch, I have been investigating the U.K. police's "trials" of live facial recognition surveillance for several years.


NYPD uses photos of celebrities like Woody Harrelson to help find suspects using facial recognition

Daily Mail - Science & tech

A new report details what privacy experts are calling a dangerous misapplication of facial recognition, in which photos of celebrities and digitally doctored images are used to comb for criminals. According to a detailed investigation by Georgetown Law's Center on Privacy and Technology, one New York Police Department detective attempted to identify a suspect by scanning the face of actor Woody Harrelson. After footage from a security camera failed to produce results in a facial recognition scan, the detective ran a test using Google images of what he concluded to be the suspect's celebrity doppelganger – Woody Harrelson. The system turned up a match, the report says, and the man was eventually arrested on charges of petit larceny.