Nearly two thousand government bodies, including police departments and public schools, have been using Clearview AI without oversight. BuzzFeed News reports that employees from 1,803 public bodies used the controversial facial-recognition platform without authorization from their superiors. Reporters contacted a number of agency heads, many of whom said they were unaware their employees were accessing the system. A database of searches, outlining which agencies had access to the platform and how many queries each made, was leaked to BuzzFeed by an anonymous source. BuzzFeed has published a version of the database online, enabling you to examine how many times each department has used the tool.
In 2012, in Santa Cruz, California, a company called PredPol Inc devised software that promised to predict future criminal activity by analysing past criminal records and identifying patterns. This simple idea of "predictively policing" an unsuspecting population aimed to change the face of law and order in the US. Police departments in major US cities began to use such predictive technology in their efforts to curb crime. In India, too, such artificial intelligence tools are increasingly being put to use. For instance, during his annual press briefing in February, the Delhi police commissioner said that 231 of the 1,818 people arrested for their alleged role in the 2020 Delhi riots had been identified using technological tools.
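Reporting like this rarely spells out the mechanics, but the core idea of "identifying patterns in past criminal records" can be illustrated with a toy sketch. To be clear, this is not PredPol's proprietary algorithm; it is a minimal, hypothetical illustration of the general approach: count historical incidents per spatial grid cell and flag the highest-count cells as predicted future hotspots.

```python
# Toy illustration of grid-based hotspot "prediction" from historical
# incident records. NOT any vendor's actual algorithm -- just the bare
# idea: bucket past events into spatial cells and rank cells by count.
from collections import Counter

def predict_hotspots(incidents, cell_size=1.0, top_k=3):
    """incidents: list of (x, y) coordinates of past events.
    Returns the top_k grid cells with the most historical incidents."""
    counts = Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in incidents
    )
    # Rank cells by historical frequency, highest counts first.
    return [cell for cell, _ in counts.most_common(top_k)]

# Hypothetical coordinates: 4 events cluster in cell (3, 2), 3 in (0, 0).
past = [(0.2, 0.4), (0.7, 0.1), (0.5, 0.5),
        (3.1, 2.2), (3.4, 2.9), (3.8, 2.1), (3.2, 2.6)]
print(predict_hotspots(past, top_k=2))  # → [(3, 2), (0, 0)]
```

The well-documented criticism of this approach follows directly from the sketch: because the "prediction" is only a reflection of where past records were generated, any bias in historical policing data is fed straight back into future patrol decisions.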
She would end up sharing some of those thoughts with her circle; a few would be researched further, a few written down, a few acted upon. She wasn't entirely aware of the data ecosystem of which she would be more a part today than she was yesterday. The image reflects the current ecosystem of data flow and activities: a user generates data through interactions with the environment around her, such as websites, search engines, government agencies, retail stores, and banks. Data is then collected from these multiple sources, collated, and mapped to build a massive database containing PII (personally identifiable information) along with behavioral, transactional, and demographic information. This database is then sold to companies and law enforcement agencies, and the same person is targeted, threatened, or surveilled. The person interacts again, and the cycle continues.
This week, Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai, and Twitter CEO Jack Dorsey went back to Congress, the first hearing with Big Tech executives since the January 6 insurrection led by white supremacists that directly threatened the lives of lawmakers. The main topic of discussion was the role social media plays in the spread of extremism and disinformation. The end of liability protections granted by Section 230 of the Communications Decency Act (CDA), disinformation, and how tech can harm the mental health of children were discussed, but artificial intelligence took center stage. The word "algorithm" alone was used more than 50 times. Whereas previous hearings involved more exploratory questions and took on a feeling of Geek Squad tech repair meets policy, in this hearing lawmakers asked questions based on evidence and seemed to treat tech CEOs like hostile witnesses.
The year is 2029, and you wake up one morning living in a community called Hope, a dystopian dictatorship. "Everyone here wears the same outfit, lives the same repetitive routine, and is happy … For many, Hope is their entire universe. They are uninterested in the outside world. However, you are different--you have the ability to choose." This is how you are introduced to the game Name of the Will on Kickstarter.
As financial institutions push out more digital products focused on speed and convenience, they create additional points of vulnerability that fraudsters can exploit online. As a result, financial institutions are also expected to stay agile and deploy the latest technologies to protect their customers. In fact, the Movement Control Order (MCO) period last year presented a case study of what could happen as more financial transactions move online. Globally, a record-high number of scam and phishing sites were detected in 2020, according to Atlas VPN. "Propelled by the pandemic, there has been a significant shift towards digital transactions and real-time payments. This new normal has brought [not only] unprecedented efficiency and convenience but also an increase in payment-related fraud," says Abrar A Anwar, managing director and CEO of Standard Chartered Malaysia.
Robot applications in our daily life are increasing at an unprecedented pace. As robots will soon operate "out in the wild", we must identify the safety and security vulnerabilities they will face. Robotics researchers and manufacturers focus their attention on new, cheaper, and more reliable applications, but they often disregard operability in adversarial environments, where a trusted or untrusted user can jeopardize or even alter the robot's task. In this paper, we identify a new paradigm of security threats in the next generation of robots. These threats fall beyond the known hardware- or network-based ones, and new solutions are needed to address them. They include malicious use of the robot's privileged access, tampering with the robot's sensor system, and tricking the robot's deliberation into harmful behaviors. We provide a taxonomy of attacks that exploit these vulnerabilities, with realistic examples, and we outline effective countermeasures to better prevent, detect, and mitigate them.
When a secretive start-up scraped the internet to build a facial-recognition tool, it tested a legal and ethical limit -- and blew the future of privacy in America wide open. In May 2019, an agent at the Department of Homeland Security received a trove of unsettling images. Found by Yahoo in a Syrian user's account, the photos seemed to document the sexual abuse of a young girl. One showed a man with his head reclined on a pillow, gazing directly at the camera. The man appeared to be white, with brown hair and a goatee, but it was hard to really make him out; the photo was grainy, the angle a bit oblique. The agent sent the man's face to child-crime investigators around the country in the hope that someone might recognize him. When an investigator in New York saw the request, she ran the face through an unusual new facial-recognition app she had just started using, called Clearview AI. The team behind it had scraped the public web -- social media, employment sites, YouTube, Venmo -- to create a database with three billion images of people, along with links to the webpages from which the photos had come. This dwarfed the databases of other such products for law enforcement, which drew only on official photography like mug shots, driver's licenses and passport pictures; with Clearview, it was effortless to go from a face to a Facebook account. The app turned up an odd hit: an Instagram photo of a heavily muscled Asian man and a female fitness model, posing on a red carpet at a bodybuilding expo in Las Vegas. The suspect was neither Asian nor a woman. But upon closer inspection, you could see a white man in the background, at the edge of the photo's frame, standing behind the counter of a booth for a workout-supplements company. On Instagram, his face would appear about half as big as your fingernail. The federal agent was astounded. 
The agent contacted the supplements company and obtained the booth worker's name: Andres Rafael Viola, who turned out to be an Argentine citizen living in Las Vegas.
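Clearview's implementation is not public, but the kind of search described above — going from one face to matches across billions of scraped images — typically works by mapping each face to a numeric embedding vector with a neural network and then running nearest-neighbor search over those vectors. The sketch below is a hypothetical, minimal illustration of that matching step using cosine similarity; the URLs and embeddings are invented, and real systems use learned embeddings of 128 to 512 dimensions with specialized indexes.

```python
# Illustrative sketch of embedding-based face search, NOT Clearview's
# actual implementation: a query embedding is compared against an index
# of (source URL -> embedding) pairs by cosine similarity.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query, index, threshold=0.9):
    """index: {source_url: embedding}. Returns matching URLs,
    most similar first, keeping only scores above the threshold."""
    scored = [(cosine(query, emb), url) for url, emb in index.items()]
    return [url for score, url in sorted(scored, reverse=True)
            if score >= threshold]

# Hypothetical 3-dimensional embeddings for two indexed photos.
index = {
    "https://example.com/profile/a": [0.9, 0.1, 0.3],
    "https://example.com/profile/b": [0.1, 0.9, 0.2],
}
print(search([0.88, 0.12, 0.31], index))  # → ['https://example.com/profile/a']
```

The link back to a source URL is what makes the anecdote above possible: the match itself returns not just a face but the webpage the photo came from, which is how a half-fingernail-sized face in an Instagram background could lead to a name.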
Mapping of spatial hotspots, i.e., regions with significantly higher rates or probability density of generating certain events (e.g., disease or crime cases), is an important task in diverse societal domains, including public health, public safety, transportation, agriculture, and environmental science. The clustering techniques these domains require differ from traditional clustering methods because of the high economic and social costs of spurious results (e.g., false alarms of crime clusters). As a result, statistical rigor is needed to explicitly control the rate of spurious detections. To address this challenge, techniques for statistically robust clustering have been extensively studied by the data mining and statistics communities. In this survey we present an up-to-date and detailed review of the models and algorithms developed in this field. We first present a general taxonomy of the clustering process with statistical rigor, covering the key steps of data and statistical modeling, region enumeration and maximization, significance testing, and data update. We then discuss the different paradigms and methods within each of these key steps. Finally, we highlight research gaps and potential future directions, which may serve as a stepping stone for generating new ideas in this growing field and beyond.
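The significance-testing step is what separates statistically robust hotspot detection from naive clustering, and the standard tool for it is a Monte Carlo test. The following is a minimal sketch of that idea, not any specific published method: given a candidate hotspot region, compare the observed event count inside it against counts obtained by repeatedly simulating events under a uniform null model, which is how the rate of false alarms is controlled.

```python
# Minimal Monte Carlo significance test for a candidate hotspot region.
# Null model (an assumption for this sketch): events fall uniformly at
# random in the unit square. The p-value is the rank of the observed
# in-region count among the simulated counts.
import random

def monte_carlo_p_value(events, in_region, n_sim=999, seed=0):
    """events: list of (x, y) in the unit square.
    in_region: predicate telling whether a point lies in the candidate region."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    observed = sum(in_region(x, y) for x, y in events)
    exceed = sum(
        sum(in_region(rng.random(), rng.random()) for _ in events) >= observed
        for _ in range(n_sim)
    )
    # Standard Monte Carlo p-value, including the observed data as one run.
    return (exceed + 1) / (n_sim + 1)

# Hypothetical data: 8 of 10 events packed into a 0.2 x 0.2 corner box,
# which covers only 4% of the unit square -- extreme under the null.
events = [(0.05, 0.05), (0.1, 0.1), (0.15, 0.05), (0.12, 0.18),
          (0.08, 0.13), (0.18, 0.02), (0.03, 0.16), (0.11, 0.07),
          (0.7, 0.8), (0.9, 0.3)]
in_box = lambda x, y: x < 0.2 and y < 0.2
print(monte_carlo_p_value(events, in_box))  # prints a small p-value
```

Full scan-statistic methods enumerate and maximize over many candidate regions rather than testing one fixed box, which requires correcting the test for that search; this sketch covers only the single-region case.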
AI and machine learning have been hot buzzwords in 2020. As we approach 2021, it's a good time to take a look at five "big-picture" trends and issues around the growing use of artificial intelligence and machine learning technologies. Hyperautomation, an IT mega-trend identified by market research firm Gartner, is the idea that most anything within an organization that can be automated -- such as legacy business processes -- should be automated. The pandemic has accelerated adoption of the concept, which is also known as "digital process automation" and "intelligent process automation." AI and machine learning are key components -- and major drivers -- of hyperautomation (along with other technologies like robotic process automation tools).