Democratic lawmakers want FTC to investigate controversial identity firm ID.me

Engadget

A group of Democratic lawmakers led by Senator Ron Wyden of Oregon is calling on the Federal Trade Commission to investigate ID.me, the controversial identification company best known for its work with the Internal Revenue Service. In a letter addressed to FTC Chair Lina Khan, the group suggests the firm misled the American public about the capabilities of its facial recognition technology. Specifically, lawmakers point to a statement ID.me made at the start of the year. After CEO Blake Hall said the company did not use one-to-many facial recognition, an approach that matches an image against those in a database, ID.me backtracked on that claim, clarifying that it uses a "specific" one-to-many check during user enrollment to prevent identity theft.
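The dispute turns on the difference between one-to-one verification, where a selfie is compared against a single photo on file, and one-to-many identification, where a selfie is searched against an entire database of faces. The sketch below illustrates that distinction with cosine similarity over face embeddings; the 128-dimension vectors, the 0.6 threshold, and the toy gallery are assumptions made for the example and do not reflect ID.me's actual system.

```python
# Minimal sketch, not ID.me's implementation: contrasting one-to-one
# verification with a one-to-many enrollment check over face embeddings.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_one_to_one(probe: np.ndarray, on_file: np.ndarray,
                      threshold: float = 0.6) -> bool:
    """One-to-one: does the new selfie match the single photo on file?"""
    return cosine_similarity(probe, on_file) >= threshold

def check_one_to_many(probe: np.ndarray, gallery: dict,
                      threshold: float = 0.6) -> list:
    """One-to-many: which already-enrolled identities resemble the selfie?
    A check like this at enrollment is meant to flag duplicate sign-ups."""
    return [identity for identity, embedding in gallery.items()
            if cosine_similarity(probe, embedding) >= threshold]

# Toy usage: random vectors stand in for embeddings from a real face model.
rng = np.random.default_rng(seed=0)
gallery = {f"user_{i}": rng.normal(size=128) for i in range(1_000)}
probe = rng.normal(size=128)
print(verify_one_to_one(probe, gallery["user_0"]))
print(check_one_to_many(probe, gallery))
```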


The Role of Social Movements, Coalitions, and Workers in Resisting Harmful Artificial Intelligence and Contributing to the Development of Responsible AI

arXiv.org Artificial Intelligence

There is mounting public concern over the influence that AI-based systems have in our society. Coalitions in all sectors are acting worldwide to resist harmful applications of AI. From indigenous people addressing the lack of reliable data, to smart city stakeholders, to students protesting the academic relationships with sex trafficker and MIT donor Jeffrey Epstein, the questionable ethics and values of those heavily investing in and profiting from AI are under global scrutiny. There are biased, wrongful, and disturbing assumptions embedded in AI algorithms that could get locked in without intervention. Our best human judgment is needed to contain AI's harmful impact. Perhaps one of the greatest contributions of AI will be to make us ultimately understand how important human wisdom truly is in life on earth.


One year after Amazon, Microsoft and IBM ended facial recognition sales to police, smaller players fill void

ZDNet

Almost one year ago, at the onset of global protests over racism and police brutality, Microsoft, Amazon and IBM announced either outright bans on the sale of facial recognition software to police departments or temporary moratoriums. The technology has faced backlash for years due to its proven inaccuracy, particularly in identifying the faces of people with darker skin. The ACLU, MIT and even people within Amazon criticized the widespread use of the technology, and before long stories began to emerge of people erroneously arrested based on mistakes made by facial recognition software. In recent weeks, all three companies reiterated their commitment to ending their foray into providing facial recognition software to police departments, either in public statements or in comments to ZDNet.


RealNetworks wins facial recognition and AI analytics contract with US Air Force

#artificialintelligence

Safr from RealNetworks has earned its third Small Business Innovation Research (SBIR) deal from the United States Air Force (USAF) to extend Safr's AI-powered analytics, including facial recognition, to unmanned ground vehicles (UGVs). The UGVs would be used to reduce risks in perimeter protection and in domestic emergency medical services (EMS) search and rescue missions. In a news release, the company said the contract will help it adapt its platform to run on an NVIDIA Jetson AGX Xavier-based UGV system, with the goal of reducing the risk service members face, since Safr-enhanced UGVs will be able to detect unauthorized persons in restricted areas with face biometrics. "As a USAF military working dog handler, I have employed canines in various environments fulfilling the multi-use role of detection and deterrence. The ability to utilize UGV systems to augment K9 teams during work/rest cycles, or as an additional force, broadens security in-depth and allows operations to continue unhindered," said Air Force Technical Sergeant Dustin Cain, Non-Commissioned Officer in Charge of Police Services, 366th Security Forces Squadron, Mountain Home Air Force Base, Idaho.


ICE just signed a contract with facial recognition company Clearview AI

#artificialintelligence

Immigration and Customs Enforcement (ICE) signed a contract with facial recognition company Clearview AI this week for "mission support," government contracting records show (as first spotted by the tech accountability nonprofit Tech Inquiry). The purchase order for $224,000 describes "clearview licenses" and lists "ICE mission support dallas" as the contracting office. ICE is known to use facial recognition technology; last month, The Washington Post reported that the agency, along with the FBI, had accessed state drivers' license databases (a veritable facial recognition gold mine, as the Post termed it) without the knowledge or consent of drivers. The agency has been criticized for its practices at the US southern border, which have included separating immigrant children from their families and detaining refugees indefinitely. "Clearview AI's agreement is with Homeland Security Investigations (HSI), which uses our technology for their Child Exploitation Unit and ongoing criminal investigations," Clearview AI CEO Hoan Ton-That said in an emailed statement to The Verge.


Clearview AI wins an ICE contract as it prepares to defend itself in court

Engadget

Immigration and Customs Enforcement (ICE) this week signed a deal with Clearview AI to license the facial recognition company's technology. According to a federal purchase order unearthed by the nonprofit Tech Inquiry (via The Verge), an ICE mission support office in Dallas is paying $224,000 for "Clearview licenses." Engadget has contacted Clearview and ICE for details on the scope of this agreement, as well as what ICE plans to do with those licenses. ICE and Clearview signed the deal just as the company is set to defend itself in court. Lawsuits filed in a number of states accuse Clearview of violating privacy and safety laws. Its technology can identify a person by matching their photo against billions of images the company has scraped from social media and other internet services.
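The scale is the notable part: matching one probe photo against billions of scraped images is only practical with an indexed nearest-neighbor search rather than a brute-force comparison. The sketch below shows that general pattern using the open-source faiss library (assumed installed); the random 128-dimensional vectors stand in for face embeddings, and nothing in it describes Clearview AI's actual pipeline.

```python
# Minimal sketch, not Clearview's pipeline: approximate nearest-neighbor
# search of a probe embedding against a large gallery using faiss.
import numpy as np
import faiss  # assumes the faiss-cpu package is installed

d = 128                                          # embedding size (assumed)
gallery = np.random.random((100_000, d)).astype("float32")
faiss.normalize_L2(gallery)                      # inner product == cosine sim

quantizer = faiss.IndexFlatIP(d)
index = faiss.IndexIVFFlat(quantizer, d, 256, faiss.METRIC_INNER_PRODUCT)
index.train(gallery)                             # learn coarse clusters
index.add(gallery)

probe = np.random.random((1, d)).astype("float32")
faiss.normalize_L2(probe)
scores, ids = index.search(probe, 5)             # top-5 closest gallery rows
print(ids[0], scores[0])
```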


OPTEC CEO Introduces the Company's Safely Re-Open American Schools and Businesses Solution, Using OPTEC Advanced Technology Products Including the New Safe-Scan Temperature Scanners with Facial Recognition and Mask Compliance Features

#artificialintelligence

CARLSBAD, CA / ACCESSWIRE / August 6, 2020 / OPTEC International, Inc. (OTC PINK:OPTI) The company's CEO announced OPTEC's plan to safely reopen American schools and businesses using a suite of OPTEC advanced technology products. The solution introduces the new "Safe-Scan" stand-alone infrared temperature scanning technology, designed to help schools, churches, gyms, businesses, health and senior care facilities, and government buildings comply with CDC standards as they reopen during the current pandemic. The free-standing thermal scanning models feature advanced engineered technology, are FCC certified, and screen for elevated temperature and mask compliance in less than one second. When the scanner detects a person without a mask or with an elevated body temperature, it emits an audio alert and a flashing red light, while normal temperature scans receive an "Access Allowed" green light and audio alert. When no mask is detected, the device voices a "Please Wear a Mask" audio alert.
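The release describes a simple pass/alert flow: an elevated temperature or a missing mask triggers a red light and an audio alert, while a normal scan gets a green "Access Allowed" response. Below is a minimal sketch of that decision logic; the 100.4 °F cutoff is a commonly used fever threshold assumed here for illustration, and none of this is OPTEC's actual firmware.

```python
# Minimal sketch of the pass/alert flow described in the release; the fever
# cutoff is an assumption, and this is not OPTEC's firmware.
from dataclasses import dataclass

FEVER_THRESHOLD_F = 100.4  # assumed cutoff, not stated in the release

@dataclass
class ScanResult:
    light: str
    audio: str

def evaluate_scan(temperature_f: float, mask_detected: bool) -> ScanResult:
    if temperature_f >= FEVER_THRESHOLD_F:
        return ScanResult(light="flashing red", audio="Elevated temperature alert")
    if not mask_detected:
        return ScanResult(light="flashing red", audio="Please Wear a Mask")
    return ScanResult(light="green", audio="Access Allowed")

print(evaluate_scan(98.6, mask_detected=True))    # green / Access Allowed
print(evaluate_scan(101.2, mask_detected=True))   # red / elevated temperature
print(evaluate_scan(98.6, mask_detected=False))   # red / Please Wear a Mask
```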


Abolish the #TechToPrisonPipeline

#artificialintelligence

The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime" as a co-author and former NYPD police officer outlined in the original press release.[38] At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands, the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging their role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.


Divesting from one facial recognition startup, Microsoft ends outside investments in the tech – TechCrunch

#artificialintelligence

Microsoft is pulling out of an investment in an Israeli facial recognition technology developer as part of a broader policy shift to halt any minority investments in facial recognition startups, the company announced late last week. The decision to withdraw its investment from AnyVision, an Israeli company developing facial recognition software, came as a result of an investigation into reports that AnyVision's technology was being used by the Israeli government to surveil residents in the West Bank. The investigation, conducted by former U.S. Attorney General Eric Holder and his team at Covington & Burling, confirmed that AnyVision's technology was used to monitor border crossings between the West Bank and Israel, but did not "power a mass surveillance program in the West Bank." Microsoft's venture capital arm, M12 Ventures, backed AnyVision as part of the company's $74 million financing round, which closed in June 2019. Investors who continue to back the company include DFJ Growth and OG Technology Partners, LightSpeed Venture Partners, Robert Bosch GmbH, Qualcomm Ventures, and Eldridge Industries.


Major Police Body Camera Manufacturer Rejects Facial Recognition Software

NPR Technology

A Los Angeles police officer wears an Axon body camera in 2017. On Thursday, the company announced it is holding off on facial recognition software, citing its unreliability. The largest manufacturer of police body cameras is rejecting the possibility of selling facial recognition technology, at least for now.