What a Black tech movement might look like

#artificialintelligence

Dr. Fallon Wilson is, like civil rights activist Fannie Lou Hamer, sick and tired of being sick and tired. Hamer and Wilson were both talking about a lack of progress on civil rights, but Wilson is talking specifically about data, AI, and tech from companies that have for years failed to make meaningful progress on diversity and inclusion initiatives. In a speech at the Kapor Center in Oakland, California, she said people cannot rely on companies like Facebook or Google to bring about meaningful change. "The truth is that the business of diversity and inclusion in tech companies will never eradicate structural racism, and I think we have to be clear about that," she said. "They cannot be the weathervane, nor should they, of what equitable progress looks like for Black people in this country as it relates to tech." Wilson was not referencing recent events like boycotts over Facebook's willingness to profit from hate or renewed diversity promises from Google and Microsoft.


Prince William, Prince Harry are keeping Zoom chats formal due to security concerns, source claims

FOX News

Prince William and his younger brother Prince Harry are reconnecting after the "Megxit" bombshell that rocked Kensington Palace -- but the royal brothers may have one new obstacle to tackle. "The biggest problem now is security and not just outside security but within the boundaries of calls, Zooms and Skypes," U.K.-based royal correspondent Neil Sean told Fox News. "You have to think that while Harry and Meghan were here in the U.K. there were security measures in place to make sure that private chats over Zoom and so forth remained that -- private," a palace insider told Sean. "Harry is [now] living in [a new house] and exposed to all kinds of mishaps security-wise." The palace insider alleged conversations between William and Harry have been formal out of caution that private chats could be leaked to the press.


Why are Artificial Intelligence systems biased?

#artificialintelligence

A machine-learned AI system used to assess recidivism risks in Broward County, Fla., often gave higher risk scores to African Americans than to whites, even when the latter had criminal records. The popular sentence-completion facility in Google Mail was caught assuming that an "investor" must be a male. A celebrated natural language generator called GPT, with an uncanny ability to write polished-looking essays for any prompt, produced seemingly racist and sexist completions when given prompts about minorities. Amazon found, to its consternation, that an automated AI-based hiring system it built didn't seem to like female candidates. Commercial gender-recognition systems put out by industrial heavy-weights, including Amazon, IBM and Microsoft, have been shown to suffer from high misrecognition rates for people of color. Another commercial face-recognition technology that Amazon tried to sell to government agencies has been shown to have significantly higher error rates for minorities. And a popular selfie lens by Snapchat appears to "whiten" people's faces, apparently to make them more attractive. These are not just academic curiosities.
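
The disparities described here are typically exposed by computing error rates separately for each demographic group. Below is a minimal sketch of that kind of audit; the predictions, labels, and group names are invented for illustration and are not data from any of the systems mentioned above.

```python
# Sketch: auditing a classifier's error rate by demographic group.
# All records below are made up; a real audit (e.g., of a face-recognition
# or recidivism system) would use the system's actual output.
from collections import defaultdict

records = [
    # (predicted_label, true_label, group) -- toy data
    (1, 0, "group_a"), (1, 1, "group_a"), (0, 0, "group_a"), (1, 0, "group_a"),
    (1, 1, "group_b"), (0, 0, "group_b"), (0, 0, "group_b"), (1, 1, "group_b"),
]

errors = defaultdict(lambda: [0, 0])  # group -> [error_count, total]
for pred, truth, group in records:
    errors[group][0] += int(pred != truth)
    errors[group][1] += 1

for group, (wrong, total) in errors.items():
    print(f"{group}: error rate {wrong / total:.0%}")
# A large gap between groups is exactly the signal the studies of
# commercial gender- and face-recognition systems reported.
```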


Driving anti-money laundering efficiency gains using artificial intelligence - Risk.net

#artificialintelligence

Anti-money laundering (AML) is expensive and labour-intensive, and artificial intelligence (AI) offers potential efficiency gains. Could they be a match made in heaven? This Risk.net webinar, in association with NICE Actimize, took place amid the strain on banks' back offices driven by the lockdown in response to the global Covid‑19 pandemic, and explores this potential pairing. Today's evolving regulatory environment and criminal typologies have led AML compliance teams to adopt AI technologies such as machine learning to improve detection and better focus analyst workloads. The marriage of AI to existing compliance processes and risk-modelling techniques has the potential to eliminate backlogs and create new efficiencies. But there may be some risks and question marks for those in the early stages of adoption. The strain on many financial institutions has only increased in 2020 due to the unexpected arrival of Covid‑19.
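
A common form of the pattern discussed here is scoring transactions with an anomaly detector so analysts review the most suspicious cases first. The sketch below uses scikit-learn's IsolationForest on invented transaction features; it is purely illustrative and not NICE Actimize's or any vendor's actual approach.

```python
# Sketch: ranking transactions for AML review with an anomaly score.
# Features, scales, and thresholds are invented; production systems use
# engineered features (velocity, counterparty risk, geography, etc.).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[100, 2], scale=[30, 1], size=(500, 2))    # amount, daily count
suspect = rng.normal(loc=[9000, 15], scale=[500, 3], size=(5, 2))  # unusual bursts
transactions = np.vstack([normal, suspect])

model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
scores = model.score_samples(transactions)  # lower = more anomalous

# Send only the most anomalous slice to human analysts,
# rather than a backlog of low-value alerts.
review_queue = np.argsort(scores)[:10]
print("Transactions flagged for review:", review_queue)
```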


EFF's new database reveals what tech local police are using to spy on you

ZDNet

The Electronic Frontier Foundation (EFF) has debuted a new database that reveals how, and where, law enforcement is using surveillance technology in policing strategies. Launched on Monday in partnership with the University of Nevada's Reynolds School of Journalism, the "Atlas of Surveillance" is described as the "largest-ever collection of searchable data on police use of surveillance technologies." The civil rights and privacy organization says the database was developed to help the general public learn about the accelerating adoption and use of surveillance technologies by law enforcement agencies. The map pulls together thousands of data points from over 3,000 police departments across the United States. Users can zoom in to different locations and find summaries of what technologies are in use, by what department, and track how adoption is spreading geographically.


'Booyaaa': Australian Federal Police use of Clearview AI detailed

ZDNet

Earlier this year, the Australian Federal Police (AFP) admitted to using a facial recognition tool to help counter child exploitation, despite not having an appropriate legislative framework in place. The tool was Clearview AI, a controversial New York-based startup that has scraped social media networks for people's photos and created one of the biggest facial recognition databases in the world. It provides facial recognition software, marketed primarily at law enforcement. The AFP previously said that while it did not adopt Clearview AI as an enterprise product and had not entered into any formal procurement arrangements with the company, it did use a trial version. Documents published by the AFP under the Freedom of Information Act 1982 confirmed that the AFP-led Australian Centre to Counter Child Exploitation (ACCCE) registered for a free trial of the Clearview AI facial recognition tool and conducted a pilot of the system from 2 November 2019 to 22 January 2020.


How to protect algorithms as intellectual property

#artificialintelligence

Ogilvy is in the midst of a project that combines robotic process automation and Microsoft Vision AI to solve a unique business problem for the advertising, marketing and PR firm. Yuri Aguiar is already thinking about how he will protect the resulting algorithms and processes from theft. "I doubt it is patent material, but it does give us a competitive edge and reduces our time-to-market significantly," says Aguiar, chief innovation and transformation officer. "I look at algorithms as modern software modules. If they manage proprietary work, they should be protected as such." Intellectual property theft has become a top concern of global enterprises.


Teaching an AI to be less biased doesn't have to make it less accurate

New Scientist

Making an artificial intelligence less biased makes it less accurate, according to conventional wisdom, but that may not be true. A new way of testing AIs could help us build algorithms that are both fairer and more effective. The data sets we gather from society are infused with historical prejudice and AIs trained on them absorb this bias. This is worrying, as the technology is creeping into areas like job recruitment and the criminal justice system.
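
The article's point is that fairness and accuracy can be measured side by side rather than traded off blindly. Below is a minimal sketch of scoring both at once, using invented predictions and group labels and a simple demographic-parity gap; it is not the researchers' actual testing method.

```python
# Sketch: evaluating a model on accuracy AND a simple fairness metric
# (demographic parity difference) at the same time. All data is invented.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

accuracy = (y_true == y_pred).mean()

# Demographic parity difference: gap in positive-prediction rates
# between the two groups.
rate_a = y_pred[group == "a"].mean()
rate_b = y_pred[group == "b"].mean()
parity_gap = abs(rate_a - rate_b)

print(f"accuracy={accuracy:.2f}, parity gap={parity_gap:.2f}")
# Testing regimes like the one the article describes track both numbers,
# showing that shrinking the gap need not always cost accuracy.
```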


This Drone Maker Is Swooping In Amid US Pushback Against DJI

WIRED

These being pandemic times, a recent visit to the Silicon Valley offices of drone startup Skydio involved slipping past dumpsters into the deserted yard behind the company's loading dock. Moments later, a black quadcopter eased out of the large open door sounding like a large and determined wasp. Skydio is best known for its "selfie drones," which use onboard artificial intelligence to automatically follow and film a person, whether they're running through a forest or backcountry skiing. The most recent model, released last fall, costs $999. The larger and more severe-looking machine that greeted WIRED has similar autonomous flying skills but aims to expand the startup's technology beyond selfies into business and government work, including the military.