Litigation


Humans Against the Machines: Is Predictive Coding Really Better Than Humans? – Part 1

#artificialintelligence

Technological advancements are significantly influencing the legal services landscape. At unprecedented rates, corporations, law firms, and state and federal enforcement agencies are accepting and adopting the use of advanced technology in legal matters, including automation, machine learning, and algorithm-driven data analytics. With respect to discovery, the expansion of technology-assisted review over the past decade has been well documented and debated. The wide embrace of technology-assisted review – or "TAR" for short – has met with acclaim from clients and their counsel. It is essentially undisputed by now, for instance, that TAR has proven to help produce quality results while also achieving quantifiable cost savings.


Surge Pricing, Artificial Intelligence, and Responsibility

#artificialintelligence

On my first work trip for Grab, to Jakarta on 14 January 2016, multiple terrorist bombs exploded a couple of miles from the GrabBike office where I had just arrived. People were fleeing cafes and restaurants around the attack site. My new colleagues were shaken, glad to be safe, looking to help. There was news of crowds on the streets trying to get away, confirmed by a spike in booking requests from the blocks around the explosion. My colleagues remembered the 2002 Bali bombings, and knew we should get people to spread out.


TikTok agrees to pay $92 million to settle teen privacy class-action lawsuit

ZDNet

TikTok has agreed to pay a proposed $92 million to settle a class-action lawsuit alleging the company invaded user privacy. The settlement, if approved, would lay to rest claims that the video content-sharing app, owned by Beijing-headquartered ByteDance, wrongfully collected the private and biometric data of users including teenagers and minors. The class-action lawsuit originated from 21 separate class-action lawsuits filed in California and Illinois last year. If accepted, the settlement -- filed in the US District Court for the Northern District of Illinois -- would require the creation of a compensation fund for TikTok users. In addition, TikTok would be required to launch a new "privacy compliance" training program and would need to take further measures to protect user data.


TikTok agrees to $92 million settlement in class action privacy lawsuit

Mashable

TikTok's parent company ByteDance has agreed to pay a $92 million settlement in a lawsuit alleging it violated Illinois' biometric privacy laws. The company still disputes the accusations against it, of course, but right now it just wants to move on from the whole thing. The federal lawsuit combined 21 separate class action suits from across multiple districts into one big Katamari lawsuit, claiming that both TikTok and its predecessor Musical.ly collected users' private data without consent. "Specifically, Plaintiffs allege that the TikTok app infiltrates its users' devices and extracts a broad array of private data including biometric data and content that Defendants use to track and profile TikTok users for the purpose of, among other things, ad targeting and profit," reads the settlement agreement filed to the U.S. District Court for the Northern District of Illinois on Thursday. The complainants also accused TikTok of using the facial recognition technology in its video filters to gather data such as a user's age and ethnicity, and expressed concern about TikTok storing data on servers outside the U.S. TikTok has repeatedly been accused of sharing user data with the Chinese government, or at the very least of being a virtual treasure trove of information the government could dip into if it chose.


TikTok To Pay $92 Million To Settle Class-Action Suit Over 'Theft' Of Personal Data

NPR Technology

TikTok on Wednesday agreed to pay $92 million to settle claims stemming from a class-action lawsuit alleging the app illegally tracked and shared the personal data of users without their consent. TikTok has agreed to pay $92 million to settle dozens of lawsuits alleging that the popular video-sharing app harvested personal data from users, including information gathered using facial recognition technology, without consent and shared the data with third parties, some of which were based in China. The proposed settlement, which lawyers in the case have called among the largest privacy-related payouts in history, applies to 89 million TikTok users in the U.S. whose personal data was allegedly tracked and sold to advertisers in violation of state and federal law. "First, it provides compensation for TikTok users, but equally as important, it ensures TikTok will respect its users' privacy going forward," Katrina Carroll, one of the lawyers for TikTok users, said.


New York City's Surveillance Battle Offers National Lessons

WIRED

In January, when New York City's Public Oversight of Surveillance Technology Act went into effect, the New York City Police Department was suddenly forced to detail the tools it had long kept from public view. But instead of giving New Yorkers transparency, the NYPD gave error-filled, boilerplate statements that hide almost everything of value. Almost none of the policies list specific vendors, surveillance tool models, or information-sharing practices. The department's facial recognition policy says it can share data "pursuant to on-going criminal investigations, civil litigation, and disciplinary proceedings," a standard so broad it's largely meaningless. This marks the greatest test yet of Community Control of Police Surveillance (CCOPS), a growing effort to ensure that the public can take back control over how their communities are surveilled, deciding whether tools like facial recognition, drones, and predictive policing are acceptable for their neighborhoods.


Riot Games CEO Nicolas Laurent accused of gender-based harassment, misconduct in new lawsuit

Washington Post - Technology News

In 2018, Riot Games, the developer and publisher behind games such as "League of Legends" and "Valorant," made headlines after a Kotaku exposé about the company's culture of sexism. The article outlined an environment in which women were regularly passed over for promotions, and a company with an ingrained "bro culture," where demeaning and discriminatory behavior was viewed as normal. Kotaku's story led to a class action gender discrimination lawsuit. It also spawned two separate investigations by regulators in California, where Riot is based.



Trump pardons Anthony Levandowski, who stole trade secrets from Google

Mashable

Donald Trump is on his way out of the White House, but that didn't stop him from pardoning 73 people and commuting the sentences of another 70 on the last day of his presidency. One name on that list is Anthony Levandowski, who was sentenced to 18 months in prison for stealing trade secrets from the Google-owned, self-driving car company Waymo. Levandowski was a co-founder of Google's self-driving car division before leaving the tech giant in 2016 to start a self-driving truck company called Otto. That company was subsequently acquired by Uber, and Waymo filed a lawsuit alleging that its confidential information had ended up in the hands of Uber. Levandowski was looking at a potential 10-year sentence, but he eventually pleaded guilty to trade secret theft, thus reducing his prison term.


Faster Convergence in Deep-Predictive-Coding Networks to Learn Deeper Representations

arXiv.org Artificial Intelligence

Deep-predictive-coding networks (DPCNs) are hierarchical, generative models that rely on feed-forward and feed-back connections to modulate latent feature representations of stimuli in a dynamic and context-sensitive manner. A crucial element of DPCNs is a forward-backward inference procedure to uncover sparse states of a dynamic model, which are used for invariant feature extraction. However, this inference and the corresponding backward network-parameter updates are major computational bottlenecks. They severely limit the network depths that can be reasonably implemented and easily trained. We therefore propose an optimization strategy, with better empirical and theoretical convergence, based on accelerated proximal gradients. We demonstrate that the ability to construct deeper DPCNs leads to receptive fields that capture the entirety of the objects on which the networks are trained, improving the feature representations. It yields completely unsupervised classifiers that surpass convolutional and convolutional-recurrent autoencoders and are on par with convolutional networks trained in a supervised manner, despite the DPCNs having orders of magnitude fewer parameters.
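The abstract's key ingredient is inference via accelerated proximal gradients. As a rough illustration of that family of methods only (not the paper's actual DPCN inference procedure), the sketch below applies a FISTA-style accelerated proximal gradient loop to a generic L1-regularized sparse-coding objective; the dictionary D, the regularization weight lam, the step size, and the helper names are assumptions made purely for illustration.

```python
import numpy as np

def soft_threshold(x, thresh):
    # Proximal operator of the L1 norm: shrink each entry toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

def fista_sparse_states(y, D, lam=0.1, n_iters=100):
    """Accelerated proximal gradient (FISTA) for
        min_x 0.5 * ||y - D x||^2 + lam * ||x||_1,
    an illustrative stand-in for inferring sparse states from an
    observation y under a linear generative model with dictionary D."""
    n = D.shape[1]
    x = np.zeros(n)                  # current sparse state estimate
    z = x.copy()                     # extrapolated (momentum) point
    t = 1.0                          # momentum scalar
    L = np.linalg.norm(D, 2) ** 2    # Lipschitz constant of the smooth gradient
    for _ in range(n_iters):
        grad = D.T @ (D @ z - y)                       # gradient of the smooth term at z
        x_new = soft_threshold(z - grad / L, lam / L)  # proximal gradient step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # Nesterov-style extrapolation
        x, t = x_new, t_new
    return x

# Toy usage: recover a sparse code for a synthetic observation.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
x_true = rng.standard_normal(256) * (rng.random(256) < 0.05)
y = D @ x_true
x_hat = fista_sparse_states(y, D, lam=0.1, n_iters=200)
```

The momentum (extrapolation) step is what distinguishes the accelerated variant from plain proximal gradient descent and is the source of its faster theoretical convergence rate.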