Civil Rights & Constitutional Law

California Sues Gaming Giant Activision Blizzard Over Unequal Pay, Sexual Harassment

NPR Technology

A lawsuit filed by the state of California on Wednesday alleges sexual harassment, gender discrimination and violations of the state's equal pay law at the video game giant Activision Blizzard. The video game studio behind the hit franchises Call of Duty, World of Warcraft and Candy Crush is facing a civil lawsuit in California over allegations of gender discrimination, sexual harassment and potential violations of the state's equal pay law. A complaint, filed by the state Department of Fair Employment and Housing on Wednesday, alleges that Activision Blizzard Inc. "fostered a sexist culture" where women were paid less than men and subjected to ongoing sexual harassment including groping. Officials at the gaming company knew about the harassment and not only failed to stop it but retaliated against women who spoke up, the complaint also alleges.

Disability rights advocates are worried about discrimination in AI hiring tools


Your ability to land your next job could depend on how well you play one of the AI-powered games that companies like AstraZeneca and Postmates are increasingly using in the hiring process. Some companies that create these games, like Pymetrics and Arctic Shores, claim that they limit bias in hiring. But AI hiring games can be especially difficult to navigate for job seekers with disabilities. In the latest episode of MIT Technology Review's podcast "In Machines We Trust," we explore how AI-powered hiring games and other tools may exclude people with disabilities. And while many people in the US are looking to the federal commission responsible for employment discrimination to regulate these technologies, the agency has yet to act.

Podcast: Playing the job market

MIT Technology Review

Increasingly, job seekers need to pass a series of tests in the form of artificial-intelligence games just to be seen by a hiring manager. In this third of a four-part miniseries on AI and hiring, we speak to someone who helped create these tests, and we ask who might get left behind in the process and why there isn't more policy in place. We also try out some of these tools ourselves. This miniseries on hiring was reported by Hilke Schellmann and produced by Jennifer Strong, Emma Cillekens, Anthony Green, and Karen Hao.

Jennifer: Often in life you have to "play the metaphorical game" to get the win you might be chasing. But what if that game was literal? And what if winning at it could mean the difference between landing a job you've been dreaming of... or not? Increasingly, job seekers need to pass a series of "tests" in the form of artificial-intelligence games just to be seen by a hiring manager.

Anonymous job seeker: For me, being a military veteran, being able to take tests and quizzes or being under pressure is nothing for me. But I don't know why the cognitive tests gave me anxiety. I think it's because I knew that it had nothing to do with software engineering; that's what really got me.

She asked us to call her Sally because she's criticizing the hiring methods of potential employers, and she's concerned about publishing her real name. She has a graduate degree in information from Rutgers University in New Jersey, with specialties in data science and interaction design. And Sally fails to see how solving a timed puzzle or playing video games like Tetris has any real bearing on her potential to succeed in her field.

"So companies want to do diversity and inclusion, but you're not doing diversity and inclusion when it comes to thinking; not everyone thinks the same. So how are you inputting that diversity and inclusion when you're only selecting the people who can figure out a puzzle within 60 seconds?"

AI legislation must address bias in algorithmic decision-making systems


All the sessions from Transform 2021 are available on-demand now. In early June, border officials "quietly deployed" the mobile app CBP One at the U.S.-Mexico border to "streamline the processing" of asylum seekers. While the app will reduce manual data entry and speed up the process, it also relies on controversial facial recognition technologies and stores sensitive information on asylum seekers prior to their entry to the U.S. The issue here is not the use of artificial intelligence per se, but what it means in relation to the Biden administration's pre-election promise of civil rights in technology, including AI bias and data privacy. When the Democrats took control of both House and Senate in January, onlookers were optimistic that there was an appetite for a federal privacy bill and legislation to stem bias in algorithmic decision-making systems. This is long overdue, said Ben Winters, Equal Justice Works Fellow of the Electronic Privacy Information Center (EPIC), who works on matters related to AI and the criminal justice system.

The importance of having accountability in AI ethics


AI ethics expert Joanna J Bryson spoke about the challenges of regulating AI and why more work needs to be done. As AI becomes a bigger part of society, the ethics around the technology require more discussion, with everything from privacy and discrimination to human safety needing consideration. There have been several examples in recent years highlighting ethical problems with AI, including an MIT image library used to train AI that contained racist and misogynistic terms, and the controversial credit score system in China. In recent years, the EU has taken deliberate steps towards addressing some of these issues, laying the groundwork for proper regulation of the technology. Its most recent proposals revealed plans to classify different AI applications according to their risks.

The Absurd Idea to Put Bodycams on Teachers Is ... Feasible?


In the realm of international cybersecurity, "dual use" technologies are capable of both affirming and eroding human rights. Facial recognition may identify a missing child, or make anonymity impossible. Hacking may save lives by revealing key intel on a terrorist attack, or empower dictators to identify and imprison political dissidents. The same is true for gadgets. Your smart speaker makes it easier to order pizza and listen to music, but also helps tech giants track you even more intimately and target you with more ads.

Navigating the Intersections of Data, Artificial Intelligence, and Privacy


While the U.S. is figuring out privacy laws at the state and federal level, artificial and augmented intelligence (AI) is evolving and becoming commonplace for businesses and consumers. These technologies are driving new privacy concerns. Years ago, consumers feared a stolen Social Security number. Now, organizations can uncover political views, purchasing habits, and much more. The repercussions of data are broader and deeper than ever.

Is there any way out of Clearview's facial recognition database?


In March 2020, two months after The New York Times exposed that Clearview AI had scraped billions of images from the internet to create a facial recognition database, Thomas Smith received a dossier encompassing most of his digital life. Using the recently enacted California Consumer Privacy Act, Smith asked Clearview for what they had on him. The company sent him pictures that spanned moments throughout his adult life: a photo from when he got married and started a blog with his wife, another when he was profiled by his college's alumni magazine, even a profile photo from a Python coding meetup he had attended a few years ago. "That's what really threw me: All the things that I had posted to Facebook and figured, 'Nobody's going to ever look for that,' and here it is all laid out in a database," Smith told The Verge. Clearview's massive surveillance apparatus claims to hold 3 billion photos, accessible to any law enforcement agency with a subscription, and it's likely you or people you know have been scooped up in the company's dragnet.

University of Alabama in Huntsville sued for allegedly violating state's 'Campus Free Speech Act'

FOX News

Young Americans for Liberty, the nation's leading youth libertarian organization, announced a free speech lawsuit against the University of Alabama in Huntsville Thursday, aiming to strike down a policy that requires students to obtain speaking permits three days in advance of campus events. The Alliance Defending Freedom, which is representing the school's YAL chapter in the suit, alleges that the policy violates Alabama's Campus Free Speech Act. "Alabama law is clear: Students don't need a permit from college officials to speak on campus, but that's exactly what the University of Alabama in Huntsville is doing -- violating the law and shutting down speech on campus," ADF counsel Michael Ross, who specializes in academic freedom, said in a statement.

No cults, no politics, no ghouls: how China censors the video game world

The Guardian

In the years after it was founded in 1999, the Swedish video game company Paradox Interactive quietly built a reputation for developing some of the best, and most hardcore, strategy games on the market. "Deep, endless, complex, unyielding games," is how Shams Jorjani, the company's chief business development officer, describes Paradox's offerings. Most of its biggest hits, such as the Middle Ages-themed Crusader Kings, or Sengoku, in which you play as a 16th-century Japanese noble, were loosely based on history. But in 2016, Paradox decided to try something a little different. Its new game, Stellaris, was a work of sprawling science fiction, set 200 years in the future. In this virtual universe, players could explore richly detailed galaxies, command their own fusion-powered starship fleets and fight with extraterrestrials to expand their space empires. Gamers could choose to play as the human race or one of many alien species. (One type of alien is a sentient crystal that eats rocks.) The game was an instant hit, selling more than 200,000 copies in its first 24 hours. Later that year, Paradox decided to take Stellaris to China. This would mean navigating the country's notoriously tricky censorship rules, but given that China was, at the time, home to an estimated 560 million gamers, the commercial appeal was irresistible. Paradox had been burned in China before.