Australia's Minister for Superannuation, Financial Services and the Digital Economy, Jane Hume, has assured that the country's AI ethics framework will remain voluntary for the foreseeable future. Speaking at the virtual CEDA AI Innovation in Action event on Tuesday, Hume said there were sufficient regulatory frameworks already in place and that another one would be unnecessary.

"We already have a very strong regulatory framework; we already have privacy laws, we already have consumer laws, we already have a data commissioner, we already have a privacy commissioner, we have a misconduct regulator. We have all those guardrails that already sit around the way we run our businesses," she told ZDNet.

"AI is simply a technology that's being imposed upon an existing business. It's important that technology is being used to solve problems. The problems themselves haven't really changed, so our regulations certainly have to be flexible enough to accommodate technology changes … we want to make sure that there's nothing in regulations and legislation that prevents the advancement of technology.

"But at the same time, building new regulations for technology, unless we can see a use case for it, is something that we would be reluctant to do, to over-legislate and over-prescribe."

The federal government developed the national AI ethics framework in 2019, following the release of a discussion paper by Data61, the digital innovation arm of the Commonwealth Scientific and Industrial Research Organisation (CSIRO). The discussion paper highlighted the need for AI development in Australia to be governed by a framework that ensures nothing is imposed on citizens without appropriate ethical consideration.
The framework comprises eight ethical principles: human, social and environmental wellbeing; human-centred values that respect human rights, diversity, and the autonomy of individuals; fairness; privacy protection and security of data; reliability and safety in accordance with the intended purpose of the AI systems; transparency and explainability; contestability; and accountability. Hume believes the principles have been designed in a way that makes them "kind of universal", and that industry would therefore be willing to adopt them voluntarily. "There's nothing in there that people would feel uncomfortable with, there's nothing that's too prescriptive … these are all things that we would expect."
If efforts by states and cities to pass privacy regulations curbing the use of facial recognition are anything to go by, you might fear the worst for the companies building the technology. But a recent influx of investor cash suggests the facial recognition startup sector is thriving, not suffering. Facial recognition is one of the most controversial and complex policy areas in play. The technology can be used to track where you go and what you do. It's used by public authorities and in private businesses like stores.
A lawsuit filed by the state of California on Wednesday alleges sexual harassment, gender discrimination and violations of the state's equal pay law at the video game giant Activision Blizzard. The video game studio behind the hit franchises Call of Duty, World of Warcraft and Candy Crush is facing a civil lawsuit in California over allegations of gender discrimination, sexual harassment and potential violations of the state's equal pay law. A complaint, filed by the state Department of Fair Employment and Housing on Wednesday, alleges that Activision Blizzard Inc. "fostered a sexist culture" where women were paid less than men and subjected to ongoing sexual harassment including groping. Officials at the gaming company knew about the harassment and not only failed to stop it but retaliated against women who spoke up, the complaint also alleges.
Your ability to land your next job could depend on how well you play one of the AI-powered games that companies like AstraZeneca and Postmates are increasingly using in the hiring process. Some companies that create these games, like Pymetrics and Arctic Shores, claim that they limit bias in hiring. But AI hiring games can be especially difficult to navigate for job seekers with disabilities. In the latest episode of MIT Technology Review's podcast "In Machines We Trust," we explore how AI-powered hiring games and other tools may exclude people with disabilities. And while many people in the US are looking to the federal commission responsible for employment discrimination to regulate these technologies, the agency has yet to act.
Increasingly, job seekers need to pass a series of tests in the form of artificial-intelligence games just to be seen by a hiring manager. In this third of a four-part miniseries on AI and hiring, we speak to someone who helped create these tests, and we ask who might get left behind in the process and why there isn't more policy in place. We also try out some of these tools ourselves. This miniseries on hiring was reported by Hilke Schellmann and produced by Jennifer Strong, Emma Cillekens, Anthony Green, and Karen Hao.

Jennifer: Often in life, you have to play the metaphorical game to get the win you might be chasing. But what if that game was literal? And what if winning at it could mean the difference between landing a job you've been dreaming of, or not? Increasingly, job seekers need to pass a series of "tests" in the form of artificial-intelligence games just to be seen by a hiring manager.

Anonymous job seeker: For me, being a military veteran, being able to take tests and quizzes or being under pressure is nothing for me. But I don't know why the cognitive tests gave me anxiety. I think it's because I knew that it had nothing to do with software engineering; that's what really got me.

She asked us to call her Sally because she's criticizing the hiring methods of potential employers and she's concerned about publishing her real name. She has a graduate degree in information from Rutgers University in New Jersey, with specialties in data science and interaction design. And Sally fails to see how solving a timed puzzle or playing video games like Tetris has any real bearing on her potential to succeed in her field.

Sally: So companies want to do diversity and inclusion, but you're not doing diversity and inclusion when it comes to thinking; not everyone thinks the same. So how are you inputting that diversity and inclusion when you're only selecting the people that can figure out a puzzle within 60 seconds?
In early June, border officials "quietly deployed" the mobile app CBP One at the U.S.-Mexico border to "streamline the processing" of asylum seekers. While the app will reduce manual data entry and speed up the process, it also relies on controversial facial recognition technologies and stores sensitive information on asylum seekers prior to their entry to the U.S. The issue here is not the use of artificial intelligence per se, but what it means in relation to the Biden administration's pre-election promise of civil rights in technology, including AI bias and data privacy. When the Democrats took control of both House and Senate in January, onlookers were optimistic that there was an appetite for a federal privacy bill and legislation to stem bias in algorithmic decision-making systems. This is long overdue, said Ben Winters, Equal Justice Works Fellow at the Electronic Privacy Information Center (EPIC), who works on matters related to AI and the criminal justice system.
AI ethics expert Joanna J Bryson spoke to Siliconrepublic.com about the challenges of regulating AI and why more work needs to be done. As AI becomes a bigger part of society, the ethics around the technology require more discussion, with everything from privacy and discrimination to human safety needing consideration. There have been several examples in recent years highlighting ethical problems with AI, including an MIT image library to train AI that contained racist and misogynistic terms and the controversial credit score system in China. In recent years, the EU has made conscious steps towards addressing some of these issues, laying the groundwork for proper regulation for the technology. Its most recent proposals revealed plans to classify different AI applications depending on their risks.
In the realm of international cybersecurity, "dual use" technologies are capable of both affirming and eroding human rights. Facial recognition may identify a missing child, or make anonymity impossible. Hacking may save lives by revealing key intel on a terrorist attack, or empower dictators to identify and imprison political dissidents. The same is true for gadgets. Your smart speaker makes it easier to order pizza and listen to music, but also helps tech giants track you even more intimately and target you with more ads.
While the U.S. is figuring out privacy laws at the state and federal level, artificial intelligence (AI) and augmented intelligence are evolving and becoming commonplace for businesses and consumers. These technologies are driving new privacy concerns. Years ago, consumers feared a stolen Social Security number. Now, organizations can uncover political views, purchasing habits, and much more from a person's data. The privacy repercussions are broader and deeper than ever.
In March 2020, two months after The New York Times exposed that Clearview AI had scraped billions of images from the internet to create a facial recognition database, Thomas Smith received a dossier encompassing most of his digital life. Using the recently enacted California Consumer Privacy Act, Smith asked Clearview for what they had on him. The company sent him pictures that spanned moments throughout his adult life: a photo from when he got married and started a blog with his wife, another when he was profiled by his college's alumni magazine, even a profile photo from a Python coding meetup he had attended a few years ago. "That's what really threw me: All the things that I had posted to Facebook and figured, 'Nobody's going to ever look for that,' and here it is all laid out in a database," Smith told The Verge. Clearview's massive surveillance apparatus claims to hold 3 billion photos, accessible to any law enforcement agency with a subscription, and it's likely you or people you know have been scooped up in the company's dragnet.