Taking On Tech is an informative series that explores artificial intelligence, data science, algorithms, and mass censorship. In this inaugural report, For(bes) The Culture kicks things off with Dr. Timnit Gebru, a former researcher and co-lead of Google's Ethical AI team. When Gebru was forced out of Google after refusing to retract a research paper that had already been cleared by Google's internal review process, a conversation about the tech industry's inherent diversity problem resurfaced. The paper raised concerns about algorithmic bias in machine learning and the latent perils that AI presents for marginalized communities. Around 1,500 Google employees signed a letter in protest, calling for accountability and answers over what they saw as her unethical firing.
'Phrenology' has an old-fashioned ring to it. It sounds like it belongs in a history book, filed somewhere between bloodletting and velocipedes. We'd like to think that judging people's worth based on the size and shape of their skull is a practice that's well behind us. However, phrenology is once again rearing its lumpy head. In recent years, machine-learning algorithms have promised governments and private companies the power to glean all sorts of information from people's appearance.
As the use of AI accelerates around the world, policymakers are asking what frameworks should guide the design and use of AI, and how it can benefit society. The EU is the first institution to take a major step toward answering these questions through a proposed legal framework for AI released on 21 April 2021. In doing so, the EU is seeking to establish a safe environment for AI innovation and to position itself as a leader in setting "the global gold standard" for regulating AI. The proposal regulates specific uses of AI rather than the technology itself, a positive aspect given that AI is a broad set of technologies, tools and applications. Shifting the focus away from AI technology, which can have significantly different impacts depending on the application for which it is used, helps to mitigate the risk of divergent requirements for AI products and services.
On July 20, California filed an explosive workplace discrimination and harassment lawsuit against Activision Blizzard, publisher of immensely popular video games including World of Warcraft, Overwatch, and the Call of Duty franchise. The suit has prompted a wave of responses from employees, other game studios, and players. It alleges a "frat bro" culture was allowed to flourish in the office, creating an environment in which women were sexually harassed and discriminated against in advancement and compensation decisions. Activision Blizzard is one of the largest video game publishers in the world, owning studios that have created and released some of the most popular titles of the past decade. Its 2016 acquisition of Candy Crush publisher King expanded its audience by millions more.
You might've heard about face recognition and its different applications. To put it in simple terms, a face recognition system can identify people in videos or static images. Many fields use the technology for surveillance and tracking people. Some countries use face recognition systems more widely than others. But while you may hear about it more frequently now, the technology has existed for decades.
AI is evolving at a fast pace. Financial organizations are already using AI technologies to identify fraud and unusual transactions, personalize customer service, help make decisions on creditworthiness, apply natural language processing to text documents, and support cybersecurity and general risk management. Over the past decades, banks have been improving their methods of interacting with customers, tailoring modern technology to the specific character of their work. For example, the first ATMs were installed in the 1960s, and ten years later payment cards for transactions were already in use.
An artificial intelligence system is capable of being an "inventor" under Australian patent law, the federal court has ruled, in a decision that could have wider intellectual property implications. University of Surrey professor Ryan Abbott has launched more than a dozen patent applications across the globe, including in the UK, US, New Zealand and Australia, on behalf of US-based Dr Stephen Thaler. They seek to have Thaler's artificial intelligence device known as Dabus (a device for the autonomous bootstrapping of unified sentience) listed as the inventor. The applications claimed Dabus, which is made up of artificial neural networks, invented an emergency warning light and a type of food container, among other inventions. Several countries, including Australia, had rejected the applications, stating a human must be named the inventor.
All the sessions from Transform 2021 are available on-demand now. Ethics and artificial intelligence have become increasingly intertwined due to the pervasiveness of AI. But researchers, creators, corporations, and governments still face major challenges if they hope to address some of the more pressing concerns around AI's impact on society. Much of this comes down to foresight -- being able to adequately predict what problems a new AI product, feature, or technology could create down the line, rather than focusing purely on short-term benefits. "If you do believe in foresight, then it should become part of what you do before you make the product," AI researcher and former Googler Margaret Mitchell said during a fireside chat at VentureBeat's Transform 2021 event today.
Representatives from Google have told an Australian Parliamentary committee looking into foreign interference that the country has not been the target of coordinated influence campaigns. "We've not seen the sort of coordinated foreign influence campaigns targeted at Australia that we have with other jurisdictions, including the United States," Google director of law enforcement and information security Richard Salgado said. "Some of the disinformation campaigns that originate outside Australia, even if not targeting Australia, may affect Australia as collateral ... but not as a target of the campaign. We have found no instances of foreign coordinated influence campaigns targeting Australia."

While acknowledging that campaigns that reach Australia do exist, he reiterated they have not specifically targeted Australia. "Some of these campaigns are broad enough that the disinformation could be, sort of, divisive in any jurisdiction in which it is consumed, even if it's not targeting that jurisdiction," Salgado told the Select Committee on Foreign Interference Through Social Media. "Google services, YouTube in particular, which is where we have seen most of these kinds of campaigns run, isn't really very well designed for the purpose of targeting groups to create the division that some of the other platforms have suffered, so it isn't actually all that surprising that we haven't seen this on our services."

Appearing alongside Salgado on Friday was Google Australia and New Zealand director of government affairs and public policy Lucinda Longcroft, who told the committee her organisation has been in close contact with the Australian government as it looks to prevent disinformation from emerging in the lead-up to the next federal election. Additionally, the pair said that Google undertakes a "constant tuning" of the artificial intelligence and machine learning technology it uses.
The company said it also constantly adjusts its policies and strategies to avoid moments of surprise, in which Google could find itself unable to handle a shift in attacker strategy or in the volume of attacks. Appearing earlier in the week before the Parliamentary Joint Committee on Corporations and Financial Services, Google VP of product membership and partnerships Diana Layfield said her company does not monetise data from Google Pay in Australia. "I suppose you could argue that there are non-transaction data aspects -- so people's personal profile information," she added. "If you sign up for an app, you have to have a Google account.