Let me begin with a disclaimer: I am no expert in Australian law. However, it seems to me that the Federal Court of Australia (FCA) made a series of missteps in the DABUS case (Thaler v Commissioner of Patents [2021] FCA 879) that led it to conclude that an AI-driven system can be an inventor. As similar DABUS cases are currently pending before the European Patent Office and the UK Court of Appeal, and a number of companies employ AI in their inventive endeavours, I find it relevant to discuss the FCA's reasoning. DABUS is an AI-driven system currently on a world tour of courts and patent offices, claiming to have invented the following subject matter: a food container, and devices and methods for attracting enhanced attention.
Representatives from Google have told an Australian parliamentary committee looking into foreign interference that the country has not been the target of coordinated influence campaigns.

"We've not seen the sort of coordinated foreign influence campaigns targeted at Australia that we have with other jurisdictions, including the United States," Google director of law enforcement and information security Richard Salgado said. "Some of the disinformation campaigns that originate outside Australia, even if not targeting Australia, may affect Australia as collateral ... but not as a target of the campaign.

"We have found no instances of foreign coordinated influence campaigns targeting Australia."

While acknowledging that campaigns which reach Australia do exist, he reiterated that they have not specifically targeted the country.

"Some of these campaigns are broad enough that the disinformation could be, sort of, divisive in any jurisdiction in which it is consumed, even if it's not targeting that jurisdiction," Salgado told the Select Committee on Foreign Interference Through Social Media. "Google services, YouTube in particular, which is where we have seen most of these kinds of campaigns run, isn't really very well designed for the purpose of targeting groups to create the division that some of the other platforms have suffered, so it isn't actually all that surprising that we haven't seen this on our services."

Appearing alongside Salgado on Friday was Google Australia and New Zealand director of government affairs and public policy Lucinda Longcroft, who told the committee her organisation has been in close contact with the Australian government as it looks to prevent disinformation from emerging in the lead-up to the next federal election.

Additionally, the pair said that Google undertakes a "constant tuning" of the artificial intelligence and machine learning technology it uses.
Google said it also constantly adjusts policies and strategies to avoid moments of surprise, in which it could find itself unable to handle a shift in attacker strategy or in the volume of attacks.

Appearing earlier in the week before the Parliamentary Joint Committee on Corporations and Financial Services, Google VP of product membership and partnerships Diana Layfield said her company does not monetise data from Google Pay in Australia.

"I suppose you could argue that there are non-transaction data aspects -- so people's personal profile information," she added. "If you sign up for an app, you have to have a Google account."
Earlier this year, the Australian Federal Police (AFP) admitted to using a facial recognition tool to help counter child exploitation, despite not having an appropriate legislative framework in place. The tool came from Clearview AI, a controversial New York-based startup that has scraped social media networks for people's photos and built one of the biggest facial recognition databases in the world; it markets its facial recognition software primarily to law enforcement. The AFP previously said that while it had not adopted Clearview AI as an enterprise product and had not entered into any formal procurement arrangements with the company, it did use a trial version. Documents published by the AFP under the Freedom of Information Act 1982 confirmed that the AFP-led Australian Centre to Counter Child Exploitation (ACCCE) registered for a free trial of the Clearview AI facial recognition tool and ran a pilot of the system from 2 November 2019 to 22 January 2020.
Artificial intelligence (AI) might be technology's Holy Grail, but Australia's Human Rights Commissioner Edward Santow has warned about the need for responsible innovation and an understanding of the challenges new technology poses for basic human rights. "AI is enabling breakthroughs right now: Healthcare, robotics, and manufacturing; pretty soon we're told AI will bring us everything from the perfect dating algorithm to interstellar travel -- it's easy in other words to get carried away, yet we should remember AI is still in its infancy," Santow told the Human Rights & Technology conference in Sydney in July. Santow was launching the Human Rights and Technology Issues Paper, described as the beginning of a major project by the Human Rights Commission to protect the rights of Australians in a new era of technological change. The paper [PDF] poses questions centred on what protections are needed when AI is used in decisions that affect people's basic rights, and also asks what is required from lawmakers, governments, researchers, developers, and tech companies big and small. Pointing to Microsoft's AI Twitter bot Tay, which in March 2016 showed the ugly side of humanity -- at least as present on social media -- Santow said the incident is a key example of why AI must be got right before it is unleashed on humans.
An SMS survey conducted by polling company Roy Morgan last weekend found that most respondents -- 67.5 percent -- were not concerned by Australia's upcoming facial recognition system. Roy Morgan asked the question -- "Under anti-terror measures, state governments will provide driver licence photos for mass facial recognition technology. Does this concern you?" -- and found that the younger the cohort, the more concerned it was, but no age bracket had a majority of respondents who were more concerned than not. Broken down by state, Victoria and NSW were the most concerned, at 38 percent and 34 percent respectively, while Queensland and South Australia tied as the least concerned, with less than a quarter of respondents worried.