Australia Government


Eating-disorder group's AI chatbot gave weight loss tips, activist says

Washington Post - Technology News

The National Eating Disorders Collaboration of Australia warns that an increasing focus on weight or body shape puts a person at greater risk of an eating-disorder relapse. Other warning signs include repeated self-weighing and counting calories, according to the group, which advises the Australian government. Mentioning specific weights, measurements, weight loss and quantities of food should also be avoided around people with a history of eating disorders, according to the InsideOut Institute at the University of Sydney.


Deepfake AI tech could assist and empower online predators, safety expert warns

FOX News

Criminals are taking advantage of AI technology to conduct misinformation campaigns, commit fraud and obstruct justice through deepfake audio and video. Australia's eSafety Commission has raised concerns about the potential for artificial intelligence (AI) to assist predators in grooming children online as the country debates restrictions on the emerging technology. Australian eSafety Commissioner Julie Inman Grant posted on Twitter that "the manipulative power of generative AI to execute on grooming and sextortion is no longer speculative." "eSafety is already receiving cyberbullying reports and image-based abuse reports around deepfakes," she wrote. "The fact is AI has been 'exfiltrated into the wild' without guardrails."


NSW gov takes cautious approach with generative AI - iTnews

#artificialintelligence

The NSW government will be taking a "deliberate but cautious" approach when implementing new artificial intelligence technology, in line with citizens' trust around data and AI usage. NSW government chief data scientist and industry professor at UTS Dr Ian Oppermann told an Infosys and Trans-Tasman Business Circle event that trust is a "very big issue", drawing on his work on the NSW AI assurance framework and a multitude of other government policies. "Ultimately, there are a whole lot of other elements around trust and demonstration of trustworthiness," Oppermann said. "We need to explore what happens when things go wrong. We need to be very clear about what we will not do with data in order to help build confidence that we're behaving appropriately and demonstrating trustworthiness."


Killer robot dogs that are controlled by soldiers' MINDS are trialed by Australian army

Daily Mail - Science & tech

Soldiers controlling a robot dog with their minds as they patrol a dusty road and sweep a dilapidated building may sound like science fiction, but it is the scene in a real-world demonstration. The Australian Army has perfected mind-control capabilities with eight sensors, neatly packed inside a helmet, that work in tandem with a Microsoft HoloLens. The innovation features an AI decoder that translates a soldier's brain signals into explainable instructions that are sent to the robotic quadruped, allowing humans to stay focused on their surroundings. A new video shows military personnel conducting a simulated patrol clearance using the robot dog, which was instructed to sweep a facility using what it read from a person's brain waves - and with 94 percent accuracy. The system was developed by the University of Technology Sydney, which first unveiled the innovation last year but recently published a new paper detailing the work. 'The user used our augmented brain–robot interface (aBRI) platform to control the robot systems,' reads the paper, published by the American Chemical Society on March 16.
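As a rough illustration of the decode-and-dispatch pattern the article describes (this is not the UTS aBRI system itself; the feature extraction, nearest-centroid classifier, command names and window sizes below are all assumptions made purely for illustration), a brain-signal-to-robot-command pipeline of this general shape might look like the following Python sketch.

```python
# Hedged sketch of a brain-signal -> robot-command pipeline of the general kind
# described above. This is NOT the UTS aBRI implementation: the feature
# extraction, nearest-centroid decoder and command set are placeholder assumptions.
import numpy as np

COMMANDS = ["hold_position", "sweep_building", "follow_operator", "return_to_base"]

def extract_features(eeg_window: np.ndarray) -> np.ndarray:
    """Reduce a (channels x samples) EEG window to one log-power value per channel."""
    return np.log(np.mean(eeg_window ** 2, axis=1) + 1e-9)

def decode_command(features: np.ndarray, centroids: np.ndarray) -> str:
    """Nearest-centroid decoder: choose the command whose training centroid
    lies closest to the current feature vector."""
    distances = np.linalg.norm(centroids - features, axis=1)
    return COMMANDS[int(np.argmin(distances))]

def dispatch(command: str) -> None:
    """Stand-in for sending a high-level instruction to the quadruped."""
    print(f"robot <- {command}")

# Simulated use: 8 helmet sensors, 250 samples per window, random training centroids.
rng = np.random.default_rng(0)
centroids = rng.normal(size=(len(COMMANDS), 8))
window = rng.normal(size=(8, 250))
dispatch(decode_command(extract_features(window), centroids))
```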


How AI fooled Centrelink, and could fool you

#artificialintelligence

Thanks to artificial intelligence, faking someone's voice is easier than ever - all you need is a few minutes of audio. An investigation by Guardian Australia has found that this technology is able to fool a voice identification system used by the Australian government to secure the private information of millions of people. Data and interactives editor Nick Evershed explains how he discovered this security flaw, and AI expert Toby Walsh explores how this technology could make it easier than ever to steal someone's identity or commit scams.


Cybersecurity funds should go towards beefing up Centrelink voice authentication, Greens say

The Guardian

The federal government should use some of the $10bn allocated to cybersecurity defences in the budget to combat people using AI to bypass biometric security measures, including voice authentication, a Greens senator has said. On Friday Guardian Australia reported that Centrelink's voice authentication system can be tricked using a free online AI cloning service and just four minutes of audio of the user's voice. After the Guardian Australia journalist Nick Evershed cloned his own voice, he was able to access his account using his cloned voice and his customer reference number. The voiceprint service, provided by the Microsoft-owned voice software company Nuance, was being used by 3.8 million Centrelink clients at the end of February, and more than 7.1 million people had verified their voice using the same system with the Australian Taxation Office. Despite being alerted to the vulnerability last week, Services Australia has not indicated it will change its use of voice ID, saying the technology is a "highly secure authentication method" and the agency "continually scans for potential threats and makes ongoing enhancements to ensure customer security".


Can Drones And Artificial Intelligence Keep Us Safe From Sharks?

#artificialintelligence

You might be rolling your eyes as you see the drone take off to the skies and hover over the Australian coastline, camera angled straight down towards the glistening turquoise water. "Another TikTok influencer trying to get the perfect shot," you grumble to yourself. But if you look closely at the pilot, you'll notice they've got a sign next to them that says "Keep Clear" in bright yellow and red letters. It's an Australian surf lifesaver, using the drone to spot sharks at the beach before they get too close to swimmers like yourself. Drones have become a helpful tool for spotting sharks from the skies.


When Algorithms Rule, Values Can Wither

#artificialintelligence

Interest in the possibilities afforded by algorithms and big data continues to blossom as early adopters gain benefits from AI systems that automate decisions as varied as making customer recommendations, screening job applicants, detecting fraud, and optimizing logistical routes.1 But when AI applications fail, they can do so quite spectacularly.2 Consider the recent example of Australia's "robodebt" scandal.3 In 2015, the Australian government established its Income Compliance Program, with the goal of clawing back unemployment and disability benefits that had been paid to recipients inappropriately. It set out to identify overpayments by analyzing discrepancies between the annual income that individuals reported and the income assessed by the Australian Tax Office.
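The failure mode is easy to see in a simplified sketch. The Python snippet below (the flat income-averaging rule, variable names and tolerance are assumptions made for illustration, not the actual Services Australia system) shows how spreading an annual tax-office figure evenly across fortnights and comparing it with fortnightly self-reports can flag people who reported their income accurately.

```python
# Illustrative sketch only: a simplified version of the kind of income-averaging
# discrepancy check reported in the robodebt scheme. The flat averaging rule,
# names and tolerance are assumptions, not the actual government system.

FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(ato_annual_income: float) -> float:
    """Spread an annual ATO income figure evenly across all fortnights."""
    return ato_annual_income / FORTNIGHTS_PER_YEAR

def flag_discrepancy(reported_fortnightly: list[float],
                     ato_annual_income: float,
                     tolerance: float = 1.0) -> bool:
    """Flag a 'debt' whenever the averaged income exceeds what was reported
    in any fortnight, ignoring that real earnings may be lumpy."""
    averaged = averaged_fortnightly_income(ato_annual_income)
    return any(averaged > reported + tolerance for reported in reported_fortnightly)

# A casual worker who earned $13,000 in one half of the year and nothing in the
# other half reported accurately, yet averaging makes every zero-income fortnight
# look like $500 of undeclared earnings.
reported = [1000.0] * 13 + [0.0] * 13
print(flag_discrepancy(reported, ato_annual_income=13_000.0))  # True: a false positive
```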


Annual GWP at Australia's agencies passes $7 billion - insuranceNEWS.com.au

#artificialintelligence

The Underwriting Agencies Council (UAC) says annual gross written premium at Australian agencies is now around $7.5 billion, and technology-enabled firms are leading the way as the sector expands dramatically. UAC's Sydney-based general manager William Legge says the council now has more than 120 agency members, even as mergers and acquisitions have created fewer, larger agencies and a build-up of "cluster groups" owning multiple specialist agency brands. As major carriers relinquish capacity in some lines, the agency market is filling gaps in coverage, Mr Legge says, and brokers have found agencies to be a trusted, reliable market that can provide responsive service, quick turnaround times, and bespoke, tailored products for hard-to-place risks. Insurance consulting firm Xceedance offers its MGA Agility Suite tailored platform to agencies, encompassing policy administration, underwriting, distribution, a broker portal and reporting functionality. Xceedance works with agencies and insurers to facilitate and support end-to-end insurance processes across claims, finance and accounting, insurance operations, catastrophe modelling, underwriting, actuarial and analytical services, policy services and data management.


Check, mate: A lesson in the need for stronger AI regulation

#artificialintelligence

Disturbing footage emerged this week of a chess-playing robot breaking the finger of a seven-year-old child during a tournament in Russia. Public commentary on this event highlights some concern in the community about the increasing use of robots in our society. Some people joked on social media that the robot was a "sore loser" and had a "bad temper". Of course, robots cannot actually express real human characteristics such as anger (at least, not yet). But these comments do demonstrate increasing concern in the community about the "humanisation" of robots.