Operationalizing Responsible AI at Scale: CSIRO Data61's Pattern-Oriented Responsible AI Engineering Approach

Communications of the ACM

For the world to realize the benefits brought by AI, it is important to ensure artificial intelligence (AI) systems are responsibly developed and used throughout their entire life cycle, and trusted by the humans expected to rely on them. This goal has triggered a significant national effort to realize responsible AI (RAI) in Australia. CSIRO Data61 is the data and digital specialist arm of Australia's national science agency. In 2019, CSIRO Data61 worked with the Australian government to conduct the AI Ethics Framework research. This work led to the release of eight AI ethics principles to ensure Australia's adoption of AI is safe, secure, and reliable. Turning such high-level AI ethics principles into real-life practice, however, remains challenging.


Can artificial intelligence now influence human decision-making? - Dataconomy

#artificialintelligence

Artificial intelligence has made many breakthroughs in the last decade, including beating champion players at Jeopardy!, learning to identify cats, seeing better than humans, and driving cars autonomously. A new study (PDF) by researchers from the Commonwealth Scientific and Industrial Research Organisation's (CSIRO) Data61, along with the Australian National University and researchers from Germany, has determined that AI can influence human decision-making. The research involved having humans play three games against a computer. In the first two experiments, people were asked to click on red- or blue-colored boxes with fake currency as a prize. In the third, participants were given the role of an investor and asked to make investment decisions, with the AI acting as the trustee.


Data61's Year in Review – Algorithm

#artificialintelligence

We'll be working with our partners across government, industry and academia to improve data management, analysis, and decision making. We're working with government to apply AI and machine learning for better service delivery in areas like transport, infrastructure and the environment. Our work in developing privacy-preserving data sharing technologies, cybersecurity, digital twins, robots, and advanced data analytics will support Australian industry in creating new and better products, services, jobs and export opportunities.


Artificial Intelligence Roadmap - Data61

#artificialintelligence

Artificial Intelligence: Solving problems, growing the economy and improving our quality of life outlines the importance of action for Australia to capture the benefits of artificial intelligence (AI), estimated to be worth AU$22.17. Published by the Australian Government in November 2019, and co-developed by CSIRO's Data61 and the Department of Industry, Innovation and Science, the report identifies strategies to help develop a national AI capability to boost the productivity of Australian industry, create jobs and economic growth, and improve the quality of life for current and future generations. The roadmap identifies three high-potential areas of AI specialisation for Australia, based on the opportunity to solve significant problems at home, export the solutions to the world, and build on Australia's existing strengths. This report is intended to help guide future investment in AI and machine learning, and accompanies Artificial Intelligence: Australia's Ethics Framework, a discussion paper prepared by CSIRO's Data61 and published by the Australian Government in April 2019.


Data61 using AI and gamification to diagnose mental health patients ZDNet

#artificialintelligence

Data61, the innovation arm of the Commonwealth Scientific and Industrial Research Organisation (CSIRO), has used artificial intelligence (AI) and gamification to help psychiatrists and other clinicians accurately diagnose patients with mental health disorders and help improve overall mental health research. Speaking to ZDNet during D61 Live in Sydney on Wednesday, the lead author of the research, Amir Dezfouli, said the idea brings together his two areas of specialty: neuroscience and AI. "We know from neuroscience that most of the mental health disorders affect how we make decisions. One of the easiest ways to assess that is to complete a simple task or – in this case – a simple computer game, which allows us to record a patient's behaviour," he said. "We then use machine learning AI to analyse this complex data set. This gives us an idea about the underlying pathology and gives us a diagnosis of mental health disorders."


Computer game to assist clinicians in diagnosing mental health disorders

#artificialintelligence

A team of researchers led by CSIRO's Data61, the data and digital specialist arm of Australia's national science agency, has developed a novel technique that could assist psychiatrists and other clinicians to diagnose and characterize complex mental health disorders, potentially enabling more effective treatments. Announced today at D61 LIVE in Sydney, the researchers revealed that, using a simple computer game and artificial intelligence techniques, they were able to identify behavioral patterns in subjects with depression and bipolar disorder, down to subtle individual differences in each group. The study included 101 participants: 34 with depression, 33 with bipolar disorder, and a control group of 34 subjects. The computer game presents individuals with two choices, and tracks their behavior as they respond. The complex data collected from the game is analyzed through artificial neural networks (brain-inspired systems intended to replicate the way that humans learn), which are able to disentangle the nuanced behavioral differences between healthy individuals and those with depression or bipolar disorder.
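The pipeline described above — record each participant's choices in a two-option game, then train a classifier on the recorded behaviour — can be sketched in miniature. This is not the study's model or data: the switch probabilities, the simulated "groups", and the summary features below are all invented for illustration, and a plain logistic regression stands in for the artificial neural networks the researchers used.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_choices(switch_prob, n_trials=200):
    """Simulate a two-choice game: after each trial's (random) loss,
    the player switches options with probability `switch_prob` --
    a toy stand-in for the behavioural differences the study measured."""
    choices, choice = [], 0
    for _ in range(n_trials):
        choices.append(choice)
        lost = rng.random() < 0.5              # random outcome each trial
        if lost and rng.random() < switch_prob:
            choice = 1 - choice                # switch to the other option
    return np.array(choices)

def features(choices):
    """Summarise a choice sequence as a small feature vector:
    how often the player switched, and which option they favoured."""
    switch_rate = np.mean(choices[1:] != choices[:-1])
    bias = np.mean(choices)
    return np.array([switch_rate, bias])

# Two hypothetical groups with different switching tendencies.
X = np.array([features(simulate_choices(p))
              for p in [0.2] * 50 + [0.8] * 50])
y = np.array([0] * 50 + [1] * 50)

# Logistic regression by gradient descent -- far simpler than the
# neural networks in the study, but the same classify-behaviour idea.
w, b = np.zeros(2), 0.0
for _ in range(5000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

acc = np.mean(((X @ w + b) > 0) == y)
print(f"training accuracy: {acc:.2f}")
```

The interesting part of the real work is that recurrent networks can exploit the full trial-by-trial sequence rather than hand-picked summary statistics like the two used here.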


Cyber Security Research Centre, Data61, Penten join forces to build AI-enabled defence systems ZDNet

#artificialintelligence

Cyber Security Cooperative Research Centre (CSCRC), together with Data61, the innovation arm of the Commonwealth Scientific and Industrial Research Organisation (CSIRO), and cybersecurity startup Penten, have announced a joint research project that will focus on developing artificial intelligence (AI) enabled cybersecurity defence mechanisms. Under the arrangement announced at D61 Live on Wednesday, Penten will have access to Data61's AI research, which it will use to extend its existing work building AI-enabled technology such as "cyber traps" and "decoys". According to Penten CEO Matthew Wilson, using AI will help speed up the creation of cyber traps and make them more realistic. "Our solutions use artificial intelligence to learn the patterns of activity and content from surrounding computers and data. We then use this information to create realistic and believable mimics. This means we can deliver suitable content extremely efficiently, tailored to a customer environment and with minimal effort on the part of the defender," he said.
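The "learn patterns of content, then generate believable mimics" idea can be illustrated with a deliberately tiny model. Penten's actual systems are not public and certainly use far richer generative models; the observed file names and the character-level Markov chain below are purely hypothetical.

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical "surrounding content" a decoy should blend into:
# file names observed on the machine being defended.
observed = [
    "budget_2019_final.xlsx", "budget_2020_draft.xlsx",
    "minutes_2019_final.docx", "minutes_2020_draft.docx",
    "report_2019_final.pdf", "report_2020_draft.pdf",
]

# Learn character-level bigram transitions from the observed names.
transitions = defaultdict(list)
for name in observed:
    padded = "^" + name + "$"                  # start/end markers
    for a, b in zip(padded, padded[1:]):
        transitions[a].append(b)

def generate_decoy(max_len=40):
    """Random-walk the learned transitions to produce a new name
    that statistically resembles the observed ones."""
    out, ch = [], "^"
    for _ in range(max_len):
        ch = random.choice(transitions[ch])
        if ch == "$":
            break
        out.append(ch)
    return "".join(out)

decoys = [generate_decoy() for _ in range(5)]
for d in decoys:
    print(d)
```

Even this toy version shows the defender's economics Wilson describes: once the patterns are learned, plausible decoy content can be produced in bulk with no manual effort.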


Data61 on Twitter

#artificialintelligence

The future of AI, smart cities transforming urban life, trust in a trustless world, what our future will look like by 2040, capitalising on the robot revolution - what does your D61 LIVE 2019 look like?


Researchers develop 'vaccine' against attacks on machine learning

#artificialintelligence

Algorithms 'learn' from the data they are trained on to create a machine learning model that can effectively perform a given task, such as making predictions or accurately classifying images and emails, without needing specific instructions. These techniques are already used widely, for example to identify spam emails, diagnose diseases from X-rays, and predict crop yields, and they will soon drive our cars. While the technology holds enormous potential to positively transform our world, artificial intelligence and machine learning are vulnerable to adversarial attacks, a technique in which malicious input data is used to fool machine learning models and cause them to malfunction. Dr Richard Nock, machine learning group leader at CSIRO's Data61, said that by adding a layer of noise (i.e. an adversary) over an image, attackers can deceive machine learning models into misclassifying the image. "Adversarial attacks have proven capable of tricking a machine learning model into incorrectly labelling a traffic stop sign as a speed sign, which could have disastrous effects in the real world." "Our new techniques prevent adversarial attacks using a process similar to vaccination," Dr Nock said. "We implement a weak version of an adversary, such as small modifications or distortion to a collection of images, to create a more 'difficult' training data set."
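The "vaccination" idea — train on weakly perturbed inputs so the model stops leaning on features an attacker can easily flip — is known in the literature as adversarial training, and can be demonstrated end to end on synthetic data. Everything below is an illustrative sketch, not Data61's method: the data has one wide-margin "robust" feature plus many weak ones, the attack is a fast-gradient-sign step (which, for a linear model, is the exact worst case within an L-infinity ball), and a logistic regression stands in for the deep models attacked in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one robust feature with a wide margin, plus many
# individually weak features that an attacker can easily flip.
n, d_weak = 400, 40
y = np.repeat([0, 1], n // 2)
ys = 2 * y - 1                                  # labels as -1/+1
x_robust = 1.5 * ys + rng.normal(0, 0.5, n)
x_weak = 0.3 * ys[:, None] + rng.normal(0, 1.0, (n, d_weak))
X = np.column_stack([x_robust, x_weak])

def fgsm(X, y, w, b, eps):
    """Fast-gradient-sign attack: move each input eps in the
    direction that most increases the model's loss."""
    p = 1 / (1 + np.exp(-(X @ w + b)))
    return X + eps * np.sign(np.outer(p - y, w))

def train(X, y, eps=0.0, steps=4000, lr=0.1):
    """Logistic regression by gradient descent. With eps > 0, each
    step trains on adversarially perturbed inputs -- the 'vaccine'."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        Xt = fgsm(X, y, w, b, eps) if eps > 0 else X
        p = 1 / (1 + np.exp(-(Xt @ w + b)))
        w -= lr * Xt.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(X, y, w, b):
    return np.mean(((X @ w + b) > 0) == y)

eps = 0.5
w_plain, b_plain = train(X, y)                  # ordinary training
w_vacc, b_vacc = train(X, y, eps=eps)           # "vaccinated" training

acc_plain_clean = accuracy(X, y, w_plain, b_plain)
acc_plain_att = accuracy(fgsm(X, y, w_plain, b_plain, eps), y, w_plain, b_plain)
acc_vacc_att = accuracy(fgsm(X, y, w_vacc, b_vacc, eps), y, w_vacc, b_vacc)

print(f"plain model, clean data:        {acc_plain_clean:.2f}")
print(f"plain model, under attack:      {acc_plain_att:.2f}")
print(f"vaccinated model, under attack: {acc_vacc_att:.2f}")
```

The design choice that makes the vaccine work here is that perturbed training examples make the weak features unprofitable, so the vaccinated model shifts its weight onto the feature whose margin exceeds the attacker's budget.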


War of words on the AI front

#artificialintelligence

As if anyone needed reminding that a federal election looms, a war of words has broken out between the offices of Industry Minister Karen Andrews and shadow human services minister Ed Husic over a briefing on, of all things, artificial intelligence. Late last year, Mr Husic approached Ms Andrews' office seeking a briefing on the progress of an AI technology roadmap report being prepared by the CSIRO unit Data61 and the Department of Industry, and to get an understanding of the thinking in the report. The request was knocked back by the Minister's office – not once but repeatedly – according to Mr Husic, and he is not happy about it. These briefings are quite routine and rarely rejected, he says. While there are no specific rules around such briefings, by convention they are commonplace – although the understanding is that they are done in the background, quietly and without any resulting overt politicisation. Even in the hyper-partisan times we live in, governments see merit in ensuring that both the government and opposition benches have the opportunity to understand the detail of evolving policy – particularly where there is complexity.