Artificial Intelligence Assurance Framework to drive safe, secure and smart projects
Safer and more efficient services will be delivered for NSW residents using Artificial Intelligence (AI), with a new world-leading AI Assurance Framework coming into effect in March 2022.

All agencies across the NSW Government can apply the Assurance Framework to ensure increasingly sophisticated AI systems are safe, effective and delivering on state outcomes: improving the lives of people in NSW, strengthening the resilience of communities and driving the economy.

NSW Government Chief Data Scientist Dr Ian Oppermann said the Framework would ensure government services using AI were aligned to state outcomes, easy for customers to access and use, as well as personalised and secure.

"AI creates a huge opportunity to improve Government services. We are already piloting the technology with eHealth NSW to help doctors identify sepsis earlier in patients attending emergency departments," Dr Oppermann said.
Safe and Trusted AI - KDR Recruitment
The last two decades have seen dramatic advances in automation, from affordable smartphones that can understand your voice commands, to self-driving cars with safety records comparable to human drivers, and computers that can diagnose disease as well as experienced doctors. These advances have been driven not just by falling costs of computing power, but huge leaps forward in machine learning – techniques which automate the discovery of patterns and associations in data. The most powerful of these require minimal human expertise to guide that learning. In many cases, this means computers can discover the underlying rules and patterns in data by themselves. Whilst the terminology has exciting connotations in science fiction, artificial intelligence, or AI, is the use of these techniques to perform tasks that we previously thought could only be done by a human – driving a car, playing chess, or recommending medication, for example.
Machine Learning Security - Considerations and Assurance
Machine learning security is an emerging concern for companies: recent research by teams from Google Brain, OpenAI, the US Army Research Laboratory and top universities has shown how machine learning models can be manipulated into returning results of the attacker's choosing.

One area of significant findings has been image recognition. Image recognition is one of the stalwarts of machine learning and deep learning, enabling superhuman performance on classification tasks and proofs of concept in autonomous vehicles. Recent, highly successful research exploiting image recognition models, specifically convolutional neural networks (CNNs), is especially troubling for autonomous vehicles: attackers could theoretically take control of vehicles, or at least cause them to lose control. Advances by Geoffrey Hinton and his team address a few of the key problems plaguing CNNs (more on that below); however, definitive research has not yet been performed to check whether they also fix the security problems.

I'll outline several security issues that exist in current algorithmic deployments, then walk through some steps to take in order to provide assurance over algorithmic integrity.
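The kind of manipulation described above can be illustrated with a minimal evasion-attack sketch in the spirit of the fast gradient sign method (FGSM). The toy linear classifier, its weights, and the perturbation budget below are all illustrative assumptions, not taken from the research mentioned here; real attacks target deep networks, but the gradient-sign idea is the same.

```python
import numpy as np

# Toy binary classifier: score = w . x + b, predict class 1 if score > 0.
# Weights and input are made up for illustration only.
w = np.array([0.8, -0.5, 0.3])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# A benign input the model classifies as class 1.
x = np.array([0.9, 0.2, 0.4])

# FGSM-style evasion: nudge the input against the gradient of the score
# with respect to x. For a linear model that gradient is simply w, so the
# attacker steps each feature opposite to sign(w), bounded by epsilon.
epsilon = 0.6  # perturbation budget (chosen so this toy example flips)
x_adv = x - epsilon * np.sign(w)

print(predict(x), predict(x_adv))  # the bounded perturbation flips the class
```

The point of the sketch is that the perturbation is small and structured, not random: it follows the model's own gradient, which is why visually indistinguishable images can reliably fool a classifier.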