Facial features analyzed from 3D photographs could predict the likelihood of having obstructive sleep apnea, according to a study published in the April issue of the Journal of Clinical Sleep Medicine. Using 3D imaging and artificial intelligence, the study found that geodesic measurements -- the shortest distance between two points on a curved surface -- predicted with 89 percent accuracy which patients had sleep apnea. Using traditional 2D linear measurements alone, the algorithm's accuracy was 86 percent. "This application of the technique used predetermined landmarks on the face and neck," said principal investigator Peter Eastwood, who holds a doctorate in respiratory and sleep physiology and is the director of the Centre for Sleep Science at the University of Western Australia (UWA).
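Geodesic distances like those used in the study can be approximated on a triangulated 3D face scan by shortest paths through the mesh's edge graph. The sketch below is illustrative only (the vertices, edges, and function name are hypothetical; the study's actual landmark set and pipeline are not described here) and runs Dijkstra over Euclidean edge lengths:

```python
import heapq
import math

def geodesic_distance(vertices, edges, start, goal):
    """Approximate the geodesic distance between two mesh vertices
    by running Dijkstra over the mesh's edge graph."""
    # Adjacency list weighted by Euclidean edge length.
    adj = {i: [] for i in range(len(vertices))}
    for a, b in edges:
        w = math.dist(vertices[a], vertices[b])
        adj[a].append((b, w))
        adj[b].append((a, w))
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            return d
        if d > dist.get(u, math.inf):
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return math.inf

# Toy "curved surface": four vertices of a bent quad.
verts = [(0, 0, 0), (1, 0, 0.5), (2, 0, 0), (1, 1, 0.5)]
edges = [(0, 1), (1, 2), (0, 3), (2, 3), (1, 3)]
d = geodesic_distance(verts, edges, 0, 2)
```

Because the path must follow the curved surface, the geodesic distance between vertices 0 and 2 exceeds their straight-line distance, which is exactly the extra shape information the 3D approach captures over 2D linear measurements.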
The simultaneous control of multiple coordinated robotic agents is a complex problem; if it is solved, however, the interaction between the agents can yield solutions to sophisticated problems. The concept of swarming, inspired by nature, can be described as the emergence of complex system-level behaviors from the interactions of relatively elementary agents. Because of the effectiveness of solutions found in nature, bio-inspired swarming-based control techniques are receiving considerable attention in robotics. One such method, swarm shepherding, is founded on the sheep-herding behavior exhibited by sheepdogs: a swarm of relatively simple agents is guided by a shepherd (or shepherds) responsible for high-level guidance and planning. Many studies have been conducted on shepherding as a control technique, ranging from the replication of sheep herding via simulation to the control of uninhabited vehicles and robots for a variety of applications. We present a comprehensive review of the literature on swarm shepherding to reveal the advantages and potential of applying the approach to a plethora of robotic systems in the future.
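As a rough illustration of the shepherding idea (a deliberately simplified toy model, not any specific algorithm from the literature reviewed; all numbers and names are hypothetical), the sketch below repels sheep agents from a single shepherd that steers toward a driving position behind the flock, pushing it toward a goal:

```python
import math

def step(sheep, shepherd, goal, repel=1.5, speed=0.1):
    """One update of a toy shepherding model: each sheep flees the
    shepherd when it is close; the shepherd moves toward a driving
    position behind the flock centre relative to the goal."""
    new_sheep = []
    for sx, sy in sheep:
        dx, dy = sx - shepherd[0], sy - shepherd[1]
        d = math.hypot(dx, dy)
        if 0 < d < repel:
            # Flee directly away from the shepherd.
            sx += speed * dx / d
            sy += speed * dy / d
        new_sheep.append((sx, sy))
    # Flock centre of mass.
    cx = sum(p[0] for p in new_sheep) / len(new_sheep)
    cy = sum(p[1] for p in new_sheep) / len(new_sheep)
    # Driving position: one unit behind the centre, on the far side from the goal.
    dx, dy = cx - goal[0], cy - goal[1]
    d = math.hypot(dx, dy)
    drive = (cx + dx / d, cy + dy / d) if d > 0 else (cx, cy)
    # Shepherd steps toward the driving position.
    hx, hy = drive[0] - shepherd[0], drive[1] - shepherd[1]
    h = math.hypot(hx, hy)
    if h > 0:
        shepherd = (shepherd[0] + speed * hx / h, shepherd[1] + speed * hy / h)
    return new_sheep, shepherd

sheep = [(0.0, 0.0), (0.3, 0.2), (-0.2, 0.1)]
shepherd = (2.0, 2.0)
goal = (-5.0, -5.0)
for _ in range(200):
    sheep, shepherd = step(sheep, shepherd, goal)
```

Even this minimal model exhibits the key property of shepherding control: complex flock-level motion toward the goal emerges from one high-level agent and very simple per-sheep rules.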
This paper explores the use of a novel form of Hierarchical Graph Neurons (HGN) for in-operation behaviour selection in a swarm of robotic agents. The new HGN is called Robotic-HGN (R-HGN), as it matches robot environment observations to environment labels via fusion of match probabilities from both temporal and intra-swarm collections. This approach is novel for HGN because robotic observations are pseudo-continuous numbers rather than categorical values. Additionally, the proposed approach is conservative in memory and computation and is therefore suitable for mobile devices such as single-board computers, which are often used in mobile robotic agents. The R-HGN approach is validated against individual behaviour implementation and random behaviour selection. This comparison is made in two sets of simulated environments: environments designed to challenge the held behaviours of the R-HGN, and randomly generated environments that are more challenging for the robotic swarm than the R-HGN training conditions. R-HGN is found to enable appropriate behaviour selection in both sets, yielding strong swarm performance under both pre-trained and unexpected environment conditions.
Silas: High Performance, Explainable and Verifiable Machine Learning
Hadrien Bride, Zhe Hou (Griffith University, Nathan, Brisbane, Australia), Jie Dong (Dependable Intelligence Pty Ltd, Brisbane, Australia), Jin Song Dong (National University of Singapore, Singapore), Ali Mirjalili (Griffith University, Nathan, Brisbane, Australia)
Preprint submitted to Elsevier, October 4, 2019. arXiv:1910.01382v1
Abstract: This paper introduces a new classification tool named Silas, which is built to provide a more transparent and dependable data analytics service. A focus of Silas is on providing a formal foundation of decision trees in order to support logical analysis and verification of learned prediction models. This paper describes the distinct features of Silas: the Model Audit module formally verifies the prediction model against user specifications; the Enforcement Learning module trains prediction models that are guaranteed correct; and the Model Insight and Prediction Insight modules reason about the prediction model and explain the decision-making of its predictions. We also discuss implementation details, ranging from the programming paradigm to memory management, that help achieve high-performance computation.
1. Introduction. Machine learning has enjoyed great success in many research areas and industries, including entertainment, self-driving cars, banking, medical diagnosis, shopping, and many others. However, the wide adoption of machine learning is held back by the black-box nature of many learned models. The ramifications of the black-box approach are manifold. First, it may lead to unexpected results that are only observable after the deployment of the algorithm. For instance, Amazon's Alexa offered porn to a child, and a self-driving car had a deadly accident. Some of these accidents result in lawsuits or even lost lives, the cost of which is immeasurable. Second, it prevents adoption in applications and industries where an explanation is mandatory or certain specifications must be satisfied.
For example, in some countries, it is required by law to give a reason why a loan application is rejected. In recent years, eXplainable AI (XAI) has been gaining attention, and there is a surge of interest in studying how prediction models work and how to provide formal guarantees for the models. A common theme in this space is to use statistical methods to analyse prediction models.
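In the spirit of Silas's Model Audit module, a decision tree can be formally verified against a specification by exhaustively checking every root-to-leaf path. The sketch below is illustrative only (the tree, the `violates` helper, and the loan-style specification are hypothetical, not Silas's actual implementation or specification language):

```python
# A tiny decision tree as nested dicts: internal nodes test
# feature <= threshold; leaves carry a class label.
tree = {
    "feature": "income", "threshold": 50,
    "left":  {"label": "reject"},                      # income <= 50
    "right": {"feature": "debt", "threshold": 30,
              "left":  {"label": "approve"},           # debt <= 30
              "right": {"label": "reject"}},           # debt > 30
}

def violates(node, spec, path=()):
    """Walk every root-to-leaf path; return True if some reachable
    leaf's label is not allowed by the specification, which maps the
    path's feature constraints to a set of permitted labels."""
    if "label" in node:
        return node["label"] not in spec(path)
    f, t = node["feature"], node["threshold"]
    return (violates(node["left"], spec, path + ((f, "<=", t),))
            or violates(node["right"], spec, path + ((f, ">", t),)))

def spec(path):
    """Specification: whenever the path implies income <= 50,
    the model must never output 'approve'."""
    for f, op, t in path:
        if f == "income" and op == "<=" and t <= 50:
            return {"reject"}
    return {"approve", "reject"}

verified = not violates(tree, spec)
```

Because a decision tree has finitely many paths, this style of check is complete: a model passes only if no input whatsoever can reach a specification-breaking leaf, which is exactly the kind of formal guarantee statistical explanation methods cannot provide.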
Go player Lee Sedol (R) during the third game of the Google DeepMind Challenge Match against Google-developed supercomputer AlphaGo. Leading Australian artificial intelligence scientist Professor Toby Walsh is warning that we are "sleepwalking" into an AI future in which billions of machines and computers will be able to think. Professor Walsh, from the University of New South Wales, is calling for a national discussion about whether society needs to adopt clear boundaries and guidelines around how AI is developed and how it's used in our lives. In his book It's Alive: Artificial Intelligence From The Logic Piano to Killer Robots, he has highlighted key questions in a series of predictions that describe how our future could be far better or far worse because of AI. Here's how he thinks society might change by 2050 thanks to artificial intelligence.
Uber wants to bring the trial of its flying taxi play down under, telling a House Infrastructure, Transport and Cities Committee on Thursday that it just needs Australian governments to work alongside the Silicon Valley darling to make that happen. Uber Air is touted by the company as an "urban aviation ride-sharing product", with Uber's Australia and New Zealand head of cities Natalie Malligan telling the committee the plan is for customers to be able to "push a button and get a flight", just like they currently do with an on-road vehicle. The company in August announced five possible markets to launch its pipedream: Australia, Brazil, France, India, and Japan. It also confirmed that from 2023, customers will be able to get a flight on-demand in Dallas and Los Angeles. But before launch, the company needs to trial the initiative, and learning from past mistakes in Australia, Uber is asking for government support before it starts offering flights.
Olsen, Alex, Konovalov, Dmitry A., Philippa, Bronson, Ridd, Peter, Wood, Jake C., Johns, Jamie, Banks, Wesley, Girgenti, Benjamin, Kenny, Owen, Whinney, James, Calvert, Brendan, Azghadi, Mostafa Rahimi, White, Ronald D.
Robotic weed control has seen increased research in the past decade, given its potential for boosting productivity in agriculture. The majority of works focus on developing robotics for arable croplands, ignoring the significant weed management problems facing rangeland stock farmers. Perhaps the greatest obstacle to widespread uptake of robotic weed control is the robust detection of weed species in their natural environment. The unparalleled successes of deep learning make it an ideal candidate for recognising various weed species in the highly complex Australian rangeland environment. This work contributes the first large, public, multiclass image dataset of weed species from the Australian rangelands, allowing for the development of robust detection methods to make robotic weed control viable. The DeepWeeds dataset consists of 17,509 labelled images of eight nationally significant weed species native to eight locations across northern Australia. This paper also presents a baseline for classification performance on the dataset using the benchmark deep learning models Inception-v3 and ResNet-50, which achieved average classification accuracies of 87.9% and 90.5%, respectively. This strong result bodes well for future field implementation of robotic weed control methods in the Australian rangelands.
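One common way to summarise multiclass performance on imbalanced datasets such as weed-species collections is the mean of per-class accuracies. The sketch below shows that computation on hypothetical labels (the paper's exact averaging procedure may differ, and the species names here are only examples):

```python
from collections import defaultdict

def mean_per_class_accuracy(y_true, y_pred):
    """Mean of per-class accuracies (recall per class): each class
    contributes equally regardless of how many images it has."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += (t == p)
    return sum(correct[c] / total[c] for c in total) / len(total)

# Hypothetical predictions over three species classes.
y_true = ["chinee_apple", "lantana", "lantana", "parkinsonia"]
y_pred = ["chinee_apple", "lantana", "parkinsonia", "parkinsonia"]
acc = mean_per_class_accuracy(y_true, y_pred)  # (1 + 0.5 + 1) / 3
```

Averaging per class rather than per image prevents abundant species from masking poor performance on rare ones, which matters when some weed infestations are far better represented in the imagery than others.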
Recently IT Brief had the opportunity to get in touch with Adrian Jones, Automation Anywhere APJ EVP, to discuss AI and its impact on the modern workforce. Can you tell me a bit more about Automation Anywhere? Automation Anywhere is a global leader in Robotic Process Automation (RPA) software and AI technology for enterprises looking to deploy intelligent digital workforces. Our technology uses software bots that work alongside the human workforce to take on repetitive, mundane work, allowing people to do more meaningful work. Beyond automating tasks, Automation Anywhere's platform also helps improve them on the back-end by enhancing efficiency, minimising error and reducing operational costs, while helping enterprises manage and scale business processes faster.
Scientists have developed a new artificial intelligence system that can track a person's eye movements to identify their personality type. Researchers, including those from the University of Stuttgart in Germany and Flinders University in Australia, used state-of-the-art machine-learning algorithms to demonstrate a link between personality and eye movements. Their findings show that people's eye movements reveal whether they are sociable, conscientious or curious, with the software reliably recognising four of the Big Five personality traits: neuroticism, extroversion, agreeableness, and conscientiousness. Researchers tracked the eye movements of 42 participants as they undertook everyday tasks around a university campus, and subsequently assessed their personality traits using well-established questionnaires. The study provides new links between previously under-investigated eye movements and personality traits, and delivers important insights for the emerging fields of social signal processing and social robotics.