Prepare for Warp Speed: Sub-millisecond Visual Place Recognition Using Event Cameras

Ramanathan, Vignesh, Milford, Michael, Fischer, Tobias

arXiv.org Artificial Intelligence

Visual Place Recognition (VPR) enables systems to identify previously visited locations within a map, a fundamental task for autonomous navigation. Prior works have developed VPR solutions using event cameras, which asynchronously measure per-pixel brightness changes with microsecond temporal resolution. However, these approaches rely on dense representations of the inherently sparse camera output and require tens to hundreds of milliseconds of event data to predict a place. Here, we break this paradigm with Flash, a lightweight VPR system that predicts places using sub-millisecond slices of event data. Our method is based on the observation that active pixel locations provide strong discriminative features for VPR. Flash encodes these active pixel locations using efficient binary frames and computes similarities via fast bitwise operations, which are then normalized based on the relative event activity in the query and reference frames. Flash improves Recall@1 for sub-millisecond VPR over existing baselines by 11.33x on the indoor QCR-Event-Dataset and 5.92x on the 8 km Brisbane-Event-VPR dataset. Moreover, our approach reduces the duration for which the robot must operate without awareness of its position, as evidenced by a localization latency metric we term Time to Correct Match (TCM). To the best of our knowledge, this is the first work to demonstrate sub-millisecond VPR using event cameras.
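The abstract's core idea (binary frames of active pixels, compared with bitwise operations and normalized by relative event activity) can be illustrated with a minimal sketch. This is not the authors' implementation: the exact normalization in Flash is not specified in the abstract, so normalizing the bitwise-AND overlap by the larger of the two frames' activity counts is an assumption.

```python
import numpy as np

def binary_frame(events_xy, height, width):
    # Mark each pixel that fired at least one event in the slice.
    # events_xy is an (N, 2) array of (x, y) pixel coordinates.
    frame = np.zeros((height, width), dtype=bool)
    frame[events_xy[:, 1], events_xy[:, 0]] = True
    return frame

def normalized_similarity(query, reference):
    # Overlap of active pixels via a bitwise AND, normalized by the
    # relative event activity of the two frames (assumed normalization).
    overlap = np.count_nonzero(query & reference)
    activity = max(np.count_nonzero(query), np.count_nonzero(reference))
    return overlap / activity if activity else 0.0
```

In use, a query frame would be scored against every reference frame in the map and the highest-scoring reference taken as the predicted place.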


Artificial Intelligence-assisted Pixel-level Lung (APL) Scoring for Fast and Accurate Quantification in Ultra-short Echo-time MRI

Xin, Bowen, Hickey, Rohan, Blake, Tamara, Jin, Jin, Wainwright, Claire E, Benkert, Thomas, Stemmer, Alto, Sly, Peter, Coman, David, Dowling, Jason

arXiv.org Artificial Intelligence

Lung magnetic resonance imaging (MRI) with ultrashort echo-time (UTE) represents a recent breakthrough in lung structure imaging, providing image resolution and quality comparable to computed tomography (CT). Due to the absence of ionising radiation, MRI is often preferred over CT in paediatric diseases such as cystic fibrosis (CF), one of the most common genetic disorders in Caucasians. To assess structural lung damage in CF imaging, CT scoring systems provide valuable quantitative insights for disease diagnosis and progression. However, few quantitative scoring systems are available in structural lung MRI (e.g., UTE-MRI). To provide fast and accurate quantification in lung MRI, we investigated the feasibility of novel Artificial intelligence-assisted Pixel-level Lung (APL) scoring for CF. APL scoring consists of 5 stages, including 1) image loading, 2) AI lung segmentation, 3) lung-bounded slice sampling, 4) pixel-level annotation, and 5) quantification and reporting. The results show that our APL scoring took 8.2 minutes per subject, which was more than twice as fast as the previous grid-level scoring. Additionally, our pixel-level scoring was statistically more accurate (p=0.021), while strongly correlating with grid-level scoring (R=0.973, p=5.85e-9). This tool has great potential to streamline the workflow of UTE lung MRI in clinical settings, and be extended to other structural lung MRI sequences (e.g., BLADE MRI), and for other lung diseases (e.g., bronchopulmonary dysplasia).
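Two of the five stages lend themselves to a brief sketch: lung-bounded slice sampling (stage 3) and quantification (stage 5). The abstract does not specify how either stage is computed, so the sampling strategy and the definition of the score below (abnormal lung pixels as a percentage of all lung pixels) are illustrative assumptions, and both function names are hypothetical.

```python
import numpy as np

def sample_lung_slices(lung_mask, n_slices=10):
    # Stage 3 (assumed): keep only slices that contain lung,
    # then sample them at an even stride.
    lung_idx = [i for i in range(lung_mask.shape[0]) if lung_mask[i].any()]
    stride = max(1, len(lung_idx) // n_slices)
    return lung_idx[::stride]

def apl_quantify(lung_mask, abnormal_mask):
    # Stage 5 (assumed): report annotated abnormal pixels as a
    # percentage of all lung pixels in the segmented volume.
    lung_px = np.count_nonzero(lung_mask)
    abnormal_px = np.count_nonzero(np.logical_and(abnormal_mask, lung_mask))
    return 100.0 * abnormal_px / lung_px if lung_px else 0.0
```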


The iToBoS dataset: skin region images extracted from 3D total body photographs for lesion detection

Saha, Anup, Adeola, Joseph, Ferrera, Nuria, Mothershaw, Adam, Rezze, Gisele, Gaborit, Séraphin, D'Alessandro, Brian, Hudson, James, Szabó, Gyula, Pataki, Balazs, Rajani, Hayat, Nazari, Sana, Hayat, Hassan, Primiero, Clare, Soyer, H. Peter, Malvehy, Josep, Garcia, Rafael

arXiv.org Artificial Intelligence

Artificial intelligence has significantly advanced skin cancer diagnosis by enabling rapid and accurate detection of malignant lesions. In this domain, most publicly available image datasets consist of single, isolated skin lesions positioned at the center of the image. While these lesion-centric datasets have been fundamental for developing diagnostic algorithms, they lack the context of the surrounding skin, which is critical for improving lesion detection. The iToBoS dataset was created to address this challenge. It includes 16,954 images of skin regions from 100 participants, captured using 3D total body photography. Each image roughly corresponds to a 7 × 9 cm section of skin with all suspicious lesions annotated using bounding boxes. Additionally, the dataset provides metadata such as anatomical location, age group, and sun damage score for each image. This dataset aims to facilitate training and benchmarking of algorithms, with the goal of enabling early detection of skin cancer and deployment of this technology in non-clinical environments.


Elucidating Discrepancy in Explanations of Predictive Models Developed using EMR

Brankovic, Aida, Huang, Wenjie, Cook, David, Khanna, Sankalp, Bialkowski, Konstanty

arXiv.org Artificial Intelligence

The lack of transparency and explainability hinders the clinical adoption of Machine learning (ML) algorithms. While explainable artificial intelligence (XAI) methods have been proposed, little research has focused on the agreement between these methods and expert clinical knowledge. This study applies current state-of-the-art explainability methods to clinical decision support algorithms developed for Electronic Medical Records (EMR) data to analyse the concordance between these factors and discusses causes for identified discrepancies from a clinical and technical perspective. Important factors for achieving trustworthy XAI solutions for clinical decision support are also discussed.


Brisbane's Queen's Wharf to undergo digital transformation

#artificialintelligence

Schneider Electric has officially commenced its digital transformation journey with Queen's Wharf Brisbane after two years' preparation. As a technology partner for the high-profile development, Schneider is set to future-proof the precinct with its cutting-edge technology in digital buildings and unrivalled local resources. Currently the largest development in Queensland, worth $3.6 billion and covering over 26 hectares of land and water, the Queen's Wharf Development transforms a once underutilised area into a vibrant location of major significance to the Brisbane CBD's future plans. The partnership will see Schneider design and implement integrated digital solutions that feature Building Management Systems (BMS) and Integrated System Platforms (ISP) across the whole precinct, including The Star Grand hotel, casino, main podium area, Sky Deck, as well as the Dorsett hotel and Rosewood hotel. Louise Monger, Vice President of Digital Buildings at Schneider Electric, said: "We are delighted to have the opportunity to apply Schneider's world-class expertise in digital buildings to future-proof such a significant project for the community. Our relationship with the Queen's Wharf team began in 2017, when we first identified technology to be a key focus for the development."


'A long road': the Australian city aiming to give self-driving cars the green light

The Guardian

As the traffic lights turn from amber to red, Miranda Blogg accelerates towards them. "Here we go," she says. A dash-mounted screen in her Renault ZOE flashes a warning featuring a traffic light symbol. The screen erupts with a more aggressive visual display ("Stop!") accompanied by three loud, grating beeps. "Whoops," she says, as she brakes, still well ahead of the lights.


Land Use Detection & Identification using Geo-tagged Tweets

Khan, Saeed, Shahzamal, Md

arXiv.org Artificial Intelligence

Geo-tagged tweets can potentially help with sensing the interaction of people with their surrounding environment. Based on this hypothesis, this paper makes use of geo-tagged tweets to ascertain various land uses, with the broader goal of supporting urban/city planning. The proposed method uses supervised learning to reveal spatial land use within cities with the help of Twitter activity signatures. Specifically, the technique uses tweets from three Australian cities: Brisbane, Melbourne and Sydney. Analytical results are checked against the zoning data provided by the respective city councils, and a good match is observed between the predicted land use and the councils' existing land zoning. We show that geo-tagged tweets contain features that can be useful for land use identification.
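The idea of a "Twitter activity signature" can be sketched as a temporal histogram of tweet counts per zone, fed to a classifier. The abstract does not describe the features or classifier used, so the 24-hour histogram and the nearest-centroid classifier below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def activity_signature(tweet_hours):
    # 24-bin histogram of tweet counts by hour of day for one zone,
    # normalized so that signatures are comparable across zones.
    sig = np.bincount(tweet_hours, minlength=24).astype(float)
    total = sig.sum()
    return sig / total if total else sig

def classify(sig, centroids):
    # Assign the land-use label whose mean (centroid) signature is
    # closest to the query zone's signature.
    return min(centroids, key=lambda label: np.linalg.norm(sig - centroids[label]))
```

A zone with mostly midday activity would be labelled with whichever class (e.g. commercial) has a midday-heavy centroid learned from council-zoned training zones.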


#302: Robots That Can See, Do, and Win, with Juxi Leitner

Robohub

Juxi Leitner is co-founder of LYRO Robotics, a deep-tech startup based in Brisbane, Australia, creating robotic picking and packing solutions. LYRO is a spin-out of the Australian Centre of Excellence for Robotic Vision (ACRV), where Juxi is the research lead for the manipulation research stream (previously Vision and Action project). His research focus is on integrating Robotics, Computer Vision and Machine Learning/Artificial Intelligence (AI) for robust grasping and manipulation in real-world scenarios. In 2017 his team won the Amazon Robotics Challenge. Juxi is active in the local Brisbane deep-tech ecosystem and started Brisbane.AI and the Brisbane robotics interest group.


Swim with the sharks in Brisbane's very own Great Barrier Reef

#artificialintelligence

Queensland is now home to a second Great Barrier Reef, allowing children and adults alike to interact with the world's largest coral reef system without leaving the city. The Living Reef is the brainchild of game developers and researchers at the Queensland University of Technology's (QUT) The Cube in Brisbane. Large 10-metre-tall screens educate visitors about the creatures of the reef as well as the environmental issues it faces now and into the future. The team is one of the first in the world to use a system in which coral is grown with a method called the space colonisation algorithm to mimic nature. "We created a system where we could grow coral mathematically using simulation software," Cube studio manager Simon Harrison said.
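The space colonisation algorithm mentioned above is a well-known procedural-growth technique: branch nodes repeatedly extend toward nearby "attraction points", which are removed once reached. The sketch below is a generic 2D version of that algorithm, not QUT's coral simulation; all parameter names and values are illustrative.

```python
import numpy as np

def space_colonisation(attractors, root, step=0.5, influence=3.0, kill=0.8, iters=50):
    # Grow a branching structure from `root` toward the attraction points.
    nodes = [np.asarray(root, float)]
    attractors = [np.asarray(a, float) for a in attractors]
    for _ in range(iters):
        if not attractors:
            break
        # Each attractor pulls its nearest node, if within the influence radius.
        pulls = {}
        for a in attractors:
            dists = [np.linalg.norm(a - n) for n in nodes]
            i = int(np.argmin(dists))
            if dists[i] < influence:
                pulls.setdefault(i, []).append(a - nodes[i])
        if not pulls:
            break
        # Extend each pulled node one step along its averaged pull direction.
        for i, dirs in pulls.items():
            v = np.mean(dirs, axis=0)
            norm = np.linalg.norm(v)
            if norm > 0:
                nodes.append(nodes[i] + step * v / norm)
        # Remove ("kill") attractors that a node has reached.
        attractors = [a for a in attractors
                      if min(np.linalg.norm(a - n) for n in nodes) > kill]
    return nodes
```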


ANALYSIS: Brisbane has the talent and ideas to be a global AI hub - Choose Brisbane

#artificialintelligence

Many of you will have heard of artificial intelligence (AI). You are probably aware that it is not the indestructible Arnie chasing after you in an old, abandoned factory but you may not realise how much of your life already functions on AI. As a general rule, any digital system that exhibits an aspect of human intelligence, such as perception or decision-making, is likely to be running on AI. Have you ever wondered how Google Maps precisely knows every location you search for? The amount of imagery data that would need to be analysed to annotate each house address in the entire world would take an army of workers several years.