Early Stopping Criteria for Training Generative Adversarial Networks in Biomedical Imaging
Saad, Muhammad Muneeb, Rehmani, Mubashir Husain, O'Reilly, Ruairi
Generative Adversarial Networks (GANs) incur high computational costs when training their complex architectures. Throughout training, a GAN's output is analyzed qualitatively based on the loss values and on the diversity and quality of the synthetic images. Based on this qualitative analysis, training is manually halted once the desired synthetic images are generated. An early stopping criterion can reduce both the computational cost and the dependence on manual oversight, but it must contend with training problems such as mode collapse, non-convergence, and instability. This is particularly relevant in biomedical imaging, where training problems degrade the diversity and quality of synthetic images, and where the high computational cost of training makes complex architectures increasingly inaccessible. This work proposes a novel early stopping criterion to quantitatively detect training problems, halt training, and reduce the computational costs associated with synthesizing biomedical images. Firstly, the range of generator and discriminator loss values is investigated to assess whether mode collapse, non-convergence, and instability occur sequentially, concurrently, or interchangeably throughout the training of GANs. Secondly, these occurrences, in conjunction with the Multi-scale Structural Similarity Index Measure (MS-SSIM) and Fréchet Inception Distance (FID) scores of synthetic images, form the basis of the proposed early stopping criterion. This work helps identify the occurrence of training problems in GANs at low computational cost and reduces the training time needed to generate diverse, high-quality synthetic images.
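As a rough illustration, the sketch below monitors the range of recent generator and discriminator losses and combines the result with MS-SSIM and FID scores to decide when to halt. The window size, all thresholds, and the function names are hypothetical placeholders chosen for illustration; this is not the paper's exact criterion.

```python
import numpy as np

def detect_training_problem(g_losses, d_losses, window=100,
                            collapse_tol=0.05, instability_tol=10.0):
    """Classify recent loss behaviour (illustrative thresholds only)."""
    g = np.asarray(g_losses[-window:])
    d = np.asarray(d_losses[-window:])
    # Near-constant generator and discriminator losses over the window
    # can indicate mode collapse: the generator keeps emitting similar samples.
    if np.ptp(g) < collapse_tol and np.ptp(d) < collapse_tol:
        return "mode_collapse"
    # Very large swings in either loss suggest unstable training.
    if np.ptp(g) > instability_tol or np.ptp(d) > instability_tol:
        return "instability"
    # A persistent upward drift in the generator loss while the
    # discriminator loss keeps falling suggests non-convergence.
    if (np.polyfit(range(len(g)), g, 1)[0] > 0
            and np.polyfit(range(len(d)), d, 1)[0] < 0):
        return "non_convergence"
    return "ok"

def should_stop(g_losses, d_losses, ms_ssim_score, fid_score,
                ms_ssim_max=0.4, fid_max=50.0):
    """Stop when diversity (MS-SSIM) and quality (FID) targets are met,
    or when a detected training problem makes further epochs wasteful."""
    targets_met = ms_ssim_score <= ms_ssim_max and fid_score <= fid_max
    return targets_met or detect_training_problem(g_losses, d_losses) != "ok"
```

In practice such a check would run every few epochs, so the MS-SSIM and FID scores only need to be recomputed at checkpoint intervals rather than every iteration.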
Adaptive Input-image Normalization for Solving the Mode Collapse Problem in GAN-based X-ray Images
Saad, Muhammad Muneeb, Rehmani, Mubashir Husain, O'Reilly, Ruairi
Biomedical image datasets can be imbalanced due to the rarity of targeted diseases. Generative Adversarial Networks play a key role in addressing this imbalance by enabling the generation of synthetic images to augment datasets. It is important to generate synthetic images that incorporate a diverse range of features, so that they accurately represent the distribution of features present in the training imagery. Furthermore, the absence of diverse features in synthetic images can degrade the performance of machine learning classifiers. The mode collapse problem impairs a Generative Adversarial Network's capacity to generate diversified images, and it comes in two varieties: intra-class and inter-class. In this paper, both varieties of the mode collapse problem are investigated, and their subsequent impact on the diversity of synthetic X-ray images is evaluated. This work contributes an empirical demonstration of the benefits of integrating adaptive input-image normalization with the Deep Convolutional GAN (DCGAN) and the Auxiliary Classifier GAN (ACGAN) to alleviate the mode collapse problems. The synthetically generated images are utilized for data augmentation and for training a Vision Transformer model, whose classification performance is evaluated using accuracy, recall, and precision scores. Results demonstrate that the DCGAN and the ACGAN with adaptive input-image normalization outperform the DCGAN and ACGAN trained on unnormalized X-ray images, as evidenced by superior diversity and classification scores.
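The abstract does not spell out the normalization procedure, but the general idea of adapting the normalization to each input image can be sketched as follows. The per-image statistics and the output range here are assumptions for illustration, not the authors' exact method:

```python
import numpy as np

def adaptive_input_normalize(image, eps=1e-8):
    """Normalize one X-ray image by its own intensity statistics rather
    than fixed dataset-wide constants (illustrative sketch only)."""
    image = image.astype(np.float32)
    # Per-image standardization adapts to each scan's exposure and contrast.
    standardized = (image - image.mean()) / (image.std() + eps)
    # Rescale to [-1, 1], the usual input range for a tanh-output DCGAN.
    lo, hi = standardized.min(), standardized.max()
    return 2.0 * (standardized - lo) / (hi - lo + eps) - 1.0
```

Each training image would be passed through such a function before being fed to the DCGAN or ACGAN, so that images with very different acquisition settings occupy a comparable intensity range.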
Assessing Intra-class Diversity and Quality of Synthetically Generated Images in a Biomedical and Non-biomedical Setting
Saad, Muhammad Muneeb, Rehmani, Mubashir Husain, O'Reilly, Ruairi
In biomedical image analysis, data imbalance is common across several imaging modalities, and data augmentation is one of the key solutions for addressing this limitation. Generative Adversarial Networks (GANs) are increasingly being relied upon for data augmentation tasks. The evaluation of synthetic images is sensitive to biomedical image features, which can have a significant impact on metric scores when synthetic images are assessed across different biomedical imaging modalities. Synthetically generated images can be evaluated by comparing their diversity and quality against those of real images: the Multi-scale Structural Similarity Index Measure and Cosine Distance are used to evaluate intra-class diversity, while the Fréchet Inception Distance is used to evaluate the quality of synthetic images. Assessing these metrics for both biomedical and non-biomedical imaging is important for developing an informed strategy for evaluating the diversity and quality of synthetic images. In this work, an empirical assessment of these metrics is conducted for the Deep Convolutional GAN in a biomedical and a non-biomedical setting, with the diversity and quality of synthetic images evaluated using different sample sizes. This research investigates the variance in diversity and quality across biomedical and non-biomedical imaging modalities. Results demonstrate that the metric scores for diversity and quality vary significantly across biomedical-to-biomedical and biomedical-to-non-biomedical imaging modalities.
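A minimal sketch of how these three metrics might be computed, assuming torchmetrics (with its image extras, which pull in torch-fidelity) is available; the batch shapes, pair counts, and Inception feature dimension are illustrative choices, not the paper's configuration:

```python
import torch
import torch.nn.functional as F
from torchmetrics.image import MultiScaleStructuralSimilarityIndexMeasure
from torchmetrics.image.fid import FrechetInceptionDistance

def intra_class_diversity(synthetic, n_pairs=200):
    """Mean pairwise MS-SSIM over random synthetic pairs; a lower mean
    implies higher intra-class diversity. Expects float images in [0, 1],
    shape (N, C, H, W), with H and W above 160 pixels so the default
    five MS-SSIM scales are valid."""
    ms_ssim = MultiScaleStructuralSimilarityIndexMeasure(data_range=1.0)
    # Random pairs; occasional self-pairs are harmless for a rough estimate.
    pairs = torch.randint(0, synthetic.shape[0], (n_pairs, 2)).tolist()
    scores = [ms_ssim(synthetic[i:i + 1], synthetic[j:j + 1]) for i, j in pairs]
    return torch.stack(scores).mean()

def mean_cosine_distance(features_a, features_b):
    """Cosine distance between two batches of image feature vectors."""
    return (1 - F.cosine_similarity(features_a, features_b, dim=-1)).mean()

def quality_fid(real, synthetic):
    """FID between real and synthetic batches; by default torchmetrics
    expects uint8 images of shape (N, 3, H, W)."""
    fid = FrechetInceptionDistance(feature=2048)
    fid.update(real, real=True)
    fid.update(synthetic, real=False)
    return fid.compute()
```

Running these at several sample sizes, as the abstract describes, amounts to calling the same functions on progressively larger real and synthetic batches and comparing how the scores shift.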
Irish teenager wins national award for 'deepfake' video detector
A teenage student in Ireland has won a national science competition for developing technology that can more easily detect "deepfake" videos online. Greg Tarr, from County Cork, was declared the winner of the 2021 BT Young Scientist & Technologist of the Year award last week for his project, "Towards Deepfake Detection". The picture or audio of deepfake videos is altered by artificial intelligence (AI) to make it appear as though someone has said or done something they have not. The viral spread of deepfake videos has caused international concern, in an age of digital news consumption, and social media companies have come under renewed scrutiny on how to tackle the spread of this misinformation. An altered video, claiming to show US President-elect Joe Biden falling asleep during a television interview, was widely shared before November's election.
Altada Opens London Office to Meet Local Demand for AI Solutions for Asset Management – A-Team
Altada Technology Solutions, a provider of artificial intelligence (AI) solutions supporting improved data-driven decision making in the asset management community, is expanding on a global basis with the addition of a London office. Through 2021, the company has added offices in New York, San Francisco, Malta, Dublin and Barcelona. Altada was founded in 2018 in Cork, Ireland with a view to ensuring the ethical and responsible use of AI. Its financial services solutions cover investment and portfolio management, and are built on technology that allows firms to analyse key variables in a fast and accurate way, and provide sentiment and valuation analyses that help decision makers efficiently allocate investments. Altada's London office, its first in the UK, is a step in its business growth strategy.
AI Ethics
This past year has seen a significant blossoming of discussions on the ethics of AI. In working groups and meetings spanning IEEE, ACM, U.N. and the World Economic Forum as well as a handful of governmental advisory committees, more intimate breakout sessions afford an opportunity to observe how we, as robotics and AI researchers, communicate our own relationship to ethics within a field teeming with possibilities of both benefit and harm. Unfortunately, many of these opportunities fail to realize authentic forward progress during discussions that repeat similar memes. Three common myths pervade such discussions, frequently stifling any synthesis: education is not needed; external regulation is undesirable; and technological optimism provides justifiable hope. The underlying good news is that discourse and curricular experimentation are now occurring at scales that were unmatched in the recent past.
Walmart's anti-shoplifting tech slammed by staff as 'fake AI'
A group of anonymous Walmart workers have raised concerns about the anti-shoplifting technology used to monitor the company's self-checkout kiosks. A group calling itself 'Concerned Home Office Associates' has circulated a video documenting the system's flaws, including frequent failures to identify unscanned items and incorrect flagging of personal items as potentially shoplifted. In an email sent to company management at Walmart's headquarters in Bentonville, Arkansas, the group claims to be 'past their breaking point,' saying the system's frequent false positives are irritating customers and putting workers at greater risk of COVID-19 exposure by unnecessarily requiring them to verify customers' purchases at unsafe distances. 'It's like a noisy tech, a fake AI that just pretends to safeguard,' one of the Walmart employees, who asked to remain anonymous, told Wired. The system was originally designed by Everseen, an artificial intelligence and technology firm based in Cork, Ireland, and relies on overhead cameras, or 'digital eyes,' that film customers as they scan objects into the register.
Walmart Employees Are Out to Show Its Anti-Shoplifting AI Doesn't Work
In January, my coworker received a peculiar email. The message, which she forwarded to me, was from a handful of corporate Walmart employees calling themselves the "Concerned Home Office Associates." While it's not unusual for journalists to receive anonymous tips, they don't usually come with their own slickly produced videos. The employees said they were "past their breaking point," with Everseen, a small artificial intelligence firm based in Cork, Ireland, whose technology Walmart began using in 2017. Walmart uses Everseen in thousands of stores to prevent shoplifting at registers and self-checkout kiosks.
What is the future of AI? The experts' view
Artificial Intelligence and related fields like deep learning and machine learning have come to dominate discussions about where our society is headed, with debate often polarised between declaring AI to be the savior of society or its potential downfall. The truth lies somewhere in the middle: AI won't destroy our civilization, but we can't cede responsibility for saving it to AI, either. In any case, AI-related technologies are versatile and powerful and can provide the basis for change in our world. We caught up with some leading experts in the field to ask them for their view on the future of AI. Prof. Barry O'Sullivan is Director of the Insight Centre for Data Analytics, Computer Science, at University College Cork, Ireland.