YARE-GAN: Yet Another Resting State EEG-GAN

Farahzadi, Yeganeh, Ansarinia, Morteza, Kekecs, Zoltan

arXiv.org Artificial Intelligence

Generative Adversarial Networks (GANs) have shown promise in synthesising realistic neural data, yet their potential for unsupervised representation learning in resting-state EEG remains underexplored. In this study, we implement a Wasserstein GAN with Gradient Penalty (WGAN-GP) to generate multi-channel resting-state EEG data and assess the quality of the synthesised signals through both visual and feature-based evaluations. Our results indicate that the model effectively captures the statistical and spectral characteristics of real EEG data, although challenges remain in replicating high-frequency oscillations in the frontal region. Additionally, we demonstrate that the Critic's learned representations can be fine-tuned for age-group classification, achieving out-of-sample accuracy significantly better than a shuffled-label baseline. These findings suggest that generative models can serve not only as EEG data generators but also as unsupervised feature extractors, reducing the need for manual feature engineering. This study highlights the potential of GAN-based unsupervised learning for EEG analysis, suggesting avenues for more data-efficient deep learning applications in neuroscience.
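The WGAN-GP objective referenced in the abstract enforces an (approximately) 1-Lipschitz Critic by penalising deviations of the Critic's gradient norm from 1, evaluated at points interpolated between real and generated samples. A minimal sketch of that gradient-penalty term is below; the toy linear critic and all names are illustrative stand-ins, not the authors' EEG model:

```python
import jax
import jax.numpy as jnp

def critic(params, x):
    # Toy linear critic: a stand-in for the paper's multi-channel EEG Critic.
    w, b = params
    return jnp.dot(x, w) + b

def gradient_penalty(params, real, fake, key):
    # Sample points on line segments between real and generated batches.
    eps = jax.random.uniform(key, (real.shape[0], 1))
    interp = eps * real + (1.0 - eps) * fake
    # Per-sample gradient of the critic's output w.r.t. its input.
    grads = jax.vmap(jax.grad(lambda x: critic(params, x)))(interp)
    norms = jnp.sqrt(jnp.sum(grads ** 2, axis=-1) + 1e-12)
    # Penalise deviation of the gradient norm from 1.
    return jnp.mean((norms - 1.0) ** 2)
```

In the full WGAN-GP loss, this term is added to the Critic's Wasserstein loss scaled by a coefficient (commonly λ = 10).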


How to train your draGAN: A task oriented solution to imbalanced classification

Guertler, Leon O., Ashfahani, Andri, Luu, Anh Tuan

arXiv.org Artificial Intelligence

The long-standing challenge of building effective classification models for small and imbalanced datasets has seen little improvement since the creation of the Synthetic Minority Over-sampling Technique (SMOTE) over 20 years ago. Though GAN-based models seem promising, there has been a lack of purpose-built architectures for solving the aforementioned problem, as most previous studies focus on applying already existing models. This paper proposes a unique, performance-oriented, data-generating strategy that utilizes a new architecture, coined draGAN, to generate both minority and majority samples. The samples are generated with the objective of optimizing the classification model's performance rather than similarity to the real data. We benchmark our approach against state-of-the-art methods from the SMOTE family and competitive GAN-based approaches on 94 tabular datasets with varying degrees of imbalance and linearity. Empirically, we show the superiority of draGAN, but also highlight some of its shortcomings. All code is available at: https://github.com/LeonGuertler/draGAN.
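For context, the SMOTE family that draGAN is benchmarked against synthesises new minority samples by interpolating between a minority point and one of its nearest minority-class neighbours. A rough sketch of that idea is below; the function name and parameters are illustrative, not taken from either paper's code:

```python
import numpy as np

def smote_like_oversample(minority, n_new, k=5, rng=None):
    """SMOTE-style oversampling: place synthetic points on line segments
    between a minority sample and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(rng)
    n = len(minority)
    new = []
    for _ in range(n_new):
        i = rng.integers(n)
        # k nearest minority-class neighbours of sample i (excluding itself).
        d = np.linalg.norm(minority - minority[i], axis=1)
        nn = np.argsort(d)[1:k + 1]
        j = rng.choice(nn)
        # Interpolate a random fraction of the way towards the neighbour.
        lam = rng.random()
        new.append(minority[i] + lam * (minority[j] - minority[i]))
    return np.asarray(new)
```

draGAN's departure from this family, per the abstract, is that its samples are optimised for downstream classifier performance rather than for geometric similarity to the real minority class.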


We asked an AI tool to 'paint' images of Australia. Critics say they're good enough to sell

#artificialintelligence

The images are so crafted and "painterly" that you may not realise at first they have been dreamed up by a machine in just a few minutes. Maybe you've seen one already, but not realised what it was. It may have looked like something you'd seen before in an art book or a museum. These images are the product of a new AI-generated art scene that has exploded thanks to the development of free and easy-to-use tools that require nothing more than a short text prompt to create unique pictures. The image in the tweet above, for example, was created by giving the text prompt "a summer day" to an AI tool.


Coded Bias: New PBS Documentary Explores Gender & Racial Bias in AI

#artificialintelligence

An upcoming PBS documentary dives deep into the controversy surrounding bias in artificial intelligence (AI). Coded Bias explores MIT Media Lab researcher Joy Buolamwini's shocking discovery that facial recognition does not see women and dark-skinned faces accurately. The 90-minute film covers her push for U.S. government legislation against bias in algorithms that are becoming increasingly prevalent in modern-day society. Directed by award-winning filmmaker Shalini Kantayya, Coded Bias will premiere on PBS and the PBS video app on March 22. Kantayya tells the story of dynamic women leading the fight for the ethical use of AI. She profiles data scientists, mathematicians, ethicists, and everyday citizens from around the world who have been impacted by these disruptive technologies and are fighting to shed light on the impact of unconscious bias in artificial intelligence.


Research Workshop on Expert Judgment, Human Error, and Intelligent Systems

AI Magazine

This workshop brought together 20 computer scientists, psychologists, and human-computer interaction (HCI) researchers to exchange results and views on human error and judgment bias. Human error is typically studied when operators undertake actions, but judgment bias is an issue in thinking rather than acting. Both topics are generally ignored by the HCI community, which is interested in designs that eliminate human error and bias tendencies. As a result, almost no one at the workshop had met before, and the discussion for most participants was novel and lively. Many areas of previously unexamined overlap were identified.


Expert Critics in Engineering Design: Lessons Learned and Research Needs

AI Magazine

Human error is an increasingly important and addressable concern in modern-day high-technology accidents. Avoidable human errors led to many famous accidents, including Bhopal, the space shuttle Challenger, Chernobyl, the Exxon Valdez, and Three Mile Island. Many hundreds of thousands of less famous accidents occur each year that are equally or more avoidable. Dramatic examples make the local headlines, such as car crashes, train and plane wrecks, and military-related operations mishaps. Less dramatic consequences happen even more frequently because of millions of mundane errors that appear daily in the products we use (for example, poorly designed cars), the processes we are affected by (for example, banking or healthcare institutions), and the automation that surrounds us (for example, unfriendly computers that expect us to adapt to their interfaces).


How algorithms are transforming artistic creativity

#artificialintelligence

When IBM's Deep Blue chess computer defeated the world champion Garry Kasparov in 1997, humanity let out a collective sigh, recognising the loss of an essential human territory to the onslaught of thinking machines. And humans have not only played against them: for the past two decades, Kasparov has been exploring an idea he calls 'Advanced Chess', where humans collaborate with computer chess programs against other hybrid teams, sometimes called 'Centaurs'. We rely on computational systems for our essential aesthetic vocabulary, learning what is good and beautiful through a prism of five-star rating systems and social-media endorsements, all closely watched over by algorithmic critics of loving grace. Many artists today explore the seams and rough edges of digital platforms, creating art out of the glitches and unintended juxtapositions that they can eke out of increasingly complicated creative systems.