Collaborating Authors

 molnar


This brain implant is smaller than a grain of rice

Popular Science

The wireless neural transmitter safely delivers brain signals like a microchip. Today's neural implants are smaller than ever, but often remain cumbersome and prone to complications. According to researchers at Cornell University, a new iteration, detailed this week in a peer-reviewed journal, may offer a novel path forward for brain implants. Small enough to fit on a grain of rice, the microscale optoelectronic tetherless electrode (or MOTE) is vastly smaller than similar implants, and its design could be adapted to work in other delicate areas of the body.


I Tried These Brain-Tracking Headphones That Claim to Improve Focus

WIRED

Activity trackers have come a long way. No longer mere step-counters, they can monitor your heart rate, blood oxygen level, and skin temperature, and can even detect whether you suffer from sleep apnea. Now, there's a new wearable for your brain--and I've been testing it out for the past two weeks. Today, Boston-based company Neurable announced the launch of its smart headphones, dubbed the MW75 Neuro, which use electroencephalography, or EEG, and artificial intelligence to track the wearer's focus levels by reading their brain waves. The device sends this data to a mobile app, with the goal of helping the user tweak their habits to improve their work routine.


Connecting Algorithmic Fairness to Quality Dimensions in Machine Learning in Official Statistics and Survey Production

Schenk, Patrick Oliver, Kern, Christoph

arXiv.org Machine Learning

National Statistical Organizations (NSOs) increasingly draw on Machine Learning (ML) to improve the timeliness and cost-effectiveness of their products. When introducing ML solutions, NSOs must ensure that high standards with respect to robustness, reproducibility, and accuracy are upheld as codified, e.g., in the Quality Framework for Statistical Algorithms (QF4SA; Yung et al. 2022). At the same time, a growing body of research focuses on fairness as a precondition for the safe deployment of ML to prevent disparate social impacts in practice. However, fairness has not yet been explicitly discussed as a quality aspect in the context of the application of ML at NSOs. We employ the QF4SA quality framework of Yung et al. (2022) and present a mapping of its quality dimensions to algorithmic fairness. We thereby extend the QF4SA framework in several ways: we argue for fairness as its own quality dimension, we investigate the interaction of fairness with other dimensions, and we explicitly address data, both on its own and in its interaction with applied methodology. In parallel with empirical illustrations, we show how our mapping can contribute to methodology in the domains of official statistics, algorithmic fairness, and trustworthy machine learning.
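To make "algorithmic fairness as a quality dimension" concrete, here is a minimal sketch of one common fairness metric, the demographic parity difference: the gap in positive-prediction rates between groups. The function, data, and group labels below are invented for illustration and are not from the paper.

```python
# Hypothetical illustration: demographic parity difference, one common
# fairness metric that could sit alongside quality dimensions such as
# accuracy. All data below is made up for demonstration.

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between groups."""
    rates = {}
    for g in set(group):
        preds = [p for p, gi in zip(y_pred, group) if gi == g]
        rates[g] = sum(preds) / len(preds)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Toy predictions (1 = positive classification) for two groups A and B.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5
```

A value of 0 would indicate equal positive rates across groups; auditing such a metric next to accuracy is one way fairness can interact with the other quality dimensions the abstract discusses.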


The Fight Over Which Uses of AI Europe Should Outlaw

#artificialintelligence

The system, called iBorderCtrl, analyzed facial movements to attempt to spot signs a person was lying to a border agent. The trial was propelled by nearly $5 million in European Union research funding, and almost 20 years of research at Manchester Metropolitan University, in the UK. Polygraphs and other technologies built to detect lies from physical attributes have been widely declared unreliable by psychologists. Soon, errors were reported from iBorderCtrl, too. Media reports indicated that its lie-prediction algorithm didn't work, and the project's own website acknowledged that the technology "may imply risks for fundamental human rights."


Interpretable Machine Learning: The Free eBook - KDnuggets

#artificialintelligence

Interpretable machine learning is a genuine concern for stakeholders across the field. No longer an esoteric worry, or merely a "nice to have" for practitioners, the importance of interpretable machine learning and AI has become clear to more and more people in recent years, for a wide array of reasons. All of this could leave one wondering: where does one go to find a cache of quality reading material on such an important topic? Enter Interpretable Machine Learning, a free eBook by Christoph Molnar. First, what is the motivation for the book?


Transforming Feature Space to Interpret Machine Learning Models

Brenning, Alexander

arXiv.org Machine Learning

Interpreting complex nonlinear machine-learning models is an inherently difficult task. A common approach is the post-hoc analysis of black-box models for dataset-level interpretation (Murdoch et al. 2019) using model-agnostic techniques such as permutation-based variable importance, and graphical displays such as partial dependence plots that visualize main effects while integrating over the remaining dimensions (Molnar, Casalicchio, and Bischl 2020). These tools are so far limited to displaying the relationship between the response and one (or sometimes two) predictor(s), while attempting to control for the influence of the other predictors. This can be rather unsatisfactory when dealing with a large number of highly correlated predictors, which are often semantically grouped. While the literature on explainable machine learning has often focused on dealing with dependencies affecting individual features, e.g. by introducing conditional diagnostics (Strobl et al. 2008; Molnar, König, Bischl, et al. 2020), no practical solutions are available yet for dealing with model interpretation in high-dimensional feature spaces with strongly dependent features (Molnar, Casalicchio, and Bischl 2020; Molnar, König, Herbinger, et al. 2020). These situations routinely occur in environmental remote sensing and other geographical and ecological analyses (Landgrebe 2002; Zortea, Haertel, and Clarke 2007), which motivated the present proposal to enhance existing model interpretation tools by offering a new, transformed perspective. For example, vegetation 'greenness' as a measure of photosynthetic activity is often used to classify landcover or land use from satellite imagery acquired at multiple time points throughout the growing season (Peña and Brenning 2015; Peña, Liao, and Brenning 2017). Spectral reflectances of equivalent spectral bands (the features) are usually strongly correlated within the same phenological stage since vegetation characteristics vary gradually.
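The permutation-based variable importance mentioned above can be sketched in a few lines: shuffle one feature column and measure how much the model's error grows. The toy model and data below are stand-ins for illustration, not the paper's remote-sensing setting.

```python
# Minimal sketch of permutation-based variable importance, a
# model-agnostic diagnostic: a feature matters if shuffling it
# degrades the model's predictions.
import random


def mse(model, X, y):
    """Mean squared error of `model` on (X, y)."""
    return sum((model(x) - yi) ** 2 for x, yi in zip(X, y)) / len(y)


def permutation_importance(model, X, y, feature, seed=0):
    """Increase in error after shuffling one feature column."""
    rng = random.Random(seed)
    baseline = mse(model, X, y)
    col = [x[feature] for x in X]
    rng.shuffle(col)
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, col):
        row[feature] = v
    return mse(model, X_perm, y) - baseline


# Toy model that depends only on feature 0, so feature 1 should score 0.
model = lambda x: 2.0 * x[0]
X = [[i, i % 3] for i in range(20)]
y = [2.0 * x[0] for x in X]

print(permutation_importance(model, X, y, feature=0))  # large positive
print(permutation_importance(model, X, y, feature=1))  # exactly 0.0
```

Note the limitation the abstract raises: when features are strongly correlated, shuffling one column independently produces unrealistic data points, which is exactly what motivates transforming the feature space before interpretation.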


Explainable AI: A guide for making black box machine learning models explainable

#artificialintelligence

Robots have moved off the assembly line and into warehouses, offices, hospitals, retail shops, and even our homes. ZDNet explores how the explosive growth in robotics is affecting specific industries, like healthcare and logistics, and the enterprise more broadly, on issues like hiring and workplace safety. But machine learning (ML), which many people conflate with the broader discipline of artificial intelligence (AI), is not without its issues. ML works by feeding historical real-world data to algorithms that train models. Those models can then be fed new data and produce results of interest, based on the historical data used to train them.
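The train-then-predict workflow described above can be shown with a bare-bones example: fit a model on historical data, then apply it to new data. The one-variable least-squares fit and all numbers below are invented for illustration.

```python
# Bare-bones train-then-predict workflow: learn parameters from
# historical data, then apply them to unseen inputs.

def fit(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx


def predict(model, x):
    """Apply the trained model to a new input."""
    slope, intercept = model
    return slope * x + intercept


# "Historical" data following y = 3x + 1.
history_x = [1, 2, 3, 4, 5]
history_y = [4, 7, 10, 13, 16]

model = fit(history_x, history_y)  # training step
print(predict(model, 10))  # prediction on new data: 31.0
```

The explainability problem the article goes on to discuss starts here: real models have far more parameters than this slope and intercept, so the mapping from training data to prediction is no longer readable by inspection.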


Interpretability of machine learning based prediction models in healthcare

Stiglic, Gregor, Kocbek, Primoz, Fijacko, Nino, Zitnik, Marinka, Verbert, Katrien, Cilar, Leona

arXiv.org Machine Learning

There is a need to ensure that machine learning models are interpretable. Higher interpretability of a model means easier comprehension and explanation of future predictions for end-users. Further, interpretable machine learning models allow healthcare experts to make reasonable, data-driven decisions and provide personalized care, which can ultimately lead to a higher quality of service in healthcare. Generally, we can classify interpretability approaches into two groups: the first focuses on personalized interpretation (local interpretability), while the second summarizes prediction models at the population level (global interpretability). Alternatively, we can group interpretability methods into model-specific techniques, which are designed to interpret predictions generated by a specific model, such as a neural network, and model-agnostic approaches, which provide easy-to-understand explanations of predictions made by any machine learning model. Here, we give an overview of interpretability approaches and provide examples of practical interpretability of machine learning in different areas of healthcare, including predicting health-related outcomes, optimizing treatments, and improving the efficiency of screening for specific conditions. Further, we outline future directions for interpretable machine learning and highlight the importance of developing algorithmic solutions that can enable machine learning-driven decision making in high-stakes healthcare problems.
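The local, model-agnostic side of this taxonomy can be sketched with a simple what-if perturbation: nudge each feature of one patient's record and watch how the prediction moves. The risk model, its weights, and the feature names below are all invented for illustration.

```python
# Toy sketch of a local, model-agnostic explanation: perturb one
# patient's features and record how the prediction changes.

def risk_model(features):
    """Stand-in risk score; the weights are made up, not from a real model."""
    age, bmi, smoker = features
    return 0.01 * age + 0.02 * bmi + 0.3 * smoker


def local_sensitivity(model, instance, delta=1.0):
    """Prediction change when each feature is nudged by `delta`."""
    base = model(instance)
    effects = []
    for i in range(len(instance)):
        perturbed = list(instance)
        perturbed[i] += delta
        effects.append(model(perturbed) - base)
    return effects


patient = [60, 28, 1]  # age, BMI, smoker flag for one individual
print(local_sensitivity(risk_model, patient))  # approximately [0.01, 0.02, 0.3]
```

Because `local_sensitivity` only calls the model as a black box, it is model-agnostic; because it explains one patient's prediction rather than the whole population, it is local. A global summary would instead aggregate such effects over many patients.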


Lessons Learned from a Chatbot Failure

#artificialintelligence

Have you ever had a chatbot set up a meeting with someone you've never heard of via calendar invite, respond with "bye" after you decline the invite, then prompt a human, who calls you by the wrong name, to enter the email chain? We get a lot of emails from vendors, but they usually involve press pitches. A vendor, through its chatbot, set up an unprompted meeting with me. Things got stranger when I engaged. While the exchange was surreal and entertaining in equal parts, it was a good example of where bot technology and artificial intelligence can fail and, even more importantly, a reminder for brands to do better when deploying such tech into their sales and marketing processes.