
Collaborating Authors

molnar


This brain implant is smaller than a grain of rice

Popular Science

The wireless neural transmitter safely delivers brain signals like a microchip. Today's neural implants are smaller than ever, but they often remain cumbersome and prone to complications. According to researchers at Cornell University, a new iteration detailed this week in a journal article may offer a novel path forward for brain implants. Small enough to fit on a grain of rice, the microscale optoelectronic tetherless electrode (or MOTE) is vastly smaller than similar implants, and its design could be adapted to work in other delicate areas of the body.


I Tried These Brain-Tracking Headphones That Claim to Improve Focus

WIRED

Activity trackers have come a long way. No longer mere step-counters, they can monitor your heart rate, blood oxygen level, and skin temperature, and can even detect whether you suffer from sleep apnea. Now, there's a new wearable for your brain--and I've been testing it out for the past two weeks. Today, Boston-based company Neurable announced the launch of its smart headphones, dubbed the MW75 Neuro, which use electroencephalography, or EEG, and artificial intelligence to track the wearer's focus levels by reading their brain waves. The device sends this data to a mobile app, with the goal of helping the user tweak their habits to improve their work routine.


Connecting Algorithmic Fairness to Quality Dimensions in Machine Learning in Official Statistics and Survey Production

Schenk, Patrick Oliver, Kern, Christoph

arXiv.org Machine Learning

National Statistical Organizations (NSOs) increasingly draw on Machine Learning (ML) to improve the timeliness and cost-effectiveness of their products. When introducing ML solutions, NSOs must ensure that high standards with respect to robustness, reproducibility, and accuracy are upheld as codified, e.g., in the Quality Framework for Statistical Algorithms (QF4SA; Yung et al. 2022). At the same time, a growing body of research focuses on fairness as a precondition for the safe deployment of ML, to prevent disparate social impacts in practice. However, fairness has not yet been explicitly discussed as a quality aspect in the context of the application of ML at NSOs. We employ Yung et al. (2022)'s QF4SA quality framework and present a mapping of its quality dimensions to algorithmic fairness. We thereby extend the QF4SA framework in several ways: we argue for fairness as its own quality dimension, we investigate the interaction of fairness with other dimensions, and we explicitly address data, both on its own and in its interaction with applied methodology. In parallel with empirical illustrations, we show how our mapping can contribute to methodology in the domains of official statistics, algorithmic fairness, and trustworthy machine learning.
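As a concrete illustration of treating fairness as a measurable quality dimension alongside accuracy, the sketch below computes one simple group-fairness metric, the demographic parity difference, on toy binary predictions. The function name, data, and two-group setup are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: a group-fairness metric an NSO could report
# alongside accuracy when auditing an ML-based statistical product.
# Toy data; not from Schenk & Kern.

def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    values = list(rates.values())
    return abs(values[0] - values[1])

preds = [1, 0, 1, 1, 0, 1, 0, 0]                      # binary model outputs
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]       # group membership
print(demographic_parity_difference(preds, grps))     # 0.5
```

A value of 0 would mean both groups receive positive predictions at the same rate; larger values flag a disparity worth investigating before deployment.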


The Fight Over Which Uses of AI Europe Should Outlaw

WIRED

The system, called iBorderCtrl, analyzed facial movements to attempt to spot signs a person was lying to a border agent. The trial was propelled by nearly $5 million in European Union research funding, and almost 20 years of research at Manchester Metropolitan University, in the UK. Polygraphs and other technologies built to detect lies from physical attributes have been widely declared unreliable by psychologists. Soon, errors were reported from iBorderCtrl, too. Media reports indicated that its lie-prediction algorithm didn't work, and the project's own website acknowledged that the technology "may imply risks for fundamental human rights."


These New Premium Headphones Use Artificial Intelligence to Help You Focus

#artificialintelligence

Think of Enten as an Apple Watch for your mind. These high-tech headphones are equipped with sensors that scan your brain for electrical activity; the firm's proprietary AI then processes that data and produces a user-friendly reading via a Bluetooth-connected app. For example, Enten could increase its noise canceling if the sensors detect that your distraction is rising. Perhaps instead you need music to focus: these can-do cans will suggest the songs that keep you in a flow state. After a few days of wearing them, you could even develop a daily routine that ensures you sidestep your sluggish hours, such as planning a workout for mid-afternoon when your brain is at its gooiest.


Interpretable Machine Learning: The Free eBook - KDnuggets

#artificialintelligence

Interpretable machine learning is a genuine concern to stakeholders across the field. No longer an esoteric worry, or merely a "nice to have" for practitioners, the importance of interpretable machine learning and AI has become clear to more and more people in recent years, for a wide array of reasons. All of this could leave one wondering: where does one go to find a cache of quality reading material on such an important topic? Enter Interpretable Machine Learning, a free eBook by Christoph Molnar. First, what is the motivation for the book?


Transforming Feature Space to Interpret Machine Learning Models

Brenning, Alexander

arXiv.org Machine Learning

Interpreting complex nonlinear machine-learning models is an inherently difficult task. A common approach is the post-hoc analysis of black-box models for dataset-level interpretation (Murdoch et al. 2019) using model-agnostic techniques such as the permutation-based variable importance, and graphical displays such as partial dependence plots that visualize main effects while integrating over the remaining dimensions (Molnar, Casalicchio, and Bischl 2020). These tools are so far limited to displaying the relationship between the response and one (or sometimes two) predictor(s), while attempting to control for the influence of the other predictors. This can be rather unsatisfactory when dealing with a large number of highly correlated predictors, which are often semantically grouped. While the literature on explainable machine learning has often focused on dealing with dependencies affecting individual features, e.g. by introducing conditional diagnostics (Strobl et al. 2008; Molnar, König, Bischl, et al. 2020), no practical solutions are available yet for dealing with model interpretation in high-dimensional feature spaces with strongly dependent features (Molnar, Casalicchio, and Bischl 2020; Molnar, König, Herbinger, et al. 2020). These situations routinely occur in environmental remote sensing and other geographical and ecological analyses (Landgrebe 2002; Zortea, Haertel, and Clarke 2007), which motivated the present proposal to enhance existing model interpretation tools by offering a new, transformed perspective. For example, vegetation 'greenness' as a measure of photosynthetic activity is often used to classify landcover or land use from satellite imagery acquired at multiple time points throughout the growing season (Peña and Brenning 2015; Peña, Liao, and Brenning 2017). Spectral reflectances of equivalent spectral bands (the features) are usually strongly correlated within the same phenological stage since vegetation characteristics vary gradually.
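The transformed-perspective idea can be sketched in a few lines: rather than permuting raw, strongly correlated bands one at a time, combine them into one interpretable derived feature and compute permutation importance in that transformed space. Everything below, the toy data, the mean index standing in for a greenness index, and the linear model, is an assumed illustration, not the author's implementation.

```python
import numpy as np

# Assumed toy setup: three strongly correlated "spectral bands" all measure
# one latent signal; we interpret the model via a derived index instead of
# the individual, correlated bands.
rng = np.random.default_rng(0)
n = 500
base = rng.normal(size=n)                        # latent signal
bands = np.column_stack([base + 0.1 * rng.normal(size=n) for _ in range(3)])
unrelated = rng.normal(size=n)                   # a predictor with no effect
y = 2.0 * base + 0.5 * rng.normal(size=n)

# Transformed feature space: one derived index plus the unrelated predictor.
Z = np.column_stack([bands.mean(axis=1), unrelated])

# Fit a linear model by least squares; record its baseline error.
design = np.column_stack([Z, np.ones(n)])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
predict = lambda M: np.column_stack([M, np.ones(len(M))]) @ coef
baseline_mse = np.mean((y - predict(Z)) ** 2)

# Permutation importance: error increase after shuffling one transformed feature.
importances = {}
for j, name in enumerate(["derived index", "unrelated"]):
    Zp = Z.copy()
    Zp[:, j] = rng.permutation(Zp[:, j])
    importances[name] = np.mean((y - predict(Zp)) ** 2) - baseline_mse
print(importances)
```

Permuting the derived index destroys most of the model's skill, while permuting the unrelated predictor barely changes the error, so the importance attaches to one semantically meaningful feature rather than being diluted across three correlated bands.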


Explainable AI: A guide for making black box machine learning models explainable

#artificialintelligence

Robots have moved off the assembly line and into warehouses, offices, hospitals, retail shops, and even our homes. ZDNet explores how the explosive growth in robotics is affecting specific industries, like healthcare and logistics, and the enterprise more broadly on issues like hiring and workplace safety. But machine learning (ML), which many people conflate with the broader discipline of artificial intelligence (AI), is not without its issues. ML works by feeding historical real world data to algorithms used to train models. ML models can then be fed new data and produce results of interest, based on the historical data used to train the model.
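The train-then-predict workflow described above can be shown with a minimal sketch: fit a model's parameters to historical data, then feed it new data to get a result of interest. The toy data and variable names here are hypothetical.

```python
import numpy as np

# "Historical real-world data": hours studied -> exam score (toy numbers).
X_train = np.array([1.0, 2.0, 3.0, 4.0])
y_train = np.array([52.0, 54.0, 56.0, 58.0])

# Training: fit a least-squares linear model to the historical data.
A = np.column_stack([X_train, np.ones(len(X_train))])
slope, intercept = np.linalg.lstsq(A, y_train, rcond=None)[0]

# Inference: feed the trained model new, unseen data.
X_new = np.array([5.0])
prediction = slope * X_new + intercept
print(prediction)   # predicted score for 5 hours of study
```

The model's output for new inputs is entirely determined by the patterns in the historical data it was trained on, which is exactly why interpretability and data quality matter.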