Deeper Look


Imputation Matters: A Deeper Look into an Overlooked Step in Longitudinal Health and Behavior Sensing Research

Choube, Akshat, Majethia, Rahul, Bhattacharya, Sohini, Swain, Vedant Das, Li, Jiachen, Mishra, Varun

arXiv.org Artificial Intelligence

Longitudinal passive sensing studies for health and behavior outcomes often have missing and incomplete data, so handling missing data effectively is a critical data processing and modeling step. Our formative interviews with researchers working in longitudinal health and behavior passive sensing revealed a recurring theme: most researchers consider imputation a low-priority step in their analysis and inference pipeline, opting for simple, off-the-shelf imputation strategies without comprehensively evaluating their impact on study outcomes. Through this paper, we call attention to the importance of imputation. Using publicly available passive sensing datasets for depression, we show that prioritizing imputation can significantly affect study outcomes: our proposed imputation strategies yield up to a 31% improvement in AUROC for predicting depression over the original imputation strategy. We conclude by discussing the challenges and opportunities for effective imputation in longitudinal sensing studies.
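
The abstract does not spell out which strategies are compared, so as a hedged sketch of why the choice matters, the toy pandas snippet below contrasts a common off-the-shelf approach (global mean imputation) with a simple time-aware alternative (per-participant forward fill) on made-up longitudinal step-count data. All column names and values are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch (not the paper's method): comparing an off-the-shelf
# imputation strategy against a simple time-aware alternative on a toy
# longitudinal sensing table. Column names and values are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "participant": np.repeat(["p1", "p2"], 7),
    "day": np.tile(np.arange(7), 2),
    "steps": rng.normal(7000, 1500, 14).round(),
})
# Simulate missing sensor days.
df.loc[rng.choice(df.index, 4, replace=False), "steps"] = np.nan

# Off-the-shelf: global mean imputation ignores person and time structure.
mean_imputed = df["steps"].fillna(df["steps"].mean())

# Time-aware: forward-fill within each participant's own time series,
# falling back to that participant's mean for any leading gaps.
ffill_imputed = (
    df.sort_values(["participant", "day"])
      .groupby("participant")["steps"]
      .transform(lambda s: s.ffill().fillna(s.mean()))
)

print(pd.DataFrame({"raw": df["steps"], "mean": mean_imputed, "ffill": ffill_imputed}))
```

In a study pipeline, each imputed table would then feed the same downstream model so the effect of the imputation choice on metrics such as AUROC can be compared directly.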


A deeper look at depth pruning of LLMs

Siddiqui, Shoaib Ahmed, Dong, Xin, Heinrich, Greg, Breuel, Thomas, Kautz, Jan, Krueger, David, Molchanov, Pavlo

arXiv.org Artificial Intelligence

Large Language Models (LLMs) are not only resource-intensive to train but even more costly to deploy in production. Therefore, recent work has attempted to prune blocks of LLMs based on cheap proxies for estimating block importance, effectively removing 10% of blocks in well-trained LLaMa-2 and Mistral 7b models without any significant degradation of downstream metrics. In this paper, we explore different block importance metrics by considering adaptive metrics such as the Shapley value in addition to the static ones explored in prior work. We show that adaptive metrics exhibit a trade-off between tasks, i.e., an improvement on one task may degrade performance on another due to differences in the computed block influences. Furthermore, we extend this analysis from complete blocks to individual self-attention and feed-forward layers, highlighting the propensity of self-attention layers to be more amenable to pruning, even allowing removal of up to 33% of the self-attention layers without any performance degradation on MMLU for Mistral 7b (a significant reduction in the costly maintenance of the KV-cache). Finally, we look at simple performance recovery techniques that emulate the pruned layers by training a lightweight additive bias or low-rank linear adapters. Performance recovery using emulated updates avoids degradation for the initial blocks (up to 5% absolute improvement on MMLU) and is either competitive with or superior to the learning-based technique.
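
As a rough illustration of the pruning setup (not the paper's Shapley-value metric, and on a toy block stack rather than LLaMa-2 or Mistral 7b), the sketch below ranks transformer blocks by a simple static proxy, namely how little each block changes its hidden states on a calibration batch, and drops the lowest-ranked ones.

```python
# Hedged sketch: rank blocks by a simple static importance proxy (how much a
# block changes its hidden states) and prune the lowest-ranked ones. This
# illustrates the general idea only; it is not the paper's adaptive metric.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
d_model, n_blocks = 64, 8
blocks = nn.ModuleList([
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
    for _ in range(n_blocks)
])
blocks.eval()

x = torch.randn(2, 16, d_model)          # (batch, seq, hidden) calibration batch
importance = []
h = x
with torch.no_grad():
    for blk in blocks:
        out = blk(h)
        # Blocks whose output stays close to their input are cheap to remove.
        sim = F.cosine_similarity(h.flatten(1), out.flatten(1), dim=1).mean()
        importance.append(1.0 - sim.item())
        h = out

# Keep all but the k least important blocks.
k = 2
keep = sorted(range(n_blocks), key=lambda i: importance[i], reverse=True)[: n_blocks - k]
pruned = nn.ModuleList([blocks[i] for i in sorted(keep)])
print("dropped blocks:", sorted(set(range(n_blocks)) - set(keep)))
```

In the paper's setting the calibration batch would come from real text, the blocks would belong to a pretrained LLM checkpoint, and the pruned stack could optionally be patched with a lightweight additive bias or low-rank adapter to recover performance.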


Giving YOLOv8 a Second Look (Part 1)

#artificialintelligence

Welcome to the first part of our three-part series on YOLOv8! In this series, we'll show you how to work with YOLOv8, from downloading off-the-shelf models to fine-tuning them for specific use cases, and everything in between. Throughout the series, we will be using two libraries: FiftyOne, the open-source computer vision toolkit, and Ultralytics, the library that gives us access to YOLOv8. In Part 1, you'll learn how to generate, load, and visualize YOLOv8 predictions. In Part 2, we'll show you how to evaluate the quality of YOLOv8 model predictions.
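
As a hedged preview of the Part 1 workflow, the sketch below runs an off-the-shelf YOLOv8 model with Ultralytics and loads its predictions into FiftyOne for visualization; the dataset choice ("quickstart") and the prediction field name are illustrative assumptions rather than the article's exact steps.

```python
# Hedged sketch: generate YOLOv8 predictions with Ultralytics and view them in
# the FiftyOne App. Dataset and field names are illustrative assumptions.
import fiftyone as fo
import fiftyone.zoo as foz
from ultralytics import YOLO

dataset = foz.load_zoo_dataset("quickstart")   # small sample dataset with images
model = YOLO("yolov8n.pt")                     # pretrained nano detection model

for sample in dataset:
    result = model(sample.filepath, verbose=False)[0]
    h, w = result.orig_shape
    detections = []
    for box in result.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        detections.append(
            fo.Detection(
                label=result.names[int(box.cls)],
                # FiftyOne expects relative [x, y, width, height] boxes.
                bounding_box=[x1 / w, y1 / h, (x2 - x1) / w, (y2 - y1) / h],
                confidence=float(box.conf),
            )
        )
    sample["yolov8_predictions"] = fo.Detections(detections=detections)
    sample.save()

session = fo.launch_app(dataset)               # browse predictions interactively
```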


Can fairness be automated with AI? A deeper look at an essential debate

#artificialintelligence

In part one, I examined some noted ethicists' opinions about fairness measurement and found some reasonable and some incomplete (see: Can we measure fairness?). In this article, I will begin with an example that was in dire need of fairness assessment. I will also introduce another method for fairness assessment. And finally, I'll try to resolve some differences of opinion among Reid Blackman, myself, and some Oxford scholars. I want to start with an example where the fairness measurement described in Part 1 could have avoided nearly catastrophic results.
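
For readers unfamiliar with what "measuring fairness" looks like in practice, here is a minimal, made-up illustration (not taken from the article) of one common metric: the demographic parity gap, the difference in positive-outcome rates between two groups.

```python
# Toy illustration of a demographic parity check; the data is invented.
import numpy as np

group = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])
approved = np.array([1, 0, 1, 0, 0, 1, 0, 1])   # model decisions

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
print(f"approval rate A={rate_a:.2f}, B={rate_b:.2f}, parity gap={abs(rate_a - rate_b):.2f}")
```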


Johns Hopkins Engineers Use AI for Deeper Look Into Brains of Mice

#artificialintelligence

A group of biomedical engineers at Johns Hopkins has developed an artificial intelligence (AI) training strategy to gain a deeper understanding of the brains of mice. The new strategy captures images of mouse brain cells as they are active.


CFPB warnings of bias in AI could spook lenders

#artificialintelligence

Rohit Chopra has seized on nearly every public opportunity as director of the Consumer Financial Protection Bureau to admonish companies about the potential misuse of artificial intelligence in lending decisions. Chopra has said that algorithms can never "be free of bias" and may result in credit determinations that are unfair to consumers. He claims machine learning can be anti-competitive and could lead to "digital redlining" and "robo discrimination." The message for banks and fast-moving fintechs is loud and clear: enforcement actions related to the use of AI are coming, as is potential guidance on what makes alternative data, such as utility and rent payments, risky when used in marketing, pricing, and underwriting products, experts say. "The focus on artificial intelligence and machine learning is explicit," said Stephen Hayes, a partner at Relman Colfax PLLC and a former CFPB senior counsel.


Cryptology ePrint Archive: Report 2021/287 - A Deeper Look at Machine Learning-Based Cryptanalysis

#artificialintelligence

In this article, we propose a detailed analysis and thorough explanation of the inner workings of this new neural distinguisher. First, we studied the classified sets and tried to find patterns that could guide us to a better understanding of Gohr's results. We show with experiments that the neural distinguisher generally relies on the differential distribution of the ciphertext pairs, but also on the differential distribution in the penultimate and antepenultimate rounds. To validate our findings, we construct a distinguisher for the Speck cipher based on pure cryptanalysis, without using any neural network, that achieves essentially the same accuracy as Gohr's neural distinguisher with the same efficiency (thereby improving over previous non-neural distinguishers).
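
To convey the flavor of a pure-cryptanalysis distinguisher that relies only on the differential distribution of output pairs, here is a heavily simplified, self-contained sketch on a toy 2-round 16-bit cipher; it is an illustrative assumption on all counts, not Speck and not the construction from the report.

```python
# Hedged toy illustration (not Gohr's attack, not Speck): a purely statistical
# distinguisher that classifies a ciphertext pair as "real" or "random" from
# the distribution of its output difference under a tiny 2-round 16-bit SPN.
import random

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT 4-bit S-box

def rol16(x, r):
    return ((x << r) | (x >> (16 - r))) & 0xFFFF

def encrypt(block, keys):
    """Two rounds of: XOR round key, nibble-wise S-box, rotate left by 5."""
    for k in keys:
        block ^= k
        block = sum(SBOX[(block >> (4 * i)) & 0xF] << (4 * i) for i in range(4))
        block = rol16(block, 5)
    return block

IN_DIFF = 0x0040  # fixed input difference defining the "real" distribution

def sample_output_diff(real):
    keys = [random.getrandbits(16) for _ in range(2)]
    p0 = random.getrandbits(16)
    p1 = p0 ^ IN_DIFF if real else random.getrandbits(16)
    return encrypt(p0, keys) ^ encrypt(p1, keys)

# "Training": record which output differences the real distribution produces.
seen = {sample_output_diff(True) for _ in range(50_000)}

# Distinguishing rule: call a pair "real" iff its output difference was seen.
tests = [(sample_output_diff(real), real) for real in (True, False) for _ in range(5_000)]
acc = sum((diff in seen) == real for diff, real in tests) / len(tests)
print(f"reachable output differences: {len(seen)}, distinguisher accuracy: {acc:.3f}")
```

Because XOR differences pass through key addition unchanged, the set of reachable output differences here is small and key-independent, which is what makes a purely distributional rule work on this toy cipher.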


A Deeper Look into AI and Space: Benefits and Challenges

#artificialintelligence

On July 20, 1969, Neil Armstrong became the first person to walk on the moon. After almost a decade of designing, building, testing, and hypothesizing, we had succeeded: a man was finally on the moon. Since then we have expanded our sights not only to the moon, but to planets, other moons, and asteroids. Modern space exploration is a vast field of possibilities beyond the limits of Earth's atmosphere, where we can increase our knowledge of the cosmos and benefit humanity. As humans, we seek to answer fundamental questions about our purpose in the universe: Is there life outside of Earth? What are we doing here? Where do we come from? To answer these seemingly endless questions, we look to space.


Take A Deeper Look at Deep Learning - InformationWeek

#artificialintelligence

Last year, InformationWeek published a high-level introduction to deep learning that was meant to explain the basics of the technology to CIOs and IT managers. Since then, interest in deep learning has skyrocketed, so now seems like a good time to revisit the topic with a deeper dive into the technology. Enterprises have been spending a lot of money on deep learning and related technologies, and they are about to spend much more. According to IDC, spending on artificial intelligence (AI), which includes deep learning, will likely grow from an estimated $24.0 billion in 2018 to $77.6 billion in 2022. In other words, AI investments will more than triple in just four years.


Can Artificial Intelligence Weed Out Unconscious Bias?

#artificialintelligence

Let me preface this by saying it has been my experience that, barring the obvious bad apples, most people are basically good and want to do the right thing. So in 2018, here in our comfortable Western (and litigious) society, let me submit that a hiring manager is unlikely to look at a resume and say to himself, "I don't want a woman in this role." And let me finally submit that this otherwise decent hiring manager might look at the same resume and think enthusiastically that this female or African American candidate "would be a great fit for another position," which happens to be lower level or less technical. It is called unconscious bias, and it is the subject of growing interest in both academia and human resource departments. Likewise, battling this tendency within all humans is a new trend in HR software: many vendors are coming to market with AI-driven products that promise to weed out unconscious bias from the hiring system.