
NAD Supplement 101: Possible Benefits and Precautions Explained (2026)

WIRED

What is NAD+? Here's how it works in your body, why it matters, and whether supplementation is worth the hype. It's more than likely that the NAD+ supplement craze has already crossed your path. The Biebers have infused it. Joe Rogan has podcasted about it. Gwyneth Paltrow swears by it and, of course, sells her own Youth-Boost NAD+ Peptide Rich Cream. NAD+ (short for nicotinamide adenine dinucleotide) is a coenzyme that your body makes naturally--it contributes to energy production and immune function, among other things. The craze reflects a broader shift in how people think about healthy aging and extending their healthspan overall.


I'm a neurologist... here are three simple tricks to help you kick any bad habit

Daily Mail - Science & tech

Some bad habits are small. But over time, they add up, and suddenly you're wondering how you ended up here. Now, a neurologist says three simple tricks can help break the cycles that quietly take over our lives.
Dr Arif Khan, a pediatric neurologist, outlined 'cue shift,' the 'one-step rule' and 'reward rewrite' as practical tools to stop negative patterns in their tracks.


Cal-DETR: Calibrated Detection Transformer

Neural Information Processing Systems

Albeit revealing impressive predictive performance for several computer vision tasks, deep neural networks (DNNs) are prone to making overconfident predictions. This limits the adoption and wider utilization of DNNs in many safety-critical applications. There have been recent efforts toward calibrating DNNs, however, almost all of them focus on the classification task. Surprisingly, very little attention has been devoted to calibrating modern DNN-based object detectors, especially detection transformers, which have recently demonstrated promising detection performance and are influential in many decision-making systems. In this work, we address the problem by proposing a mechanism for calibrated detection transformers (Cal-DETR), particularly for Deformable-DETR, UP-DETR, and DINO.
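The abstract's central concern, overconfidence, is usually quantified with the Expected Calibration Error (ECE). Cal-DETR's own mechanism is not described here, so the sketch below only illustrates how miscalibration is measured: predictions are binned by confidence, and per-bin confidence is compared against per-bin accuracy.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected Calibration Error: per-bin |accuracy - confidence|,
    weighted by the fraction of predictions falling in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()    # empirical accuracy in this bin
            conf = confidences[mask].mean()  # mean confidence in this bin
            ece += mask.mean() * abs(acc - conf)
    return ece

# An overconfident model (high confidence, lower accuracy) has a large ECE;
# a perfectly calibrated one has ECE near 0.
overconfident = expected_calibration_error([0.95, 0.95, 0.9, 0.9], [1, 0, 0, 0])
```

A well-calibrated detector is one whose confidence scores track empirical accuracy, which is exactly what this metric penalizes deviations from.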


Cross-Domain Transferability of Adversarial Perturbations

Neural Information Processing Systems

Adversarial examples reveal the blind spots of deep neural networks (DNNs) and represent a major concern for security-critical applications. The transferability of adversarial examples makes real-world attacks possible in black-box settings, where the attacker is forbidden to access the internal parameters of the model. The underlying assumption in most adversary generation methods, whether learning an instance-specific or an instance-agnostic perturbation, is the direct or indirect reliance on the original domain-specific data distribution. In this work, for the first time, we demonstrate the existence of domain-invariant adversaries, thereby showing common adversarial space among different datasets and models. To this end, we propose a framework capable of launching highly transferable attacks that crafts adversarial patterns to mislead networks trained on wholly different domains. For instance, an adversarial function learned on Paintings, Cartoons or Medical images can successfully perturb ImageNet samples to fool the classifier, with success rates as high as $\sim$99\% ($\ell_{\infty} \le 10$). The core of our proposed adversarial function is a generative network that is trained using a relativistic supervisory signal that enables domain-invariant perturbations. Our approach sets the new state-of-the-art for fooling rates, both under the white-box and black-box scenarios. Furthermore, despite being an instance-agnostic perturbation function, our attack outperforms the conventionally much stronger instance-specific attack methods.
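The abstract bounds its perturbations by $\ell_{\infty} \le 10$. The paper's generative attack itself is not reproduced here; the sketch below only shows the standard projection step that such a constraint implies: whatever the perturbation generator emits is clipped back into the $\ell_\infty$ ball around the clean image and into the valid pixel range.

```python
import numpy as np

def project_linf(x_adv, x_clean, eps=10.0, lo=0.0, hi=255.0):
    """Project an adversarial image back into the l_inf ball of radius eps
    around the clean image, then into the valid pixel range [lo, hi]."""
    delta = np.clip(x_adv - x_clean, -eps, eps)   # bound per-pixel change
    return np.clip(x_clean + delta, lo, hi)       # keep pixels valid

x_clean = np.array([100.0, 200.0, 30.0])
x_raw = np.array([150.0, 195.0, 0.0])  # unconstrained generator output
x_adv = project_linf(x_raw, x_clean)   # every coordinate within 10 of x_clean
```

Because the budget caps the change to each pixel at 10 intensity levels, the resulting perturbation stays visually subtle while still being able to mislead the classifier.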


Random Path Selection for Continual Learning

Neural Information Processing Systems

Incremental life-long learning is a main challenge towards the long-standing goal of Artificial General Intelligence. In real-life settings, learning tasks arrive in a sequence and machine learning models must continually learn to increment already acquired knowledge. The existing incremental learning approaches fall well below the state-of-the-art cumulative models that use all training classes at once. In this paper, we propose a random path selection algorithm, called RPS-Net, that progressively chooses optimal paths for the new tasks while encouraging parameter sharing and reuse. Our approach avoids the overhead introduced by computationally expensive evolutionary and reinforcement learning based path selection strategies while achieving considerable performance gains. As an added novelty, the proposed model integrates knowledge distillation and retrospection along with the path selection strategy to overcome catastrophic forgetting. In order to maintain an equilibrium between previous and newly acquired knowledge, we propose a simple controller to dynamically balance the model plasticity. Through extensive experiments, we demonstrate that the proposed method surpasses the state-of-the-art performance on incremental learning and by utilizing parallel computation this method can run in constant time with nearly the same efficiency as a conventional deep convolutional neural network.
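The core idea of random path selection can be sketched compactly. This toy version (the layer/module counts and single-draw policy are illustrative, not RPS-Net's actual configuration) just shows the mechanic: for each new task, sample a small subset of modules per layer to train, leaving the rest frozen so earlier knowledge is shared and reused.

```python
import random

def select_path(n_modules_per_layer, n_layers, k=1, rng=random):
    """Randomly choose k modules per layer to form a path for a new task.
    Unchosen modules stay frozen, so previously learned parameters are
    shared and reused rather than overwritten."""
    return [rng.sample(range(n_modules_per_layer), k) for _ in range(n_layers)]

# e.g. two of four modules active in each of three layers
path = select_path(n_modules_per_layer=4, n_layers=3, k=2, rng=random.Random(0))
```

Compared with evolutionary or reinforcement-learning path search, a random draw costs essentially nothing per task, which is the overhead saving the abstract refers to.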


Intriguing Properties of Vision Transformers

Neural Information Processing Systems

Vision transformers (ViT) have demonstrated impressive performance across numerous machine vision tasks. These models are based on multi-head self-attention mechanisms that can flexibly attend to a sequence of image patches to encode contextual cues. An important question is how such flexibility (in attending image-wide context conditioned on a given patch) can facilitate handling nuisances in natural images e.g., severe occlusions, domain shifts, spatial permutations, adversarial and natural perturbations. We systematically study this question via an extensive set of experiments encompassing three ViT families and provide comparisons with a high-performing convolutional neural network (CNN). We show and analyze the following intriguing properties of ViT: (a) Transformers are highly robust to severe occlusions, perturbations and domain shifts, e.g., retain as high as 60% top-1 accuracy on ImageNet even after randomly occluding 80% of the image content.
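The "randomly occluding 80% of the image content" protocol maps naturally onto the ViT patch grid. A minimal sketch of such a patch-dropping test (the paper's exact occlusion variants may differ) is:

```python
import numpy as np

def occlude_patches(img, patch=16, drop_frac=0.8, rng=None):
    """Zero out drop_frac of the non-overlapping patch x patch squares in an
    (H, W, C) image, mimicking a severe random-occlusion robustness test."""
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    out = img.copy()
    coords = [(i, j) for i in range(0, h, patch) for j in range(0, w, patch)]
    n_drop = int(len(coords) * drop_frac)
    for idx in rng.permutation(len(coords))[:n_drop]:
        i, j = coords[idx]
        out[i:i + patch, j:j + patch] = 0   # black out this patch
    return out

img = np.ones((64, 64, 3))
occluded = occlude_patches(img, patch=16, drop_frac=0.8,
                           rng=np.random.default_rng(0))
```

Feeding images transformed this way to a pretrained model and measuring the drop in top-1 accuracy is what the 60%-retained figure summarizes.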


RS-CA-HSICT: A Residual and Spatial Channel Augmented CNN Transformer Framework for Monkeypox Detection

Iqbal, Rashid, Khan, Saddam Hussain

arXiv.org Artificial Intelligence

This work proposes a hybrid deep learning approach, namely Residual and Spatial Learning based Channel Augmented Integrated CNN-Transformer architecture, that leverages the strengths of CNN and Transformer towards enhanced Mpox detection. The proposed RS-CA-HSICT framework is composed of an HSICT block, a residual CNN module, a spatial CNN block, and a CA, which enhances the diverse feature space, detailed lesion information, and long-range dependencies. The new HSICT module first integrates an abstract representation of the stem CNN and customized ICT blocks for efficient multihead attention and structured CNN layers with homogeneous (H) and structural (S) operations. The customized ICT blocks learn global contextual interactions and local texture extraction. Additionally, H and S layers learn spatial homogeneity and fine structural details by reducing noise and modeling complex morphological variations. Moreover, inverse residual learning mitigates the vanishing-gradient problem, and stage-wise resolution reduction ensures scale invariance. Furthermore, the RS-CA-HSICT framework augments the learned HSICT channels with the TL-driven Residual and Spatial CNN maps for an enhanced multiscale feature space capturing global and localized structural cues, subtle texture, and contrast variations. These channels, prior to augmentation, are refined through the Channel-Fusion-and-Attention block, which preserves discriminative channels while suppressing redundant ones, thereby enabling efficient computation. Finally, the spatial attention mechanism refines pixel selection to detect subtle patterns and intra-class contrast variations in Mpox. Experimental results on both the Kaggle benchmark and a diverse Mpox dataset reported classification accuracy as high as 98.30% and an F1-score of 98.13%, which outperforms the existing CNNs and ViTs.
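The abstract does not specify the internals of its Channel-Fusion-and-Attention block, so the sketch below shows only the generic squeeze-and-excitation pattern that channel attention mechanisms of this kind typically follow: pool each channel to a descriptor, pass it through a small bottleneck, and rescale the channels by the resulting gates. All weights and shapes here are illustrative.

```python
import numpy as np

def channel_attention(feat, reduction=4, rng=None):
    """Squeeze-and-excitation-style channel attention (illustrative only):
    global-average-pool each channel, run a small bottleneck MLP, and
    rescale channels by the resulting sigmoid gates."""
    rng = rng or np.random.default_rng(0)
    c = feat.shape[-1]
    w1 = rng.standard_normal((c, c // reduction)) * 0.1   # squeeze weights
    w2 = rng.standard_normal((c // reduction, c)) * 0.1   # excite weights
    squeeze = feat.mean(axis=(0, 1))                  # (C,) channel descriptor
    hidden = np.maximum(squeeze @ w1, 0.0)            # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(hidden @ w2)))      # sigmoid gates in (0, 1)
    return feat * gates                               # reweighted channels

feat = np.ones((8, 8, 8))        # toy (H, W, C) feature map
out = channel_attention(feat)
```

The gating step is what lets a fusion block "preserve discriminative channels while suppressing redundant ones": informative channels receive gates near 1, redundant ones near 0.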


Focal Modulation and Bidirectional Feature Fusion Network for Medical Image Segmentation

Safdar, Moin, Iqbal, Shahzaib, Mehmood, Mehwish, Ghafoor, Mubeen, Khan, Tariq M., Razzak, Imran

arXiv.org Artificial Intelligence

Medical image segmentation is essential for clinical applications such as disease diagnosis, treatment planning, and disease development monitoring because it provides precise morphological and spatial information on anatomical structures that directly influence treatment decisions. Convolutional neural networks significantly impact image segmentation; however, since convolution operations are local, capturing global contextual information and long-range dependencies is still challenging. Their capacity to precisely segment structures with complicated borders and a variety of sizes is impacted by this restriction. Since transformers use self-attention methods to capture global context and long-range dependencies efficiently, integrating transformer-based architecture with CNNs is a feasible approach to overcoming these challenges. To address these challenges, we propose the Focal Modulation and Bidirectional Feature Fusion Network for Medical Image Segmentation, referred to as FM-BFF-Net in the remainder of this paper. The network combines convolutional and transformer components, employs a focal modulation attention mechanism to refine context awareness, and introduces a bidirectional feature fusion module that enables efficient interaction between encoder and decoder representations across scales. Through this design, FM-BFF-Net enhances boundary precision and robustness to variations in lesion size, shape, and contrast. Extensive experiments on eight publicly available datasets, including polyp detection, skin lesion segmentation, and ultrasound imaging, show that FM-BFF-Net consistently surpasses recent state-of-the-art methods in Jaccard index and Dice coefficient, confirming its effectiveness and adaptability for diverse medical imaging scenarios.
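The two evaluation metrics the abstract reports, the Jaccard index and the Dice coefficient, have simple closed forms for binary masks and can be computed as follows:

```python
import numpy as np

def dice_and_jaccard(pred, target, eps=1e-7):
    """Dice coefficient and Jaccard index for binary segmentation masks.
    Dice = 2|A∩B| / (|A|+|B|); Jaccard = |A∩B| / |A∪B|."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum() + eps)
    jaccard = inter / (union + eps)
    return dice, jaccard

# Half-overlapping masks: Dice = 0.5, Jaccard = 1/3
d, j = dice_and_jaccard([1, 1, 0, 0], [1, 0, 1, 0])
```

Both metrics reward boundary-accurate overlap, which is why gains on them support the paper's claim of improved boundary precision.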


Predicting life satisfaction using machine learning and explainable AI

Khan, Alif Elham, Hasan, Mohammad Junayed, Anjum, Humayra, Mohammed, Nabeel, Momen, Sifat

arXiv.org Artificial Intelligence

Life satisfaction is a crucial facet of human well-being. Hence, research on life satisfaction is essential for understanding how individuals experience their lives and for informing interventions targeted at enhancing mental health and well-being. Life satisfaction has traditionally been measured using analog, complicated, and frequently error-prone methods. These methods raise questions concerning validation and propagation. However, this study demonstrates the potential for machine learning algorithms to predict life satisfaction with a high accuracy of 93.80% and a 73.00% macro F1-score. The dataset comes from a government survey of 19000 people aged 16-64 years in Denmark. Using feature learning techniques, 27 significant questions for assessing contentment were extracted, making the study highly reproducible, simple, and easily interpretable. Furthermore, clinical and biomedical large language models (LLMs) were explored for predicting life satisfaction by converting tabular data into natural language sentences through mapping and adding meaningful counterparts, achieving an accuracy of 93.74% and macro F1-score of 73.21%. It was found that life satisfaction prediction is more closely related to the biomedical domain than the clinical domain. Ablation studies were also conducted to understand the impact of data resampling and feature selection techniques on model performance. Moreover, the correlation between primary determinants with different age brackets was analyzed, and it was found that health condition is the most important determinant across all ages. This study demonstrates how machine learning, large language models and XAI can jointly contribute to building trust and understanding in using AI to investigate human behavior, with significant ramifications for academics and professionals working to quantify and comprehend subjective well-being.
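The step of "converting tabular data into natural language sentences through mapping" can be sketched in a few lines. The field names and phrasings below are hypothetical, since the survey's actual questions are not given in the abstract; the point is only the mechanic of a field-to-template mapping.

```python
def row_to_sentence(row, mapping):
    """Turn one tabular survey row into a natural-language sentence that a
    language model can consume. `mapping` pairs each field with a sentence
    template; fields absent from the row are simply skipped."""
    parts = [template.format(value=row[field])
             for field, template in mapping.items() if field in row]
    return " ".join(parts)

# Hypothetical fields and templates, for illustration only
mapping = {
    "age": "The respondent is {value} years old.",
    "health": "They rate their health as {value}.",
}
sentence = row_to_sentence({"age": 42, "health": "good"}, mapping)
# "The respondent is 42 years old. They rate their health as good."
```

Serializing rows this way is what lets pretrained clinical and biomedical LLMs, which expect text rather than tables, be applied to survey data.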


The FTC Is Disappearing Blog Posts About AI Published During Lina Khan's Tenure

WIRED

The Federal Trade Commission removed several blog posts in recent months about open source and potential risks to consumers from the rapid spread of commercial AI tools. In late July 2024, Lina Khan, then the chair of the US Federal Trade Commission, gave a speech at an event hosted by the San Francisco startup accelerator Y Combinator in which she positioned herself as an advocate for open source artificial intelligence. The event took place as California lawmakers were considering a landmark bill called SB 1047 that would have imposed new testing and safety requirements on AI companies. Critics of the legislation, which was later vetoed by California governor Gavin Newsom, argued it would hamper the development and release of open source AI models.