
Collaborating Authors

Khan


AI needs a strong data fabric to deliver business value

MIT Technology Review

A modern data fabric makes it possible to turn existing enterprise knowledge into a trusted foundation for AI. Artificial intelligence is moving quickly in the enterprise, from experimentation to everyday use. Organizations are deploying copilots, agents, and predictive systems across finance, supply chains, human resources, and customer operations. By the end of 2025, half of companies used AI in at least three business functions, according to a recent survey. But as AI becomes embedded in core workflows, business leaders are discovering that the biggest obstacle is not model performance or computing power but the quality and the context of the data on which those systems rely. AI essentially introduces a new requirement: Systems must not only access data -- they must understand the business context behind it.


Building a strong data infrastructure for AI agent success

MIT Technology Review

As companies race to adopt agentic AI to spur innovation and gain efficiency, building the right enterprise data infrastructure has become a critical component of success. Enterprises are moving faster than ever to deploy agentic AI as copilots, assistants, and autonomous task-runners. In late 2025, nearly two-thirds of companies were experimenting with AI agents, while 88% were using AI in at least one business function, up from 78% in 2024, according to McKinsey's annual AI report. Yet, while early pilots often succeed, only one in 10 companies has actually scaled its AI agents. One major issue: AI agents are only as effective as the data foundation supporting them. Experts argue that most companies are seeing delays in implementing AI not because of shortcomings in the models, but because they lack data architectures that deliver the business context humans and agents need to use data reliably.


NAD Supplement 101: Possible Benefits and Precautions Explained (2026)

WIRED

What is NAD+? Here's how it works in your body, why it matters, and whether supplementation is worth the hype. It's more than likely that the NAD+ supplement craze has already crossed your path. The Biebers have infused it. Joe Rogan has podcasted about it. Gwyneth Paltrow swears by it and, of course, sells her own Youth-Boost NAD+ Peptide Rich Cream. NAD+ (short for nicotinamide adenine dinucleotide) is a coenzyme that your body makes naturally; it contributes to energy production and immune function, among other things. Its popularity reflects a broader shift in how people think about healthy aging and extending their healthspan overall.


I'm a neurologist... here are three simple tricks to help you kick any bad habit

Daily Mail - Science & tech

Some bad habits are small. But over time, they add up, and suddenly you're wondering how you ended up here. Now, a neurologist says three simple tricks can help break the cycles that quietly take over our lives.
Dr Arif Khan, a pediatric neurologist, outlined 'cue shift,' the 'one-step rule' and 'reward rewrite' as practical tools to stop negative patterns in their tracks.


AI is hitting UK harder than other big economies, study finds

The Guardian

British businesses reported an average 11.5% increase in productivity thanks to AI, the study found. The UK is losing more jobs than it is creating because of artificial intelligence and is being hit harder than rival large economies, new research suggests. British companies reported that AI had resulted in net job losses of 8% over the past 12 months, the highest rate among the leading economies surveyed, including the US, Japan, Germany and Australia, according to a study by the investment bank Morgan Stanley. The research, which was shared with Bloomberg, surveyed companies using AI for at least a year across five industries: consumer staples and retail, real estate, transport, healthcare equipment and cars.


Cal-DETR: Calibrated Detection Transformer

Neural Information Processing Systems

Despite impressive predictive performance on several computer vision tasks, deep neural networks (DNNs) are prone to making overconfident predictions. This limits their adoption and wider utilization in many safety-critical applications. There have been recent efforts toward calibrating DNNs; however, almost all of them focus on the classification task. Surprisingly, very little attention has been devoted to calibrating modern DNN-based object detectors, especially detection transformers, which have recently demonstrated promising detection performance and are influential in many decision-making systems. In this work, we address the problem by proposing a mechanism for calibrated detection transformers (Cal-DETR), particularly for Deformable-DETR, UP-DETR, and DINO.
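The miscalibration this abstract targets is commonly quantified with the expected calibration error (ECE), a binned gap between confidence and accuracy. The sketch below is a generic illustration of that metric, not the paper's Cal-DETR mechanism; the function name and binning scheme are assumptions for illustration.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: weighted gap between mean confidence and accuracy per bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap  # weight by the fraction of samples in the bin
    return ece

# Toy case: each occupied bin is off by 0.05, so the weighted sum is 0.05.
conf = np.array([0.95, 0.95, 0.55, 0.55])
hit = np.array([1, 1, 1, 0])
print(round(expected_calibration_error(conf, hit), 3))  # → 0.05
```

A perfectly calibrated model would show zero gap in every bin; overconfident detectors inflate the first term of each gap.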


Cross-Domain Transferability of Adversarial Perturbations

Neural Information Processing Systems

Adversarial examples reveal the blind spots of deep neural networks (DNNs) and represent a major concern for security-critical applications. The transferability of adversarial examples makes real-world attacks possible in black-box settings, where the attacker is forbidden to access the internal parameters of the model. The underlying assumption in most adversary generation methods, whether learning an instance-specific or an instance-agnostic perturbation, is the direct or indirect reliance on the original domain-specific data distribution. In this work, for the first time, we demonstrate the existence of domain-invariant adversaries, thereby showing a common adversarial space among different datasets and models. To this end, we propose a framework capable of launching highly transferable attacks, crafting adversarial patterns that mislead networks trained on wholly different domains. For instance, an adversarial function learned on Paintings, Cartoons or Medical images can successfully perturb ImageNet samples to fool the classifier, with success rates as high as ~99% (under an ℓ∞ perturbation budget of 10). The core of our proposed adversarial function is a generative network that is trained using a relativistic supervisory signal that enables domain-invariant perturbations. Our approach sets the new state-of-the-art for fooling rates, both under the white-box and black-box scenarios. Furthermore, despite being an instance-agnostic perturbation function, our attack outperforms the conventionally much stronger instance-specific attack methods.


Random Path Selection for Continual Learning

Neural Information Processing Systems

Incremental, lifelong learning is a central challenge on the path to the long-standing goal of Artificial General Intelligence. In real-life settings, learning tasks arrive in a sequence and machine learning models must continually learn to increment already acquired knowledge. The existing incremental learning approaches fall well below the state-of-the-art cumulative models that use all training classes at once. In this paper, we propose a random path selection algorithm, called RPS-Net, that progressively chooses optimal paths for the new tasks while encouraging parameter sharing and reuse. Our approach avoids the overhead introduced by computationally expensive evolutionary and reinforcement learning based path selection strategies while achieving considerable performance gains. As an added novelty, the proposed model integrates knowledge distillation and retrospection along with the path selection strategy to overcome catastrophic forgetting. In order to maintain an equilibrium between previous and newly acquired knowledge, we propose a simple controller to dynamically balance model plasticity. Through extensive experiments, we demonstrate that the proposed method surpasses state-of-the-art performance on incremental learning and, by exploiting parallel computation, runs in constant time with nearly the same efficiency as a conventional deep convolutional neural network.
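The knowledge-distillation component mentioned above is typically a temperature-scaled KL divergence between the old model's (teacher's) soft outputs and the new model's (student's) outputs. This is a generic Hinton-style sketch under that assumption, not RPS-Net's exact formulation:

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student outputs,
    scaled by T^2 so gradients keep a comparable magnitude across temperatures."""
    p = softmax(teacher_logits, T)  # soft targets from the old (teacher) model
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q)))) * T * T

# Identical logits give zero loss; diverging logits are penalized.
print(distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0]))  # → 0.0
```

In an incremental setting, this term is added to the classification loss for new classes, anchoring the network's behavior on previously learned ones.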


Intriguing Properties of Vision Transformers

Neural Information Processing Systems

Vision transformers (ViT) have demonstrated impressive performance across numerous machine vision tasks. These models are based on multi-head self-attention mechanisms that can flexibly attend to a sequence of image patches to encode contextual cues. An important question is how such flexibility (in attending to image-wide context conditioned on a given patch) can facilitate handling nuisances in natural images, e.g., severe occlusions, domain shifts, spatial permutations, adversarial and natural perturbations. We systematically study this question via an extensive set of experiments encompassing three ViT families and provide comparisons with a high-performing convolutional neural network (CNN). We show and analyze the following intriguing properties of ViT: (a) Transformers are highly robust to severe occlusions, perturbations and domain shifts, e.g., retaining as much as 60% top-1 accuracy on ImageNet even after 80% of the image content is randomly occluded.
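The occlusion experiment described, randomly dropping 80% of the image content, amounts to zeroing a random subset of the non-overlapping patches a ViT consumes. A minimal sketch, assuming 16x16 patches on a 224x224 input (the function name and masking value are illustrative, not the paper's code):

```python
import numpy as np

def occlude_patches(image, patch=16, drop_ratio=0.8, seed=0):
    """Zero out a random subset of non-overlapping square patches."""
    h, w = image.shape[:2]
    rows, cols = h // patch, w // patch
    n = rows * cols
    rng = np.random.default_rng(seed)
    dropped = rng.choice(n, size=int(n * drop_ratio), replace=False)
    out = image.copy()
    for idx in dropped:
        r, c = divmod(int(idx), cols)
        out[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0
    return out

img = np.ones((224, 224), dtype=np.float32)
occ = occlude_patches(img)
# 156 of the 14x14 = 196 patches are zeroed, so ~20% of pixel mass survives.
print(round(float(occ.mean()), 2))  # → 0.2
```

Feeding such masked images to a classifier and measuring top-1 accuracy is the robustness probe the abstract refers to.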


Focal Modulation and Bidirectional Feature Fusion Network for Medical Image Segmentation

Safdar, Moin, Iqbal, Shahzaib, Mehmood, Mehwish, Ghafoor, Mubeen, Khan, Tariq M., Razzak, Imran

arXiv.org Artificial Intelligence

Medical image segmentation is essential for clinical applications such as disease diagnosis, treatment planning, and disease progression monitoring because it provides precise morphological and spatial information on anatomical structures that directly influence treatment decisions. Convolutional neural networks have significantly advanced image segmentation; however, because convolution operations are local, they still struggle to capture global contextual information and long-range dependencies. This restriction limits their capacity to precisely segment structures with complicated borders and widely varying sizes. Since transformers use self-attention to capture global context and long-range dependencies efficiently, integrating transformer-based architectures with CNNs is a feasible way forward. To address these challenges, we propose the Focal Modulation and Bidirectional Feature Fusion Network for Medical Image Segmentation, referred to as FM-BFF-Net in the remainder of this paper. The network combines convolutional and transformer components, employs a focal modulation attention mechanism to refine context awareness, and introduces a bidirectional feature fusion module that enables efficient interaction between encoder and decoder representations across scales. Through this design, FM-BFF-Net enhances boundary precision and robustness to variations in lesion size, shape, and contrast. Extensive experiments on eight publicly available datasets, including polyp detection, skin lesion segmentation, and ultrasound imaging, show that FM-BFF-Net consistently surpasses recent state-of-the-art methods in Jaccard index and Dice coefficient, confirming its effectiveness and adaptability for diverse medical imaging scenarios.
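The Jaccard index and Dice coefficient used for evaluation here are standard overlap metrics between a predicted binary mask and the ground truth. A minimal sketch of both (the epsilon smoothing term is a common convention, assumed here rather than taken from the paper):

```python
import numpy as np

def dice_jaccard(pred, target, eps=1e-7):
    """Dice = 2|A∩B| / (|A|+|B|); Jaccard (IoU) = |A∩B| / |A∪B| for binary masks.
    eps avoids division by zero when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    jacc = (inter + eps) / (np.logical_or(pred, target).sum() + eps)
    return float(dice), float(jacc)

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt = np.array([[1, 0, 0], [0, 1, 1]])
d, j = dice_jaccard(pred, gt)
print(round(d, 3), round(j, 3))  # → 0.667 0.5
```

Dice is always at least as large as Jaccard on the same masks (D = 2J / (1 + J)), which is why papers commonly report both.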