Pruning neural network models for gene regulatory dynamics using data and domain knowledge
It is common to assess a model's merit for scientific discovery, and thus its potential for novel insight, by how well it aligns with already available domain knowledge, a dimension that is currently largely disregarded in the comparison of neural network models. While pruning can simplify deep neural network architectures and excels at identifying sparse models, state-of-the-art techniques struggle with biologically meaningful structure learning, as we show in the context of gene regulatory network inference. To address this issue, we propose DASH, a generalizable framework that guides network pruning with domain-specific structural information during model fitting, yielding sparser, more interpretable models that are more robust to noise. Using both synthetic data with ground-truth structure and real-world gene expression data, we show that DASH, exploiting knowledge about gene interaction partners within the putative regulatory network, outperforms general pruning methods by a large margin and yields deeper insights into the biological systems under study.
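The abstract does not reproduce the paper's exact objective. As an illustration only, domain-guided pruning can be sketched as scoring each weight by a blend of its learned magnitude and a prior edge-confidence matrix; the function name `dash_style_prune` and the trade-off parameter `alpha` below are hypothetical:

```python
import numpy as np

def dash_style_prune(weights, prior, sparsity, alpha=0.5):
    """Keep the weights scoring highest under a blend of learned magnitude
    and domain-knowledge support; zero out the rest.

    weights:  (n_out, n_in) learned weight matrix
    prior:    (n_out, n_in) values in [0, 1], e.g. confidence that a
              regulator-target edge exists in a putative gene network
    sparsity: fraction of weights to prune
    alpha:    trade-off between data (magnitude) and domain knowledge
    """
    magnitude = np.abs(weights) / (np.abs(weights).max() + 1e-12)
    score = alpha * magnitude + (1 - alpha) * prior
    threshold = np.quantile(score, sparsity)
    mask = score > threshold          # survivors: top (1 - sparsity) scores
    return weights * mask, mask

# Toy usage: prune 75% of a random 4x4 layer under a random prior.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
prior = rng.uniform(size=(4, 4))
pruned, mask = dash_style_prune(w, prior, sparsity=0.75)
```

With `alpha=1.0` this reduces to plain magnitude pruning; lowering `alpha` shifts the surviving edges toward those supported by the prior.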
DASH: Warm-Starting Neural Network Training in Stationary Settings without Loss of Plasticity
Warm-starting neural network training by initializing networks with previously learned weights is appealing, as practical neural networks are often deployed under a continuous influx of new data. However, it often leads to loss of plasticity, where the network loses its ability to learn new information, resulting in worse generalization than training from scratch. This occurs even under stationary data distributions, and its underlying mechanism is poorly understood. We develop a framework emulating real-world neural network training and identify noise memorization as the primary cause of plasticity loss when warm-starting on stationary data.
Efficient Architecture Search for Diverse Tasks
While neural architecture search (NAS) has enabled automated machine learning (AutoML) for well-researched areas, its application to tasks beyond computer vision is still under-explored. As less-studied domains are precisely those where we expect AutoML to have the greatest impact, in this work we study NAS for efficiently solving diverse problems. Seeking an approach that is fast, simple, and broadly applicable, we fix a standard convolutional network (CNN) topology and propose to search for the right kernel sizes and dilations its operations should take on. This dramatically expands the model's capacity to extract features at multiple resolutions for different types of data while only requiring search over the operation space. To overcome the efficiency challenges of naive weight-sharing in this search space, we introduce DASH, a differentiable NAS algorithm that computes the mixture-of-operations using the Fourier diagonalization of convolution, achieving both a better asymptotic complexity and an up-to-10x search time speedup in practice. We evaluate DASH on ten tasks spanning a variety of application domains such as PDE solving, protein folding, and heart disease detection. DASH outperforms state-of-the-art AutoML methods in aggregate, attaining the best-known automated performance on seven tasks. Meanwhile, on six of the ten tasks, the combined search and retraining time is less than 2x slower than simply training a CNN backbone that is far less accurate.
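The abstract only names the key idea: convolution diagonalizes in the Fourier basis, so by linearity a weighted mixture of K kernels costs one FFT/iFFT pair instead of K separate convolutions. A minimal 1-D circular-convolution sketch of that trick (the function name is illustrative, not from the paper):

```python
import numpy as np

def mixed_conv_fourier(x, kernels, weights):
    """Weighted mixture of circular convolutions, sum_k w_k * (x conv h_k),
    computed with a single FFT/iFFT pair: the kernels are aggregated in the
    frequency domain before one multiply-and-invert."""
    n = len(x)
    X = np.fft.rfft(x)
    H = np.zeros_like(X)
    for h, w in zip(kernels, weights):
        h_pad = np.zeros(n)
        h_pad[:len(h)] = h            # zero-pad each kernel to length n
        H += w * np.fft.rfft(h_pad)   # aggregate: FFT(sum_k w_k h_k)
    return np.fft.irfft(X * H, n=n)

x = np.arange(8.0)
# A delta kernel is the identity; a shifted delta rotates the signal.
identity = mixed_conv_fourier(x, [np.array([1.0])], [1.0])
shifted = mixed_conv_fourier(x, [np.array([0.0, 1.0])], [1.0])
```

The cost is independent of the number of candidate kernel sizes and dilations, which is what makes weight-sharing over a large operation space tractable.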
DAASH: A Meta-Attack Framework for Synthesizing Effective and Stealthy Adversarial Examples
Abdullah Al Nomaan Nafi, Habibur Rahaman, Zafaryab Haider, Tanzim Mahfuz, Fnu Suya, Swarup Bhunia, Prabuddha Chakraborty
Numerous techniques have been proposed for generating adversarial examples in white-box settings under strict Lp-norm constraints. However, such norm-bounded examples often fail to align well with human perception, and only recently have a few methods begun specifically exploring perceptually aligned adversarial examples. Moreover, it remains unclear whether insights from Lp-constrained attacks can be effectively leveraged to improve perceptual efficacy. In this paper, we introduce DAASH, a fully differentiable meta-attack framework that generates effective and perceptually aligned adversarial examples by strategically composing existing Lp-based attack methods. DAASH operates in a multi-stage fashion: at each stage, it aggregates candidate adversarial examples from multiple base attacks using learned, adaptive weights and propagates the result to the next stage. A novel meta-loss function guides this process by jointly minimizing misclassification loss and perceptual distortion, enabling the framework to dynamically modulate the contribution of each base attack throughout the stages. We evaluate DAASH on adversarially trained models across CIFAR-10, CIFAR-100, and ImageNet. Despite relying solely on Lp-constrained methods, DAASH significantly outperforms state-of-the-art perceptual attacks such as AdvAD, achieving higher attack success rates (e.g., a 20.63% improvement) and superior visual quality as measured by SSIM, LPIPS, and FID (improvements of approximately 11, 0.015, and 5.7, respectively). Furthermore, DAASH generalizes well to unseen defenses, making it a practical and strong baseline for evaluating robustness without requiring handcrafted adaptive attacks for each new defense.
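The abstract does not spell out the aggregation rule. One stage of such a meta-attack can be sketched as a softmax-weighted blend of candidate adversarial examples, scored by a meta-loss combining attack loss and a distortion proxy; every name here, including the MSE stand-in for LPIPS/SSIM, is an assumption for illustration:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def aggregate_stage(x_clean, candidates, weight_logits,
                    attack_loss=None, lambda_percep=0.1):
    """One illustrative meta-attack stage: blend candidate adversarial
    examples with learned softmax weights, then score the blend with a
    meta-loss trading off misclassification against distortion."""
    w = softmax(np.asarray(weight_logits, dtype=float))
    x_adv = sum(wk * xk for wk, xk in zip(w, candidates))
    distortion = np.mean((x_adv - x_clean) ** 2)   # proxy for LPIPS/SSIM
    loss = attack_loss(x_adv) if attack_loss else 0.0
    return x_adv, loss + lambda_percep * distortion

# Toy usage: two candidates that cancel under equal weights leave the
# clean input unchanged, so the distortion term is zero.
x_clean = np.zeros(4)
cands = [np.ones(4), -np.ones(4)]
x_adv, meta_loss = aggregate_stage(x_clean, cands, [0.0, 0.0])
```

In a full pipeline the stage output would feed the next stage, and `weight_logits` would be updated by differentiating the meta-loss.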
- North America > United States > Tennessee > Knox County > Knoxville (0.14)
- North America > United States > Maine > Penobscot County > Orono (0.14)
- North America > United States > Florida > Alachua County > Gainesville (0.14)
- North America > Canada > Ontario > Toronto (0.04)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.93)
- Information Technology > Sensing and Signal Processing > Image Processing (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- Europe > Germany > Saarland > Saarbrücken (0.04)
- Research Report > Experimental Study (1.00)
- Research Report > New Finding (0.92)
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Health & Medicine > Pharmaceuticals & Biotechnology (1.00)
OpenAI has fixed ChatGPT's infamous 'em dash' obsession (somewhat)
The em dash is seen by some as a dead giveaway of AI-generated text, mainly because ChatGPT loves to use it. OpenAI CEO Sam Altman shared in a social media post that the company has now fixed ChatGPT's overuse of the em dash, the extra-long hyphen commonly seen in AI-generated text. In the past, ChatGPT was overzealous in its use of the em dash, to the point where it would continue to include them even when users asked it not to. Now, with the fix, a user can instruct ChatGPT not to use em dashes and it will respect the instruction.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.63)
- Workflow (1.00)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Education (0.67)
- Information Technology (0.46)
- Energy (0.45)
ChatGPT Has Ruined the Em Dash
Candice Lim and Kate Lindsay get into the war between em dashes and artificial intelligence. Back in 2024, what started as a developer question became an all-out grammar war, with the use of em dashes becoming a possible indicator that something was written using ChatGPT. In the past week alone, several writers have published their defenses of the em dash and how we shouldn't let ChatGPT ruin our favorite keyboard shortcut. However, the em dash may be a symptom of a bigger issue: have our AI detection skills gotten worse? Or, are we all doomed to be tricked by a hyphen or two?