tron
Reviewer #3
We thank all the reviewers for their time. In what follows, reviewer comments are italicized and followed by our responses in blue. We thank the reviewer for the helpful references. Importantly, we note that the SVM GPU-speedup paper by Catanzaro et al. is for
Does that mean there is a trade-off between memory/computation and communication? It is probably not appropriate to just report the speedup, given that the comparison is based on different platforms.
'Tron: Ares' Wants to Gaslight You About the Future of AI
The latest film in the franchise seems not to have learned any lessons from sci-fi movies past--or from current reality. Ares, named after the Greek god of war, was built to be an AI super-soldier. Then he found out, started listening to Depeche Mode, and realized the tech bro who made him might be a hack.
- North America > United States > Ohio (0.05)
- North America > United States > California (0.05)
- North America > Canada > Ontario > Toronto (0.05)
- (2 more...)
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
- Government > Military > Army (0.36)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.74)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.66)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.50)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.32)
- North America > United States > Virginia (0.04)
- North America > United States > California > Yolo County > Davis (0.04)
- North America > Canada > Quebec > Capitale-Nationale Region > Québec (0.04)
- (4 more...)
VOLTRON: Detecting Unknown Malware Using Graph-Based Zero-Shot Learning
Akdeniz, M. Tahir; Yeşilkaya, Zeynep; Köse, İ. Enes; Ünal, İ. Ulaş; Şen, Sevil
The persistent threat of Android malware presents a serious challenge to the security of millions of users globally. While many machine learning-based methods have been developed to detect these threats, their reliance on large labeled datasets limits their effectiveness against emerging, previously unseen malware families, for which labeled data is scarce or nonexistent. To address this challenge, we introduce a novel zero-shot learning framework that combines Variational Graph Auto-Encoders (VGAE) with Siamese Neural Networks (SNN) to identify malware without needing prior examples of specific malware families. Our approach leverages graph-based representations of Android applications, enabling the model to detect subtle structural differences between benign and malicious software, even in the absence of labeled data for new threats. Experimental results show that our method outperforms the state-of-the-art MaMaDroid, especially in zero-day malware detection. Our model achieves 96.24% accuracy and 95.20% recall for unknown malware families, highlighting its robustness against evolving Android threats.
- Asia > Middle East > Republic of Türkiye > Ankara Province > Ankara (0.05)
- North America > United States (0.04)
- Europe > Germany > North Rhine-Westphalia > Cologne Region > Bonn (0.04)
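The zero-shot matching step described in the VOLTRON abstract, comparing the embedding of an unseen app against embeddings of known classes via a Siamese-style distance, can be sketched generically. Everything below (the prototype vectors, the threshold, the `classify_zero_shot` helper) is illustrative and not from the paper; VOLTRON's actual embeddings come from a VGAE trained on application graphs.

```python
import numpy as np

def siamese_distance(a, b):
    """Euclidean distance between two embedding vectors: the quantity a
    Siamese network is trained to make small for same-class pairs and
    large for different-class pairs."""
    return np.linalg.norm(a - b)

def classify_zero_shot(query, prototypes, threshold=1.0):
    """Label a query embedding with the nearest prototype's class, or
    'unknown' if nothing is closer than the threshold; the 'unknown'
    path is what lets the scheme flag unseen malware families."""
    best_label, best_dist = "unknown", np.inf
    for label, proto in prototypes.items():
        d = siamese_distance(query, proto)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist < threshold else "unknown"

# Illustrative prototype embeddings for two hypothetical classes.
prototypes = {
    "benign":  np.array([0.0, 0.0, 1.0]),
    "malware": np.array([1.0, 1.0, 0.0]),
}
```

In the paper's setting the threshold would be tuned on validation pairs rather than fixed at 1.0.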
From Proxies to Fields: Spatiotemporal Reconstruction of Global Radiation from Sparse Sensor Sequences
Kobayashi, Kazuma; Roy, Samrendra; Koric, Seid; Abueidda, Diab; Alam, Syed Bahauddin
Accurate reconstruction of latent environmental fields from sparse and indirect observations is a foundational challenge across scientific domains--from atmospheric science and geophysics to public health and aerospace safety. Traditional approaches rely on physics-based simulators or dense sensor networks, both constrained by high computational cost, latency, or limited spatial coverage. We present the Temporal Radiation Operator Network (TRON), a spatiotemporal neural operator architecture designed to infer continuous global scalar fields from sequences of sparse, non-uniform proxy measurements. Unlike recent forecasting models that operate on dense, gridded inputs to predict future states, TRON addresses a more ill-posed inverse problem: reconstructing the current global field from sparse, temporally evolving sensor sequences, without access to future observations or dense labels. Demonstrated on global cosmic radiation dose reconstruction, TRON is trained on 22 years of simulation data and generalizes across 65,341 spatial locations, 8,400 days, and sequence lengths from 7 to 90 days. It achieves sub-second inference with relative L2 errors below 0.1%, representing a >58,000X speedup over Monte Carlo-based estimators. Though evaluated in the context of cosmic radiation, TRON offers a domain-agnostic framework for scientific field reconstruction from sparse data, with applications in atmospheric modeling, geophysical hazard monitoring, and real-time environmental risk forecasting.
- North America > United States > Illinois > Champaign County > Urbana (0.14)
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- North America > Canada (0.04)
- (13 more...)
- Health & Medicine > Nuclear Medicine (1.00)
- Energy (1.00)
- Health & Medicine > Therapeutic Area > Oncology (0.48)
- Government > Regional Government > North America Government > United States Government (0.46)
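The sub-0.1% figure in the TRON abstract is presumably the standard relative L2 reconstruction error, which is compact enough to state directly. The snippet below is a generic illustration of that metric with made-up field values, not the paper's evaluation code.

```python
import numpy as np

def relative_l2_error(pred, true):
    """Relative L2 error: ||pred - true||_2 / ||true||_2.
    A value below 1e-3 corresponds to the "<0.1%" figure quoted
    for TRON's global field reconstructions."""
    return np.linalg.norm(pred - true) / np.linalg.norm(true)

# Toy example: a "true" 1-D field and a reconstruction with small noise.
true_field = np.linspace(1.0, 2.0, 1000)
pred_field = true_field + 1e-4 * np.sin(np.arange(1000))
err = relative_l2_error(pred_field, true_field)
```

Because the error is normalized by the field's own magnitude, it is comparable across locations and days with very different absolute dose levels.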
GPU-Accelerated Primal Learning for Extremely Fast Large-Scale Classification
One of the most efficient methods to solve L2-regularized primal problems, such as logistic regression and linear support vector machine (SVM) classification, is the widely used trust region Newton algorithm, TRON. While TRON has recently been shown to enjoy substantial speedups on shared-memory multi-core systems, exploiting graphics processing units (GPUs) to speed up the method is significantly more difficult, owing to the highly complex and heavily sequential nature of the algorithm. In this work, we show that using judicious GPU-optimization principles, TRON training time for different losses and feature representations may be drastically reduced. For sparse feature sets, we show that using GPUs to train logistic regression classifiers in LIBLINEAR is up to an order-of-magnitude faster than solely using multithreading. For dense feature sets--which impose far more stringent memory constraints--we show that GPUs substantially reduce the lengthy SVM learning times required for state-of-the-art proteomics analysis, leading to dramatic improvements over recently proposed speedups.
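The trust-region Newton algorithm (TRON) named in this abstract is organized around Hessian-vector products, and that structure can be shown in miniature. The sketch below is a simplified Newton solver with conjugate gradient for L2-regularized logistic regression in plain NumPy: it omits the trust-region radius and step-acceptance test, and the toy data and `newton_cg_logreg` name are illustrative rather than LIBLINEAR's implementation.

```python
import numpy as np

def newton_cg_logreg(X, y, C=1.0, iters=10, cg_iters=20):
    """Simplified Newton-CG solver for L2-regularized logistic regression.

    Objective: 0.5*||w||^2 + C * sum_i log(1 + exp(-y_i * x_i.w)).
    The Hessian I + C * X.T @ D @ X is never formed explicitly: each CG
    step needs only the Hessian-vector product v + C * X.T @ (D * (X @ v)).
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        z = y * (X @ w)
        sigma = 1.0 / (1.0 + np.exp(-z))        # sigmoid(y_i * x_i.w)
        grad = w + C * (X.T @ ((sigma - 1.0) * y))
        D = sigma * (1.0 - sigma)               # Hessian diagonal weights
        hessvec = lambda v: v + C * (X.T @ (D * (X @ v)))
        # Conjugate gradient on H s = -grad (truncated at cg_iters steps).
        s = np.zeros(d)
        r = -grad.copy()
        p = r.copy()
        rs = r @ r
        for _ in range(cg_iters):
            if rs < 1e-12:                      # residual small: converged
                break
            Hp = hessvec(p)
            alpha = rs / (p @ Hp)
            s += alpha * p
            r -= alpha * Hp
            rs_new = r @ r
            p = r + (rs_new / rs) * p
            rs = rs_new
        w += s                                  # full step (no trust region)
    return w

# Toy separable problem with a fixed seed (illustrative data only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]))
y[y == 0] = 1.0
w = newton_cg_logreg(X, y)
train_acc = float(np.mean(np.sign(X @ w) == y))
```

The `hessvec` closure, two matrix-vector products per CG step, is the dense-matrix kernel that dominates runtime and is the natural target for the GPU offloading the abstract describes.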
The next Tron game is an isometric action adventure due out in 2025
The next Tron game is a follow-up to Tron: Identity, but it's also something completely new. Where Tron: Identity was a visual novel, Tron: Catalyst is an isometric action game with a looping narrative, and it's coming to PC, PlayStation 5, Xbox Series X/S and Switch in 2025. Tron: Catalyst is in development at Bithell Games, the award-winning studio behind Tron: Identity, John Wick Hex and Thomas Was Alone. In Tron: Catalyst, players return to the Arq Grid, a virtual world that's evolved without human input, creating a siloed, Galapagos Islands type of space populated by sentient computer programs. The protagonist is Exo, a program who's able to relive segments of time by exploiting a system-level glitch that no one else can sense.