Could AI Data Centers Be Moved to Outer Space?
Massive data centers for generative AI are bad for the Earth. Data centers are being built at a frantic pace all over the world, driven by the AI boom. These facilities consume staggering amounts of electricity. By 2028, AI servers alone may use as much energy as 22 percent of US households.
- North America > United States > Louisiana (0.04)
- North America > United States > California (0.04)
- Europe > United Kingdom > Scotland (0.04)
- (4 more...)
- Information Technology > Services (1.00)
- Energy > Renewable > Solar (0.31)
- Information Technology > Cloud Computing (1.00)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Communications > Social Media (0.71)
Process Reward Models That Think
Khalifa, Muhammad, Agarwal, Rishabh, Logeswaran, Lajanugen, Kim, Jaekyeom, Peng, Hao, Lee, Moontae, Lee, Honglak, Wang, Lu
Step-by-step verifiers -- also known as process reward models (PRMs) -- are a key ingredient for test-time scaling. PRMs require step-level supervision, making them expensive to train. This work aims to build data-efficient PRMs as verbalized step-wise reward models that verify every step in the solution by generating a verification chain-of-thought (CoT). We propose ThinkPRM, a long CoT verifier fine-tuned on orders of magnitude fewer process labels than those required by discriminative PRMs. Our approach capitalizes on the inherent reasoning abilities of long CoT models, and outperforms LLM-as-a-Judge and discriminative verifiers -- using only 1% of the process labels in PRM800K -- across several challenging benchmarks. Specifically, ThinkPRM beats the baselines on ProcessBench, MATH-500, and AIME '24 under best-of-N selection and reward-guided search. In an out-of-domain evaluation on a subset of GPQA-Diamond and LiveCodeBench, our PRM surpasses discriminative verifiers trained on the full PRM800K by 8% and 4.5%, respectively. Lastly, under the same token budget, ThinkPRM scales up verification compute more effectively compared to LLM-as-a-Judge, outperforming it by 7.2% on a subset of ProcessBench. Our work highlights the value of generative, long CoT PRMs that can scale test-time compute for verification while requiring minimal supervision for training. Our code, data, and models are released at https://github.com/mukhal/thinkprm.
- North America > United States > Michigan (0.04)
- North America > United States > Illinois > Champaign County > Urbana (0.04)
- Asia > Thailand > Bangkok > Bangkok (0.04)
- (3 more...)
- Workflow (1.00)
- Research Report (0.82)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
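The selection procedure a verifier like ThinkPRM plugs into can be sketched in a few lines. This is a minimal illustration of best-of-N selection with a step-wise verifier; `verify_steps` is a hypothetical stand-in, not the paper's generative model:

```python
def verify_steps(solution_steps):
    """Hypothetical verifier: one correctness score per step in [0, 1].

    A real generative PRM would emit a verification chain-of-thought and
    extract a judgment per step; this stand-in just checks for a flag.
    """
    return [0.1 if "ERROR" in step else 0.9 for step in solution_steps]

def solution_score(step_scores):
    # A common PRM aggregation: multiply step scores, so that a single
    # bad step sinks the whole solution.
    score = 1.0
    for s in step_scores:
        score *= s
    return score

def best_of_n(candidates):
    """Return the candidate solution whose verified steps score highest."""
    return max(candidates, key=lambda steps: solution_score(verify_steps(steps)))

candidates = [
    ["compute 2+2=5", "ERROR: arithmetic slip", "answer: 5"],
    ["compute 2+2=4", "double-check by counting", "answer: 4"],
]
print(best_of_n(candidates)[-1])  # prints "answer: 4"
```

The same scoring hook also supports reward-guided search, where partial solutions are expanded or pruned by their running step scores.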
Tighter Truncated Rectangular Prism Approximation for RNN Robustness Verification
Lin, Xingqi, Chen, Liangyu, Wu, Min, Zhang, Min, Zeng, Zhenbing
Robustness verification is a promising technique for rigorously proving the robustness of Recurrent Neural Networks (RNNs). A key challenge is to over-approximate the nonlinear activation functions with linear constraints, which can transform the verification problem into an efficiently solvable linear programming problem. Existing methods over-approximate the nonlinear parts with linear bounding planes individually, which may cause significant over-estimation and lead to lower verification accuracy. In this paper, in order to tightly enclose the three-dimensional nonlinear surface generated by the Hadamard product, we propose a novel truncated rectangular prism formed by two linear relaxation planes and a refinement-driven method to minimize both its volume and surface area for tighter over-approximation. Based on this approximation, we implement a prototype, DeepPrism, for RNN robustness verification. The experimental results demonstrate that DeepPrism achieves significant improvement over state-of-the-art approaches in various tasks of image classification, speech recognition, and sentiment analysis.
- Research Report > New Finding (0.66)
- Research Report > Promising Solution (0.54)
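The Hadamard product in RNN updates is an element-wise bilinear term z = x * y. The classical McCormick envelope bounds it with four linear planes; the truncated prism described in the abstract is a tighter two-plane construction that is not reproduced here. A sketch of the classical baseline:

```python
def mccormick_bounds(x, xl, xu, y, yl, yu):
    """Linear lower/upper bounds on x*y for x in [xl, xu], y in [yl, yu].

    These are the standard McCormick relaxation planes, the kind of
    linear enclosure that LP-based verifiers feed to a solver.
    """
    lower = max(xl * y + x * yl - xl * yl,
                xu * y + x * yu - xu * yu)
    upper = min(xu * y + x * yl - xu * yl,
                xl * y + x * yu - xl * yu)
    return lower, upper

# Sanity check on a grid: the true product always lies inside the envelope.
xl, xu, yl, yu = -1.0, 1.0, 0.0, 2.0
for i in range(11):
    for j in range(11):
        x = xl + (xu - xl) * i / 10
        y = yl + (yu - yl) * j / 10
        lo, hi = mccormick_bounds(x, xl, xu, y, yl, yu)
        assert lo - 1e-9 <= x * y <= hi + 1e-9
print("McCormick envelope encloses x*y on the whole box")
```

The gap between `lower` and `upper` is the over-estimation that tighter enclosures, such as the paper's refined prism, aim to shrink.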
A Multimodal Deep Learning Approach for White Matter Shape Prediction in Diffusion MRI Tractography
Lo, Yui, Chen, Yuqian, Liu, Dongnan, Zekelman, Leo, Rushmore, Jarrett, Rathi, Yogesh, Makris, Nikos, Golby, Alexandra J., Zhang, Fan, Cai, Weidong, O'Donnell, Lauren J.
Shape measures have emerged as promising descriptors of white matter tractography, offering complementary insights into anatomical variability and associations with cognitive and clinical phenotypes. However, conventional methods for computing shape measures are computationally expensive and time-consuming for large-scale datasets due to reliance on voxel-based representations. We propose Tract2Shape, a novel multimodal deep learning framework that leverages geometric (point cloud) and scalar (tabular) features to predict ten white matter tractography shape measures. To enhance model efficiency, we utilize a dimensionality reduction algorithm for the model to predict five primary shape components. The model is trained and evaluated on two independently acquired datasets, the HCP-YA dataset, and the PPMI dataset. We evaluate the performance of Tract2Shape by training and testing it on the HCP-YA dataset and comparing the results with state-of-the-art models. To further assess its robustness and generalization ability, we also test Tract2Shape on the unseen PPMI dataset. Tract2Shape outperforms SOTA deep learning models across all ten shape measures, achieving the highest average Pearson's r and the lowest nMSE on the HCP-YA dataset. The ablation study shows that both multimodal input and PCA contribute to performance gains. On the unseen testing PPMI dataset, Tract2Shape maintains a high Pearson's r and low nMSE, demonstrating strong generalizability in cross-dataset evaluation. Tract2Shape enables fast, accurate, and generalizable prediction of white matter shape measures from tractography data, supporting scalable analysis across datasets. This framework lays a promising foundation for future large-scale white matter shape analysis.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- Oceania > Australia > New South Wales > Sydney (0.04)
- (2 more...)
- Research Report > New Finding (1.00)
- Research Report > Promising Solution (0.66)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
- Health & Medicine > Health Care Technology (0.93)
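The PCA step described above, regressing five principal components instead of ten raw measures and then reconstructing, can be sketched with NumPy. The data and the plain least-squares regressor are synthetic stand-ins, not the paper's multimodal deep model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tracts, n_feats, n_measures, n_components = 200, 16, 10, 5

# Synthetic stand-ins: input features and ten correlated "shape measures".
X = rng.normal(size=(n_tracts, n_feats))
W = rng.normal(size=(n_feats, n_measures))
Y = X @ W + 0.01 * rng.normal(size=(n_tracts, n_measures))

# PCA of the shape measures via SVD; keep five principal components.
mu = Y.mean(axis=0)
_, _, Vt = np.linalg.svd(Y - mu, full_matrices=False)
components = Vt[:n_components]          # (5, 10) loading matrix
Z = (Y - mu) @ components.T             # five component scores per tract

# Regress the five components (here: plain least squares), then
# reconstruct all ten measures from the predicted components.
B, *_ = np.linalg.lstsq(X, Z, rcond=None)
Y_hat = (X @ B) @ components + mu

r = np.corrcoef(Y.ravel(), Y_hat.ravel())[0, 1]
print(f"reconstruction Pearson r = {r:.3f}")
```

Predicting in the compressed component space reduces the number of regression targets while the loading matrix recovers the full set of measures, which is the efficiency gain the ablation attributes to PCA.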
Learning from B Cell Evolution: Adaptive Multi-Expert Diffusion for Antibody Design via Online Optimization
Feng, Hanqi, Qiu, Peng, Zhang, Mengchun, Tao, Yiran, Fan, You, Xu, Jingtao, Poczos, Barnabas
Recent advances in diffusion models have shown remarkable potential for antibody design, yet existing approaches apply uniform generation strategies that cannot adapt to each antigen's unique requirements. Inspired by B cell affinity maturation--where antibodies evolve through multi-objective optimization balancing affinity, stability, and self-avoidance--we propose the first biologically-motivated framework that leverages physics-based domain knowledge within an online meta-learning system. Our method employs multiple specialized experts (van der Waals, molecular recognition, energy balance, and interface geometry) whose parameters evolve during generation based on iterative feedback, mimicking natural antibody refinement cycles. Instead of fixed protocols, this adaptive guidance discovers personalized optimization strategies for each target. Our experiments demonstrate that this approach: (1) discovers optimal SE(3)-equivariant guidance strategies for different antigen classes without pre-training, preserving molecular symmetries throughout optimization; (2) significantly enhances hotspot coverage and interface quality through target-specific adaptation, achieving balanced multi-objective optimization characteristic of therapeutic antibodies; (3) establishes a paradigm for iterative refinement where each antibody-antigen system learns its unique optimization profile through online evaluation; (4) generalizes effectively across diverse design challenges, from small epitopes to large protein interfaces, enabling precision-focused campaigns for individual targets.
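The adaptive weighting idea, several expert objectives whose influence evolves with online feedback, can be sketched on a toy 1-D design variable. The experts and the multiplicative-weights update below are illustrative stand-ins for the paper's physics-based guidance terms:

```python
import math

def expert_scores(x):
    """Hypothetical experts scoring a 1-D design variable x (higher is better)."""
    return {
        "vdw": -(x - 1.0) ** 2,          # prefers x near 1
        "recognition": -(x - 3.0) ** 2,  # prefers x near 3
        "energy": -abs(x),               # prefers x near 0
    }

weights = {name: 1.0 for name in expert_scores(0.0)}
x, lr, eta = 5.0, 0.05, 0.005

for _ in range(200):
    def objective(v):
        s = expert_scores(v)
        return sum(weights[k] * s[k] for k in s) / sum(weights.values())

    # Ascend the weighted objective with a finite-difference gradient step.
    grad = (objective(x + 1e-4) - objective(x - 1e-4)) / 2e-4
    x += lr * grad

    # Online feedback: better-satisfied experts decay more slowly, so their
    # relative weight grows (a multiplicative-weights update).
    s = expert_scores(x)
    for k in weights:
        weights[k] *= math.exp(eta * s[k])

print(f"converged design x = {x:.2f}")
```

The design variable settles at a compromise between the experts, and the weights record which objectives this particular "target" can satisfy, mirroring the per-antigen optimization profiles described above.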
A Machine Learning Framework for Predicting Microphysical Properties of Ice Crystals from Cloud Particle Imagery
Ko, Joseph, Harrington, Jerry, Sulia, Kara, Przybylo, Vanessa, van Lier-Walqui, Marcus, Lamb, Kara
The microphysical properties of ice crystals are important because they significantly alter the radiative properties and spatiotemporal distributions of clouds, which in turn strongly affect Earth's climate. However, it is challenging to measure key properties of ice crystals, such as mass or morphological features. Here, we present a framework for predicting three-dimensional (3D) microphysical properties of ice crystals from in situ two-dimensional (2D) imagery. First, we computationally generate synthetic ice crystals using 3D modeling software along with geometric parameters estimated from the 2021 Ice Cryo-Encapsulation Balloon (ICEBall) field campaign. Then, we use synthetic crystals to train machine learning (ML) models to predict effective density ($\rho_e$), effective surface area ($A_e$), and number of bullets ($N_b$) from synthetic rosette imagery. When tested on unseen synthetic images, we find that our ML models can predict microphysical properties with high accuracy. For $\rho_e$ and $A_e$, respectively, our best-performing single view models achieved $R^2$ values of 0.99 and 0.98. For $N_b$, our best single view model achieved a balanced accuracy and F1 score of 0.91. We also quantify the marginal prediction improvements from incorporating a second view. A stereo view ResNet-18 model reduced RMSE by 40% for both $\rho_e$ and $A_e$, relative to a single view ResNet-18 model. For $N_b$, we find that a stereo view ResNet-18 model improved the F1 score by 8%. This work provides a novel ML-driven framework for estimating ice microphysical properties from in situ imagery, which will allow for downstream constraints on microphysical parameterizations, such as the mass-size relationship.
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Pennsylvania > Centre County > University Park (0.04)
- North America > United States > New York > Albany County > Albany (0.04)
- (3 more...)
- Energy (1.00)
- Government > Regional Government > North America Government > United States Government (0.67)
- Health & Medicine > Diagnostic Medicine > Imaging (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (0.49)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.46)
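A statistics toy illustrates why the second view helps: averaging two noisy, partly independent estimates reduces RMSE. This is a sketch of the principle only, not the paper's ResNet-18 models:

```python
import math
import random

random.seed(0)
true_density = 0.5   # hypothetical effective density of one crystal
n = 10_000

def rmse(preds):
    return math.sqrt(sum((p - true_density) ** 2 for p in preds) / len(preds))

single, stereo = [], []
for _ in range(n):
    v1 = true_density + random.gauss(0, 0.1)   # view-1 estimate
    v2 = true_density + random.gauss(0, 0.1)   # independent view-2 estimate
    single.append(v1)
    stereo.append((v1 + v2) / 2)

print(f"single-view RMSE = {rmse(single):.4f}")
print(f"stereo-view RMSE = {rmse(stereo):.4f}")  # roughly 1/sqrt(2) of single view
```

In practice the two views of a real crystal are correlated, so the observed 40% RMSE reduction is close to this independent-noise ceiling of about 29%... 41% depending on correlation; the toy shows only the uncorrelated limit.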
LengthLogD: A Length-Stratified Ensemble Framework for Enhanced Peptide Lipophilicity Prediction via Multi-Scale Feature Integration
Wu, Shuang, Wang, Meijie, Yu, Lun
Peptide compounds demonstrate considerable potential as therapeutic agents due to their high target affinity and low toxicity, yet their drug development is constrained by their low membrane permeability. Molecular weight and peptide length have significant effects on the logD of peptides, which in turn influences their ability to cross biological membranes. However, accurate prediction of peptide logD remains challenging due to the complex interplay between sequence, structure, and ionization states. This study introduces LengthLogD, a predictive framework that establishes specialized models through molecular length stratification while innovatively integrating multi-scale molecular representations. We constructed feature spaces across three hierarchical levels: atomic (10 molecular descriptors), structural (1024-bit Morgan fingerprints), and topological (3 graph-based features including Wiener index), optimized through stratified ensemble learning. An adaptive weight allocation mechanism specifically developed for long peptides significantly enhances model generalizability. Experimental results demonstrate superior performance across all categories: short peptides (R^2=0.855), medium peptides (R^2=0.816), and long peptides (R^2=0.882), with a 34.7% reduction in prediction error for long peptides compared to conventional single-model approaches. Ablation studies confirm: 1) The length-stratified strategy contributes 41.2% to performance improvement; 2) Topological features account for 28.5% of predictive importance. Compared to state-of-the-art models, our method maintains short peptide prediction accuracy while achieving a 25.7% increase in the coefficient of determination (R^2) for long peptides. This research provides a precise logD prediction tool for peptide drug development, particularly demonstrating unique value in optimizing long peptide lead compounds.
- Europe > United Kingdom (0.14)
- North America > United States > California > San Francisco County > San Francisco (0.04)
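The length-stratification idea, routing each peptide to a model specialized for its length band, can be sketched as a dispatcher. The band cutoffs and the per-band "models" here are trivial stand-ins; the paper's thresholds and trained ensembles are not given:

```python
def length_band(n_residues):
    # Hypothetical band cutoffs for illustration only.
    if n_residues <= 5:
        return "short"
    if n_residues <= 10:
        return "medium"
    return "long"

# Stand-in per-band predictors; in the paper each band has its own
# stratified ensemble over atomic, fingerprint, and graph features.
models = {
    "short":  lambda pep: 0.10 * len(pep) - 1.0,
    "medium": lambda pep: 0.08 * len(pep) - 0.8,
    "long":   lambda pep: 0.05 * len(pep) - 0.5,
}

def predict_logd(peptide):
    """Route the peptide to its length-specialized model."""
    return models[length_band(len(peptide))](peptide)

for pep in ["ACDEF", "ACDEFGHIKL", "ACDEFGHIKLMNPQRST"]:
    print(pep, length_band(len(pep)), round(predict_logd(pep), 2))
```

Stratifying lets each band's model fit the distinct sequence-length regime, which is where the reported 34.7% error reduction for long peptides comes from.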
From Low Field to High Value: Robust Cortical Mapping from Low-Field MRI
Gopinath, Karthik, Sorby-Adams, Annabel, Ramirez, Jonathan W., Zemlyanker, Dina, Guo, Jennifer, Hunt, David, Mac Donald, Christine L., Keene, C. Dirk, Coalson, Timothy, Glasser, Matthew F., Van Essen, David, Rosen, Matthew S., Puonti, Oula, Kimberly, W. Taylor, Iglesias, Juan Eugenio
Three-dimensional reconstruction of cortical surfaces from MRI for morphometric analysis is fundamental for understanding brain structure. While high-field MRI (HF-MRI) is standard in research and clinical settings, its limited availability hinders widespread use. Low-field MRI (LF-MRI), particularly portable systems, offers a cost-effective and accessible alternative. However, existing cortical surface analysis tools are optimized for high-resolution HF-MRI and struggle with the lower signal-to-noise ratio and resolution of LF-MRI. In this work, we present a machine learning method for 3D reconstruction and analysis of portable LF-MRI across a range of contrasts and resolutions. Our method works "out of the box" without retraining. It uses a 3D U-Net trained on synthetic LF-MRI to predict signed distance functions of cortical surfaces, followed by geometric processing to ensure topological accuracy. We evaluate our method using paired HF/LF-MRI scans of the same subjects, showing that LF-MRI surface reconstruction accuracy depends on acquisition parameters, including contrast type (T1 vs T2), orientation (axial vs isotropic), and resolution. A 3 mm isotropic T2-weighted scan acquired in under 4 minutes yields strong agreement with HF-derived surfaces: surface area correlates at r=0.96, cortical parcellations reach Dice=0.98, and gray matter volume achieves r=0.93. Cortical thickness remains more challenging, with correlations up to r=0.70, reflecting the difficulty of sub-mm precision with 3 mm voxels. We further validate our method on challenging postmortem LF-MRI, demonstrating its robustness. Our method represents a step toward enabling cortical surface analysis on portable LF-MRI. Code is available at https://surfer.nmr.mgh.harvard.edu/fswiki/ReconAny
- North America > United States > California (0.28)
- North America > United States > Massachusetts (0.04)
- North America > United States > Florida > Hillsborough County > University (0.04)
- (5 more...)
- Health & Medicine > Pharmaceuticals & Biotechnology (1.00)
- Health & Medicine > Health Care Technology (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
- Health & Medicine > Therapeutic Area > Neurology > Alzheimer's Disease (0.70)
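The abstract reports agreement via Dice overlap and Pearson r. A minimal Dice implementation for label maps, the kind of check used when comparing LF-MRI-derived parcellations against HF-MRI references, on toy data:

```python
def dice(labels_a, labels_b, label):
    """Dice coefficient for one label across two equally sized label maps."""
    a = [x == label for x in labels_a]
    b = [x == label for x in labels_b]
    intersection = sum(1 for x, y in zip(a, b) if x and y)
    size = sum(a) + sum(b)
    return 2.0 * intersection / size if size else 1.0

# Two toy parcellations of a 10-voxel strip (labels 0, 1, 2).
ref  = [1, 1, 1, 2, 2, 2, 2, 0, 0, 0]
pred = [1, 1, 2, 2, 2, 2, 2, 2, 0, 0]
print(round(dice(ref, pred, 2), 3))  # prints 0.8
```

A reported Dice of 0.98 means the predicted and reference parcels overlap almost perfectly by this measure; real pipelines compute it per parcel over full 3D volumes.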
Rolling Horizon Coverage Control with Collaborative Autonomous Agents
Papaioannou, Savvas, Kolios, Panayiotis, Theocharides, Theocharis, Panayiotou, Christos G., Polycarpou, Marios M.
This work proposes a coverage controller that enables an aerial team of distributed autonomous agents to collaboratively generate non-myopic coverage plans over a rolling finite horizon, aiming to cover specific points on the surface area of a 3D object of interest. The collaborative coverage problem, formulated as a distributed model predictive control problem, optimizes the agents' motion and camera control inputs while considering inter-agent constraints aimed at reducing work redundancy. The proposed coverage controller integrates constraints based on light-path propagation techniques to predict the parts of the object's surface that are visible with regard to the agents' future anticipated states. This work also demonstrates how complex, non-linear visibility assessment constraints can be converted into logical expressions that are embedded as binary constraints into a mixed-integer optimization framework. The proposed approach has been demonstrated through simulations and practical applications for inspecting buildings with unmanned aerial vehicles (UAVs).
Introduction: The interest in swarm systems, such as systems utilizing multiple autonomous unmanned aerial vehicles (UAVs), has skyrocketed over the last few decades. Rapid advancements in robotics, automation, and artificial intelligence, coupled with the decreasing costs of electronic components, have fuelled a remarkable surge of interest in the technologies and applications of swarming systems. This work addresses the challenge of coverage planning and control using multiple collaborative intelligent autonomous agents, specifically autonomous UAVs. Coverage planning [1] is crucial in several application domains, including search and rescue operations and critical infrastructure inspections. It is one of the essential functionalities that can notably enhance the autonomy of existing swarming systems, enabling them to execute fully automated missions in the aforementioned scenarios. In coverage planning, our objective is to design trajectories that allow a team of autonomous mobile agents to comprehensively cover a designated area or points of interest. Concurrently, we aim to optimize a specific mission goal, such as minimizing the mission's duration and the agents' energy consumption. This work introduces a coverage control framework that simultaneously optimizes both the kinematic and camera control inputs of multiple UAV agents.
- Asia > Singapore (0.04)
- South America > Brazil > Maranhão (0.04)
- Europe > Middle East > Cyprus > Nicosia > Nicosia (0.04)
- Energy (1.00)
- Information Technology > Robotics & Automation (0.68)
- Aerospace & Defense > Aircraft (0.54)
- Government > Military (0.48)
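A toy rolling-horizon flavor of the coverage problem: at each step every agent greedily picks the candidate viewpoint covering the most still-uncovered surface points. The real controller solves a mixed-integer program with visibility and inter-agent constraints; this greedy loop and its visibility map are only an illustration:

```python
# Hypothetical visibility map: viewpoint -> set of surface point ids it sees.
visibility = {
    "v1": {0, 1, 2},
    "v2": {2, 3, 4},
    "v3": {4, 5, 6},
    "v4": {6, 7, 0},
}
surface_points = set(range(8))

def plan(agents, horizon):
    """Greedy rolling-horizon plan: maximize marginal coverage per move."""
    covered = set()
    plans = {a: [] for a in agents}
    for _ in range(horizon):
        for a in agents:
            # Greedy choice: the viewpoint with the largest marginal gain,
            # which also discourages redundant work between agents.
            best = max(visibility, key=lambda v: len(visibility[v] - covered))
            if not (visibility[best] - covered):
                continue  # nothing new left to see from any viewpoint
            plans[a].append(best)
            covered |= visibility[best]
    return plans, covered

plans, covered = plan(["uav1", "uav2"], horizon=2)
print(plans, sorted(covered))
```

Replacing this greedy choice with binary assignment variables and visibility constraints in a MIP is what allows the paper's controller to produce non-myopic, jointly optimal plans.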
Using 3D reconstruction from image motion to predict total leaf area in dwarf tomato plants
Usenko, Dmitrii, Helman, David, Giladi, Chen
Accurate estimation of total leaf area (TLA) is essential for assessing plant growth, photosynthetic activity, and transpiration but remains a challenge for bushy plants like dwarf tomatoes. Traditional destructive methods and imaging-based techniques often fall short due to labor intensity, plant damage, or the inability to capture complex canopies. This study evaluated a non-destructive method combining sequential 3D reconstructions from RGB images and machine learning to estimate TLA for three dwarf tomato cultivars--Mohamed, Hahms Gelbe Topftomate, and Red Robin--grown under controlled greenhouse conditions. Two experiments, conducted in spring-summer and autumn-winter, included 73 plants, yielding 418 TLA measurements using an "onion" approach, where layers of leaves were sequentially removed and scanned. High-resolution videos were recorded from multiple angles for each plant, and 500 frames were extracted per plant for 3D reconstruction. Point clouds were created and processed, four reconstruction algorithms (Alpha Shape, Marching Cubes, Poisson's, and Ball Pivoting) were tested, and meshes were evaluated using seven regression models: Multivariable Linear Regression (MLR), Lasso Regression (Lasso), Ridge Regression (Ridge-Reg), Elastic Net Regression (ENR), Random Forest (RF), extreme gradient boosting (XGBoost), and Multilayer Perceptron (MLP). The Alpha Shape reconstruction (α = 3) combined with XGBoost yielded the best performance, achieving an R^2 of 0.80 and an MAE of 489 cm^2. These findings demonstrate the robustness of our approach across variable environmental conditions and canopy structures. This scalable, automated TLA estimation method is particularly suited for urban farming and precision agriculture, offering practical implications for automated pruning, improved resource efficiency, and sustainable food production. Keywords: Total leaf area, dwarf tomato, point cloud, mesh reconstruction, machine learning, precision agriculture
Introduction: Total leaf area (TLA) is a comprehensive metric describing the plant's growth and functioning. It is a primary metric that describes the plant's photosynthetic activity and transpiration capacity. Normalized by the plant's surface area, TLA may provide information on the canopy structure, which is crucial for understanding the plant's energy and resource efficiency. For example, reduced TLA is a sign of stress (Dong et al., 2019), while excessive biomass, indicated by a higher TLA, signifies lower water use efficiency (Glenn et al., 2006). Farmers often use pruning to reduce TLA in commercial crops to increase crop productivity (Budiarto et al., 2023). However, measuring and finding the optimum TLA of the crop are challenging tasks.
- Asia > Middle East > Israel > Jerusalem District > Jerusalem (0.04)
- Europe > Switzerland > Basel-City > Basel (0.04)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- Asia > Middle East > Israel > Southern District > Ashdod (0.04)
- Research Report > New Finding (0.48)
- Research Report > Experimental Study (0.46)
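The pipeline above reconstructs a mesh and regresses TLA from its geometry. One basic mesh feature is total surface area, computed per triangle via the cross product. A minimal sketch of that feature on a unit cube, not the paper's Alpha Shape + XGBoost pipeline:

```python
import math

def triangle_area(p, q, r):
    """Area of a 3D triangle: half the norm of the edge cross product."""
    ux, uy, uz = (q[i] - p[i] for i in range(3))
    vx, vy, vz = (r[i] - p[i] for i in range(3))
    cx = uy * vz - uz * vy
    cy = uz * vx - ux * vz
    cz = ux * vy - uy * vx
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def mesh_area(vertices, faces):
    """Total surface area of a triangle mesh."""
    return sum(triangle_area(*(vertices[i] for i in face)) for face in faces)

# Unit cube: 8 vertices (index = 4x + 2y + z), 12 triangles, area 6.
v = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
faces = [
    (0, 1, 3), (0, 3, 2),   # x = 0 face
    (4, 6, 7), (4, 7, 5),   # x = 1 face
    (0, 4, 5), (0, 5, 1),   # y = 0 face
    (2, 3, 7), (2, 7, 6),   # y = 1 face
    (0, 2, 6), (0, 6, 4),   # z = 0 face
    (1, 5, 7), (1, 7, 3),   # z = 1 face
]
print(mesh_area(v, faces))  # prints 6.0
```

Features like this, computed on meshes reconstructed from the plant point clouds, are the kind of geometric inputs a regressor such as XGBoost can map to total leaf area.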