jupiter
We're about to simulate a human brain on a supercomputer
The world's most powerful supercomputers can now run simulations of billions of neurons, and researchers hope such models will offer unprecedented insights into how our brains work.

What would it mean to simulate a human brain? Today's most powerful computing systems now contain enough computational firepower to run simulations of billions of neurons, comparable to the sophistication of real brains. We increasingly understand how these neurons are wired together, too, leading to brain simulations that researchers hope will reveal secrets of brain function that were previously hidden.

Researchers have long tried to isolate specific parts of the brain, modelling smaller regions with a computer to explain particular brain functions. But "we have never been able to bring them all together into one place, into one larger brain model where we can check whether these ideas are at all consistent", says Markus Diesmann at the Jülich Research Centre in Germany.
- Europe > Germany (0.26)
- North America > United States (0.05)
- North America > Greenland (0.05)
- Arctic Ocean (0.05)
Religious leader issues doomsday warning for the end of 2025: 'The last day of this world'
A comet has been predicted to strike the Earth by the end of the year, on what a controversial religious leader called 'the last day of this world.' The doomsday warning came from the writings of Riaz Ahmed Gohar Shahi, a Pakistani spiritual leader and mystic, who claimed that God was sending a comet to collide with Earth because humanity had strayed too far from spiritual truths.
He founded several organizations to spread his teachings of 'divine love,' including the spiritual movement called Anjuman Serfaroshan-e-Islam and the Messiah Foundation International (MFI).
- North America > United States > Illinois > Cook County > Chicago (0.24)
- North America > United States > California > San Francisco County > San Francisco (0.24)
- North America > Canada > Alberta (0.14)
- (11 more...)
- Media > Television (1.00)
- Media > Music (1.00)
- Media > Film (1.00)
- (3 more...)
October Stargazing: A supermoon, new comet, and a whole lot of meteors
Comet C/2025 A6 (Lemmon) was only discovered in January 2025. Stargazers will be happy to know that October will see the cosmos compensating for a couple of relatively lean months. There will be a whole bunch of celestial bodies to see over the next month, including the year's largest and brightest full moon, the arrival of a brand-new comet, two meteor showers, and a good chance to see our solar system's favorite big fella in all his glory. October's full moon finds our closest celestial companion at its perigee, i.e. the point at which it's closest to the Earth. This means that this month's full moon will be [drum roll] a supermoon!
- North America > United States > New York (0.05)
- North America > United States > Hawaii > Maui County > Lahaina (0.05)
Jupiter: Fast and Resource-Efficient Collaborative Inference of Generative LLMs on Edge Devices
Ye, Shengyuan, Ouyang, Bei, Zeng, Liekang, Qian, Tianyi, Chu, Xiaowen, Tang, Jian, Chen, Xu
Generative large language models (LLMs) have garnered significant attention due to their exceptional capabilities in various AI tasks. Traditionally deployed in cloud datacenters, LLMs are now increasingly moving towards more accessible edge platforms to protect sensitive user data and ensure privacy preservation. The limited computational resources of individual edge devices, however, can result in excessively prolonged inference latency and overwhelmed memory usage. While existing research has explored collaborative edge computing to break the resource wall of individual devices, these solutions still suffer from massive communication overhead and under-utilization of edge resources. Furthermore, they focus exclusively on optimizing the prefill phase, neglecting the crucial autoregressive decoding phase for generative LLMs. To address this, we propose Jupiter, a fast, scalable, and resource-efficient collaborative edge AI system for generative LLM inference. Jupiter introduces a flexible pipelined architecture as a principle and tailors its system design to the distinct characteristics of the prefill and decoding phases. For the prefill phase, Jupiter introduces a novel intra-sequence pipeline parallelism and develops a meticulous parallelism planning strategy to maximize resource efficiency; for decoding, Jupiter devises an effective outline-based pipeline parallel decoding mechanism combined with speculative decoding, which further magnifies inference acceleration. Extensive evaluation based on a realistic implementation demonstrates that Jupiter remarkably outperforms state-of-the-art approaches under various edge environment setups, achieving up to 26.1x end-to-end latency reduction while rendering on-par generation quality.
INTRODUCTION The emergence of generative large language models (LLMs) has attracted widespread attention from both industry and academia owing to their exceptional capabilities in a wide range of artificial intelligence (AI) tasks. These models, widely deployed in cloud datacenters equipped with powerful server-grade GPUs, have driven increasingly intelligent edge applications such as ChatBot [1] and smart-home AI agents [2].
- North America > United States (0.04)
- Asia > China > Guangdong Province > Guangzhou (0.04)
- Asia > China > Hong Kong (0.04)
- (2 more...)
- Research Report > New Finding (0.46)
- Research Report > Promising Solution (0.34)
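The speculative decoding that Jupiter combines with its pipeline can be illustrated with a toy sketch (my own illustration, not Jupiter's implementation): a cheap draft model proposes a short run of tokens, and the expensive target model verifies them in one pass, accepting the longest agreeing prefix. Both "models" here are stand-in functions over integer tokens.

```python
def speculative_decode(draft_next, target_next, prompt, n_tokens, k=4):
    """Generate n_tokens after prompt, drafting k tokens at a time."""
    seq = list(prompt)
    target_passes = 0
    while len(seq) - len(prompt) < n_tokens:
        # Draft phase: the cheap model proposes k tokens autoregressively.
        draft = []
        for _ in range(k):
            draft.append(draft_next(seq + draft))
        # Verify phase: one target pass scores every draft position
        # (modelled here as per-position calls within a single pass).
        target_passes += 1
        accepted = 0
        for i in range(k):
            if target_next(seq + draft[:i]) == draft[i]:
                accepted += 1
            else:
                break
        if accepted < k:
            # Replace the first wrong token with the target's own choice.
            seq += draft[:accepted] + [target_next(seq + draft[:accepted])]
        else:
            seq += draft
        seq = seq[:len(prompt) + n_tokens]   # never overshoot the budget
    return seq, target_passes

# Toy models: the target counts up by one; the draft agrees except
# that it stumbles whenever the last token is a multiple of 5.
target = lambda s: s[-1] + 1
draft = lambda s: s[-1] + (2 if s[-1] % 5 == 0 else 1)

out, passes = speculative_decode(draft, target, [0], 10, k=4)
print(out)     # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(passes)  # 4 -- far fewer target passes than 10 sequential steps
```

Because every accepted token is exactly what the target model would have produced, the output matches plain autoregressive decoding; the win is fewer passes through the expensive model.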
Accurate and thermodynamically consistent hydrogen equation of state for planetary modeling with flow matching
Xie, Hao, Howard, Saburo, Mazzola, Guglielmo
Accurate determination of the equation of state of dense hydrogen is essential for understanding gas giants. Currently, there is still no consensus on methods for calculating its entropy, which play a fundamental role and can result in qualitatively different predictions for Jupiter's interior. Here, we investigate various aspects of entropy calculation for dense hydrogen based on ab initio molecular dynamics simulations. Specifically, we employ the recently developed flow matching method to validate the accuracy of the traditional thermodynamic integration approach. We then clearly identify pitfalls in previous attempts and propose a reliable framework for constructing the hydrogen equation of state, which is accurate and thermodynamically consistent across a wide range of temperature and pressure conditions. This allows us to conclusively address the long-standing discrepancies in Jupiter's adiabat among earlier studies, demonstrating the potential of our approach for providing reliable equations of state of diverse materials.
- North America > United States (0.46)
- Europe > Switzerland (0.28)
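The thermodynamic integration that the abstract validates against flow matching has, in its generic textbook form (a sketch, not the paper's specific protocol):

```latex
F_1 = F_0 + \int_0^1 \left\langle \frac{\partial U_\lambda}{\partial \lambda} \right\rangle_{\lambda} \, d\lambda,
\qquad
S = \frac{U - F}{T}
```

where $U_\lambda$ interpolates between a reference system ($\lambda=0$) with known free energy $F_0$ and the target system ($\lambda=1$); the entropy at a state point then follows from the internal energy and the free energy. A generative method such as flow matching provides an independent free-energy estimate, which is what makes it useful as a cross-check on the integration path.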
Characterizing Jupiter's interior using machine learning reveals four key structures
Ziv, Maayan, Galanti, Eli, Howard, Saburo, Guillot, Tristan, Kaspi, Yohai
The internal structure of Jupiter is constrained by the precise gravity field measurements by NASA's Juno mission, atmospheric data from the Galileo entry probe, and Voyager radio occultations. Not only are these observations few compared to the possible interior setups and their multiple controlling parameters, but they remain challenging to reconcile. As a complex, multidimensional problem, characterizing typical structures can help simplify the modeling process. We used NeuralCMS, a deep learning model based on the accurate concentric Maclaurin spheroid (CMS) method, coupled with a fully consistent wind model to efficiently explore a wide range of interior models without prior assumptions. We then identified those consistent with the measurements and clustered the plausible combinations of parameters controlling the interior. We determine the plausible ranges of internal structures and the dynamical contributions to Jupiter's gravity field. Four typical interior structures are identified, characterized by their envelope and core properties. This reduces the dimensionality of Jupiter's interior to only two effective parameters. Within the reduced 2D phase space, we show that the most observationally constrained structures fall within one of the key structures, but they require a higher 1 bar temperature than the observed value. We provide a robust framework for characterizing giant planet interiors with consistent wind treatment, demonstrating that for Jupiter, wind constraints strongly impact the gravity harmonics while the interior parameter distribution remains largely unchanged. Importantly, we find that Jupiter's interior can be described by two effective parameters that clearly distinguish the four characteristic structures and conclude that atmospheric measurements may not fully represent the entire envelope.
- Europe > Switzerland > Zürich > Zürich (0.14)
- Europe > France > Provence-Alpes-Côte d'Azur (0.04)
- Asia > Middle East > Israel (0.04)
- Energy (0.46)
- Government > Space Agency (0.34)
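The clustering step described above can be illustrated with a toy sketch (my own illustration, not the paper's pipeline): after keeping only models consistent with the data, cluster the surviving parameter vectors to expose a few "typical structures". Here the samples are synthetic 2-D points around four made-up centres, and a plain Lloyd's k-means is initialised with one seed point per blob for determinism.

```python
import math
import random

random.seed(0)
true_centres = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0), (5.0, 5.0)]
# 50 noisy "accepted parameter combinations" around each centre.
samples = [(cx + random.gauss(0, 0.4), cy + random.gauss(0, 0.4))
           for cx, cy in true_centres for _ in range(50)]

def kmeans(points, seeds, iters=25):
    """Lloyd's algorithm: assign points to nearest centre, recompute means."""
    centres = list(seeds)
    for _ in range(iters):
        groups = [[] for _ in centres]
        for p in points:
            j = min(range(len(centres)), key=lambda i: math.dist(p, centres[i]))
            groups[j].append(p)
        centres = [(sum(x for x, _ in g) / len(g),
                    sum(y for _, y in g) / len(g)) for g in groups]
    return centres

# One seed from each blob (samples are grouped 50 per centre).
found = kmeans(samples, seeds=[samples[i * 50] for i in range(4)])
print([(round(x, 1), round(y, 1)) for x, y in found])
```

Each recovered centre lands near one of the four generating centres, which is the sense in which clustering "characterizes typical structures" out of a continuum of accepted models.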
NeuralCMS: A deep learning approach to study Jupiter's interior
Ziv, Maayan, Galanti, Eli, Sheffer, Amir, Howard, Saburo, Guillot, Tristan, Kaspi, Yohai
NASA's Juno mission provided exquisite measurements of Jupiter's gravity field that, together with the Galileo entry probe atmospheric measurements, constrain the interior structure of the giant planet. Inferring its interior structure range remains a challenging inverse problem requiring a computationally intensive search of combinations of various planetary properties, such as the cloud-level temperature, composition, and core features, requiring the computation of ~10^9 interior models. We propose an efficient deep neural network (DNN) model to generate high-precision wide-ranging interior models based on the very accurate but computationally demanding concentric Maclaurin spheroid (CMS) method. We trained a sharing-based DNN with a large set of CMS results for a four-layer interior model of Jupiter, including a dilute core, to accurately predict the gravity moments and mass, given a combination of interior features. We evaluated the performance of the trained DNN (NeuralCMS) to inspect its predictive limitations. NeuralCMS shows very good performance in predicting the gravity moments, with errors comparable with the uncertainty due to differential rotation, and a very accurate mass prediction. This allowed us to perform a broad parameter space search by computing only ~10^4 actual CMS interior models, resulting in a large sample of plausible interior structures, and reducing the computation time by a factor of 10^5. Moreover, we used a DNN explainability algorithm to analyze the impact of the parameters setting the interior model on the predicted observables, providing information on their nonlinear relation.
- Europe > Switzerland > Zürich > Zürich (0.14)
- Europe > France > Provence-Alpes-Côte d'Azur (0.04)
- Europe > United Kingdom > England > Greater London > London (0.04)
- Asia > Middle East > Israel (0.04)
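The surrogate-driven search pattern behind NeuralCMS can be sketched in miniature (a made-up one-dimensional stand-in, not the actual CMS or DNN): a cheap emulator screens a huge candidate pool with a widened tolerance, and the expensive model runs only on the survivors.

```python
import math
import random

random.seed(1)
calls = {"expensive": 0}

def expensive_model(x):                 # stands in for a full CMS run
    calls["expensive"] += 1
    return x * x + 0.05 * math.sin(20 * x)

def surrogate(x):                       # stands in for the trained DNN
    return x * x                        # cheap, slightly inaccurate

target, tol = 0.25, 0.1                 # "observation" and tolerance
candidates = [random.uniform(-1, 1) for _ in range(100_000)]

# Screening pass: surrogate with a widened tolerance, so surrogate
# error cannot reject candidates the true model would accept.
survivors = [x for x in candidates if abs(surrogate(x) - target) < 2 * tol]
# Verification pass: expensive model at the final tolerance.
accepted = [x for x in survivors if abs(expensive_model(x) - target) < tol]

print(len(candidates), len(survivors), calls["expensive"], len(accepted))
```

The expensive-call count equals the survivor count, not the pool size; with a surrogate whose screening tolerance safely covers its own error, that is where the paper's factor-of-10^5 saving comes from.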
Reconstructions of Jupiter's magnetic field using physics informed neural networks
Livermore, Philip W., Wu, Leyuan, Chen, Longwei, de Ridder, Sjoerd A. L.
Magnetic sounding using data collected from the Juno mission can be used to provide constraints on Jupiter's interior. However, inwards continuation of reconstructions assuming zero electrical conductivity and a representation in spherical harmonics are limited by the enhancement of noise at small scales. Here we describe new reconstructions of Jupiter's internal magnetic field based on physics-informed neural networks and either the first 33 (PINN33) or the first 50 (PINN50) of Juno's orbits. The method can resolve local structures, and allows for weak ambient electrical currents. Our models are not hampered by noise amplification at depth, and offer a much clearer picture of the interior structure. We estimate that the dynamo boundary is at a fractional radius of 0.8. At this depth, the magnetic field is arranged into longitudinal bands, and strong local features such as the great blue spot appear to be rooted in neighbouring structures of oppositely signed flux.
- North America > United States > Georgia > Chatham County > Savannah (0.04)
- Europe > United Kingdom > England > West Yorkshire > Leeds (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- (2 more...)
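Schematically, a physics-informed reconstruction of this kind minimises a loss of the form (a generic sketch; the paper's exact terms and weighting may differ):

```latex
\mathcal{L}(\theta) =
\frac{1}{N}\sum_{i=1}^{N} \bigl\lVert \mathbf{B}_\theta(\mathbf{x}_i) - \mathbf{B}^{\mathrm{obs}}_i \bigr\rVert^2
+ \lambda_{\mathrm{div}} \bigl\langle (\nabla\cdot\mathbf{B}_\theta)^2 \bigr\rangle
+ \lambda_{\mathrm{cur}} \bigl\langle \lVert \nabla\times\mathbf{B}_\theta \rVert^2 \bigr\rangle
```

The divergence penalty enforces a solenoidal field, while keeping $\lambda_{\mathrm{cur}}$ finite rather than forcing $\nabla\times\mathbf{B}_\theta = 0$ is what permits the weak ambient currents mentioned in the abstract, since $\mu_0 \mathbf{J} = \nabla\times\mathbf{B}$.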
Exploring Large Language Models to Facilitate Variable Autonomy for Human-Robot Teaming
Lakhnati, Younes, Pascher, Max, Gerken, Jens
In a rapidly evolving digital landscape, autonomous tools and robots are becoming commonplace. Recognizing the significance of this development, this paper explores the integration of Large Language Models (LLMs) like Generative pre-trained transformer (GPT) into human-robot teaming environments to facilitate variable autonomy through the means of verbal human-robot communication. In this paper, we introduce a novel framework for such a GPT-powered multi-robot testbed environment, based on a Unity Virtual Reality (VR) setting. This system allows users to interact with robot agents through natural language, each powered by individual GPT cores. By means of OpenAI's function calling, we bridge the gap between unstructured natural language input and structured robot actions. A user study with 12 participants explores the effectiveness of GPT-4 and, more importantly, user strategies when being given the opportunity to converse in natural language within a multi-robot environment. Our findings suggest that users may have preconceived expectations on how to converse with robots and seldom try to explore the actual language and cognitive capabilities of their robot collaborators. Still, those users who did explore were able to benefit from a much more natural flow of communication and human-like back-and-forth. We provide a set of lessons learned for future research and technical implementations of similar systems.
- North America > United States > New York > New York County > New York City (0.14)
- Europe > Germany > North Rhine-Westphalia > Arnsberg Region > Dortmund (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Health & Medicine > Therapeutic Area (0.46)
- Information Technology > Services (0.46)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
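The bridge from function-calling output to robot actions can be sketched as follows (a toy illustration, not the paper's system: the robot class, function names, and arguments are all hypothetical stand-ins; the call payload merely mimics the JSON shape a function-calling LLM emits).

```python
import json

class Robot:
    def __init__(self, name):
        self.name, self.pos, self.log = name, (0, 0), []

    def move_to(self, x, y):
        self.pos = (x, y)
        self.log.append(f"{self.name} moved to ({x}, {y})")

    def pick_up(self, item):
        self.log.append(f"{self.name} picked up {item}")

def dispatch(robot, call_json):
    """Route a model-emitted function call to a whitelisted method."""
    call = json.loads(call_json)
    allowed = {"move_to", "pick_up"}        # never eval arbitrary names
    if call["name"] not in allowed:
        raise ValueError(f"unknown function: {call['name']}")
    getattr(robot, call["name"])(**call["arguments"])

r = Robot("rover1")
dispatch(r, '{"name": "move_to", "arguments": {"x": 3, "y": 4}}')
dispatch(r, '{"name": "pick_up", "arguments": {"item": "red cube"}}')
print(r.pos)   # (3, 4)
print(r.log)
```

The whitelist is the important design choice: the model's unstructured text is turned into a constrained, validated action vocabulary rather than arbitrary code.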
NVIDIA announces its next generation of AI supercomputer chips
NVIDIA has launched its next generation of AI supercomputer chips that will likely play a large role in future breakthroughs in deep learning and large language models (LLMs) like OpenAI's GPT-4, the company announced. The technology represents a significant leap over the last generation and is poised to be used in data centers and supercomputers -- working on tasks like weather and climate prediction, drug discovery, quantum computing and more. The key product is the HGX H200 GPU based on NVIDIA's "Hopper" architecture, a replacement for the popular H100 GPU. It's the company's first chip to use HBM3e memory that's faster and has more capacity, thus making it better suited for large language models. "With HBM3e, the NVIDIA H200 delivers 141GB of memory at 4.8 terabytes per second, nearly double the capacity and 2.4x more bandwidth compared with its predecessor, the NVIDIA A100," the company wrote.
- Information Technology > Services (1.00)
- Information Technology > Hardware (1.00)
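The quoted comparison can be sanity-checked against the A100's published figures (assuming the 80 GB SXM A100 with roughly 2.0 TB/s of HBM2e bandwidth as the baseline):

```python
# H200 figures from the quote; A100 baseline assumed to be the
# 80 GB SXM variant at ~2.0 TB/s.
a100_mem_gb, a100_bw_tbs = 80, 2.0
h200_mem_gb, h200_bw_tbs = 141, 4.8

print(f"capacity ratio:  {h200_mem_gb / a100_mem_gb:.2f}x")   # 1.76x
print(f"bandwidth ratio: {h200_bw_tbs / a100_bw_tbs:.2f}x")   # 2.40x
```

So "2.4x more bandwidth" checks out exactly, while "nearly double the capacity" is a generous reading of a 1.76x increase.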