

Boston Celtics star Jaylen Brown tells ESPN's Stephen A Smith to 'be quiet and retire'

FOX News

The viral exchange on X adds Brown to a list of NBA stars, including LeBron James and Kevin Durant, who've feuded with Smith. ESPN commentator Stephen A. Smith is no stranger to having beef with NBA stars. It's time to add Celtics guard Jaylen Brown to the list. The latest dust-up started, naturally, on First Take, where Smith took aim at Brown for his comments following Boston's playoff collapse. Brown recently said this was his favorite year, despite the Celtics blowing a 3-1 series lead to the Philadelphia 76ers and getting eliminated in the first round of the NBA playoffs. That didn't sit well with Smith, who made it very clear Thursday that he thought Brown should have kept that to himself.


I did a speedrun through Under Armour's innovation labs to learn how a marathon supershoe crosses the finish line

Popular Science

Baltimore speaks before anyone at Under Armour gets to say a word. Driving along the seams of the Baltimore Peninsula, the city does what it does so well, giving off stubborn grit and industrial sprawl. Pulling off I-95, freight trucks, not tour buses, share the road with me. Like much of the city, it's a waterfront neighborhood (re)shaped by salvage and second acts.


e8dbeb1c947a30576c699e7f5c73d3e3-Supplemental-Conference.pdf

Neural Information Processing Systems

However, within this specific application domain, existing VAE methods are restricted to using only one layer of latent variables and strictly Gaussian posterior approximations.


97785e0500ad16c18574c64189ccf4b4-Supplemental.pdf

Neural Information Processing Systems

Bayesian predictive intervals are conditioned on the specific observed sequence Z_{1:n} and make statements on the next value [Y_{n+1} | X_{n+1}]. Subjective Bayesian statements on predictions are non-refutable, and are in this sense unscientific, but are optimal according to decision-theoretic foundations. However, to make such strong statements, the Bayesian must usually make the strict assumption that the model is well-specified. As mentioned earlier, computing the AOI interval is an efficient matrix-vector multiplication, whereas the LOO interval requires expensive broadcasting to construct the n_grid × n_IS weight array. We use the same Bayesian model as in (10), again considering c = 1, 0.02.
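The cost contrast described above can be illustrated with a toy self-normalized importance-sampling sketch. Everything here is an illustrative stand-in, not the paper's actual AOI/LOO construction: when the importance weights are fixed, the predictive CDF over a grid of candidate values is one matrix-vector product, whereas grid-dependent reweighting forces a full n_grid × n_IS weight array to be materialized first.

```python
import numpy as np

rng = np.random.default_rng(0)

n_is, n_grid = 10_000, 200
y_samples = rng.standard_normal(n_is)      # predictive draws (toy)
w = rng.random(n_is); w /= w.sum()         # self-normalized IS weights
y_grid = np.linspace(-3, 3, n_grid)        # candidate next values

# AOI-style cost: one fixed weight vector, so the predictive CDF on the
# grid is a single (n_grid x n_is) @ (n_is,) matrix-vector product.
indic = (y_samples[None, :] <= y_grid[:, None]).astype(float)
cdf_fixed = indic @ w

# LOO-style cost: the weights change with the grid point, so the full
# (n_grid x n_is) weight array must be built by broadcasting. The
# per-grid tilt below is a placeholder for the real LOO reweighting.
w_grid = w[None, :] * np.exp(-0.01 * np.abs(y_grid[:, None] - y_samples[None, :]))
w_grid /= w_grid.sum(axis=1, keepdims=True)
cdf_loo = (indic * w_grid).sum(axis=1)

print(cdf_fixed.shape, w_grid.shape)   # a vector vs. a full weight array
```

The point is purely about shapes: the first path keeps an n_IS-length weight vector, the second must hold n_grid × n_IS weights in memory before reducing.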


Semantic Multiplexing

Abdi, Mohammad, Meneghello, Francesca, Restuccia, Francesco

arXiv.org Artificial Intelligence

Mobile devices increasingly require the parallel execution of several computing tasks offloaded at the wireless edge. Existing communication systems only support parallel transmissions at the bit level, which fundamentally limits the number of tasks that can be concurrently processed. To address this bottleneck, this paper introduces the new concept of Semantic Multiplexing. Our approach shifts stream multiplexing from bits to tasks by merging multiple task-related compressed representations into a single semantic representation. By extending the effective degrees of freedom at the semantic layer, Semantic Multiplexing can thus multiplex more tasks than the number of physical channels without adding antennas, widening bandwidth, or contradicting Shannon capacity rules. We have prototyped Semantic Multiplexing on an experimental testbed with a Jetson Orin Nano and millimeter-wave software-defined radios and tested its performance on image classification and sentiment analysis, comparing against several existing baselines in semantic communications. Our experiments demonstrate that Semantic Multiplexing allows jointly processing multiple tasks at the semantic level while maintaining sufficient task accuracy. For example, image classification accuracy drops by less than 4% when the number of tasks multiplexed over a 4$\times$4 channel increases from 2 to 8. Semantic Multiplexing reduces latency, energy consumption, and communication load by up to 8$\times$, 25$\times$, and 54$\times$, respectively, compared to the baselines while keeping comparable performance. We pledge to publicly share the complete software codebase and the collected datasets for reproducibility.
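The core idea — packing more task representations than physical streams into one payload — can be sketched in a few lines. All shapes are illustrative, and the random projection below is a stand-in for the paper's learned multiplexer network: eight 32-dim task vectors fit exactly into the 4 × 64 symbols a 4$\times$4 channel carries, so each task uses a fraction of the channel's degrees of freedom rather than a whole stream.

```python
import numpy as np

rng = np.random.default_rng(0)

N_TASKS = 8      # tasks multiplexed together (paper reports up to 8)
D_TASK = 32      # size of each task's compressed representation
N_STREAMS = 4    # physical spatial streams (e.g., a 4x4 MIMO channel)
D_STREAM = 64    # symbols carried per stream

# Stand-in for learned task encoders: each task is already reduced to a
# D_TASK-dimensional feature vector.
task_features = rng.standard_normal((N_TASKS, D_TASK))

# Semantic multiplexer: a learned map (here a fixed random projection)
# from all task features to the N_STREAMS x D_STREAM physical payload.
W_mux = rng.standard_normal((N_STREAMS * D_STREAM, N_TASKS * D_TASK))
payload = W_mux @ task_features.ravel()
tx = payload.reshape(N_STREAMS, D_STREAM)   # what the antennas transmit

# Receiver: a matching demultiplexer splits the payload back into one
# vector per task, which task-specific heads would then decode.
W_demux = np.linalg.pinv(W_mux)
recovered = (W_demux @ tx.ravel()).reshape(N_TASKS, D_TASK)
```

Because 8 × 32 = 4 × 64, the eight task vectors are recoverable despite only four physical streams — consistent with the abstract's claim that the gain comes from the semantic layer, not from extra channel capacity.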


OceanAI: A Conversational Platform for Accurate, Transparent, Near-Real-Time Oceanographic Insights

Chen, Bowen, Gajbhar, Jayesh, Dusek, Gregory, Redmon, Rob, Hogan, Patrick, Liu, Paul, Bohnenstiehl, DelWayne, Xu, Dongkuan, He, Ruoying

arXiv.org Artificial Intelligence

Artificial intelligence is transforming the sciences, yet general conversational AI systems often generate unverified "hallucinations," undermining scientific rigor. We present OceanAI, a conversational platform that integrates the natural-language fluency of open-source large language models (LLMs) with real-time, parameterized access to authoritative oceanographic data streams hosted by the National Oceanic and Atmospheric Administration (NOAA). Each query, such as "What was Boston Harbor's highest water level in 2024?", triggers real-time API calls that identify, parse, and synthesize relevant datasets into reproducible natural-language responses and data visualizations. In a blind comparison with three widely used AI chat-interface products, only OceanAI produced NOAA-sourced values with original data references; others either declined to answer or provided unsupported results. Designed for extensibility, OceanAI connects to multiple NOAA data products and variables, supporting applications in marine hazard forecasting, ecosystem assessment, and water-quality monitoring. By grounding outputs in verifiable observations, OceanAI advances transparency, reproducibility, and trust, offering a scalable framework for AI-enabled decision support within the oceans. A public demonstration is available at https://oceanai.ai4ocean.xyz.
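The parameterized-access pattern the abstract describes can be sketched against NOAA's public CO-OPS Data API. The parameter names follow that API's documentation, but treat the specific values (including the station id for Boston) as illustrative; this is not OceanAI's actual code, only the shape of the reproducible call a query like the water-level example would resolve to.

```python
from urllib.parse import urlencode

COOPS_BASE = "https://api.tidesandcurrents.noaa.gov/api/prod/datagetter"

def water_level_query(station: str, begin_date: str, end_date: str) -> str:
    """Build a reproducible NOAA CO-OPS request URL for observed water levels."""
    params = {
        "product": "water_level",   # verified/observed water level product
        "station": station,         # e.g. 8443970 = Boston, MA tide station
        "begin_date": begin_date,   # yyyymmdd
        "end_date": end_date,       # yyyymmdd
        "datum": "MLLW",            # mean lower low water reference
        "time_zone": "lst_ldt",     # local station time
        "units": "metric",
        "format": "json",
    }
    return f"{COOPS_BASE}?{urlencode(params)}"

url = water_level_query("8443970", "20240101", "20240131")
print(url)
```

Keeping the full request URL alongside the answer is what makes the response reproducible: anyone can re-issue the same parameterized call and check the returned values against the system's claim.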



Academic Vibe Coding: Opportunities for Accelerating Research in an Era of Resource Constraint

Crowson, Matthew G., Celi, Leo A.

arXiv.org Artificial Intelligence

Academic laboratories face mounting resource constraints: budgets are tightening, grant overheads are potentially being capped, and the market rate for data-science talent significantly outstrips university compensation. Vibe coding, which is structured, prompt-driven code generation with large language models (LLMs) embedded in reproducible workflows, offers one pragmatic response. It aims to compress the idea-to-analysis timeline, reduce staffing pressure on specialized data roles, and maintain rigorous, version-controlled outputs. This article defines the vibe coding concept, situates it against the current academic resourcing crisis, details a beginner-friendly toolchain for its implementation, and analyzes inherent limitations that necessitate governance and mindful application.


These four charts show where AI companies could go next in the US

MIT Technology Review

While the impact of AI on tech hubs like San Francisco and Boston is already being felt, AI proponents believe it will transform work everywhere, and in every industry. The report uses various proxies for what the researchers call "AI readiness" to document how unevenly this supposed transformation is taking place. Here are four charts to help understand where that could matter. Brookings divides US cities into five categories based on how ready they are to adopt AI-related industries and job offerings. To do so, it looked at local talent pool development, innovations in local institutions, and adoption potential among local companies.


A Weakly Supervised Transformer to Support Rare Disease Diagnosis from Electronic Health Records: Methods and Applications in Rare Pulmonary Disease

Greco, Kimberly F., Yang, Zongxin, Li, Mengyan, Tong, Han, Sweet, Sara Morini, Geva, Alon, Mandl, Kenneth D., Raby, Benjamin A., Cai, Tianxi

arXiv.org Machine Learning

Rare diseases affect an estimated 300-400 million people worldwide, yet individual conditions often remain poorly characterized and difficult to diagnose due to their low prevalence and limited clinician familiarity. While computational phenotyping algorithms show promise for automating rare disease detection, their development is hindered by the scarcity of labeled data and biases in existing label sources. Gold-standard labels from registries and expert chart reviews are highly accurate but constrained by selection bias and the cost of manual review. In contrast, labels derived from electronic health records (EHRs) cover a broader range of patients but can introduce substantial noise. To address these challenges, we propose a weakly supervised, transformer-based framework that combines a small set of gold-standard labels with a large volume of iteratively updated silver-standard labels derived from EHR data. This hybrid approach enables the training of a highly accurate and generalizable phenotyping model that scales rare disease detection beyond the scope of individual clinical expertise. Our method is initialized by learning embeddings of medical concepts based on their semantic meaning or co-occurrence patterns in EHRs, which are then refined and aggregated into patient-level representations via a multi-layer transformer architecture. Using two rare pulmonary diseases as a case study, we validate our model on EHR data from Boston Children's Hospital. Our framework demonstrates notable improvements in phenotype classification, identification of clinically meaningful subphenotypes through patient clustering, and prediction of disease progression compared to baseline methods. These results highlight the potential of our approach to enable scalable identification and stratification of rare disease patients for clinical care and research applications.
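The gold-plus-silver training loop described above can be sketched with a deliberately simplified stand-in: synthetic data, a plain logistic model in place of the transformer, and self-training in place of the paper's iterative silver-label updates. All names and thresholds here are assumptions chosen for illustration; the point is only the mechanics of mixing a small trusted label set with a large, down-weighted, iteratively refreshed pseudo-labeled pool.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n, d=10):
    """Synthetic stand-in for patient embeddings with a noisy linear phenotype."""
    X = rng.standard_normal((n, d))
    w_true = np.zeros(d); w_true[:3] = 2.0
    y = (X @ w_true + 0.5 * rng.standard_normal(n) > 0).astype(float)
    return X, y

def fit_logistic(X, y, sample_weight, iters=200, lr=0.5):
    """Weighted logistic regression by plain gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-np.clip(X @ w, -30, 30)))
        w -= lr * X.T @ (sample_weight * (p - y)) / len(y)
    return w

X_gold, y_gold = make_data(100)     # small, accurate gold-standard labels
X_pool, y_pool = make_data(5000)    # large EHR pool; labels held out

# Round 0: train on gold labels only.
w = fit_logistic(X_gold, y_gold, np.ones(len(y_gold)))

# Then iterate: pseudo-label the pool, keep confident silver labels,
# retrain with silver examples discounted relative to gold.
for _ in range(3):
    p_pool = 1 / (1 + np.exp(-np.clip(X_pool @ w, -30, 30)))
    confident = (p_pool > 0.9) | (p_pool < 0.1)
    y_silver = (p_pool > 0.5).astype(float)
    X = np.vstack([X_gold, X_pool[confident]])
    y = np.concatenate([y_gold, y_silver[confident]])
    sw = np.concatenate([np.ones(len(y_gold)),             # trust gold fully
                         0.3 * np.ones(confident.sum())])  # discount silver
    w = fit_logistic(X, y, sw)

acc = ((X_pool @ w > 0) == y_pool.astype(bool)).mean()
print(round(acc, 2))
```

The discount factor and confidence cutoff govern the bias-noise trade-off the abstract highlights: silver labels extend coverage far beyond the gold set, while the down-weighting limits how much their noise can pull the model away from the gold signal.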