Collaborating Authors

boston



Semantic Multiplexing

Abdi, Mohammad, Meneghello, Francesca, Restuccia, Francesco

arXiv.org Artificial Intelligence

Mobile devices increasingly require the parallel execution of several computing tasks offloaded at the wireless edge. Existing communication systems only support parallel transmissions at the bit level, which fundamentally limits the number of tasks that can be concurrently processed. To address this bottleneck, this paper introduces the new concept of Semantic Multiplexing. Our approach shifts stream multiplexing from bits to tasks by merging multiple task-related compressed representations into a single semantic representation. By extending the effective degrees of freedom at the semantic layer, Semantic Multiplexing can thus serve more tasks than the number of physical channels without adding antennas or widening bandwidth, and without contradicting Shannon capacity limits. We have prototyped Semantic Multiplexing on an experimental testbed with Jetson Orin Nano and millimeter-wave software-defined radios and tested its performance on image classification and sentiment analysis, comparing it against several existing baselines in semantic communications. Our experiments demonstrate that Semantic Multiplexing allows jointly processing multiple tasks at the semantic level while maintaining sufficient task accuracy. For example, image classification accuracy drops by less than 4% when the number of tasks multiplexed over a 4$\times$4 channel increases from 2 to 8. Semantic Multiplexing reduces latency, energy consumption, and communication load by up to 8$\times$, 25$\times$, and 54$\times$, respectively, compared to the baselines while keeping comparable performance. We pledge to publicly share the complete software codebase and the collected datasets for reproducibility.
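The merging step described in the abstract can be pictured, in grossly simplified form, as superposition coding: several task representations are projected into one shared vector and later separated. The NumPy sketch below is an assumption-laden toy, not the authors' architecture — random linear projections stand in for the paper's learned encoders, and the task count is kept small enough that least-squares recovery is exact:

```python
import numpy as np

rng = np.random.default_rng(0)

K, d_task, d_sem = 4, 16, 64   # 4 task representations packed into one 64-dim vector

# Hypothetical per-task projections standing in for learned task encoders.
P = [rng.standard_normal((d_sem, d_task)) for _ in range(K)]
tasks = [rng.standard_normal(d_task) for _ in range(K)]

# "Multiplex": superpose all task representations into a single semantic vector.
z = sum(Pk @ tk for Pk, tk in zip(P, tasks))

# "Demultiplex": least-squares inversion of the stacked projections.
P_all = np.hstack(P)                                 # (d_sem, K * d_task)
est = (np.linalg.pinv(P_all) @ z).reshape(K, d_task)  # recovered task vectors
```

In this linear toy the degrees of freedom are fixed by `d_sem`, so it cannot show the paper's headline result of multiplexing more tasks than channels; that relies on the learned, task-aware compression the abstract describes.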


OceanAI: A Conversational Platform for Accurate, Transparent, Near-Real-Time Oceanographic Insights

Chen, Bowen, Gajbhar, Jayesh, Dusek, Gregory, Redmon, Rob, Hogan, Patrick, Liu, Paul, Bohnenstiehl, DelWayne, Xu, Dongkuan, He, Ruoying

arXiv.org Artificial Intelligence

Artificial intelligence is transforming the sciences, yet general conversational AI systems often generate unverified "hallucinations," undermining scientific rigor. We present OceanAI, a conversational platform that integrates the natural-language fluency of open-source large language models (LLMs) with real-time, parameterized access to authoritative oceanographic data streams hosted by the National Oceanic and Atmospheric Administration (NOAA). Each query, such as "What was Boston Harbor's highest water level in 2024?", triggers real-time API calls that identify, parse, and synthesize relevant datasets into reproducible natural-language responses and data visualizations. In a blind comparison with three widely used AI chat-interface products, only OceanAI produced NOAA-sourced values with original data references; the others either declined to answer or provided unsupported results. Designed for extensibility, OceanAI connects to multiple NOAA data products and variables, supporting applications in marine hazard forecasting, ecosystem assessment, and water-quality monitoring. By grounding outputs in verifiable observations, OceanAI advances transparency, reproducibility, and trust, offering a scalable framework for AI-enabled decision support for the oceans. A public demonstration is available at https://oceanai.ai4ocean.xyz.
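The "parameterized access" pattern behind the Boston Harbor example can be sketched as deterministic query construction against NOAA's public CO-OPS (Tides & Currents) Data API. The snippet below only builds the request URL, with no network call; the endpoint and parameter names follow the public CO-OPS API, while the helper function and the choice of station (8443970, Boston, MA) are illustrative, not taken from OceanAI's implementation:

```python
from urllib.parse import urlencode

# NOAA CO-OPS Data API endpoint for observed tides-and-currents products.
BASE = "https://api.tidesandcurrents.noaa.gov/api/prod/datagetter"

def water_level_url(station: str, begin_date: str, end_date: str) -> str:
    """Build a reproducible CO-OPS query for verified water levels (JSON)."""
    params = {
        "station": station,        # e.g. 8443970 = Boston, MA
        "product": "water_level",
        "datum": "MLLW",           # Mean Lower Low Water reference datum
        "begin_date": begin_date,  # YYYYMMDD
        "end_date": end_date,
        "time_zone": "gmt",
        "units": "metric",
        "format": "json",
    }
    return f"{BASE}?{urlencode(params)}"

url = water_level_url("8443970", "20240101", "20240107")
```

Because the full query is an explicit, inspectable URL, every answer can carry its data reference — which is exactly the reproducibility property the abstract emphasizes.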



Academic Vibe Coding: Opportunities for Accelerating Research in an Era of Resource Constraint

Crowson, Matthew G., Celi, Leo A.

arXiv.org Artificial Intelligence

Academic laboratories face mounting resource constraints: budgets are tightening, grant overheads are potentially being capped, and the market rate for data-science talent significantly outstrips university compensation. Vibe coding, which is structured, prompt-driven code generation with large language models (LLMs) embedded in reproducible workflows, offers one pragmatic response. It aims to compress the idea-to-analysis timeline, reduce staffing pressure on specialized data roles, and maintain rigorous, version-controlled outputs. This article defines the vibe coding concept, situates it against the current academic resourcing crisis, details a beginner-friendly toolchain for its implementation, and analyzes inherent limitations that necessitate governance and mindful application.


These four charts show where AI companies could go next in the US

MIT Technology Review

While the impact of AI on tech hubs like San Francisco and Boston is already being felt, AI proponents believe it will transform work everywhere, and in every industry. The report uses various proxies for what the researchers call "AI readiness" to document how unevenly this supposed transformation is taking place. Here are four charts to help understand where that could matter. Brookings divides US cities into five categories based on how ready they are to adopt AI-related industries and job offerings. To do so, it looked at local talent pool development, innovations in local institutions, and adoption potential among local companies.


A Weakly Supervised Transformer to Support Rare Disease Diagnosis from Electronic Health Records: Methods and Applications in Rare Pulmonary Disease

Greco, Kimberly F., Yang, Zongxin, Li, Mengyan, Tong, Han, Sweet, Sara Morini, Geva, Alon, Mandl, Kenneth D., Raby, Benjamin A., Cai, Tianxi

arXiv.org Machine Learning

Rare diseases affect an estimated 300-400 million people worldwide, yet individual conditions often remain poorly characterized and difficult to diagnose due to their low prevalence and limited clinician familiarity. While computational phenotyping algorithms show promise for automating rare disease detection, their development is hindered by the scarcity of labeled data and biases in existing label sources. Gold-standard labels from registries and expert chart reviews are highly accurate but constrained by selection bias and the cost of manual review. In contrast, labels derived from electronic health records (EHRs) cover a broader range of patients but can introduce substantial noise. To address these challenges, we propose a weakly supervised, transformer-based framework that combines a small set of gold-standard labels with a large volume of iteratively updated silver-standard labels derived from EHR data. This hybrid approach enables the training of a highly accurate and generalizable phenotyping model that scales rare disease detection beyond the scope of individual clinical expertise. Our method is initialized by learning embeddings of medical concepts based on their semantic meaning or co-occurrence patterns in EHRs, which are then refined and aggregated into patient-level representations via a multi-layer transformer architecture. Using two rare pulmonary diseases as a case study, we validate our model on EHR data from Boston Children's Hospital. Our framework demonstrates notable improvements in phenotype classification, identification of clinically meaningful subphenotypes through patient clustering, and prediction of disease progression compared to baseline methods. These results highlight the potential of our approach to enable scalable identification and stratification of rare disease patients for clinical care and research applications.
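The gold-plus-silver training loop can be caricatured as self-training: silver labels are iteratively re-derived by a model trained on both label sources, with the small gold set weighted more heavily than the noisy EHR-derived labels. In the sketch below, a weighted nearest-centroid classifier on synthetic embeddings stands in for the paper's transformer; every name and number is a hypothetical toy, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for patient-level embeddings: a small gold-labeled set and a
# large pool whose initial silver labels carry ~30% noise (mimicking EHR-derived labels).
d = 8
X_gold = np.vstack([rng.normal(-1, 1, (20, d)), rng.normal(1, 1, (20, d))])
y_gold = np.repeat([0, 1], 20)
X_pool = np.vstack([rng.normal(-1, 1, (200, d)), rng.normal(1, 1, (200, d))])
y_true = np.repeat([0, 1], 200)                                  # held out, scoring only
y_silver = np.where(rng.random(400) < 0.3, 1 - y_true, y_true)   # noisy silver labels

def fit(X, y, w):
    # Weighted class centroids: a minimal proxy for training the phenotyping model.
    return np.array([np.average(X[y == c], axis=0, weights=w[y == c]) for c in (0, 1)])

def predict(centroids, X):
    return np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)

w = np.concatenate([np.full(40, 5.0), np.ones(400)])  # trust gold labels 5x more
for _ in range(3):
    centroids = fit(np.vstack([X_gold, X_pool]), np.concatenate([y_gold, y_silver]), w)
    y_silver = predict(centroids, X_pool)             # iteratively refresh silver labels

acc = (predict(centroids, X_pool) == y_true).mean()
```

The key design choice mirrored here is that gold labels anchor the model while the much larger silver set supplies coverage, and each pass cleans the silver labels the next pass trains on.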


How AI can help make cities work better for residents

MIT Technology Review

Shortly after joining MIT in 2012, Williams created the Civic Data Design Lab to bridge that divide. Over the years, she and her colleagues have pushed the narrative and expository bounds of urban planning data using the latest technologies available, making numbers vivid and accessible through human stories and striking graphics. One project she was involved in, on rates of incarceration in New York City by neighborhood, is now in the permanent collection of the Museum of Modern Art in New York. Williams's other projects have tracked the spread and impact of air pollution in Beijing using air quality monitors and mapped the daily commutes of Nairobi residents using geographic information systems. Cities should be transparent in how they're using AI and what its limitations are.


Interpreting core forms of urban morphology linked to urban functions with explainable graph neural network

Chen, Dongsheng, Feng, Yu, Li, Xun, Qu, Mingya, Luo, Peng, Meng, Liqiu

arXiv.org Artificial Intelligence

Understanding the high-order relationship between urban form and function is essential for modeling the underlying mechanisms of sustainable urban systems. Nevertheless, it is challenging to establish an accurate data representation for complex urban forms that is readily explicable in human terms. This study proposed the concept of core urban morphology representation and developed an explainable deep learning framework for explicably symbolizing complex urban forms into this novel representation, which we call CoMo. By interpreting the well-trained deep learning model, which achieves a stable weighted F1-score of 89.14%, CoMo presents a promising approach for revealing links between urban function and urban form in terms of core urban morphology representation. Using Boston as a study area, we analyzed the core urban forms at the individual-building, block, and neighborhood levels that are important to the corresponding urban functions. The residential core forms follow a gradual morphological pattern along the urban spine, consistent with a center-urban-suburban transition. Furthermore, we show that urban morphology directly affects land use efficiency, which correlates strongly and significantly with location (R2=0.721, p<0.001). Overall, CoMo can explicably symbolize urban forms, provide evidence for classic urban location theory, and offer mechanistic insights for digital twins.


2 Massachusetts men arrested for flying drone 'dangerously close' to Boston airport

FOX News

Two Massachusetts men who flew a drone "dangerously close" to Logan International Airport in Boston are facing charges, police say. Robert Duffy, 42, of Boston's Charlestown neighborhood, and Jeremy Folcik, 32, of Bridgewater, were taken into custody late Saturday night on Long Island, which lies on the approach to the airport, according to the Boston Police Department. "The incident began earlier that evening, at 4:30 PM, when a Boston Police Officer specializing in real-time crime surveillance detected an Unmanned Aircraft System (UAS) operating dangerously close to Logan International Airport," police said in a statement. "Leveraging advanced UAS monitoring technology, the Officer identified the drone's location, altitude, flight history, and the operators' position on Long Island."