What technology takes from us – and how to take it back
Decisions outsourced, chatbots for friends, the natural world an afterthought: Silicon Valley is giving us a life devoid of connection. There is a way out - but it's going to take collective effort.

Summer after summer, I used to descend into a creek that had carved a deep bed shaded by trees and lined with blackberry bushes whose long thorny canes arced down from the banks, dripping with sprays of fruit. Down in that creek, I'd spend hours picking until I had a few gallons of berries, until my hands and wrists were covered in scratches from the thorns and stained purple from the juice, until the tranquillity of that place had soaked into me. The berries on a single spray might range from green through shades of red to the darkness that gives the fruit its name. Partly by sight and partly by touch, I determined which berries were too hard and which too soft, picking only the ones in between, while listening to birds and the hum of bees, to the music of water flowing, noticing small jewel-like insects among the berries, dragonflies in the open air, water striders in the creek's calm stretches. I went there for berries, but I also went there for the quiet, the calm, the feeling of cool water on my feet and sometimes up to my knees as I waded in where the picking was good. At home I made jars of jam. When I gave them away I was trying to give not just my jam - which was admittedly runny and seedy - but something of the peace of that creek, of summer itself.
Bridge the Modality and Capability Gaps in Vision-Language Model Selection
Vision Language Models (VLMs) excel in zero-shot image classification by pairing images with textual category names. The expanding variety of pre-trained VLMs raises the likelihood of finding a suitable VLM for a specific task. To better reuse VLM resources and fully leverage their potential on different zero-shot image classification tasks, a promising strategy is to select an appropriate pre-trained VLM from the VLM Zoo, relying solely on the text data of the target dataset, without access to the dataset's images. In this paper, we analyze two inherent challenges in assessing a VLM's ability in this language-only VLM selection: the "Modality Gap" - the disparity between a VLM's embeddings of the two modalities, which makes text a less reliable substitute for images - and the "Capability Gap" - the discrepancy between a VLM's overall ranking and its ranking on the target dataset, which hinders direct prediction of a model's dataset-specific performance from its general performance. We propose VLM Selection With gAp Bridging (SWAB) to mitigate the negative impact of these two gaps. SWAB first adopts optimal transport to capture the relevance between the open-source and target datasets with a transportation matrix. It then uses this matrix to transfer useful statistics of VLMs from the open-source datasets to the target dataset, bridging both gaps. By bridging the gaps to obtain better substitutes for test images, SWAB can accurately predict the performance ranking of different VLMs on the target task without needing the dataset's images.
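The core mechanism the abstract describes - compute a transportation matrix between open-source and target classes, then use it to carry per-class VLM statistics across - can be sketched with a plain Sinkhorn solver. This is a minimal illustration, not SWAB's actual formulation: the cost function, the uniform marginals, and the "accuracy" statistic being transferred are all assumptions made for the sketch.

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iters=200):
    """Entropy-regularized optimal transport (Sinkhorn-Knopp).
    Returns a transport matrix T with T.sum(1) ~ a and T.sum(0) ~ b."""
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
src_txt = rng.normal(size=(5, 8))   # text embeddings of 5 open-source classes
tgt_txt = rng.normal(size=(3, 8))   # text embeddings of 3 target classes

# Transport cost: squared Euclidean distance, rescaled to [0, 1]
# so the exponential kernel does not underflow.
C = ((src_txt[:, None, :] - tgt_txt[None, :, :]) ** 2).sum(-1)
C /= C.max()

a = np.full(5, 1 / 5)               # uniform mass over source classes
b = np.full(3, 1 / 3)               # uniform mass over target classes
T = sinkhorn(a, b, C)

# Carry a per-class VLM statistic (here an invented accuracy vector)
# from the open-source classes to the target classes via T.
src_acc = np.array([0.9, 0.8, 0.7, 0.6, 0.5])
tgt_acc = (T * src_acc[:, None]).sum(0) / T.sum(0)
```

Each target class ends up with a transport-weighted average of the source statistics, which is the sense in which the matrix "transfers" knowledge from datasets the VLMs have already been evaluated on.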
Training for Obsolescence? The AI-Driven Education Trap
Artificial intelligence is simultaneously transforming the production function of human capital in schools and the return to skills in the labor market. We develop a theoretical model to analyze the potential for misallocation when these two forces are considered in isolation. We study an educational planner who observes AI's immediate productivity benefits in teaching specific skills but fails to fully internalize the technology's future wage-suppressing effects on those same skills. Motivated by a pre-registered pilot study suggesting a positive correlation between a skill's "teachability" by AI and its vulnerability to automation, we show that this information friction leads to a systematic skill mismatch. The planner over-invests in skills destined for obsolescence, a distortion that increases monotonically with AI prevalence. Extensions demonstrate that this mismatch is exacerbated by the neglect of unpriced non-cognitive skills and by the endogenous over-adoption of educational technology. Our findings caution that policies promoting AI in education, if not paired with forward-looking labor market signals, may paradoxically undermine students' long-term human capital, such as by crowding out skills like persistence that are forged through intellectual struggle.
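The paper's central distortion - a planner who allocates by current returns over-invests in the very skills whose future wage AI will suppress - can be shown with a toy two-skill model. The functional form below (returns proportional to a power of teachability times wage) is invented for illustration; the paper's actual model is richer.

```python
import numpy as np

# Skill 0 is AI-teachable and automation-exposed; skill 1 is neither.
teach = np.array([1.5, 1.0])        # human capital produced per unit invested
wage_now = np.array([1.0, 1.0])     # wages the naive planner observes today
wage_future = np.array([0.6, 1.1])  # wages after AI suppresses skill 0

def invest_share(teach, wage, alpha=0.5):
    """Toy planner: invest in proportion to perceived marginal return,
    (teachability * wage)^(1/(1-alpha)). Purely illustrative."""
    r = (teach * wage) ** (1 / (1 - alpha))
    return r / r.sum()

naive = invest_share(teach, wage_now)       # ignores wage suppression
informed = invest_share(teach, wage_future) # internalizes it
```

The naive allocation puts a larger share into skill 0 than the informed one does, and the gap widens as the teachability advantage grows - the monotone-in-AI-prevalence distortion the abstract describes.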
FAR: Function-preserving Attention Replacement for IMC-friendly Inference
Ren, Yuxin, Collins, Maxwell D, Hu, Miao, Yang, Huanrui
While transformers dominate modern vision and language models, their attention mechanism remains poorly suited for in-memory computing (IMC) devices due to intensive activation-to-activation multiplications and non-local memory access, leading to substantial latency and bandwidth overhead on ReRAM-based accelerators. To address this mismatch, we propose FAR, a Function-preserving Attention Replacement framework that substitutes all attention in pretrained DeiTs with sequential modules inherently compatible with IMC dataflows. Specifically, FAR replaces self-attention with a multi-head bidirectional LSTM architecture via block-wise distillation to retain functional equivalence while enabling linear-time computation and localized weight reuse. We further incorporate structured pruning on FAR models, enabling flexible adaptation to resource-constrained IMC arrays while maintaining functional fidelity. Evaluations on the DeiT family demonstrate that FAR maintains comparable accuracy to the original attention-based models on ImageNet and multiple downstream tasks with reduced parameters and latency. Further analysis shows that FAR preserves the semantic token relationships learned by attention while improving computational efficiency, highlighting its potential for energy-efficient transformer inference on IMC-based edge accelerators.
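The substitution FAR performs - swap the attention block for a multi-head bidirectional recurrence with the same (tokens, dim) -> (tokens, dim) interface, so each token is still mixed with its left and right context but in linear time - can be sketched as below. This uses simplified Elman-style cells rather than true LSTMs, and random weights; in FAR the replacement is a multi-head BiLSTM whose weights are fit by block-wise distillation against the original attention block's outputs.

```python
import numpy as np

def bi_recurrent_mixer(x, params, n_heads=2):
    """Drop-in stand-in for a self-attention block: (T, d) -> (T, d).
    Each head runs one forward and one backward recurrence, so the
    cost is O(T) in sequence length instead of attention's O(T^2)."""
    T, d = x.shape
    hd = d // n_heads
    outs = []
    for h in range(n_heads):
        xs = x[:, h * hd:(h + 1) * hd]
        Wf, Uf, Wb, Ub = params[h]
        fwd = np.zeros((T, hd))
        bwd = np.zeros((T, hd))
        s = np.zeros(hd)
        for t in range(T):               # forward sweep (left context)
            s = np.tanh(xs[t] @ Wf + s @ Uf)
            fwd[t] = s
        s = np.zeros(hd)
        for t in reversed(range(T)):     # backward sweep (right context)
            s = np.tanh(xs[t] @ Wb + s @ Ub)
            bwd[t] = s
        outs.append((fwd + bwd) / 2)
    return np.concatenate(outs, axis=1)

rng = np.random.default_rng(0)
d, hd = 8, 4
params = [tuple(rng.normal(scale=0.1, size=(hd, hd)) for _ in range(4))
          for _ in range(2)]
x = rng.normal(size=(6, d))   # 6 tokens, dim 8
y = bi_recurrent_mixer(x, params)
```

Because the interface matches attention's, the module can replace an attention block in place; "function preservation" then comes from distillation (e.g. minimizing MSE between the replacement's and the original block's outputs), not from the architecture itself.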
BuddyMoE: Exploiting Expert Redundancy to Accelerate Memory-Constrained Mixture-of-Experts Inference
Wang, Yun, Yang, Lingyun, Yu, Senhao, Wang, Yixiao, Li, Ruixing, Wei, Zhixiang, Yen, James, Qi, Zhengwei
Mixture-of-Experts (MoE) architectures scale language models by activating only a subset of specialized expert networks for each input token, thereby reducing the number of floating-point operations. However, the growing size of modern MoE models causes their full parameter sets to exceed GPU memory capacity; for example, Mixtral-8x7B has 45 billion parameters and requires 87 GB of memory even though only 14 billion parameters are used per token. Existing systems alleviate this limitation by offloading inactive experts to CPU memory, but transferring experts across the PCIe interconnect incurs significant latency (about 10 ms). Prefetching heuristics aim to hide this latency by predicting which experts are needed, but prefetch failures introduce significant stalls and amplify inference latency. In the event of a prefetch failure, prior work offers two primary solutions: either fetch the expert on demand, which incurs a long stall due to the PCIe bottleneck, or drop the expert from the computation, which significantly degrades model accuracy. The critical challenge, therefore, is to maintain both high inference speed and model accuracy when prefetching fails.
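The fallback the title hints at - on a prefetch miss, route the token to a similar expert already resident in GPU memory instead of stalling on PCIe or dropping the expert - can be sketched as a routing policy. The similarity matrix and the argmax fallback here are illustrative assumptions, not BuddyMoE's exact buddy-selection mechanism.

```python
import numpy as np

def route_with_buddies(gate_scores, resident, similarity, top_k=2):
    """Pick the top-k experts by gate score; if a chosen expert is not
    resident in GPU memory (prefetch miss), substitute its most similar
    resident 'buddy' rather than fetching on demand or dropping it."""
    chosen = np.argsort(gate_scores)[::-1][:top_k]
    routed = []
    for e in chosen:
        if resident[e]:
            routed.append(int(e))
        else:
            # mask out non-resident experts, take the closest resident one
            sims = np.where(resident, similarity[e], -np.inf)
            routed.append(int(np.argmax(sims)))
    return routed

n_experts = 8
rng = np.random.default_rng(1)
gate = rng.random(n_experts)                 # router scores for one token
resident = np.zeros(n_experts, dtype=bool)   # which experts sit in GPU memory
resident[[0, 2, 5]] = True
sim = rng.random((n_experts, n_experts))     # pairwise expert similarity
experts = route_with_buddies(gate, resident, sim)
```

The point of the policy is that the miss path costs one array lookup instead of a ~10 ms PCIe transfer, while degrading accuracy less than dropping the expert outright - exploiting the redundancy among experts that the abstract's title refers to.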
Query Answering in Object Oriented Knowledge Bases in Logic Programming: Description and Challenge for ASP
Chaudhri, Vinay K., Heymans, Stijn, Wessel, Michael, Son, Tran Cao
Research on developing efficient and scalable ASP solvers can benefit substantially from the availability of data sets to experiment with. KB Bio 101, developed as part of Project Halo, contains knowledge from a biology textbook and has recently become available for research use. It is one of the largest KBs available in ASP, and reasoning with it is undecidable in general. We give a description of this KB and ASP programs for a suite of queries that have been of practical interest, and we explain why these queries pose significant practical challenges for current ASP solvers.