We Asked Coffee Pros to Blind Test Coffee Machines. The Results Were Surprising

WIRED

For our latest WIRED Blind Test, we sat coffee industry professionals down to rank leading do-it-all coffee machines, and the winner wasn't what anyone expected. What do you love about coffee? Is it the caffeine boost in the morning, the creamy sweetness of a cappuccino or latte, the bucket of filter coffee you can sip on all day, or the quick kick of a good espresso? Or is it the zen-like ritual of it all: the measuring of beans and the precision of the perfect extraction?


Will fusion power get cheap? Don't count on it.

MIT Technology Review

New research suggests that cost declines could be slow for the technology. Fusion power could provide a steady, zero-emissions source of electricity in the future, if companies can get plants built and running. But a new study suggests that even if that future arrives, it might not come cheap. Technologies tend to get less expensive over time: lithium-ion batteries, for example, are now about 90% cheaper than they were in 2013.


VisionLLM: Large Language Model is also an Open-Ended Decoder for Vision-Centric Tasks

Neural Information Processing Systems

Large language models (LLMs) have notably accelerated progress towards artificial general intelligence (AGI), thanks to their impressive zero-shot capability on user-tailored tasks, which gives them immense potential across a range of applications. However, in the field of computer vision, despite the availability of numerous powerful vision foundation models (VFMs), these models remain restricted to tasks in a pre-defined form and struggle to match the open-ended task capabilities of LLMs. In this work, we present an LLM-based framework for vision-centric tasks, termed VisionLLM. This framework provides a unified perspective for vision and language tasks by treating images as a foreign language and aligning vision-centric tasks with language tasks that can be flexibly defined and managed using language instructions. An LLM-based decoder can then make appropriate predictions based on these instructions for open-ended tasks. Extensive experiments show that the proposed VisionLLM can achieve different levels of task customization through language instructions, from fine-grained object-level to coarse-grained task-level customization, all with good results. Notably, with a generalist LLM-based framework, our model achieves over 60% mAP on COCO, on par with detection-specific models. We hope this model can set a new baseline for generalist vision and language models. The code will be released.


Customizable Image Synthesis with Multiple Subjects

Neural Information Processing Systems

Synthesizing images with user-specified subjects has received growing attention due to its practical applications. Despite recent success in single-subject customization, existing algorithms suffer from high training cost and a success rate that drops as the number of subjects increases. Towards controllable image synthesis with multiple subjects as constraints, this work studies how to efficiently represent a particular subject as well as how to appropriately compose different subjects. We find that the text embedding of the subject token already serves as a simple yet effective representation that supports arbitrary combinations without any model tuning. By learning a residual on top of the base embedding, we can robustly shift the raw subject to the customized subject under various text conditions. We then propose to employ layout, a very abstract and easy-to-obtain prior, as the spatial guidance for subject arrangement.
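The residual-on-embedding idea in the abstract can be sketched in a few lines: a customized subject is represented as the base token embedding plus a learned per-subject residual, so composing subjects only requires swapping embeddings, not tuning the model. The names and dimensions below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Illustrative sketch: customized subject = base token embedding + learned residual.
# All shapes and initializations here are assumptions for demonstration.
dim = 16
rng = np.random.default_rng(1)

base_embedding = rng.normal(size=dim)  # e.g. the embedding of a generic subject token
residual = np.zeros(dim)               # learned per subject; zero before any training

customized = base_embedding + residual

# Before learning, the customized subject coincides with the base token,
# so the synthesis model's behavior is unchanged until the residual is trained.
assert np.allclose(customized, base_embedding)
```

Because each subject is just a vector offset, arbitrary subject combinations reduce to inserting different `base + residual` embeddings into the prompt, which is what makes tuning-free composition possible.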


LOCUS: A System and Method for Low-Cost Customization for Universal Specialization

Sundararaman, Dhanasekar, Li, Keying, Xiong, Wayne, Garg, Aashna

arXiv.org Artificial Intelligence

We present LOCUS (LOw-cost Customization for Universal Specialization), a pipeline that consumes few-shot data to streamline the construction and training of NLP models through targeted retrieval, synthetic data generation, and parameter-efficient tuning. With only a small number of labeled examples, LOCUS discovers pertinent data in a broad repository, synthesizes additional training samples via in-context data generation, and fine-tunes models using either full or low-rank (LoRA) parameter adaptation. Our approach targets named entity recognition (NER) and text classification (TC) benchmarks, consistently outperforming strong baselines (including GPT-4o) while substantially lowering costs and model sizes. The resulting memory-optimized models retain 99% of fully fine-tuned accuracy while using barely 5% of the memory footprint, and beat GPT-4o on several benchmarks with less than 1% of its parameters.
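The low-rank (LoRA) adaptation mentioned in the abstract can be sketched as follows: the pretrained weight is frozen, and only two small low-rank factors are trained, which is what keeps memory costs low. The shapes, scaling convention, and initialization below are standard LoRA conventions used for illustration, not details from the LOCUS paper.

```python
import numpy as np

# Minimal sketch of low-rank (LoRA) adaptation for one linear layer.
# Dimensions and hyperparameters are illustrative assumptions.
rng = np.random.default_rng(0)

d_in, d_out, r = 8, 8, 2            # r << d_in: the low-rank bottleneck
W = rng.normal(size=(d_out, d_in))  # frozen pretrained weight (not trained)

# Trainable low-rank factors; B starts at zero so the adapted layer
# initially computes exactly the same function as the frozen one.
A = rng.normal(size=(r, d_in)) * 0.01
B = np.zeros((d_out, r))
alpha = 4.0                         # scaling hyperparameter

def adapted_forward(x):
    # y = W x + (alpha / r) * B A x; only A and B would receive gradients
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# Before training, the adapter is a no-op:
assert np.allclose(adapted_forward(x), W @ x)
```

Only `A` and `B` (here 2 x 8 + 8 x 2 = 32 values versus 64 in `W`) are updated during fine-tuning; at larger model scales this gap is what yields the "less than 1% of its parameters" style of savings the abstract reports.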


The Rapid Growth of AI Foundation Model Usage in Science

Trišović, Ana, Fogelson, Alex, Sivaloganathan, Janakan, Thompson, Neil

arXiv.org Artificial Intelligence

We present the first large-scale analysis of AI foundation model usage in science, based on actual use rather than just citations or keywords. We find that adoption has grown rapidly, at nearly exponential rates, with the highest uptake in Linguistics, Computer Science, and Engineering. Vision models are the most used foundation models in science, although language models' share is growing. Open-weight models dominate. As AI builders increase the parameter counts of their models, scientists have followed suit, but at a much slower rate: in 2013, the median foundation model built was 7.7x larger than the median one adopted in science; by 2024, this gap had jumped to 26x. We also present suggestive evidence that scientists' use of these smaller models may be limiting them from getting the full benefits of AI-enabled science, as papers that use larger models appear in higher-impact journals and accrue more citations.