How iconic NUC mini-PCs are being reimagined for a new era

PCWorld

More than a year has passed since Asus' acquisition of the NUC brand from Intel, which marked the first major change the brand had seen since Intel launched it back in 2013. After more than a decade of continuity -- including last year's transition period, during which Intel still had a say on design -- this is the first year in which Asus has done most of the groundwork itself, fronting up with its own designs and innovations. So how is the NUC different in this new era? I spoke to Kuo Wei Chao, general manager of the Asus IoT business unit, to find out. The Asus NUC lineup announced at CES 2025 in Las Vegas included the NUC 14 AI and the more premium NUC 14 Pro AI, with a 48 TOPS NPU for on-device AI and a dedicated Copilot button for quick access to the AI assistant. They were on display alongside two new powerful mini-PCs for everyday use featuring the latest Intel Core Ultra (Series 2) chips: the NUC 15 and NUC 15 Pro.


InfiniPot: Infinite Context Processing on Memory-Constrained LLMs

Kim, Minsoo, Shim, Kyuhong, Choi, Jungwook, Chang, Simyung

arXiv.org Artificial Intelligence

Handling long input contexts remains a significant challenge for Large Language Models (LLMs), particularly in resource-constrained environments such as mobile devices. Our work aims to address this limitation by introducing InfiniPot, a novel KV cache control framework designed to enable pre-trained LLMs to efficiently manage extensive sequences within fixed memory constraints, without requiring additional training. InfiniPot leverages Continual Context Distillation (CCD), an iterative process that compresses and retains essential information through novel importance metrics, effectively maintaining critical data even without access to future context. Our comprehensive evaluations indicate that InfiniPot significantly outperforms models trained for long contexts in various NLP tasks, establishing its efficacy and versatility. This work represents a substantial advancement toward making LLMs applicable to a broader range of real-world scenarios.
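The core idea of a fixed-budget KV cache can be sketched in a few lines: when the cache exceeds its budget, rank entries by an importance score and keep only the top ones. This is a toy illustration, not InfiniPot's actual CCD procedure; the class name, the scalar scores, and the top-k retention rule are all stand-ins for the paper's importance metrics.

```python
from dataclasses import dataclass, field

@dataclass
class FixedBudgetKVCache:
    """Toy fixed-memory cache: once the budget is exceeded, keep only
    the highest-scoring entries (a stand-in for importance-based
    KV cache compression)."""
    budget: int
    keys: list = field(default_factory=list)
    scores: list = field(default_factory=list)

    def append(self, key, score):
        self.keys.append(key)
        self.scores.append(score)
        if len(self.keys) > self.budget:
            self._compress()

    def _compress(self):
        # Rank entries by importance, retain the top `budget`,
        # and preserve their original sequence order.
        ranked = sorted(range(len(self.keys)),
                        key=lambda i: self.scores[i], reverse=True)
        keep = sorted(ranked[: self.budget])
        self.keys = [self.keys[i] for i in keep]
        self.scores = [self.scores[i] for i in keep]

cache = FixedBudgetKVCache(budget=3)
for tok, s in [("a", 0.9), ("b", 0.1), ("c", 0.5), ("d", 0.8), ("e", 0.2)]:
    cache.append(tok, s)
print(cache.keys)  # → ['a', 'c', 'd']
```

Because compression runs every time the budget is exceeded, memory stays bounded no matter how long the input stream grows, which mirrors the fixed-memory constraint the paper targets.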


Improving Structural Diversity of Blackbox LLMs via Chain-of-Specification Prompting

Young, Halley, Zeng, Yimeng, Gardner, Jacob, Bastani, Osbert

arXiv.org Artificial Intelligence

The capability to generate diverse text is a key challenge facing large language models (LLMs). Thus far, diversity has been studied via metrics such as $n$-gram diversity or diversity of BERT embeddings. However, for these kinds of diversity, the user has little control over the dimensions along which diversity is considered. For example, in the poetry domain, one might desire diversity in terms of rhyme and meter, whereas in the code domain, one might desire diversity in terms of the kinds of expressions used to solve a problem. We propose a diversity metric called structural diversity, where the user provides a mapping from generated text to features capturing the kinds of diversity that they care about. In addition, we propose a novel strategy called chain-of-specification (CoS) prompting for improving diversity by first having the LLM generate a specification encoding one instance of structural features, and then prompting the LLM to generate text that satisfies these features; notably, our strategy works with blackbox LLMs. In our experiments, we show that for structural diversity in the poetry and code domains, CoS significantly improves diversity compared to several baselines.
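The notion of a user-defined structural feature mapping can be made concrete with a small sketch. The metric below (fraction of distinct feature vectors among generations) and the toy poetry features are illustrative assumptions, not the paper's exact definitions.

```python
def structural_diversity(texts, feature_map):
    """Fraction of distinct structural feature vectors among the
    generations. `feature_map` is user-supplied and encodes the kinds
    of diversity the user cares about."""
    feats = {tuple(feature_map(t)) for t in texts}
    return len(feats) / len(texts)

# Toy poetry-domain features: (line count, do first and last lines rhyme?)
def features(poem):
    lines = poem.strip().split("\n")
    rhyme = lines[0].split()[-1][-2:] == lines[-1].split()[-1][-2:]
    return (len(lines), rhyme)

poems = [
    "roses are red\nviolets are blue",  # (2, False)
    "one line only",                    # (1, True)
    "cat\nhat",                         # (2, True)
    "dog\nlog",                         # (2, True) -- duplicate structure
]
print(structural_diversity(poems, features))  # → 0.75
```

Swapping in a different `feature_map` (e.g. meter, or expression kinds for code) changes which dimensions of diversity are measured, which is the point of making the mapping user-controlled.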


Can we obtain significant success in RST discourse parsing by using Large Language Models?

Maekawa, Aru, Hirao, Tsutomu, Kamigaito, Hidetaka, Okumura, Manabu

arXiv.org Artificial Intelligence

Recently, decoder-only pre-trained large language models (LLMs), with tens of billions of parameters, have significantly impacted a wide range of natural language processing (NLP) tasks. While encoder-only and encoder-decoder pre-trained language models have already proved to be effective in discourse parsing, the extent to which LLMs can perform this task remains an open research question. Therefore, this paper explores how beneficial such LLMs are for Rhetorical Structure Theory (RST) discourse parsing. Here, the parsing process for both fundamental strategies, top-down and bottom-up, is converted into prompts, which LLMs can work with. We employ Llama 2 and fine-tune it with QLoRA, which reduces the number of trainable parameters. Experimental results on three benchmark datasets, RST-DT, Instr-DT, and the GUM corpus, demonstrate that Llama 2 with 70 billion parameters in the bottom-up strategy obtained state-of-the-art (SOTA) results with significant differences. Furthermore, our parsers demonstrated generalizability when evaluated on RST-DT: despite being trained on the GUM corpus, they obtained performance similar to that of existing parsers trained on RST-DT.
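Converting a parsing step into a prompt might look like the sketch below, which formats one bottom-up merge decision as text for an LLM. The template wording and function name are hypothetical; the paper's actual prompt format may differ substantially.

```python
def bottom_up_prompt(spans):
    """Illustrative prompt for one bottom-up parsing step: list the
    current adjacent discourse spans and ask which pair to merge next,
    and with which rhetorical relation label."""
    listing = "\n".join(f"{i}: {s}" for i, s in enumerate(spans))
    return ("Given the adjacent discourse spans below, choose the pair "
            "of neighbours to merge next and a rhetorical relation "
            "label (e.g. Elaboration, Contrast) with nuclearity.\n"
            + listing)

prompt = bottom_up_prompt([
    "It was raining hard,",
    "so we stayed inside",
    "and played cards.",
])
print(prompt)
```

A full parser would call the LLM on this prompt, apply the chosen merge, and repeat until a single span (the root of the RST tree) remains.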


Mobile Manipulation Platform for Autonomous Indoor Inspections in Low-Clearance Areas

Pearson, Erik, Szenher, Paul, Huang, Christine, Englot, Brendan

arXiv.org Artificial Intelligence

Mobile manipulators have been used for inspection, maintenance, and repair tasks over the years, but there are some key limitations. Stability concerns typically require mobile platforms to be large in order to handle far-reaching manipulators, or require the manipulators to have drastically reduced workspaces to fit onto smaller mobile platforms. We therefore propose a combination of two widely used robots: the Clearpath Jackal unmanned ground vehicle and the Kinova Gen3 six degree-of-freedom manipulator. The Jackal has a small footprint and works well in low-clearance indoor environments. Extensive testing of localization, navigation, and mapping using LiDAR sensors makes the Jackal a well-developed mobile platform suitable for mobile manipulation. The Gen3 has a long reach with reasonable power consumption for manipulation tasks. A wrist camera for RGB-D sensing and a customizable end-effector interface make the Gen3 suitable for a myriad of manipulation tasks. Typically these features would result in an unstable platform; however, with a few minor hardware and software modifications, we have produced a stable, high-performance mobile manipulation platform with significant mobility, reach, sensing, and maneuverability for indoor inspection tasks, without degradation of the component robots' individual capabilities. These assertions were investigated with hardware via semi-autonomous navigation to waypoints in a busy indoor environment, and high-precision self-alignment alongside planar structures for intervention tasks.


One-sided Matrix Completion from Two Observations Per Row

Cao, Steven, Liang, Percy, Valiant, Gregory

arXiv.org Artificial Intelligence

Given only a few observed entries from a low-rank matrix X, matrix completion is the problem of imputing the missing entries, and it formalizes a wide range of real-world settings that involve estimating missing data. However, when there are too few observed entries to complete the matrix, what other aspects of the underlying matrix can be reliably recovered? We study one such problem setting, that of "one-sided" matrix completion, where our goal is to recover the right singular vectors of X, even in the regime where recovering the left singular vectors is impossible, which arises when there are more rows than columns and very few observations. We propose a natural algorithm that involves imputing …

However, most of our understanding is restricted to settings where each row and each column have more observations than the rank of the underlying matrix. It is natural that past work operated under this assumption because full matrix completion is impossible without it: for a rank-r matrix X with shape m × d, one can show that estimating the matrix is impossible with o(r(m + d)) observations. Nonetheless, many important applications do not satisfy this assumption: for example, in low-coverage genotype imputation (Li et al., 2009), we might sequence d = 2,000 people for 10,000 genetic variants each, out of the m = 10,000,000 genetic variants in humans. Represented as a matrix, we have a 10,000,000 × 2,000 matrix with 2,000 × 10,000 = 20,000,000 total observations, or about two observations per row on average, which is certainly much less than the rank of the matrix.
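The counting in the genotype-imputation example can be checked directly. The rank value r = 2 below is an illustrative choice to show that even a tiny rank already pushes the r(m + d) requirement past the number of observed entries.

```python
# Observation-count arithmetic from the genotype-imputation example:
m, d = 10_000_000, 2_000          # genetic variants (rows) x people (columns)
variants_per_person = 10_000      # entries observed per column

total_obs = d * variants_per_person
obs_per_row = total_obs / m
print(total_obs)                  # → 20000000
print(obs_per_row)                # → 2.0 observations per row on average

# Full completion needs on the order of r*(m + d) observations, so even
# a rank as small as r = 2 already exceeds the observed-entry count:
r = 2
print(r * (m + d) > total_obs)    # → True
```

This is exactly the regime the abstract describes: far too few entries per row to complete the matrix, yet enough column-wise structure that the right singular vectors may still be recoverable.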


Intel shows how Movidius AI chips and Windows ML will let PCs anticipate your needs

PCWorld

Intel envisions a future where your PC will simply anticipate your habits and act accordingly. But it's not clear when that future will arrive, how realistic that vision is, or whether consumers will tolerate a computer that predicts their every move. What we know is this: Intel is building a future version of its tiny desktop PCs, the NUCs, with Amazon's Alexa assistant built in. The Intel "Bean Canyon" NUC (named for the "Coffee Lake" chip inside it, "bean" as in coffee bean) will arrive later this year. Meanwhile, Intel is adapting its Movidius chips into "AI chips" that will power these intelligent, future experiences.


Denoising Linear Models with Permuted Data

Pananjady, Ashwin, Wainwright, Martin J., Courtade, Thomas A.

arXiv.org Machine Learning

The multivariate linear regression model with shuffled data and additive Gaussian noise arises in various correspondence estimation and matching problems. Focusing on the denoising aspect of this problem, we provide a characterization of the minimax error rate that is sharp up to logarithmic factors. We also analyze the performance of two versions of a computationally efficient estimator, and establish their consistency for a large range of input parameters. Finally, we provide an exact algorithm for the noiseless problem and demonstrate its performance on an image point-cloud matching task. Our analysis also extends to datasets with outliers.
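A minimal sketch of the shuffled-regression setup, restricted to the noiseless univariate case with a positive coefficient: there, sorting both sides aligns the permuted responses with the covariates. This simple special case is my own illustration of the problem structure, not the paper's exact algorithm, which handles the general noisy multivariate setting.

```python
def denoise_univariate_permuted(x, y):
    """Noiseless univariate shuffled regression: y is a permutation of
    b*x with b > 0, so sorting both sides recovers the correspondence.
    Returns (perm, b) where perm[i] is the index of the y entry matched
    to x[i]."""
    order_x = sorted(range(len(x)), key=lambda i: x[i])
    order_y = sorted(range(len(y)), key=lambda i: y[i])
    perm = [0] * len(x)
    for xi, yi in zip(order_x, order_y):
        perm[xi] = yi          # k-th smallest x matches k-th smallest y
    b = y[perm[0]] / x[0]      # recover the coefficient from one pair
    return perm, b

x = [3.0, 1.0, 2.0]
y = [2.5, 5.0, 7.5]            # a shuffled copy of 2.5 * x
perm, b = denoise_univariate_permuted(x, y)
print(perm, b)                 # → [2, 0, 1] 2.5
```

With noise, sorting no longer matches pairs exactly, which is why the noisy problem requires the estimators and minimax analysis the abstract describes.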