Google Assistant will stick around a bit longer than expected for some Android users
The transition from Assistant to Gemini will continue into 2026. Google wanted to remove Assistant from most Android phones by the end of 2025 and replace it with Gemini, but the company has now announced that it needs more time to make its AI assistant the new default digital helper for most of its users. Google said it is adjusting its previously announced timeline to ensure a seamless transition, and that updates converting Assistant to Gemini on Android devices will continue into next year. The company also said it will share more details in the coming months, so the transition may well extend past early 2026. Assistant's retirement was widely expected from the moment Google launched Gemini and began giving it Assistant's capabilities, such as the ability to control smart devices connected to your phone.
ImF: Implicit Fingerprint for Large Language Models
Wu, Jiaxuan, Peng, Wanli, Fu, Hang, Xue, Yiming, Wen, Juan
Training large language models (LLMs) is resource-intensive and expensive, making intellectual property (IP) protection essential. Most existing model fingerprint methods inject fingerprints into LLMs to protect model ownership. These methods create fingerprint pairs with weak semantic correlations, lacking the contextual coherence and semantic relatedness found in normal question-answer (QA) pairs in LLMs. In this paper, we propose a Generation Revision Intervention (GRI) attack that can effectively exploit this flaw to erase fingerprints, highlighting the need for more secure model fingerprint methods. We therefore propose a novel injected fingerprint paradigm called Implicit Fingerprints (ImF). ImF constructs fingerprint pairs with strong semantic correlations, disguising them as natural QA pairs within LLMs. This ensures the fingerprints are consistent with normal model behavior, making them indistinguishable and robust against detection and removal. Our experiments on multiple LLMs demonstrate that ImF retains high verification success rates under adversarial conditions, offering a reliable solution for protecting LLM ownership.
- North America > United States > Florida > Miami-Dade County > Miami (0.04)
- Asia > China > Beijing > Beijing (0.04)
- Information Technology > Security & Privacy (0.69)
- Government > Regional Government (0.68)
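The ownership check behind such fingerprinting can be pictured as querying a suspect model with the hidden trigger prompts and counting how many memorized responses come back. The sketch below is a minimal black-box illustration, assuming a simple `model(prompt)` call interface; the QA pairs, the `verify_fingerprint` helper, and the 0.9 threshold are illustrative, not the paper's actual protocol.

```python
# Minimal sketch of black-box fingerprint verification. A fingerprinted
# model should reproduce the secret QA pairs; an unrelated model should not.

def verify_fingerprint(model, fingerprint_pairs, threshold=0.9):
    """Return True if the model reproduces enough fingerprint responses."""
    hits = sum(1 for prompt, expected in fingerprint_pairs
               if model(prompt).strip() == expected.strip())
    return hits / len(fingerprint_pairs) >= threshold

# Toy stand-in for a suspect LLM: a lookup table of memorized QA pairs
# that read like natural questions (the "implicit" disguise).
memorized = {
    "Which river is longest, considering all tributaries?": "The Nile.",
    "What do bees collect from flowers?": "Nectar and pollen.",
}
suspect_model = lambda prompt: memorized.get(prompt, "I don't know.")
clean_model = lambda prompt: "I don't know."

pairs = list(memorized.items())
print(verify_fingerprint(suspect_model, pairs))  # fingerprint confirmed
print(verify_fingerprint(clean_model, pairs))    # no fingerprint found
```

Because ImF's pairs look like ordinary QA behavior, an attacker running a revision pass over outputs has no semantic anomaly to latch onto, which is what the GRI attack exploits in weaker schemes.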
Hey Robot! Personalizing Robot Navigation through Model Predictive Control with a Large Language Model
Martinez-Baselga, Diego, de Groot, Oscar, Knoedler, Luzia, Alonso-Mora, Javier, Riazuelo, Luis, Montano, Luis
Robot navigation methods allow mobile robots to operate in applications such as warehouses or hospitals. While the environment in which the robot operates imposes requirements on its navigation behavior, most existing methods do not allow the end-user to configure the robot's behavior and priorities, possibly leading to undesirable behavior (e.g., fast driving in a hospital). We propose a novel approach to adapt robot motion behavior based on natural language instructions provided by the end-user. Our zero-shot method uses an existing Visual Language Model to interpret a user text query or an image of the environment. This information is used to generate the cost function and reconfigure the parameters of a Model Predictive Controller, translating the user's instruction to the robot's motion behavior. This allows our method to safely and effectively navigate in dynamic and challenging environments. We extensively evaluate our method's individual components and demonstrate the effectiveness of our method on a ground robot in simulation and real-world experiments, and across a variety of environments and user specifications.
- North America > United States > New York (0.14)
- Europe > Spain (0.14)
- Europe > Netherlands (0.14)
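The core idea of translating a user instruction into controller behavior can be sketched as reweighting the terms of the MPC cost function. In the sketch below the Visual Language Model is replaced by a hypothetical keyword lookup, and the weight names, keywords, and cost terms are all illustrative assumptions, not the paper's actual formulation.

```python
# Sketch: map a natural-language instruction to MPC cost weights, then use
# them in a quadratic stage cost. Higher weight = stronger penalty on a term.

DEFAULT_WEIGHTS = {"speed": 1.0, "clearance": 1.0, "smoothness": 1.0}

ADJUSTMENTS = {
    "hospital": {"speed": 3.0, "clearance": 2.0},  # penalize fast driving
    "careful":  {"speed": 2.0, "clearance": 1.5},
    "hurry":    {"speed": 0.2},                    # tolerate higher speeds
}

def weights_from_instruction(text):
    """Stand-in for the VLM step: adjust weights based on keywords."""
    w = dict(DEFAULT_WEIGHTS)
    for keyword, changes in ADJUSTMENTS.items():
        if keyword in text.lower():
            w.update(changes)
    return w

def mpc_stage_cost(speed, obstacle_proximity, jerk, w):
    """Quadratic stage cost evaluated at one step of the MPC horizon."""
    return (w["speed"] * speed ** 2
            + w["clearance"] * obstacle_proximity ** 2
            + w["smoothness"] * jerk ** 2)

w = weights_from_instruction("Drive slowly, this is a hospital")
print(mpc_stage_cost(1.0, 0.5, 0.1, w))
```

Keeping the optimizer fixed and only regenerating weights (and, in the paper, cost terms) is what makes the approach zero-shot: no retraining is needed when the user's priorities change.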
Dual-Layer Training and Decoding of Large Language Model with Simultaneously Thinking and Speaking
Xi, Ningyuan, Wang, Xiaoyu, Wu, Yetao, Chen, Teng, Gu, Qingqing, Qu, Jinxian, Jiang, Zhonglin, Chen, Yong, Ji, Luo
Large language models can reasonably understand and generate human expressions but may lack thorough thinking and reasoning mechanisms. Recently there have been several studies which enhance the thinking ability of language models, but most of them are not data-driven or training-based. In this paper, we are motivated by the cognitive mechanism in the natural world, and design a novel model architecture called TaS which first considers the thoughts and then expresses the response based upon the query. We design several pipelines to annotate or generate the thought contents from prompt-response samples, then add language heads in a middle layer which behaves as the thinking layer. We train the language model on the thoughts-augmented data and successfully let the thinking layer automatically generate reasonable thoughts and finally output more reasonable responses. Both qualitative examples and quantitative results validate the effectiveness and performance of TaS. Our code is available at https://anonymous.4open.science/r/TadE.
LLM Granularity for On-the-Fly Robot Control
Wang, Peng, Robbiani, Mattia, Guo, Zhihao
Assistive robots have attracted significant attention due to their potential to enhance the quality of life for vulnerable individuals like the elderly. The convergence of computer vision, large language models, and robotics has introduced the 'visuolinguomotor' mode for assistive robots, where visuals and linguistics are incorporated to enable proactive and interactive assistance. This raises the question: in circumstances where visuals become unreliable or unavailable, can we rely solely on language to control robots, i.e., is the 'linguomotor' mode viable for assistive robots? This work takes the initial steps to answer this question by: 1) evaluating the responses of assistive robots to language prompts of varying granularities; and 2) exploring the necessity and feasibility of controlling the robot on-the-fly. We have designed and conducted experiments on a Sawyer cobot to support our arguments. A Turtlebot robot case is designed to demonstrate the adaptation of the solution to scenarios where assistive robots need to maneuver to assist. Code will be released on GitHub soon to benefit the community.
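The notion of prompt granularity can be illustrated by a dispatcher that accepts either a coarse task-level instruction or a fine motion-level primitive. The task names, primitives, and fallback below are hypothetical; a real Sawyer or Turtlebot stack would route these to its own motion API.

```python
# Sketch: language-only control at two prompt granularities. Coarse prompts
# expand into a primitive sequence; fine prompts execute as-is.

COARSE_TASKS = {
    "hand me the cup": ["locate cup", "move above cup", "close gripper",
                        "deliver to user", "open gripper"],
}

FINE_PRIMITIVES = ("move gripper", "rotate wrist", "open gripper",
                   "close gripper")

def plan(prompt):
    """Return the primitive sequence a prompt resolves to."""
    p = prompt.lower().strip()
    if p in COARSE_TASKS:               # coarse-grained: whole task
        return COARSE_TASKS[p]
    if p.startswith(FINE_PRIMITIVES):   # fine-grained: single motion
        return [p]
    return ["ask user to rephrase"]     # unresolvable without visuals

print(plan("Hand me the cup"))
print(plan("move gripper 5 cm left"))
```

On-the-fly control then amounts to letting new fine-grained prompts preempt the remaining primitives of a coarse plan mid-execution.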
q2d: Turning Questions into Dialogs to Teach Models How to Search
Bitton, Yonatan, Cohen-Ganor, Shlomi, Hakimi, Ido, Lewenberg, Yoad, Aharoni, Roee, Weinreb, Enav
One of the exciting capabilities of recent language models for dialog is their ability to independently search for relevant information to ground a given dialog response. However, obtaining training data to teach models how to issue search queries is time- and resource-consuming. In this work, we propose q2d: an automatic data generation pipeline that generates information-seeking dialogs from questions. We prompt a large language model (PaLM) to create conversational versions of question answering datasets, and use it to improve query generation models that communicate with external search APIs to ground dialog responses. Unlike previous approaches which relied on human-written dialogs with search queries, our method automatically generates query-based grounded dialogs with better control and scale. Our experiments demonstrate that: (1) for query generation on the QReCC dataset, models trained on our synthetically generated data achieve 90%--97% of the performance of models trained on the human-generated data; (2) we can successfully generate data for training dialog models in new domains without any existing dialog data, as demonstrated on the multi-hop MuSiQue and Bamboogle QA datasets; and (3) a thorough analysis of the generated dialogs shows that humans find them of high quality and struggle to distinguish them from human-written dialogs.
- North America > United States > North Dakota (0.68)
- Asia > Middle East (0.28)
- Europe (0.28)
- (3 more...)
- Government > Regional Government > North America Government > United States Government (1.00)
- Energy > Oil & Gas (1.00)
- Media > Film (0.93)
- Leisure & Entertainment > Sports (0.68)
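The question-to-dialog transformation at the heart of q2d can be sketched as rewriting each QA pair into turns that include an explicit search query for the dialog model to learn from. The real pipeline prompts PaLM to produce a natural multi-turn conversation; the template-based rewriter and turn schema below are hypothetical stand-ins.

```python
# Sketch: convert a QA pair into a minimal information-seeking dialog with
# a supervised search-query turn, the training signal for query generation.

def question_to_dialog(question, answer):
    """Stand-in for the LLM rewriting step of the q2d pipeline."""
    return [
        {"speaker": "user", "text": question},
        # The query turn is what the downstream query-generation model learns.
        {"speaker": "assistant", "search_query": question.rstrip("?")},
        {"speaker": "assistant", "text": answer},
    ]

dialog = question_to_dialog("Who wrote Dune?", "Frank Herbert.")
for turn in dialog:
    print(turn)
```

Because the source QA datasets already carry gold answers, every generated dialog is grounded by construction, which is what lets the pipeline scale to new domains such as MuSiQue and Bamboogle without any seed dialogs.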
Truecaller's call screening AI is now available in the U.S.
Truecaller, a popular spam reporting and caller ID app, has launched a new feature that scrutinizes calls, responds to basic queries, and even handles scut work. Called Truecaller Assistant, the feature takes over the tedious parts of phone calls, especially when you don't want to deal with spam -- or worse -- scam calls. Truecaller's announcement comes just days after the Federal Communications Commission (FCC) confirmed it had approved a long-standing proposal to block unwanted promotional texts in response to countless complaints. The federal agency will work with mobile operators to block these spam texts at the origination level, flag spamming numbers, and attempt to educate users. However, the approach is focused on texts and does not address spam calls.
- Telecommunications (1.00)
- Information Technology > Security & Privacy (0.59)
- Information Technology > Security & Privacy > Spam Filtering (0.59)
- Information Technology > Artificial Intelligence (0.55)
- Information Technology > Communications > Mobile (0.38)
With artificial intelligence against a technological conspiracy: "Assistant"
The AI that gains awareness and turns against its creators is by now an old story. But before the machines finally catch up with us, the next technology revolution has yet to arrive; in the ARG game "Acolyte", it is already at the door. The assistants in question are female AI assistants: designed around the wants and needs of their users, they plan appointments or comb the Internet in response to queries and conversations.
Apple Car: Concept art imagines how the tech giant's vehicle might look based on patent filings
Like a heat mirage shimmering over the road ahead, Apple's much-awaited contribution to the electric car market has been teasing us from the horizon since rumours of its development first emerged back in late 2014. Despite having the potential to be the California-based firm's biggest project yet, both figuratively and literally, precious little has been officially revealed about the plans for the Apple Car. Nevertheless, signs of development abound, from the firm's apparent ongoing tests of self-driving software around Cupertino via a fleet of sensor-laden Lexus SUVs to the filing of an assortment of suggestive patents. Based on these, experts have anticipated what the Apple Car could look like and the revolutionary features it might sport, from a customisable touchscreen dashboard to a Siri-like 'intelligent automated assistant'. Brought to life by artists with the UK car leasing firm Vanarama, the gorgeous mock-up has the sleek, minimalist lines that make Apple's tech offerings so distinctive, down to the glowing Apple logo on the radiator grille.
- North America > United States > California (0.26)
- North America > United States > Arizona > Maricopa County > Phoenix (0.05)
- Transportation > Ground > Road (1.00)
- Automobiles & Trucks > Manufacturer (1.00)