Google Assistant will stick around a bit longer than expected for some Android users

Engadget

The transition from Assistant to Gemini will continue into 2026. Google wanted to remove Assistant from most Android phones by the end of 2025 and replace it with Gemini, but the company has now announced that it needs more time to make its AI assistant the new default digital helper for most of its users. Google said it is adjusting its previously announced timeline to ensure a seamless transition, and that updates converting Assistant to Gemini on Android devices will continue into next year. The company also said it will share more details in the coming months, so the transition may extend past early 2026. Assistant's retirement was widely expected from the moment Google launched Gemini and began giving it Assistant's capabilities, such as the ability to control smart devices connected to your phone.



Supplementary File for ConvBench: A Multi-Turn Conversation Evaluation Benchmark with Hierarchical Evaluation Capability for Large Vision-Language Models

Neural Information Processing Systems

We calculate the agreement between human judgment and our automatic evaluation (i.e., ConvBenchEval()) and find it reaches 81.83% (see Tables 3-6 for the detailed agreement on each turn and on the overall score). This demonstrates the effectiveness of ConvBenchEval(), which uses ChatGPT. The agreement between ChatGPT and GPT4 is very high at 87.38%, which shows that using different LLMs as judges only slightly influences the evaluation results. ConvBenchEval() with ChatGPT is therefore reliable and low-cost. From the above tables, we also observe that although GPT4V is more expensive and can process images, its judgments perform worse than GPT4's.
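The agreement figures above can be computed as the fraction of evaluated items on which two judges assign the same verdict. A minimal sketch of that calculation is below; the function name `agreement_rate` and the example verdict labels are illustrative assumptions, not part of the ConvBench evaluation code.

```python
# Hypothetical sketch: percentage agreement between two judges'
# per-item verdicts (e.g., human labels vs. an LLM judge's labels).
def agreement_rate(judge_a, judge_b):
    """Return the percentage of items on which both judges agree."""
    assert len(judge_a) == len(judge_b), "judges must rate the same items"
    matches = sum(a == b for a, b in zip(judge_a, judge_b))
    return 100.0 * matches / len(judge_a)

# Illustrative verdicts over five conversation turns (labels assumed).
human = ["win", "tie", "loss", "win", "win"]
model = ["win", "tie", "win",  "win", "loss"]
print(round(agreement_rate(human, model), 2))  # 60.0
```

The same computation, averaged over all turns and the overall score, yields a single agreement percentage such as the 81.83% reported above.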







c1f0b856a35986348ab3414177266f75-Paper-Conference.pdf

Neural Information Processing Systems

Large language models are now tuned to align with the goals of their creators, namely to be "helpful and harmless." These models should respond helpfully to user questions, but refuse to answer requests that could cause harm. However, adversarial users can construct inputs which circumvent attempts at alignment. In this work, we study adversarial alignment, and ask to what extent these models remain aligned when interacting with an adversarial user who constructs worst-case inputs (adversarial examples). These inputs are designed to cause the model to emit harmful content that would otherwise be prohibited. We show that existing NLP-based optimization attacks are insufficiently powerful to reliably attack aligned text models: even when current NLP-based attacks fail, we can find adversarial inputs with brute force.