Not enough data to create a plot.
Try a different view from the menu above.
Is your therapist AI? ChatGPT goes viral on social media for its role as Gen Z's new therapist
AI chatbots are stepping into the therapist's chair โ and not everyone is thrilled about it. In March alone, 16.7 million posts from TikTok users discussed using ChatGPT as a therapist, but mental health professionals are raising red flags over the growing trend that sees artificial intelligence tools being used in their place to treat anxiety, depression and other mental health challenges. "ChatGPT singlehandedly has made me a less anxious person when it comes to dating, when it comes to health, when it comes to career," user @christinazozulya shared in a TikTok video posted to her profile last month. "Any time I have anxiety, instead of bombarding my parents with texts like I used to or texting a friend or crashing out essentiallyโฆ before doing that, I always voice memo my thoughts into ChatGPT, and it does a really good job at calming me down and providing me with that immediate relief that unfortunately isn't as accessible to everyone." The ChatGPT logo on a laptop computer arranged in New York, US, on Thursday, March 9, 2023.
On the Exploitability of Instruction Tuning Jiongxiao Wang 2 Chen Zhu 3 Jonas Geiping
Instruction tuning is an effective technique to align large language models (LLMs) with human intents. In this work, we investigate how an adversary can exploit instruction tuning by injecting specific instruction-following examples into the training data that intentionally changes the model's behavior. For example, an adversary can achieve content injection by injecting training examples that mention target content and eliciting such behavior from downstream models. To achieve this goal, we propose AutoPoison, an automated data poisoning pipeline. It naturally and coherently incorporates versatile attack goals into poisoned data with the help of an oracle LLM. We showcase two example attacks: content injection and over-refusal attacks, each aiming to induce a specific exploitable behavior. We quantify and benchmark the strength and the stealthiness of our data poisoning scheme. Our results show that AutoPoison allows an adversary to change a model's behavior by poisoning only a small fraction of data while maintaining a high level of stealthiness in the poisoned examples. We hope our work sheds light on how data quality affects the behavior of instruction-tuned models and raises awareness of the importance of data quality for responsible deployments of LLMs.
Calibrated Stackelberg Games: Learning Optimal Commitments Against Calibrated Agents
In this paper, we introduce a generalization of the standard Stackelberg Games (SGs) framework: Calibrated Stackelberg Games (CSGs). In CSGs, a principal repeatedly interacts with an agent who (contrary to standard SGs) does not have direct access to the principal's action but instead best-responds to calibrated forecasts about it. CSG is a powerful modeling tool that goes beyond assuming that agents use ad hoc and highly specified algorithms for interacting in strategic settings and thus more robustly addresses real-life applications that SGs were originally intended to capture. Along with CSGs, we also introduce a stronger notion of calibration, termed adaptive calibration, that provides fine-grained any-time calibration guarantees against adversarial sequences. We give a general approach for obtaining adaptive calibration algorithms and specialize them for finite CSGs. In our main technical result, we show that in CSGs, the principal can achieve utility that converges to the optimum Stackelberg value of the game both in finite and continuous settings, and that no higher utility is achievable. Two prominent and immediate applications of our results are the settings of learning in Stackelberg Security Games and strategic classification, both against calibrated agents.
Appendix
In this section, we demonstrate the ฮฉ-RIP condition on the power state estimation problem and illustrate the success rate of the randomly initialized gradient descent method. Given the number of buses N, the power network G = (V, E) is randomly generated by the Erdรถs-Rรฉnyi random graph model with parameter p (0, 1].
I finally tried Samsung's XR headset, and it beats my Apple Vision Pro in meaningful ways
Putting on Project Moohan, an upcoming XR headset developed by Google, Samsung, and Qualcomm, for the first time felt strangely familiar. From twisting the head-strap knob on the back to slipping the standalone battery pack into my pants pocket, my mind was transported back to February 2024, when I tried on the Apple Vision Pro on launch day. Also: I tried Google's XR glasses and they already beat my Meta Ray-Bans in 3 ways Only this time, the headset was powered by Android XR, Google's newest operating system built around Gemini, the same AI model that dominated the Google I/O headlines throughout this week. The difference in software was immediately noticeable -- from the home grid of Google apps like Photos, Maps, and YouTube (which VisionOS still lacks) to prompting for Gemini instead of Siri with a long press of the headset's multifunctional key. While my demo with Project Moohan lasted only about 10 minutes, it gave me a clear understanding of how it's challenging Apple's Vision Pro and how Google, Samsung, and Qualcomm plan to convince the masses that the future of spatial computing does, in fact, live in a bulkier space-helmet-like device.