Fine-Tuning Robot Policies While Maintaining User Privacy

Christie, Benjamin A., Parekh, Sagar, Losey, Dylan P.

arXiv.org Artificial Intelligence

Recent works introduce general-purpose robot policies. These policies provide a strong prior over how robots should behave -- e.g., how a robot arm should manipulate food items. But in order for robots to match an individual person's needs, users typically fine-tune these generalized policies -- e.g., showing the robot arm how to make their own preferred dinners. Importantly, during the process of personalizing robots, end-users leak data about their preferences, habits, and styles (e.g., the foods they prefer to eat). Other agents can simply roll out the fine-tuned policy and observe these personally-trained behaviors. This leads to a fundamental challenge: how can we develop robots that personalize actions while keeping learning private from external agents? Here we explore this emerging topic in human-robot interaction and develop PRoP, a model-agnostic framework for personalized and private robot policies. Our core idea is to equip each user with a unique key; this key is then used to mathematically transform the weights of the robot's network. With the correct key, the robot's policy switches to match that user's preferences -- but with incorrect keys, the robot reverts to its baseline behaviors. We show the general applicability of our method across multiple model types in imitation learning, reinforcement learning, and classification tasks. PRoP is practically advantageous because it retains the architecture and behaviors of the original policy, and experimentally outperforms existing encoder-based approaches. See videos and code here: https://prop-icra26.github.io.
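The keyed-weight idea in the summary above can be illustrated with a toy sketch. This is only a conceptual illustration, not PRoP's actual transform: here the fine-tuned weight delta is scrambled by a permutation derived from the user's key, so only the correct key recovers the personalized weights.

```python
import numpy as np

def keyed_permutation(key: int, n: int) -> np.ndarray:
    """Derive a deterministic permutation of n indices from a user's key."""
    rng = np.random.default_rng(key)
    return rng.permutation(n)

def lock_weights(w_base: np.ndarray, w_personal: np.ndarray, key: int) -> np.ndarray:
    """Store the personalization as a key-scrambled weight delta."""
    delta = (w_personal - w_base).flatten()
    perm = keyed_permutation(key, delta.size)
    return delta[perm].reshape(w_base.shape)

def unlock_weights(w_base: np.ndarray, locked: np.ndarray, key: int) -> np.ndarray:
    """With the right key, unscramble the delta and re-apply it to the base weights."""
    inv = np.argsort(keyed_permutation(key, locked.size))  # inverse permutation
    delta = locked.flatten()[inv].reshape(w_base.shape)
    return w_base + delta

w_base = np.zeros((4, 4))
w_personal = np.arange(16.0).reshape(4, 4)
locked = lock_weights(w_base, w_personal, key=42)

recovered = unlock_weights(w_base, locked, key=42)   # correct key: personalized weights
scrambled = unlock_weights(w_base, locked, key=7)    # wrong key: delta stays scrambled
```

With the correct key the personalized weights come back exactly; with a wrong key the delta remains permuted, so the recovered network does not reproduce the user's fine-tuned behavior. PRoP's real transform and its revert-to-baseline property are more sophisticated than this permutation toy.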


What the heck is a gaming browser, and do you need one?

PCWorld

Even some of the most elite gamers haven't heard the term "gaming browser." Still, most browsers can be set up specifically to accentuate your gaming experience. But is it worth the trouble, or should you just jump on the bandwagon and use Google Chrome instead? A glimpse at the features available in Opera GX makes it clear that it's no ordinary browser designed just to surf the web.


FedL2P: Federated Learning to Personalize

Neural Information Processing Systems

Federated learning (FL) research has made progress in developing algorithms for distributed learning of global models, as well as algorithms for local personalization of those common models to the specifics of each client's local data distribution. However, different FL problems may require different personalization strategies, and it may not even be possible to define an effective one-size-fits-all personalization strategy for all clients: Depending on how similar each client's optimal predictor is to that of the global model, different personalization strategies may be preferred. In this paper, we consider the federated meta-learning problem of learning personalization strategies. Specifically, we consider meta-nets that induce the batch-norm and learning rate parameters for each client given local data statistics. By learning these meta-nets through FL, we allow the whole FL network to collaborate in learning a customized personalization strategy for each client. Empirical results show that this framework improves on a range of standard hand-crafted personalization baselines in both label and feature shift situations.
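The meta-net idea described above can be sketched in miniature. This is a hypothetical toy, not the paper's architecture: a tiny network maps a client's local data statistics to a positive learning rate and a mixing weight that interpolates between global and client-local batch-norm statistics.

```python
import numpy as np

def meta_net(stats: np.ndarray, w: np.ndarray, b: np.ndarray):
    """Toy meta-net: map client data statistics to personalization
    hyperparameters -- a per-client learning rate and a BN mixing weight."""
    h = np.tanh(stats @ w + b)
    lr = float(np.exp(h[0] - 3.0))               # strictly positive learning rate
    bn_mix = float(1.0 / (1.0 + np.exp(-h[1])))  # in (0, 1): local vs. global BN stats
    return lr, bn_mix

def personalize_bn(global_mean: np.ndarray, local_mean: np.ndarray, bn_mix: float):
    """Interpolate between global and client-local batch-norm statistics."""
    return bn_mix * local_mean + (1.0 - bn_mix) * global_mean

rng = np.random.default_rng(0)
w, b = rng.normal(size=(4, 2)), np.zeros(2)
stats = np.array([0.1, -0.3, 0.5, 0.2])  # e.g. a summary of the client's feature shift
lr, bn_mix = meta_net(stats, w, b)
mixed = personalize_bn(np.zeros(8), np.ones(8), bn_mix)
```

In FedL2P the meta-net weights themselves are learned through federated training, so all clients collaborate on the personalization strategy while each client receives hyperparameters tailored to its own data statistics.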


PECAN: Personalizing Robot Behaviors through a Learned Canonical Space

Nemlekar, Heramb, Sanchez, Robert Ramirez, Losey, Dylan P.

arXiv.org Artificial Intelligence

Robots should personalize how they perform tasks to match the needs of individual human users. Today's robots achieve this personalization by asking for the human's feedback in the task space. For example, an autonomous car might show the human two different ways to decelerate at stoplights, and ask the human which of these motions they prefer. This current approach to personalization is indirect: based on the behaviors the human selects (e.g., decelerating slowly), the robot tries to infer their underlying preference (e.g., defensive driving). By contrast, our paper develops a learning and interface-based approach that enables humans to directly indicate their desired style. We do this by learning an abstract, low-dimensional, and continuous canonical space from human demonstration data. Each point in the canonical space corresponds to a different style (e.g., defensive or aggressive driving), and users can directly personalize the robot's behavior by simply clicking on a point. Given the human's selection, the robot then decodes this canonical style across each task in the dataset -- e.g., if the human selects a defensive style, the autonomous car personalizes its behavior to drive defensively when decelerating, passing other cars, or merging onto highways. We refer to our resulting approach as PECAN: Personalizing Robot Behaviors through a Learned Canonical Space. Our simulations and user studies suggest that humans prefer using PECAN to directly personalize robot behavior (particularly when those users become familiar with PECAN), and that users find the learned canonical space to be intuitive and consistent. See videos here: https://youtu.be/wRJpyr23PKI
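The decode step above can be sketched as a minimal toy. This is an assumed illustration, not PECAN's learned decoder: a single canonical-space point (e.g., one click on a defensive-to-aggressive axis) is combined with a per-task context vector, so the same click personalizes behavior across every task.

```python
import numpy as np

def decode_style(z: np.ndarray, task_context: np.ndarray, W: np.ndarray) -> float:
    """Toy decoder: map a 2-D canonical-space point z plus a one-hot task
    context to a bounded behavior parameter (e.g., deceleration aggressiveness)."""
    x = np.concatenate([z, task_context])
    return float(np.tanh(W @ x))

rng = np.random.default_rng(1)
W = rng.normal(size=6)                          # stand-in for learned decoder weights
defensive = np.array([0.1, 0.1])                # one clicked point in canonical space
stoplight = np.array([1.0, 0.0, 0.0, 0.0])      # task contexts: stoplight, passing, ...
merging = np.array([0.0, 0.0, 0.0, 1.0])

# The same canonical point is decoded per task, so one click personalizes all tasks.
stoplight_behavior = decode_style(defensive, stoplight, W)
merging_behavior = decode_style(defensive, merging, W)
```

The key property this sketch captures is that the user interacts only with the low-dimensional canonical space, while the decoder handles mapping that style into each task's behavior.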


Masters, IBM enhancing fan experience with Hole Insights to track tournament shots in real time

FOX News

Whether it's your 10th time playing or your first, the Masters at Augusta National Golf Club is a daunting task for every golfer. It's the only major of the golfing season that is played at the same course every year, yet golfers sometimes take weeks off between tournaments just to prepare for it. Like any sport, analytics factor into a golfer's preparation: almost everyone on Tour employs statisticians, who help track previous rounds on any given course to figure out a game plan each week.


Help, My Friend Got Me a Dumb AI-Generated Present

WIRED

"An artist friend of mine got me an AI-generated painting as a gift. I can see she tried to personalize the concept, and it's nicely framed, but part of me still feels a little cheated." For timely guidance on encounters with technology, open a support ticket via email, or register and post a comment below. There's something implicitly paradoxical about feeling "cheated" by a present. A gift is, by definition, something that comes into your possession at no cost or effort, an object that exists outside the economic concepts of debt and fair exchange.


How to use Apple's new Journal app with the iOS 17.2 update

Engadget

Apple's AI-powered Journal app is finally here. The new diary-writing tool was first teased for iOS 17 back in June, but it only became available on Monday with the new iPhone update -- nearly three months after iOS 17 itself came out. With the release of iOS 17.2, iPhone users can now access the Journal app, which lets them jot down their thoughts in a digital diary. Journaling is a practice that can improve mental wellbeing, and it can also be used to fuel creative projects. You can create traditional text entries, add voice recordings to your notes, or include recent videos or pictures.


FedL2P: Federated Learning to Personalize

Lee, Royson, Kim, Minyoung, Li, Da, Qiu, Xinchi, Hospedales, Timothy, Huszár, Ferenc, Lane, Nicholas D.

arXiv.org Artificial Intelligence

Federated learning (FL) research has made progress in developing algorithms for distributed learning of global models, as well as algorithms for local personalization of those common models to the specifics of each client's local data distribution. However, different FL problems may require different personalization strategies, and it may not even be possible to define an effective one-size-fits-all personalization strategy for all clients: depending on how similar each client's optimal predictor is to that of the global model, different personalization strategies may be preferred. In this paper, we consider the federated meta-learning problem of learning personalization strategies. Specifically, we consider meta-nets that induce the batch-norm and learning rate parameters for each client given local data statistics. By learning these meta-nets through FL, we allow the whole FL network to collaborate in learning a customized personalization strategy for each client. Empirical results show that this framework improves on a range of standard hand-crafted personalization baselines in both label and feature shift situations.


FedPerfix: Towards Partial Model Personalization of Vision Transformers in Federated Learning

Sun, Guangyu, Mendieta, Matias, Luo, Jun, Wu, Shandong, Chen, Chen

arXiv.org Artificial Intelligence

Personalized Federated Learning (PFL) represents a promising solution for decentralized learning in heterogeneous data environments. Partial model personalization has been proposed to improve the efficiency of PFL by selectively updating local model parameters instead of aggregating all of them. However, previous work on partial model personalization has mainly focused on Convolutional Neural Networks (CNNs), leaving a gap in understanding how it can be applied to other popular models such as Vision Transformers (ViTs). In this work, we investigate where and how to partially personalize a ViT model. Specifically, we empirically evaluate the sensitivity to data distribution of each type of layer. Based on the insights that the self-attention layer and the classification head are the most sensitive parts of a ViT, we propose a novel approach called FedPerfix, which leverages plugins to transfer information from the aggregated model to the local client as a personalization. Finally, we evaluate the proposed approach on CIFAR-100, OrganAMNIST, and Office-Home datasets and demonstrate its effectiveness in improving the model's performance compared to several advanced PFL methods.
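The partial-personalization split described above can be sketched as follows. This is a simplified illustration under assumed parameter names (not FedPerfix's plugin mechanism): parameters whose names mark them as self-attention or classification-head weights stay on the client, while the rest are sent to the server for aggregation.

```python
def split_parameters(state_dict: dict, personal_keys=("attn", "head")):
    """Partition a (toy) ViT state dict into personal parameters, kept and
    updated locally on the client, and shared parameters, aggregated globally."""
    personal, shared = {}, {}
    for name, tensor in state_dict.items():
        if any(k in name for k in personal_keys):
            personal[name] = tensor
        else:
            shared[name] = tensor
    return personal, shared

# Stand-in state dict with PyTorch-style ViT parameter names.
toy_vit = {
    "patch_embed.weight": 0,
    "blocks.0.attn.qkv.weight": 1,
    "blocks.0.mlp.fc1.weight": 2,
    "head.weight": 3,
}
personal, shared = split_parameters(toy_vit)
```

This mirrors the paper's empirical finding that the self-attention layers and the classification head are the most data-distribution-sensitive parts of a ViT; FedPerfix goes further by using plugins to pass information from the aggregated model back into these local parts.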


Teach LLMs to Personalize -- An Approach inspired by Writing Education

Li, Cheng, Zhang, Mingyang, Mei, Qiaozhu, Wang, Yaqing, Hombaiah, Spurthi Amba, Liang, Yi, Bendersky, Michael

arXiv.org Artificial Intelligence

Personalized text generation is an emerging research area that has attracted much attention in recent years. Most studies in this direction focus on a particular domain by designing bespoke features or models. In this work, we propose a general approach for personalized text generation using large language models (LLMs). Inspired by the practice of writing education, we develop a multistage and multitask framework to teach LLMs for personalized generation. In writing instruction, the task of writing from sources is often decomposed into multiple steps that involve finding, evaluating, summarizing, synthesizing, and integrating information. Analogously, our approach to personalized text generation consists of multiple stages: retrieval, ranking, summarization, synthesis, and generation. In addition, we introduce a multitask setting that helps the model improve its generation ability further, which is inspired by the observation in education that a student's reading proficiency and writing ability are often correlated. We evaluate our approach on three public datasets, each of which covers a different and representative domain. Our results show significant improvements over a variety of baselines.
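The multistage pipeline above (retrieval, ranking, summarization, synthesis, generation) can be sketched as function composition. Every stage here is a hypothetical stand-in -- word-overlap retrieval, truncation as summarization, a format string in place of the LLM call -- meant only to show the data flow, not the paper's models.

```python
def retrieve(query: str, docs: list[str]) -> list[str]:
    """Stage 1: order the user's past documents by relevance to the query
    (toy scoring: count of shared words)."""
    return sorted(docs, key=lambda d: -len(set(query.split()) & set(d.split())))

def rank_and_summarize(docs: list[str], k: int = 2) -> list[str]:
    """Stages 2-3: keep the top-k documents and compress each one
    (toy summarizer: truncate to 40 characters)."""
    return [d[:40] for d in docs[:k]]

def synthesize(summaries: list[str]) -> str:
    """Stage 4: merge the summaries into one personalized context string."""
    return " | ".join(summaries)

def generate(query: str, context: str) -> str:
    """Stage 5: stand-in for the LLM call that writes in the user's style."""
    return f"[draft for '{query}' conditioned on: {context}]"

def personalized_generation(query: str, user_docs: list[str]) -> str:
    """Full pipeline: retrieval -> ranking -> summarization -> synthesis -> generation."""
    return generate(query, synthesize(rank_and_summarize(retrieve(query, user_docs))))
```

The paper's multitask extension (jointly training a reading-comprehension task) sits on top of this pipeline; the sketch only captures the staged conditioning of the generator on the user's own documents.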