Assisted Learning
Assisted Learning: A Framework for Multi-Organization Learning
In an increasing number of AI scenarios, collaboration among different organizations or agents (e.g., humans and robots, or mobile units) is often essential to accomplish an organization-specific mission. However, to avoid leaking useful and possibly proprietary information, organizations typically enforce stringent security constraints on sharing modeling algorithms and data, which significantly limits collaboration. In this work, we introduce the Assisted Learning framework, in which organizations assist each other in supervised learning tasks without revealing any organization's algorithm, data, or even task. An organization seeks assistance by broadcasting task-specific but nonsensitive statistics and incorporating others' feedback over one or more iterations to eventually improve its predictive performance. Theoretical and experimental studies, including real-world medical benchmarks, show that Assisted Learning can often achieve near-oracle learning performance, as if data and training processes were centralized.
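The iterative exchange described in the abstract can be sketched in a minimal two-organization setting. The sketch below assumes linear least-squares modules and uses fitted residuals as the broadcast "task-specific statistics"; the names (`X_alice`, `X_bob`, `fit_ls`) and the choice of 20 rounds are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X_alice = rng.normal(size=(n, 3))            # Alice's private features
X_bob = rng.normal(size=(n, 3))              # Bob's private features
beta = rng.normal(size=6)
X_all = np.hstack([X_alice, X_bob])
y = X_all @ beta + 0.1 * rng.normal(size=n)  # labels, assumed public

def fit_ls(X, r):
    """Least-squares fit of the current residual on local features."""
    return np.linalg.lstsq(X, r, rcond=None)[0]

# Iterative assistance: only residuals (nonsensitive statistics) are exchanged.
residual = y.copy()
prediction = np.zeros(n)
for _ in range(20):
    for X in (X_alice, X_bob):               # each organization, in turn
        w = fit_ls(X, residual)              # fit locally; never share X
        local_pred = X @ w
        prediction += local_pred
        residual -= local_pred               # broadcast the updated residual

# Oracle: least squares with all features pooled centrally.
oracle_resid = y - X_all @ np.linalg.lstsq(X_all, y, rcond=None)[0]
assisted_mse = np.mean(residual ** 2)
oracle_mse = np.mean(oracle_resid ** 2)
print(assisted_mse, oracle_mse)
```

In this linear case each round projects the residual off one party's feature span, so the assisted fit approaches the pooled-data oracle without either party ever transmitting its feature matrix.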
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- Information Technology > Data Science > Data Mining (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.93)
Review for NeurIPS paper: Assisted Learning: A Framework for Multi-Organization Learning
Weaknesses: The paper states that model selection or model averaging will not significantly improve over the best of the models (Alice's or Bob's) used in the assisted learning procedure because they fail to utilize the full data (the union of Alice's and Bob's features). However, ensemble techniques such as stacked regression (Breiman 1996) are often used successfully to improve predictive performance by combining not only different models trained on the same set of features, but also different models trained on different subsets of features. All experiments in the paper compare only assisted learning against the oracle model. The paper would be considerably stronger if it showed that assisted learning compares favorably against, for instance, a stacked model built from the predictions of the different models on modules M_1, …, M_m (trained with the original public responses). Note that under the paper's assumptions, namely that the labels/response (as well as some identifier needed to collate the labels/response with the features) are publicly available, a simpler ensemble approach such as stacking could also be used directly to improve learning without sharing the private feature data.
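The baseline the reviewer suggests can be made concrete. The sketch below trains two linear models on disjoint feature subsets (standing in for Alice's and Bob's private features) and combines them by stacking, i.e., fitting combination weights on held-out predictions. Breiman's stacked regression additionally constrains the weights to be non-negative; this illustrative sketch omits that constraint, and all variable names are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 600
X_a = rng.normal(size=(n, 3))                # Alice's feature subset
X_b = rng.normal(size=(n, 3))                # Bob's feature subset
beta = np.array([1.0, -0.5, 0.8, 1.2, -0.7, 0.6])
y = np.hstack([X_a, X_b]) @ beta + 0.1 * rng.normal(size=n)

train, hold = slice(0, 400), slice(400, 600)

def ls(X, t):
    return np.linalg.lstsq(X, t, rcond=None)[0]

# Each party trains on its own features; only predictions need to be shared.
w_a = ls(X_a[train], y[train])
w_b = ls(X_b[train], y[train])

# Stacking: learn combination weights from held-out predictions.
P = np.column_stack([X_a[hold] @ w_a, X_b[hold] @ w_b])
alpha = ls(P, y[hold])

stacked_hold = P @ alpha
mse_alice_only = np.mean((y[hold] - X_a[hold] @ w_a) ** 2)
mse_stacked = np.mean((y[hold] - stacked_hold) ** 2)
print(mse_alice_only, mse_stacked)
```

Because each base model captures only its own subset's signal, the learned combination recovers much of the pooled-feature fit while exchanging only predictions, which is the reviewer's point about a simpler privacy-respecting baseline.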
Decentralized Multi-Target Cross-Domain Recommendation for Multi-Organization Collaborations
Diao, Enmao, Tarokh, Vahid, Ding, Jie
Recommender Systems (RSs) are operated locally by different organizations in many realistic scenarios. If various organizations could fully share their data and perform computation centrally, they might significantly improve the accuracy of recommendations. However, collaboration among multiple organizations to enhance recommendation performance is largely limited by the difficulty of sharing data and models. To address this challenge, we propose Decentralized Multi-Target Cross-Domain Recommendation (DMTCDR) with Multi-Target Assisted Learning (MTAL) and an Assisted AutoEncoder (AAE). Our method helps multiple organizations collaboratively improve their recommendation performance in a decentralized manner without sharing sensitive assets. Consequently, it allows decentralized organizations to collaborate and form a community of shared interest. We conduct extensive experiments to demonstrate that the new method significantly outperforms locally trained RSs and mitigates the cold start problem.
Assisted Learning and Imitation Privacy
Xian, Xun, Wang, Xinran, Ding, Jie, Ghanadan, Reza
Motivated by the emerging needs of decentralized learners with personalized learning objectives, we present an Assisted Learning framework in which a service provider, Bob, assists a learner, Alice, with supervised learning tasks without transmitting Bob's private algorithm or data. Bob assists Alice either by building a predictive model using Alice's labels, or by improving Alice's private learning through iterative communications in which only relevant statistics are transmitted. The proposed learning framework is naturally suitable for distributed, personalized, and privacy-aware scenarios. For example, we show that in some scenarios two suboptimal learners can achieve much better performance through Assisted Learning. Moreover, motivated by privacy concerns in Assisted Learning, we present a new notion of privacy that quantifies leakage at the learning level rather than the data level. This notion, named imitation privacy, is particularly suitable for a market of statistical learners, each holding private learning algorithms as well as data.
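One way to picture a learning-level (rather than data-level) privacy notion is to ask how well an outsider can imitate Bob's fitted model from the query/response pairs exchanged during assistance. The sketch below is purely illustrative: the linear surrogate, the 50 observed queries, and the `imitation_gap` discrepancy are assumptions for this example, not the paper's formal definition of imitation privacy.

```python
import numpy as np

rng = np.random.default_rng(2)

# Bob's private model: a fixed nonlinear predictor, unknown to outsiders.
w_bob = np.array([0.9, -1.1, 0.5])

def bob_model(X):
    return np.tanh(X @ w_bob)

# An outsider only observes query/response pairs from the assistance rounds.
X_seen = rng.normal(size=(50, 3))
y_seen = bob_model(X_seen)

# Surrogate: the outsider's best linear imitation of Bob's responses.
w_hat = np.linalg.lstsq(X_seen, y_seen, rcond=None)[0]

# Imitation gap: how far the surrogate is from Bob's model on fresh queries.
X_new = rng.normal(size=(2000, 3))
imitation_gap = np.mean((bob_model(X_new) - X_new @ w_hat) ** 2)
print(imitation_gap)
```

A larger gap would mean Bob's learning service is harder to clone from the exchanged statistics alone, which is the intuition behind measuring privacy at the level of learned behavior rather than raw data.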
- Information Technology > Security & Privacy (1.00)
- Education (1.00)