Mitra, Saayan
Privacy Aware Experiments without Cookies
Shankar, Shiv, Sinha, Ritwik, Mitra, Saayan, Swaminathan, Viswanathan, Mahadevan, Sridhar, Sinha, Moumita
Consider two brands that want to jointly test alternate web experiences for their customers with an A/B test. Such collaborative tests are today enabled using third-party cookies, where each brand has information on the identity of visitors to another website. With the imminent elimination of third-party cookies, such A/B tests will become untenable. We propose a two-stage experimental design, where the two brands only need to agree on high-level aggregate parameters of the experiment to test the alternate experiences. Our design respects the privacy of customers. We propose an estimator of the Average Treatment Effect (ATE), show that it is unbiased, and theoretically compute its variance. Our demonstration describes how a marketer for a brand can design such an experiment and analyze the results. On real and simulated data, we show that the approach provides a valid estimate of the ATE with low variance and is robust to the proportion of visitors overlapping across the brands.
[Figure 1: Proposed experimental design for two brands to estimate the average treatment effect (ATE) without third-party cookies. The brands only agree on a definition of clusters (1 & 2), the two treatments (orange and blue "buy" buttons), and the proportions in which to randomize.]
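A minimal sketch of how such an experiment might be simulated and analyzed, assuming a toy two-stage randomization (clusters first, then visitors within clusters) and a plain difference-in-means ATE estimator; the paper's actual estimator, its variance derivation, and the handling of visitors overlapping across brands are not reproduced here, and all names and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_two_stage_experiment(n_visitors=10_000, n_clusters=20,
                                  p_cluster_treat=0.5, p_within_treat=0.7):
    """Toy two-stage randomization: each cluster is first assigned one of two
    agreed-upon treatment proportions, then visitors inside each cluster are
    randomized individually with that proportion. (Hypothetical setup.)"""
    cluster = rng.integers(0, n_clusters, size=n_visitors)
    # Stage 1: clusters receive one of the two randomization proportions.
    cluster_prop = np.where(rng.random(n_clusters) < p_cluster_treat,
                            p_within_treat, 1.0 - p_within_treat)
    # Stage 2: visitors are assigned treatment with their cluster's proportion.
    treat = rng.random(n_visitors) < cluster_prop[cluster]
    # Hypothetical outcome model with a true treatment effect of 0.3.
    outcome = 1.0 + 0.3 * treat + rng.normal(0.0, 1.0, n_visitors)
    return treat, outcome

def difference_in_means_ate(treat, outcome):
    """Simple difference-in-means estimate of the ATE."""
    return outcome[treat].mean() - outcome[~treat].mean()

treat, outcome = simulate_two_stage_experiment()
print(f"Estimated ATE: {difference_in_means_ate(treat, outcome):.3f}")
```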
DiPS: Differentiable Policy for Sketching in Recommender Systems
Ghosh, Aritra, Mitra, Saayan, Lan, Andrew
In sequential recommender system applications, it is important to develop models that can capture users' evolving interest over time to successfully recommend future items that they are likely to interact with. For users with long histories, typical models based on recurrent neural networks tend to forget important items in the distant past. Recent works have shown that storing a small sketch of past items can improve sequential recommendation tasks. However, these works all rely on static sketching policies, i.e., heuristics to select items to keep in the sketch, which are not necessarily optimal and cannot improve over time with more training data. In this paper, we propose a differentiable policy for sketching (DiPS), a framework that learns a data-driven sketching policy in an end-to-end manner together with the recommender system model to explicitly maximize recommendation quality in the future. We also propose an approximate estimator of the gradient for optimizing the sketching algorithm parameters that is computationally efficient. We verify the effectiveness of DiPS on real-world datasets under various practical settings and show that it requires up to $50\%$ fewer sketch items to reach the same predictive quality as existing sketching policies.
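A minimal sketch of a learnable sketching policy, assuming PyTorch and a Gumbel-softmax relaxation in place of the paper's approximate gradient estimator; the class, parameter names, and shapes below are hypothetical, and the downstream recommender that would be trained jointly with the policy is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedSketchPolicy(nn.Module):
    """Toy differentiable sketching policy: scores past item embeddings and
    builds a soft top-k "sketch" via a Gumbel-softmax relaxation, so item
    selection can be trained end-to-end with the recommender model."""

    def __init__(self, embed_dim=32, sketch_size=4, tau=0.5):
        super().__init__()
        self.scorer = nn.Linear(embed_dim, 1)   # item-wise relevance score
        self.sketch_size = sketch_size
        self.tau = tau

    def forward(self, item_embeds):
        # item_embeds: (batch, history_len, embed_dim)
        scores = self.scorer(item_embeds).squeeze(-1)            # (batch, history_len)
        # Draw sketch_size relaxed one-hot selections over the history.
        weights = [F.gumbel_softmax(scores, tau=self.tau, hard=False)
                   for _ in range(self.sketch_size)]
        weights = torch.stack(weights, dim=1)                    # (batch, k, history_len)
        sketch = weights @ item_embeds                           # (batch, k, embed_dim)
        return sketch

# Usage: the compact sketch would be fed to a sequential recommender and
# both modules trained jointly on future-interaction prediction loss.
policy = LearnedSketchPolicy()
history = torch.randn(8, 50, 32)   # 8 users, 50 past item embeddings each
sketch = policy(history)           # (8, 4, 32) learned summary of each history
```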