
Erik Miehling

Neural Information Processing Systems

The high-level architecture of our simulator is illustrated in Figure 1 of Section 4, and additional details (with references to objects in the source code) are provided below. Simulations were run in Python 3.8 on an Intel(R) Xeon(R) CPU E5-2667. One direction for future work is to extend the feature description of the ads (beyond topic) to include features that reflect ad quality and location. Baseline parameters are µ = 0. The resulting cohort errors are consistent with Figure 1 of Section 5.1. For the fully informative prior, the agent is completely certain of users' cohorts; for the uninformative prior, revelation of a user's cookie does not inform the agent's beliefs. The agent's ability to distinguish users based on their responses depends on the similarity of affinities across users in different cohorts.
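The interplay between prior informativeness and response-based inference described above can be sketched with a toy Bayesian cohort-inference example. The two-cohort click affinities, the click sequence, and both priors below are assumptions for illustration, not values from the simulator:

```python
import numpy as np

# Illustrative sketch: how prior informativeness over a user's cohort
# interacts with Bayesian updates from observed ad responses.
# All numbers here are assumed for demonstration.

affinity = np.array([0.8, 0.3])  # hypothetical P(click | cohort) for two cohorts

def posterior(prior, clicks):
    """Bayes update of cohort beliefs from i.i.d. Bernoulli click responses."""
    post = np.asarray(prior, dtype=float).copy()
    for c in clicks:
        likelihood = affinity if c else 1.0 - affinity
        post *= likelihood
        post /= post.sum()
    return post

clicks = np.array([1, 1, 1, 0, 1, 1, 1, 0, 1, 1], dtype=bool)  # mostly clicks

uninformative = np.array([0.5, 0.5])  # cookie reveals nothing about the cohort
informative = np.array([1.0, 0.0])    # fully informative prior: cohort known

print(posterior(uninformative, clicks))  # concentrates on cohort 0
print(posterior(informative, clicks))    # stays degenerate at cohort 0
```

The closer the two affinities are, the more slowly the uninformative-prior posterior concentrates, matching the observation that distinguishability depends on affinity similarity across cohorts.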



Improved Inference for CSDID Using the Cluster Jackknife

Karim, Sunny R., Nielsen, Morten Ørregaard, MacKinnon, James G., Webb, Matthew D.

arXiv.org Machine Learning

Obtaining reliable inferences with traditional difference-in-differences (DiD) methods can be difficult. Problems arise when outcomes and errors are serially correlated, when there are few clusters or few treated clusters, when cluster sizes vary greatly, and in various other cases. In recent years, recognition of the "staggered adoption" problem has shifted the focus away from inference and towards consistent estimation of treatment effects. One of the most popular new estimators is the CSDID procedure of Callaway and Sant'Anna (2021). We find that over-rejection with few clusters and/or few treated clusters is at least as severe for CSDID as for traditional DiD methods. We also propose using a cluster jackknife for inference with CSDID, which simulations suggest greatly improves inference. We provide software packages, csdidjack for Stata and didjack for R, to calculate cluster-jackknife standard errors easily.
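As a hedged illustration of the cluster-jackknife idea (a generic sketch, not the csdidjack/didjack implementation), the following computes leave-one-cluster-out OLS estimates and a CV3-style jackknife variance on simulated data with cluster-correlated errors; the data-generating process and cluster counts are assumptions:

```python
import numpy as np

# Sketch of cluster-jackknife standard errors for OLS coefficients.
# The simulated design below is illustrative only.

rng = np.random.default_rng(1)
G, n_g = 10, 30                        # clusters and observations per cluster
cluster = np.repeat(np.arange(G), n_g)
x = rng.normal(size=G * n_g)
u = rng.normal(size=G)[cluster] + rng.normal(size=G * n_g)  # cluster-correlated errors
y = 1.0 + 0.5 * x + u
X = np.column_stack([np.ones_like(x), x])

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta_full = ols(X, y)

# Leave-one-cluster-out estimates: refit dropping each cluster in turn.
betas = np.array([ols(X[cluster != g], y[cluster != g]) for g in range(G)])

# CV3-style jackknife variance: (G-1)/G times the sum of squared
# deviations of the delete-one-cluster estimates from their mean.
dev = betas - betas.mean(axis=0)
V = (G - 1) / G * (dev[:, :, None] * dev[:, None, :]).sum(axis=0)
se = np.sqrt(np.diag(V))
print(beta_full, se)
```

Because each delete-one-cluster fit drops a whole cluster at once, the resulting standard errors account for arbitrary within-cluster dependence without estimating its form.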



Dynamic Inverse Reinforcement Learning for Characterizing Animal Behavior

Neural Information Processing Systems

While many models have been developed for characterizing behavior in binary decision-making and bandit tasks, comparatively little work has focused on animal decision-making in more complex tasks, such as navigation through a maze.



Participatory Personalization in Classification Supplementary Material

Neural Information Processing Systems

The performance of participatory systems will depend on individual reporting decisions. Thus, flat and sequential systems will perform better than a minimal system. The best-case performance of any participatory system will exceed the performance of any of its components. Given a participatory system, we can conduct this evaluation by simulating the parameters in the individual disclosure model shown above. The sequential system outperforms static personalized systems when all group attributes are reported.
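A minimal Monte Carlo sketch of the dependence on reporting decisions: users disclose a group attribute with some probability, and the system falls back to a generic model otherwise. The reporting probability and both accuracy levels are assumed numbers for demonstration, not values from the paper:

```python
import numpy as np

# Illustrative simulation: expected accuracy of a participatory system
# as a mixture of personalized and generic performance, weighted by
# how often users choose to report. All parameters are assumptions.

rng = np.random.default_rng(2)
n = 100_000
p_report = 0.6           # probability a user discloses the group attribute
acc_personalized = 0.85  # accuracy when the attribute is reported
acc_generic = 0.70       # accuracy of the generic (minimal-system) model

reported = rng.random(n) < p_report
correct = np.where(reported,
                   rng.random(n) < acc_personalized,
                   rng.random(n) < acc_generic)
print(correct.mean())    # ~ p_report*0.85 + (1 - p_report)*0.70 = 0.79
```

Sweeping `p_report` from 0 to 1 traces out how the participatory system interpolates between the minimal system and full personalization.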



Analysis of Variance of Multiple Causal Networks

Neural Information Processing Systems

Constructing a directed cyclic graph (DCG) is challenged by both algorithmic difficulty and computational burden. Comparing multiple DCGs is even more difficult, compounded by the need to identify dynamic causalities across graphs.