Fast detection of multiple change-points shared by many signals using group LARS

Neural Information Processing Systems

We present a fast algorithm for the detection of multiple change-points when each change-point may be shared by several members of a set of co-occurring one-dimensional signals. We give conditions for the consistency of the method as the number of signals increases, and provide empirical evidence to support the consistency results.
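The core idea of pooling change-point evidence across co-occurring signals can be illustrated with a deliberately simplified sketch. This is not the paper's group-LARS algorithm: it merely scores each candidate position by the squared jump summed over all signals and greedily keeps the strongest, well-separated positions. The function name, the `min_gap` parameter, and the synthetic data are all illustrative assumptions.

```python
import numpy as np

def shared_changepoints(signals, n_bkps=2, min_gap=5):
    """Toy illustration (not the paper's group-LARS method):
    score each position by the squared jump summed across all
    signals, then greedily keep the strongest positions that are
    at least `min_gap` samples apart."""
    X = np.asarray(signals)           # shape (n_signals, n_points)
    jumps = np.diff(X, axis=1) ** 2   # per-signal squared jumps
    score = jumps.sum(axis=0)         # evidence pooled over signals
    order = np.argsort(score)[::-1]   # strongest candidates first
    picked = []
    for pos in order:
        if all(abs(pos - p) >= min_gap for p in picked):
            picked.append(int(pos) + 1)  # change occurs after index pos
        if len(picked) == n_bkps:
            break
    return sorted(picked)

# Five noisy signals sharing change-points at t = 30 and t = 70.
rng = np.random.default_rng(0)
means = np.concatenate([np.zeros(30), 3 * np.ones(40), np.ones(30)])
sigs = [means + 0.1 * rng.standard_normal(100) for _ in range(5)]
print(shared_changepoints(sigs, n_bkps=2))  # → [30, 70]
```

Pooling the squared jumps over signals is what makes weak, shared change-points detectable even when no single signal shows them clearly; the paper's contribution is doing this selection efficiently and with consistency guarantees via group LARS.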

Human-Object Interaction Detection


Humans interacting with objects often create intricate and complex scenarios for detection. The two entities in question may be a human and an object, or a human and her environment. It is therefore crucial to determine whether the two are acting in unison or whether some interaction is taking place, so as to better understand the scenarios a model must account for.

TMBuD: A dataset for urban scene building detection

Computer Vision (CV) aims to create computational models that can mimic the human visual system. From an engineering point of view, CV aims to build autonomous systems that can perform some of the tasks the human visual system is able to accomplish [1]. The reconstruction and understanding of urban scenarios is an area of research with several applications today: the entertainment industry, computer gaming, movie making, digital mapping for mobile devices and for car navigation, urban planning, and driving. Understanding urban scenarios has become much more important with the evolution of Augmented Reality (AR). AR is successfully exploited in many domains nowadays, one of them being culture and tourism, an area in which the authors of this paper have carried out multiple research projects [2], [3], [4].

HCVRD: A Benchmark for Large-Scale Human-Centered Visual Relationship Detection

AAAI Conferences

Visual relationship detection aims to capture interactions between pairs of objects in images. Relationships between objects and humans represent a particularly important subset of this problem, with implications for challenges such as understanding human behavior and identifying affordances, amongst others. In addressing this problem we first construct a large-scale human-centric visual relationship detection dataset (HCVRD), which provides many more types of relationship annotations (nearly 10K categories) than previously released datasets. This large label space better reflects the reality of human-object interactions, but gives rise to a long-tail distribution problem, which in turn demands a zero-shot approach to labels appearing only in the test set. This is the first time this issue has been addressed. We propose a webly-supervised approach to these problems and demonstrate that the proposed model provides a strong baseline on our HCVRD dataset.

Measurement of the neutron lifetime using a magneto-gravitational trap and in situ detection


Unlike the proton, whose lifetime is longer than the age of the universe, a free neutron decays with a lifetime of about 15 minutes. Measuring the exact lifetime of neutrons is surprisingly tricky; putting them in a container and monitoring their decay can lead to errors because some neutrons will be lost owing to interactions with the container walls. To overcome this problem, Pattie et al. measured the lifetime in a trap where ultracold polarized neutrons were levitated by magnetic fields, precluding interactions with the trap walls (see the Perspective by Mumm). This more precise determination of the neutron lifetime will aid our understanding of how the first nuclei formed after the Big Bang. Science, this issue p. 627; see also p. 605
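The exponential decay law behind such lifetime measurements can be sketched numerically. The lifetime value used here (τ ≈ 880 s, roughly the "about 15 minutes" quoted above) is a rounded assumption for illustration, not the measured result of the paper.

```python
import math

TAU = 880.0  # assumed mean neutron lifetime in seconds (~14.7 min)

def surviving_fraction(t_seconds, tau=TAU):
    """Fraction of free neutrons not yet decayed after t seconds,
    from the exponential decay law N(t) = N0 * exp(-t / tau)."""
    return math.exp(-t_seconds / tau)

# After one mean lifetime, a fraction 1/e remains.
print(round(surviving_fraction(TAU), 3))    # → 0.368
# Fraction remaining after a 10-minute storage interval.
print(round(surviving_fraction(600.0), 3))
```

Counting how many trapped neutrons survive storage intervals of different lengths, and fitting this exponential, is how a storage-type experiment extracts τ; the magneto-gravitational trap matters because wall losses would otherwise bias the fitted lifetime downward.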