These Birds Make Their Own Citrus-Scented Cologne

National Geographic News

On Alaska's rugged Shumagin Islands, no fruit tree dares take root, yet the smell of fresh lemons and tangerines hangs heavy in the air. A nearby colony of lovesick crested auklets is the culprit: males produce a citrus odor to attract females. And as new research in Behavioral Ecology reveals, it's the strength of their citrusy scent, along with the size of their crests, that really matters. Researchers have long known that male auklets with larger crests (a tuft of head feathers that curls forward) have more sexual appeal. But unlike an elaborate display or vocalization, these ornaments cost little energy to produce, making them an unreliable signal of fitness to females looking for a robust male.


Domain-Invariant Projection Learning for Zero-Shot Recognition

Neural Information Processing Systems

Zero-shot learning (ZSL) aims to recognize unseen object classes without any training samples, which can be regarded as a form of transfer learning from seen classes to unseen ones. This is made possible by learning a projection between a feature space and a semantic space (e.g., an attribute space). Key to ZSL is thus to learn a projection function that is robust against the often large domain gap between the seen and unseen classes. In this paper, we propose a novel ZSL model termed domain-invariant projection learning (DIPL). Among its novel components, a domain-invariant feature self-reconstruction task is introduced on the seen/unseen class data, resulting in a simple linear formulation that casts ZSL into a min-min optimization problem.
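The core idea of projection-plus-self-reconstruction can be sketched in a few lines. The toy below is an illustration under assumptions, not DIPL's actual min-min formulation: a linear map W projects visual features onto semantic vectors while a self-reconstruction term asks the semantics to reconstruct the features, and the stationarity condition is a Sylvester equation. All data here are synthetic, and the unseen-class prototypes are hypothetical.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n, d, k = 200, 40, 10                 # samples, visual dim, semantic dim
X = rng.standard_normal((n, d))       # visual features (synthetic)
S = rng.standard_normal((n, k))       # semantic vector of each sample's class

# Learn a linear projection W minimizing
#   ||X W - S||^2 + lam * ||S W^T - X||^2,
# i.e., features project onto semantics AND semantics reconstruct features
# (a feature self-reconstruction regularizer). Setting the gradient to zero
# gives the Sylvester equation  (X^T X) W + lam * W (S^T S) = (1 + lam) X^T S.
lam = 1.0
W = solve_sylvester(X.T @ X, lam * (S.T @ S), (1.0 + lam) * (X.T @ S))

# Inference sketch: project a test feature into the semantic space and pick
# the nearest class prototype (here, made-up unseen-class attribute vectors).
protos = rng.standard_normal((5, k))
x = X[0]
pred = int(np.argmin(np.linalg.norm(x @ W - protos, axis=1)))
```

The closed-form solve is what makes the linear formulation attractive: no iterative training is needed once the Sylvester structure is exposed.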


Puffins are donning sexy little sunglasses in the name of science

Popular Science

Dunning already knew that crested auklets, another kind of seabird with a much crazier look, have beaks that fluoresce when placed under UV light. He was a man with a hunch and, as he told Newsweek, also "the kind of guy that people send dead birds to." That's not a vote of confidence in most circles, but in the ornithology world it means you've got a bunch of frozen specimens stashed away for testing such hunches. Sure enough, when he plopped a puffin under the UV light, its beak glowed. We think of things that show up under black light as secret, hidden messages, but that's only because our eyes can't perceive UV light.


AMS-SFE: Towards an Alignment of Manifold Structures via Semantic Feature Expansion for Zero-shot Learning

arXiv.org Machine Learning

Zero-shot learning (ZSL) aims at recognizing unseen classes with knowledge transferred from seen classes. This is typically achieved by exploiting a semantic feature space (FS) shared by both seen and unseen classes, i.e., attributes or word vectors, as the bridge. However, because the training (seen) and testing (unseen) classes are mutually disjoint, existing ZSL methods commonly suffer from the domain shift problem. To address this issue, we propose a novel model called AMS-SFE. It considers the Alignment of Manifold Structures via Semantic Feature Expansion. Specifically, we build an autoencoder-based model to expand the semantic features, jointly aligning them with an embedded manifold extracted from the visual FS of the data. To our knowledge, this is the first attempt to align these two FSs by way of expanding semantic features. Extensive experiments show the remarkable performance improvement of our model compared with other existing methods.
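A minimal sketch of the expand-and-align idea, under simplifying assumptions (a linear autoencoder, PCA of visual class means as the "embedded manifold", synthetic data, and made-up dimensions): the hidden code Z plays the role of the expanded semantic features, trained both to reconstruct the attributes and to align with the visual manifold coordinates.

```python
import numpy as np

rng = np.random.default_rng(1)
c, k, d, h = 20, 8, 32, 16            # classes, semantic dim, visual dim, expanded dim
A = rng.standard_normal((c, k))       # per-class semantic attributes (synthetic)
V = rng.standard_normal((c, d))       # per-class mean visual feature (synthetic)

# Manifold target: top-h principal components of the visual class means.
Vc = V - V.mean(axis=0)
U = np.linalg.svd(Vc, full_matrices=False)[2][:h].T   # (d, h) principal axes
M = Vc @ U                                            # (c, h) visual manifold coords

# Linear autoencoder on the attributes: hidden code Z = A @ We is the
# "expanded" semantic feature, trained to (a) reconstruct A and (b) align with M.
We = 0.1 * rng.standard_normal((k, h))   # encoder
Wd = 0.1 * rng.standard_normal((h, k))   # decoder
lr, lam = 0.01, 0.5

def loss(We, Wd):
    Z = A @ We
    return 0.5 * np.sum((Z @ Wd - A) ** 2) + 0.5 * lam * np.sum((Z - M) ** 2)

loss0 = loss(We, Wd)
for _ in range(500):
    Z = A @ We
    g_rec = Z @ Wd - A                 # gradient of reconstruction term w.r.t. Z @ Wd
    gZ = g_rec @ Wd.T + lam * (Z - M)  # alignment term pulls Z toward the manifold
    We -= lr * (A.T @ gZ) / c
    Wd -= lr * (Z.T @ g_rec) / c

expanded = np.hstack([A, A @ We])      # original attributes + expanded features
```

Concatenating the aligned code with the original attributes yields a semantic space that carries some of the visual manifold's structure, which is the intuition behind mitigating domain shift here.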


Zero-Shot Learning via Semantic Similarity Embedding

arXiv.org Machine Learning

In this paper we consider a version of the zero-shot learning problem where seen-class source and target domain data are provided. The goal at test time is to accurately predict the class label of an unseen target domain instance based on revealed source domain side information (e.g., attributes) for unseen classes. Our method is based on viewing each source or target instance as a mixture of seen-class proportions, and we postulate that the mixture patterns have to be similar if two instances belong to the same unseen class. This perspective leads us to learn source/target embedding functions that map arbitrary source/target domain data into a shared semantic space where similarity can be readily measured. We develop a max-margin framework to learn these similarity functions and jointly optimize parameters by means of cross validation. Our test results are compelling, leading to significant improvement in accuracy on most benchmark datasets for zero-shot recognition.
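The mixture-of-seen-class-proportions view can be illustrated with a toy sketch, assuming a softmax over prototype similarities in place of the paper's learned max-margin embeddings, and with all prototypes and side information synthetic: an unseen class gets a "signature" (its mixture over seen classes), and a test instance is assigned to the unseen class whose signature its own mixture most resembles.

```python
import numpy as np

def mixture(v, protos, temp=4.0):
    """Seen-class mixture proportions of v: a softmax over similarity to each
    seen-class prototype (a stand-in for the paper's learned embedding;
    proportions are nonnegative and sum to 1)."""
    s = protos @ v
    e = np.exp((s - s.max()) / temp)
    return e / e.sum()

rng = np.random.default_rng(2)
d, n_seen, n_unseen = 16, 5, 3
seen_protos = rng.standard_normal((n_seen, d))    # seen-class prototypes (synthetic)
unseen_side = rng.standard_normal((n_unseen, d))  # unseen side info, same space (toy)

# Each unseen class gets a signature: its mixture over the seen classes.
sigs = np.stack([mixture(a, seen_protos) for a in unseen_side])

# A target instance is labeled with the unseen class whose mixture pattern is
# most similar (cosine similarity) to the instance's own mixture.
x = unseen_side[1] + 0.01 * rng.standard_normal(d)  # near unseen class 1
p = mixture(x, seen_protos)
sims = (sigs @ p) / (np.linalg.norm(sigs, axis=1) * np.linalg.norm(p))
pred = int(np.argmax(sims))
```

The key property being exercised: similarity is measured between mixture patterns, not raw features, so source and target instances become directly comparable in the same simplex.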