

Watch: Cow astonishes scientists with rare use of tools

BBC News

Scientists are rethinking what cattle are capable of after an Austrian cow named Veronika was found to use tools with impressive skill. The discovery, reported by researchers in Vienna, suggests cows may have far greater cognitive abilities than previously assumed. Veronika, a cow living in a mountain village in the Austrian countryside, has spent years perfecting the art of scratching herself using sticks, rakes, and brooms. Word of her behaviour eventually reached animal intelligence specialists in Vienna, who found Veronika used both ends of the same object for different tasks. If it were her back or another tough area that warranted a good scratch, she would use the bristle end of a broom.


Donated Christmas trees get a second life at the zoo

Popular Science

The evergreen trees give kangaroos, bison, lions, and more extra shelter and fun. Capybaras use donated Christmas trees as windbreaks to protect their habitats. The presents are unwrapped, the cookies are crumbs, and that real Christmas tree will become a fire hazard soon enough. Most of us haul it out to the curb for our local sanitation departments to take care of, but some lucky trees make it into the paws of animals living in zoos.


A Model Zoo Generation Details

Neural Information Processing Systems

In our model zoos, we use three architectures. The code to generate the models can be found at www.modelzoos.cc. Data Management and Documentation: To ensure that every zoo is reproducible, expandable, and understandable, we document each zoo. For each zoo, a Readme file is generated, displaying basic information about the zoo. An accompanying JSON file contains the performance metrics recorded during training.
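
To make the documentation step concrete, here is a minimal sketch of how such per-zoo files could be generated. The file names, function name (document_zoo), and JSON fields are illustrative assumptions, not the exact schema used on www.modelzoos.cc.

```python
# A minimal sketch of the per-zoo documentation described above.
# File names and JSON fields are illustrative assumptions, not the
# exact schema used for the published zoos.
import json
from pathlib import Path

def document_zoo(zoo_dir: str, name: str, architecture: str, metrics: dict) -> None:
    out = Path(zoo_dir)
    out.mkdir(parents=True, exist_ok=True)
    # Readme with basic information about the zoo.
    (out / "README.md").write_text(
        f"# Model zoo: {name}\n\nArchitecture: {architecture}\n"
        f"Models: {len(metrics)}\n"
    )
    # Accompanying JSON file with per-model performance metrics during training.
    with open(out / "metrics.json", "w") as f:
        json.dump(metrics, f, indent=2)

document_zoo(
    "zoos/mnist_cnn", "MNIST-CNN", "3-layer CNN",
    {"model_0000": {"epoch": [1, 2], "test_acc": [0.91, 0.94]}},
)
```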


A Model Zoo Details (Table 5: Model zoo overview)

Neural Information Processing Systems

The hyperparameter choices for each of the populations are listed in Table 7. Hyper-representations are learned with an autoencoder based on multi-head self-attention; the architecture is outlined in Figure 8 (schematic of the auto-encoder architecture to learn hyper-representations). Convolutional and fully connected neurons are embedded as a sequence of tokens, and a learned compression token (CLS) is appended to the sequence of token embeddings. The encoded sequence is passed through another stack of multi-head self-attention, which is symmetric to the encoder. Two variants are trained: one with the baseline hyper-representation MSE loss, the other with layer-wise normalization.
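
For intuition, the following is a compact PyTorch sketch of such an attention-based weight autoencoder. The class name, token dimension, depth, and position of the compression token are assumptions for illustration, not the authors' exact configuration.

```python
# A minimal sketch of an attention-based autoencoder over weight tokens.
# Dimensions and layer counts are illustrative assumptions.
import torch
import torch.nn as nn

class HyperRepresentationAE(nn.Module):
    def __init__(self, token_dim=64, n_heads=4, n_layers=2, max_tokens=256):
        super().__init__()
        # Each neuron's weights arrive pre-flattened as one token of size token_dim.
        self.token_embed = nn.Linear(token_dim, token_dim)
        self.pos_embed = nn.Parameter(torch.zeros(1, max_tokens + 1, token_dim))
        # Learned compression token (CLS) appended to the token sequence.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, token_dim))
        enc = nn.TransformerEncoderLayer(d_model=token_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=n_layers)
        # Decoder: another self-attention stack, symmetric to the encoder.
        dec = nn.TransformerEncoderLayer(d_model=token_dim, nhead=n_heads, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec, num_layers=n_layers)
        self.head = nn.Linear(token_dim, token_dim)

    def forward(self, tokens):                      # tokens: (B, T, token_dim)
        b, t, _ = tokens.shape
        x = self.token_embed(tokens)
        cls = self.cls_token.expand(b, -1, -1)
        x = torch.cat([x, cls], dim=1) + self.pos_embed[:, : t + 1]
        z = self.encoder(x)                          # z[:, -1] is the hyper-representation
        recon = self.head(self.decoder(z)[:, :t])    # reconstruct the weight tokens
        return recon, z[:, -1]

# Baseline objective: MSE between original and reconstructed weight tokens.
# model = HyperRepresentationAE(); recon, rep = model(tokens)
# loss = nn.functional.mse_loss(recon, tokens)
```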


Self-Supervised Representation Learning on Neural Network Weights for Model Characteristic Prediction (Appendix)

Neural Information Processing Systems

In the following, we provide the Appendix as part of the supplementary material to the main paper. Section C contains additional content about the model zoos. We also provide visualizations of some of the properties of our model zoo for better intuition. Consider a common, fully-connected feed-forward neural network (FFN). Training of neural networks is defined as an optimization against an objective function on a given dataset, i.e., their weights and biases are chosen to minimize a cost function, usually called loss, denoted by $\mathcal{L}$. The error is backpropagated from the output layer; each earlier layer's error is computed as $\delta^{l} = \big((W^{l+1})^{\top} \delta^{l+1}\big) \odot \sigma'(z^{l})$, (6) and the weights are updated by gradient descent, $W^{l} \leftarrow W^{l} - \beta\, \delta^{l} (a^{l-1})^{\top}$, where $\beta$ is a positive learning rate.
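
As a concrete illustration of the backpropagation step and the gradient update with learning rate $\beta$, here is a toy NumPy sketch. The single hidden layer, sigmoid activations, and squared-error loss are our assumptions for the example, not choices made in the paper.

```python
# Toy sketch: one gradient step of backpropagation for a small FFN
# (assumptions: one hidden layer, sigmoid activations, squared-error loss).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 1))          # input
y = rng.normal(size=(2, 1))          # target
W1, b1 = rng.normal(size=(3, 4)), np.zeros((3, 1))
W2, b2 = rng.normal(size=(2, 3)), np.zeros((2, 1))
beta = 0.1                           # positive learning rate

# Forward pass
z1 = W1 @ x + b1; a1 = sigmoid(z1)
z2 = W2 @ a1 + b2; a2 = sigmoid(z2)

# Output-layer error, then the earlier layer's error as in Eq. (6)
delta2 = (a2 - y) * a2 * (1 - a2)
delta1 = (W2.T @ delta2) * a1 * (1 - a1)

# Gradient-descent updates with learning rate beta
W2 -= beta * delta2 @ a1.T; b2 -= beta * delta2
W1 -= beta * delta1 @ x.T;  b1 -= beta * delta1
```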