
Collaborating Authors

 Gaddy, David


Gemini: A Family of Highly Capable Multimodal Models

arXiv.org Artificial Intelligence

This report introduces a new family of multimodal models, Gemini, that exhibit remarkable capabilities across image, audio, video, and text understanding. The Gemini family consists of Ultra, Pro, and Nano sizes, suitable for applications ranging from complex reasoning tasks to on-device memory-constrained use-cases. Evaluation on a broad range of benchmarks shows that our most-capable Gemini Ultra model advances the state of the art in 30 of 32 of these benchmarks - notably being the first model to achieve human-expert performance on the well-studied exam benchmark MMLU, and improving the state of the art in every one of the 20 multimodal benchmarks we examined. We believe that the new capabilities of Gemini models in cross-modal reasoning and language understanding will enable a wide variety of use cases and we discuss our approach toward deploying them responsibly to users.


Pre-Learning Environment Representations for Data-Efficient Neural Instruction Following

arXiv.org Artificial Intelligence

However, neural networks' powerful abilities to induce complex representations have come at the cost of data efficiency. Indeed, compared to earlier logical form-based methods, neural networks can sometimes require orders of magnitude more data. The data-hungriness of neural approaches is not surprising - starting with classic logical forms improves data efficiency by presenting a system with pre-made abstractions, whereas end-to-end neural approaches must do the hard work of inducing abstractions on their own. In this paper, we aim to combine the power of neural networks with the data efficiency of logical forms by pre-learning abstractions in a semi-supervised way, satiating part of the network's data hunger on cheaper unlabeled data from the environment.

Figure 1: After seeing this transition, a neural net might generalize this action as "stack red blocks to the right of blue blocks except for on brown blocks," but a generalization like "stack red blocks on orange blocks" is more plausible and generally applicable. We aim to guide our model towards more plausible generalizations by pre-learning inductive biases from observations of the environment.
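
To make the two-phase idea concrete, here is a minimal, hypothetical sketch (not the paper's implementation; the module names, dimensions, self-supervised objective, and stand-in random data are all assumptions): an encoder is first trained on unlabeled environment transitions with a next-state prediction objective, and the resulting representation is then reused by an instruction-following model trained on a much smaller labeled set.

```python
# Illustrative sketch only, assuming PyTorch. Phase 1 pre-learns a state
# encoder from unlabeled (state, next_state) transitions; Phase 2 reuses
# that encoder for supervised instruction following on limited labeled data.
import torch
import torch.nn as nn

STATE_DIM, HIDDEN = 32, 64  # assumed toy dimensions


class StateEncoder(nn.Module):
    """Maps a raw environment state to an abstract representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, HIDDEN), nn.ReLU(),
                                 nn.Linear(HIDDEN, HIDDEN))

    def forward(self, s):
        return self.net(s)


class TransitionModel(nn.Module):
    """Self-supervised objective: predict the next state's encoding from the
    current state's encoding, using only unlabeled transitions."""
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder
        self.predict = nn.Linear(HIDDEN, HIDDEN)

    def forward(self, s, s_next):
        pred = self.predict(self.encoder(s))
        target = self.encoder(s_next).detach()
        return nn.functional.mse_loss(pred, target)


class InstructionFollower(nn.Module):
    """Supervised model that combines the pre-learned state encoding with an
    instruction embedding and scores candidate actions."""
    def __init__(self, encoder, instr_dim=16, n_actions=8):
        super().__init__()
        self.encoder = encoder  # initialized from the pre-learning phase
        self.head = nn.Sequential(nn.Linear(HIDDEN + instr_dim, HIDDEN), nn.ReLU(),
                                  nn.Linear(HIDDEN, n_actions))

    def forward(self, s, instr):
        return self.head(torch.cat([self.encoder(s), instr], dim=-1))


# Phase 1: pre-learn abstractions from cheap, unlabeled environment transitions.
encoder = StateEncoder()
pretrainer = TransitionModel(encoder)
opt = torch.optim.Adam(pretrainer.parameters(), lr=1e-3)
for _ in range(100):
    s, s_next = torch.randn(64, STATE_DIM), torch.randn(64, STATE_DIM)  # stand-in data
    opt.zero_grad()
    pretrainer(s, s_next).backward()
    opt.step()

# Phase 2: fine-tune on a small labeled instruction-following dataset.
follower = InstructionFollower(encoder)
opt = torch.optim.Adam(follower.parameters(), lr=1e-3)
s, instr = torch.randn(16, STATE_DIM), torch.randn(16, 16)  # stand-in data
actions = torch.randint(0, 8, (16,))
opt.zero_grad()
nn.functional.cross_entropy(follower(s, instr), actions).backward()
opt.step()
```

The intent of the split, under these assumptions, is that the encoder's abstractions come from cheap environment observations, so the small labeled set only has to teach the mapping from instructions to actions rather than the representation itself.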