AI-SDV 2021: Francisco Webber - Efficiency is the New Precision
The global data sphere, consisting of machine data and human data, is growing exponentially and has reached the order of zettabytes. In comparison, the processing power of computers has been stagnating for many years. Artificial Intelligence – in its newer variant, Machine Learning – bypasses the need to understand a system when modelling it; however, this convenience comes at the cost of extremely high energy consumption. The complexity of language makes statistical Natural Language Understanding (NLU) models particularly energy hungry. Since most of the zettabyte data sphere consists of human data, such as texts or social networks, we face four major obstacles:

1. Findability of Information – when truth is hard to find, fake news rule.
2. Von Neumann Gap – when processors cannot process faster, we need more of them, at the cost of more energy.
3. Stuck in the Average – when statistical models are biased toward the majority, innovation has a hard time.
4. Privacy – if user profiles are created "passively" on the server side instead of "actively" on the client side, we lose control over our data.

The current approach to overcoming these limitations is to train on ever larger data sets distributed across ever more processing nodes.