Why TinyML is a giant opportunity

#artificialintelligence

The world is about to get a whole lot smarter. As the new decade begins, we're hearing predictions on everything from fully remote workforces to quantum computing. However, one emerging trend is scarcely mentioned on tech blogs – one that may be small in form but has the potential to be massive in implication. There are 250 billion microcontrollers in the world today. Perhaps we are getting a bit ahead of ourselves though, because you may not know exactly what we mean by microcontrollers.


Eta Introduces TENSAI Flow for Machine Learning in Low Power IoT Devices

#artificialintelligence

Eta Compute, a machine learning company, recently announced its new TENSAI Flow software, which is designed to complement the company's existing development resources and enable design from concept to firmware in IoT and low-power edge devices. "Neural network and embedded software designers are seeking practical ways to make developing machine learning for edge applications less frustrating and time-consuming," said Ted Tewksbury, CEO, Eta Compute. "Now, designers can optimize neural networks by reducing memory size, the number of operations, and power consumption, and embedded software designers can reduce the complexities of adding AI to embedded edge devices, saving months of development time." "In order to best unlock the benefits of TinyML we need highly optimized hardware and algorithms. Eta Compute's TENSAI provides an ideal combination of highly efficient ML hardware, coupled with an optimized neural network compiler," said Zach Shelby, CEO, Edge Impulse. "Together with Edge Impulse and the TENSAI Sensor Board this is the best possible solution to achieve extremely low-power ML applications." TENSAI Flow includes a neural network compiler, a neural network zoo, and middleware comprising FreeRTOS, a HAL, and frameworks for sensors, as well as IoT/cloud enablement. "Google and the TensorFlow team have been dedicated to bringing machine learning to the tiniest devices."


Embedded ML for All Developers

#artificialintelligence

Over the next decade, embedded is going to experience the kind of innovation we haven't seen since the late 2000s, when open wireless protocols and cryptography (and, as a result, 32-bit MCUs) were introduced. Today most people think about Machine Learning as highly complex, large, and extremely memory- and compute-hungry -- with clusters of GPUs/TPUs heating whole towns... Now the age of tinyML has come -- we can already run meaningful ML inference on Cortex-M equivalent hardware. Rapid improvements in modern 32-bit MCU compute power efficiency and math capabilities (FPU, vector extensions), together with advancements in neural operators, architectures, and quantization, along with better open-source tooling like TensorFlow Lite Micro, are making this possible. For example, we recently built a complete DSP, anomaly detection, and NN classifier pipeline for complex events on real-time 3-axis accelerometer data, in software, on a standard Cortex-M4, in just 6.6 kB of RAM and 20 kB of flash. We are experiencing the start of what I call the "3rd wave of embedded compute."
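The article above describes the result, not the code. As a rough illustration, here is a minimal sketch of what the inference side of such an application can look like with TensorFlow Lite for Microcontrollers on a Cortex-M class MCU. The model array g_model, the 8 kB tensor arena, and the zero-filled accelerometer input are illustrative assumptions rather than the pipeline Edge Impulse describes, and exact header paths and class names differ between TFLM releases.

```cpp
// Minimal sketch (not Edge Impulse's actual pipeline): running a small
// classifier with TensorFlow Lite for Microcontrollers on a Cortex-M device.
// g_model, the arena size, and the input-filling loop are placeholders.
#include <cstddef>
#include <cstdint>

#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model[];  // .tflite flatbuffer converted to a C array (hypothetical)

namespace {
constexpr int kTensorArenaSize = 8 * 1024;  // scratch memory for tensors; tune per model
uint8_t tensor_arena[kTensorArenaSize];
}  // namespace

int main() {
  static tflite::MicroErrorReporter error_reporter;

  // Map the flatbuffer stored in flash; no copy into RAM is made.
  const tflite::Model* model = tflite::GetModel(g_model);

  // AllOpsResolver links every operator; a MicroMutableOpResolver with only
  // the ops the model uses would shrink the binary further.
  static tflite::AllOpsResolver resolver;

  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                              kTensorArenaSize, &error_reporter);
  if (interpreter.AllocateTensors() != kTfLiteOk) {
    return 1;  // arena too small or model/operator mismatch
  }

  // Fill the input tensor with one window of 3-axis accelerometer samples
  // (zeros here; a real application would copy from the sensor driver).
  TfLiteTensor* input = interpreter.input(0);
  for (size_t i = 0; i < input->bytes / sizeof(float); ++i) {
    input->data.f[i] = 0.0f;
  }

  if (interpreter.Invoke() != kTfLiteOk) {
    return 1;
  }

  // Read back class scores, e.g. the probability of the detected event.
  TfLiteTensor* output = interpreter.output(0);
  float score = output->data.f[0];
  (void)score;
  return 0;
}
```

The static tensor arena takes the place of heap allocation, which is what keeps the RAM footprint predictable at the few-kilobyte scale quoted above.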


TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers

#artificialintelligence

Deep learning networks are getting smaller. The Google Assistant team can detect words with a model just 14 kilobytes in size, small enough to run on a microcontroller. With this practical book, you'll enter the field of TinyML, where deep learning and embedded systems combine to make astounding things possible with tiny devices. Pete Warden and Daniel Situnayake explain how you can train models small enough to fit into any environment. Ideal for software and hardware developers who want to build embedded systems using machine learning, this guide walks you through creating a series of TinyML projects, step-by-step.


NEW PRODUCT – TinyML: Machine Learning with TensorFlow Lite – Pete Warden & Daniel Situnayake

#artificialintelligence

Deep learning networks are getting smaller. The Google Assistant team can detect words with a model just 14 kilobytes in size--small enough to run on a microcontroller. With this practical book, you'll enter the field of TinyML, where deep learning and embedded systems combine to make astounding things possible with tiny devices. Pete Warden and Daniel Situnayake explain how you can train models small enough to fit into any environment. Ideal for software and hardware developers who want to build embedded systems using machine learning, this guide walks you through creating a series of TinyML projects, step-by-step.