"Edge computing is booming, with estimates ranging up to $61 billion in value by 2028. While definitions vary, edge computing is about taking compute power out of the data center and bringing it as close as possible to the device where analytics can run. The devices can be standalone IoT sensors, drones, or autonomous vehicles. Increasingly, data generated at the edge are used to feed applications powered by machine learning models," stated George Anadiotis, an analyst, engineer, and founder of Linked Data Orchestration of Berlin, Germany, who works at the intersection of technology, media, and data, writing in a recent account in ZDNet. However, "There's just one problem: machine learning models were never designed to be deployed at the edge."
There's been an ongoing trend of people wanting increasingly capable and smaller tech devices. Those desires have spurred progress in a segment of artificial intelligence (AI) called TinyML. Here's a look at how it could enhance future possibilities. It's already widely known that processing data directly on a device speeds things up compared to sending the information to the cloud. TinyML centers on optimizing machine learning models so microcontrollers on endpoint devices can run them.
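The optimization mentioned above usually starts with quantization: mapping a model's 32-bit floating-point weights onto 8-bit integers so they fit in a microcontroller's tiny memory. The sketch below is a minimal, hypothetical illustration of that idea in plain Python; the function names are invented for this example and do not come from any particular TinyML framework.

```python
# Illustrative sketch of post-training affine quantization, the core
# trick TinyML toolchains use to shrink models for microcontrollers.
# Function names here are hypothetical, not from a real library.

def quantize_int8(weights):
    """Map float weights onto the symmetric int8 range [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the int8 values."""
    return [q * scale for q in quantized]

weights = [0.82, -1.54, 0.03, 0.91]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each weight now occupies 1 byte instead of 4 (a 4x size reduction),
# at the cost of a rounding error of at most half the scale step.
```

On a real device, the integer weights are stored in flash and the multiply-accumulate math runs in int8, which is why quantized models are both smaller and faster on microcontrollers that lack floating-point hardware.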
Until now, building machine learning (ML) algorithms for hardware meant complex mathematical models based on sample data, known as "training data," in order to make predictions or decisions without being explicitly programmed to do so. And if this sounds complex and expensive to build, it is. On top of that, ML-related tasks were traditionally offloaded to the cloud, creating latency, consuming scarce power, and putting machines at the mercy of connection speeds. Combined, these constraints made computing at the edge slower, more expensive, and less predictable. Tiny Machine Learning (TinyML) is the latest embedded software technology that moves hardware into an almost magical realm, where machines can automatically learn and grow through use, like a primitive human brain.
Since HAL 9000 and Star Trek's M-5 Multitronic, the power and capabilities of AI have always been oversold by both Hollywood and Silicon Valley. Although we're still waiting on machines that can carry on an intelligent conversation, AI has been creeping into many objects in our everyday lives behind the scenes, making them more useful and proactive. People are most familiar with the intelligent assistants built into devices like the Amazon Echo, Google Nest Hub and Apple HomePod, but as I wrote more than three years ago, these rely on cloud backend services for most of their smarts, using local hardware primarily to recognize their wake word and listen for follow-up questions. Until recently, shoehorning AI software into a battery-powered device has required data scientists skilled in working with the constraints of an embedded SoC, but recent advances in AI development and automation frameworks, collectively termed TinyML, greatly expand the realm of smart devices. The combination allows surprisingly sophisticated deep and machine learning models to run on embedded systems.
The world is about to get a whole lot smarter. As the new decade begins, we're hearing predictions on everything from fully remote workforces to quantum computing. However, one emerging trend is scarcely mentioned on tech blogs – one that may be small in form but has the potential to be massive in implication. There are 250 billion microcontrollers in the world today. Perhaps we are getting a bit ahead of ourselves, though, because you may not know exactly what we mean by microcontrollers.