Machine Learning at the Edge
I'm really excited to talk with you about advances in federated learning at the edge. When I think about the edge, I often think about small embedded devices, IoT, and other things that might have a small computer in them without me even realizing it. I recently learned that the little scooters that are all over my city, Berlin, and maybe yours as well, are collecting quite a lot of data and sending it off.

When I think about the data they might be collecting, and I put on my data science and machine learning hat and think about the problems they might want to solve: they might want to know about maintenance. They might want to know about road and weather conditions. They might want to know about driver performance. Really, the ultimate question they're trying to answer is that last one: is this going to result in some problem for the scooter, for the human, or for the other things around the scooter and the human? These are the types of questions we ask when we think about data and machine learning.

When we think about this on the edge, or with small embedded systems, it often becomes a problem, because traditional machine learning needs quite a lot of extra information to answer these questions. Let's take a look at a traditional machine learning system and investigate how it might go about collecting this data and answering this question. First, all the data would have to be aggregated and collected into a data lake. It might need to be standardized, munged, cleaned, or otherwise processed beforehand. Then, eventually, that data is pulled, usually by a data science team or by scripts written by data engineers or data scientists on the team.
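To make that centralized flow concrete, here is a minimal sketch of the three steps just described: everything lands in a data lake, gets cleaned, and is then pulled for training. All of the names here (`ScooterReading`, `aggregate`, `clean`, `pull_features`) are hypothetical illustrations, not part of any real scooter fleet's system.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ScooterReading:
    """A hypothetical raw telemetry record shipped from one scooter."""
    scooter_id: str
    vibration: Optional[float]  # raw sensor value; may be missing
    speed_kmh: float

def aggregate(readings: List[ScooterReading]) -> List[ScooterReading]:
    # Step 1: every device ships its raw data to a central data lake.
    return list(readings)

def clean(lake: List[ScooterReading]) -> List[ScooterReading]:
    # Step 2: standardize/munge -- here, simply drop incomplete rows.
    return [r for r in lake if r.vibration is not None]

def pull_features(lake: List[ScooterReading]) -> List[List[float]]:
    # Step 3: the data science team (or its scripts) pulls feature
    # vectors out of the cleaned lake for model training.
    return [[r.vibration, r.speed_kmh] for r in lake]

readings = [
    ScooterReading("a1", 0.42, 18.0),
    ScooterReading("b2", None, 22.5),  # incomplete reading, dropped in cleaning
    ScooterReading("c3", 0.77, 12.3),
]
features = pull_features(clean(aggregate(readings)))
```

The point of the sketch is simply that every raw reading has to leave the device before any of this can happen, which is exactly the assumption federated learning challenges.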
May-27-2022, 12:33:21 GMT