MaLV-OS: Rethinking the Operating System Architecture for Machine Learning in Virtualized Clouds

Bitchebe, Stella, Balmau, Oana

arXiv.org Artificial Intelligence

A large body of research has employed Machine Learning (ML) models to develop learned operating systems (OSes) and kernels. These systems adapt to the job load and dynamically adjust resource allocation (CPU, I/O, memory, network bandwidth) to respond to actual user demand. What this work has in common is that it utilizes ML to improve kernel decisions. To this day, and to the best of our knowledge, no work has taken the opposite direction, i.e., using the OS to improve ML. While some work proposes applying system-level optimizations to ML algorithms, it does not tailor the OS to the ML context. To address this limitation, we take an orthogonal approach in this paper by leveraging the OS to enhance the performance of ML models and algorithms. We explore the path towards an ML-specialized OS, MaLV-OS. MaLV-OS rethinks the OS architecture to make it specifically tailored to ML workloads, especially in virtualized clouds, which are now widely used to run ML applications. MaLV-OS's envisioned architecture includes (1) a micro-kernel, Micro-LAKE, which allows kernel-space applications to use the GPU, and (2) an MLaaS (ML as a Service) subsystem that gathers ML models to help Micro-LAKE with memory management and CPU scheduling. The MaLV-OS architecture also offloads system-sensitive parts of the models to the OS, to reduce model complexity and programming effort and speed up execution. Finally, MaLV-OS integrates an open-source GPU virtualization software, merged directly into the hypervisor. For more flexibility, the MaLV-OS vision is to enable the virtual machine to dynamically select MLaaS policies that can improve the performance of the model the user is running. Because MLaaS is designed as loadable kernel modules, the MaLV-OS architecture enables the dynamic addition of new capabilities to the MLaaS subsystem.


Meta's AI assistant is coming to Quest headsets in the US and Canada

Engadget

Meta's AI-powered assistant has been accessible on the Ray-Ban smart glasses for quite some time, but the company will only start rolling it out to its Quest headsets next month. The assistant will still be in experimental mode, however, and its availability will be limited to users in the US and Canada. Meta revealed the update alongside its announcements for Llama 3.1 and the new Meta AI capabilities. Users who get access to the assistant in August will be able to put its hands-free controls to the test. The company said Meta AI is replacing the current technology used for Voice Commands on Quest, so it will be the one controlling the headset whenever people use voice for navigation and the one answering their questions when they ask for information.


How to Improve Machine Learning Code Quality with Scikit-learn Pipeline and ColumnTransformer

#artificialintelligence

When you're working on a machine learning project, the most tedious steps are often data cleaning and preprocessing. Especially when you're working in a Jupyter Notebook, running code in many cells can be confusing. The Scikit-learn library has tools called Pipeline and ColumnTransformer that can really make your life easier. Instead of transforming the dataframe step by step, the pipeline combines all transformation steps. You can get the same result with less code.
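The combination the article describes can be sketched as follows. This is a minimal illustrative example, not code from the article: the toy dataset, column names, and choice of transformers and model are assumptions made for demonstration.

```python
# Sketch: chaining preprocessing and a model with Pipeline and
# ColumnTransformer instead of transforming the dataframe step by step.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.linear_model import LogisticRegression

# Hypothetical toy data with one numeric and one categorical feature.
df = pd.DataFrame({
    "age": [25, 32, 47, 51],
    "city": ["NY", "SF", "NY", "LA"],
    "bought": [0, 1, 1, 0],
})

# Route each column type to its own transformer in a single object.
preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["age"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["city"]),
])

# Combine preprocessing and the estimator into one pipeline, so a
# single fit/predict call runs every transformation step in order.
model = Pipeline([
    ("preprocess", preprocess),
    ("clf", LogisticRegression()),
])

model.fit(df[["age", "city"]], df["bought"])
preds = model.predict(df[["age", "city"]])
print(preds)
```

Because the fitted pipeline stores the learned scaling and encoding alongside the model, the same object can be applied to new data or cross-validated without repeating the cleanup cells one by one.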


AI Creates A New Generation Of VPN Passthrough Tools

#artificialintelligence

AI has led to a number of impressive changes over the years. Even technologies that were considered cutting edge a few years ago are becoming obsolete in the age of AI. VPNs are a prime example. AI is creating new solutions for VPNs, which is enhancing privacy in a number of ways. A little over a year ago, TechRadar published an article about the future of AI in the VPN industry. One of the biggest changes it discussed was AI-based routing, but there are other changes on the horizon as well.