Flask


MetisFL: An Embarrassingly Parallelized Controller for Scalable & Efficient Federated Learning Workflows

Stripelis, Dimitris, Anastasiou, Chrysovalantis, Toral, Patrick, Asghar, Armaghan, Ambite, Jose Luis

arXiv.org Artificial Intelligence

A Federated Learning (FL) system typically consists of two core processing entities: the federation controller and the learners. The controller is responsible for managing the execution of FL workflows across learners, while the learners train and evaluate federated models over their private datasets. While executing an FL workflow, the FL system has no control over the computational resources or data of the participating learners. Still, it is responsible for other operations, such as model aggregation, task dispatching, and scheduling; these computationally heavy operations generally need to be handled by the federation controller. Even though many FL systems have recently been proposed to facilitate the development of FL workflows, most of them overlook the scalability of the controller. To meet this need, we designed and developed a novel FL system called MetisFL, in which the federation controller is a first-class citizen. MetisFL re-engineers all of the operations conducted by the federation controller to accelerate the training of large-scale FL workflows. By quantitatively comparing MetisFL against other state-of-the-art FL systems, we empirically demonstrate that MetisFL delivers a 10-fold wall-clock speedup across a wide range of challenging FL workflows with increasing model sizes and federation sites.
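The model aggregation step the abstract mentions is, in the common FedAvg formulation, a sample-weighted average of the learners' model weights computed on the controller. A minimal sketch of that idea (the function name and flat weight vectors are illustrative, not MetisFL's actual API):

```python
# Hypothetical sketch of the controller-side aggregation step
# (FedAvg-style weighted averaging over learners' model weights).

def federated_average(models, num_samples):
    """Aggregate learners' weight vectors, weighting each learner by
    the number of local training samples it contributed."""
    total = sum(num_samples)
    aggregated = [0.0] * len(models[0])
    for weights, n in zip(models, num_samples):
        for i, w in enumerate(weights):
            aggregated[i] += w * (n / total)
    return aggregated

# Three learners with flat weight vectors and different dataset sizes.
models = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
samples = [10, 10, 20]
print(federated_average(models, samples))  # [3.5, 4.5]
```

This per-parameter loop is exactly the kind of work that grows with model size and number of learners, which is why MetisFL parallelizes the controller.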


FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets

Ye, Seonghyeon, Kim, Doyoung, Kim, Sungdong, Hwang, Hyeonbin, Kim, Seungone, Jo, Yongrae, Thorne, James, Kim, Juho, Seo, Minjoon

arXiv.org Artificial Intelligence

Evaluation of Large Language Models (LLMs) is challenging because instruction-following necessitates alignment with human values, and the required set of skills varies depending on the instruction. However, previous studies have mainly focused on coarse-grained evaluation (i.e., overall preference-based evaluation), which limits interpretability since it does not consider the nature of user instructions that require instance-wise skill composition. In this paper, we introduce FLASK (Fine-grained Language Model Evaluation based on Alignment Skill Sets), a fine-grained evaluation protocol for both human-based and model-based evaluation which decomposes coarse-level scoring into skill set-level scoring for each instruction. We experimentally observe that the fine granularity of evaluation is crucial for attaining a holistic view of model performance and increasing the reliability of the evaluation. Using FLASK, we compare multiple open-source and proprietary LLMs and observe a high correlation between model-based and human-based evaluations. We publicly release the evaluation data and code implementation at https://github.com/kaistAI/FLASK.
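The core idea of skill set-level scoring can be sketched as follows: each instruction is scored only on the skills it requires, and per-skill averages replace a single overall preference score. The skill names and scores below are hypothetical, not the paper's actual rubric or data:

```python
# Illustrative aggregation of per-instruction skill scores into
# skill-level averages, in the spirit of FLASK's protocol.

from collections import defaultdict

def skill_level_scores(annotations):
    """annotations: list of (instruction_id, {skill: score}) pairs.
    Returns the mean score per skill across instructions."""
    totals, counts = defaultdict(float), defaultdict(int)
    for _, skill_scores in annotations:
        for skill, score in skill_scores.items():
            totals[skill] += score
            counts[skill] += 1
    return {skill: totals[skill] / counts[skill] for skill in totals}

annotations = [
    ("q1", {"logical_correctness": 4, "factuality": 5}),
    ("q2", {"logical_correctness": 2, "conciseness": 3}),
]
print(skill_level_scores(annotations))
# {'logical_correctness': 3.0, 'factuality': 5.0, 'conciseness': 3.0}
```

Note that each skill is averaged only over the instructions that require it, which is what makes the evaluation instance-wise.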


Introduction to ML Deployment: Flask, Docker & Locust

#artificialintelligence

You've spent a lot of time on EDA, carefully crafted your features, tuned your model for days, and finally have something that performs well on the test set. Now, my friend, we need to deploy the model. After all, any model that stays in the notebook has a value of zero, regardless of how good it is. It might feel overwhelming to learn this part of the data science workflow, especially if you don't have a lot of software engineering experience. Fear not: this post's main purpose is to get you started with one of the most popular frameworks for deployment in Python, Flask.
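The smallest version of what the post builds is a Flask app that wraps a model behind one prediction endpoint. A minimal sketch, assuming Flask is installed; the scoring rule here is a placeholder for whatever model you trained:

```python
# Minimal Flask prediction service: one POST endpoint wrapping a
# stand-in "model". Replace predict() with your trained model's call.

from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(features):
    # Placeholder scoring rule standing in for model.predict().
    return sum(features)

@app.route("/predict", methods=["POST"])
def predict_endpoint():
    features = request.get_json()["features"]
    return jsonify({"prediction": predict(features)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

Once running, a client POSTs JSON like `{"features": [1, 2, 3]}` to `/predict` and gets the prediction back as JSON, which is the contract Docker and Locust then package and load-test.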


🤖 Python with Raspberry Pi: An Introduction

#artificialintelligence

Codecademy's free Python course is a great way to get started with the language, as it provides a hands-on, interactive introduction to the syntax and features of Python.


The Data Science Pro Bootcamp 2022: 75 Projects In 75 Days

#artificialintelligence

In the last century, oil was considered 'black gold'. With the industrial revolution and the emergence of the automotive industry, oil became the main driving force of human civilization. Over time, however, its value dwindled due to gradual exhaustion and the shift to alternative, renewable sources of energy. In the 21st century, the new driving force behind industries is data. As a matter of fact, even automobile manufacturers are using data to impart autonomy and improve the safety of their vehicles.


Python Frameworks vs. Python Libraries

#artificialintelligence

Python frameworks help project owners fast-track their application's time-to-market. In this entry, let's answer a pressing need of startups: understanding the difference between Python frameworks and libraries. "What are frameworks and libraries?" As a project owner (startup), this may be the question you ask when developers seek your thoughts on the best Python frameworks or libraries to use. Now, before you panic and call a lifeline, think first.
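The distinction can be shown in miniature: with a library, your code calls into it; with a framework, you register your code and the framework calls it (inversion of control). The `MiniFramework` class below is a toy stand-in for how a framework like Flask routes requests; it is illustrative, not any real library's API:

```python
# Library vs. framework in miniature.

import json  # a library: your code calls json.dumps directly

payload = json.dumps({"status": "ok"})

class MiniFramework:
    """Toy framework: it owns the control flow and calls your handlers."""
    def __init__(self):
        self.routes = {}

    def route(self, path):
        def register(handler):
            self.routes[path] = handler
            return handler
        return register

    def handle(self, path):
        return self.routes[path]()

app = MiniFramework()

@app.route("/hello")
def hello():
    return "Hello, World!"

print(payload)               # library: the result of your call
print(app.handle("/hello"))  # framework: its call into your handler
```

That inversion of control is why a framework shapes your whole project structure, while a library is just a tool you pick up where needed.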


Deploying Machine Learning Models with Heroku

#artificialintelligence

For starters, deployment is the process of integrating a trained machine learning model into a production environment, usually to serve an end user. Deployment is typically the last stage in the development lifecycle of a machine learning product. The "Model Deployment" stage consists of a series of steps, shown in the image below. For the purpose of this tutorial, I will use Flask to build the web application. In this section, let's train the machine learning model we intend to deploy. For simplicity, and to not divert from the primary objective of this post, I will deploy a linear regression model.
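The training step can be sketched without any dependencies by fitting the regression with ordinary least squares in pure Python; in practice you would use scikit-learn's `LinearRegression`, but the fitted slope and intercept are the same:

```python
# Minimal stand-in for the tutorial's training step: simple linear
# regression y = slope * x + intercept via ordinary least squares.

def fit_linear(xs, ys):
    """Fit y = slope * x + intercept by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]          # exactly y = 2x + 1
slope, intercept = fit_linear(xs, ys)
print(slope, intercept)    # 2.0 1.0
```

The fitted `(slope, intercept)` pair is the artifact the Flask app then loads and serves behind an endpoint, and Heroku simply hosts that app.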


How to Make Your Models Available to the Public

#artificialintelligence

An end-to-end machine learning solution is an important way to bring AI to production and make it available for mass consumption and usage. But today, most AI practitioners handle only the pre-processing, training, evaluation, and tuning stages and leave the rest to DevOps engineers. As a result, a new field of development named MLOps has entered the mainstream: the focus has shifted from training and evaluation alone to also integrating models into production pipelines. On an individual level as well, knowing how to bring your model to the public is an important tool in an AI practitioner's skill set.


GitHub - Nneji123/Serving-Machine-Learning-Models

#artificialintelligence

This repository contains instructions, template source code, and examples on how to serve/deploy machine learning models using various frameworks and applications such as Docker, Flask, FastAPI, BentoML, Streamlit, and MLflow, and even code for deploying your machine learning model as an Android app. The repository also has code and how-tos for deploying your apps to various cloud platforms (AWS, Heroku, Vercel, etc.), working with GitHub Actions for CI/CD (continuous integration and continuous delivery), test-driven development (TDD) with pytest, and other useful information. Before we get into building and deploying our models, we'll have to set up our environment. I use 'pyenv' for managing different versions of Python and pyenv-virtualenv for setting up virtual environments. You can learn how to install pyenv on your operating system by checking out its official GitHub repository.


How to Easily Build Your First Machine Learning Web App in Python

#artificialintelligence

One of the coolest parts of building machine learning models is sharing the models we built with others. No matter how many models you've built, if they stay offline, only very few people will be able to see what you've accomplished. This is why we should deploy our models, so anyone can play with them through a nice UI. Flask is a Python framework that lets us develop web applications easily. After following this guide, you'll be able to play with a simple machine learning model in your browser as shown in the gif below.