R-Brain is a next-generation platform for data science built on top of JupyterLab and Docker. It was recently unveiled at JupyterCon in late August. Don't let the name fool you: R-Brain currently supports R, Python, SQL, and more, with integrated IntelliSense, debugging, packaging, and publishing capabilities.
Whether you're a novice data science enthusiast setting up TensorFlow for the first time, or a seasoned AI engineer working with terabytes of data, getting your libraries, packages, and frameworks installed is always a struggle. While containerization tools like Docker have truly revolutionized reproducibility in software, they haven't quite caught on yet in the data science and AI communities, and for good reason! With constantly evolving machine learning frameworks and algorithms, it can be tough to find time to dedicate to learning another developer tool, especially one that isn't directly linked to the model-building process. In this blog post, I'm going to show you how you can use one simple Python package to set up your environment for any of the popular data science and AI frameworks, in just a few steps. Datmo leverages Docker under the hood and streamlines the process to help you get running quickly and easily, without the steep learning curve.
Good work comes out of a good workspace, and that applies to data science too. In a studio, multiple Jupyter notebooks run as separate containers. A container is a technology that partitions a computer into isolated workspaces, each with its own software stack and its own share of the machine's resources. Inside a container, you can run a Jupyter notebook and install any required software with apt or yum commands.
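As a sketch of what such a container might look like, here is a minimal Dockerfile; the base image and package list are illustrative choices, not taken from the post:

```dockerfile
# Start from a minimal Ubuntu base image (illustrative choice).
FROM ubuntu:18.04

# Install Python and required system packages with apt,
# just as you would inside a running container.
RUN apt-get update && apt-get install -y \
    python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Install Jupyter into the container's isolated software stack.
RUN pip3 install notebook

# Expose the notebook port and start the server.
EXPOSE 8888
CMD ["jupyter", "notebook", "--ip=0.0.0.0", "--allow-root", "--no-browser"]
```

Building this image and running several containers from it gives each notebook its own isolated workspace on the same machine.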
Microsoft is pleased to announce a public preview refresh of the Azure Machine Learning (AML) service. The refresh contains many new improvements that increase the productivity of data scientists. In this post, I want to highlight some of the improvements we made around machine learning experimentation, which is the process of developing, training, and optimizing a machine learning model. Experimentation also often includes auditing, governance, sharing, repeatability, and other enterprise-level functions. The process of developing machine learning models for production involves many steps.
It has always been difficult to consume TensorFlow or ONNX models without the help of tools like TensorFlow Serving or gRPC, and all the fun that comes with protocol buffers. Hosting deep learning models behind a REST endpoint has been surprisingly hard, even though it is probably the most common approach application developers would start with. Microsoft has recently released the Azure Machine Learning service, which comes with heaps of features to facilitate the development and deployment of machine learning models. One of those features is hosting ONNX models in Docker containers so they can be consumed over REST. In this post, we go through an end-to-end workflow of hosting a sample ONNX model and consuming it from a .NET application.
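To make the REST idea concrete, here is a minimal Python sketch of preparing a request for such a scoring endpoint. The endpoint URL and the `{"data": ...}` body shape are assumptions for illustration only (the post itself consumes the model from a .NET application); check your deployed service's scoring schema for the exact format.

```python
import json

# Hypothetical scoring endpoint of the Docker-hosted model -- a placeholder,
# not an address from the post.
SCORING_URI = "http://localhost:8001/score"

def build_payload(rows):
    """Wrap input rows in a JSON body of the form {"data": [...]},
    a shape commonly expected by scoring containers."""
    return json.dumps({"data": rows})

payload = build_payload([[1.0, 2.0, 3.0]])

# To actually call the hosted model (requires the `requests` package and a
# running container):
#   import requests
#   resp = requests.post(SCORING_URI, data=payload,
#                        headers={"Content-Type": "application/json"})
#   print(resp.json())
```

The same payload can be posted from any HTTP client, which is exactly why REST hosting is so attractive to application developers.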