Jupyter Notebook


Getting Started With Google Colab – Towards Data Science

#artificialintelligence

Just let me code, already! You know it's out there. You know there's a free GPU somewhere, hanging like a fat, juicy, ripe blackberry on a branch just slightly out of reach. Wondering how on earth to get it to work? For anyone who doesn't already know, Google has done the coolest thing ever by providing a free cloud service, based on Jupyter Notebooks, that supports free GPU use.
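
Once the Colab runtime type is switched to GPU, a minimal sketch like the following can confirm the notebook actually got one (this assumes TensorFlow is available, as it is on Colab by default; it is not code from the article):

    import tensorflow as tf

    # Returns something like '/device:GPU:0' on a GPU runtime, '' otherwise.
    device_name = tf.test.gpu_device_name()
    print(device_name if device_name else "No GPU -- check Runtime > Change runtime type")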


The AI database is upon us

#artificialintelligence

The better organizations get at managing and using a wider variety of data, the more they will adopt and make use of AI. IBM General Manager for Data and AI Rob Thomas has said organizations can't have effective AI without sound IA (Information Architecture), and one of the pillars of any IA is data management. In this new era of data, databases are no longer considered just the traditional system of record or datastore.


Music Genre Classification with Python – Towards Data Science

#artificialintelligence

Companies nowadays use music classification, either to make recommendations to their customers (such as Spotify and SoundCloud) or simply as a product (for example, Shazam). Determining music genres is the first step in that direction. Machine learning techniques have proved quite successful at extracting trends and patterns from large pools of data, and the same principles apply to music analysis. In this article, we shall study how to analyse an audio/music signal in Python.
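
As a rough sketch of the kind of analysis the article describes, the snippet below loads a track and extracts MFCC features with librosa; the file path and parameter values are placeholders for illustration, not the article's own code:

    import librosa

    # Load the audio as a float array at a fixed sample rate.
    signal, sr = librosa.load("song.wav", sr=22050)

    # Extract 13 Mel-frequency cepstral coefficients per frame --
    # a common feature set for genre classification.
    mfccs = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    print(mfccs.shape)  # (13, number_of_frames)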


Jupyter Notebook -- Forget CSV, fetch data from DB with Python

#artificialintelligence

If you read a book, article or blog about machine learning, chances are high it will use training data from a CSV file. Nothing wrong with CSV, but let's think about whether it is really practical. Wouldn't it be better to read data directly from the DB? Often you can't feed business data directly into ML training; it needs pre-processing -- encoding categorical data, calculating new data features, etc. The data preparation/transformation step can be done quite easily with SQL while fetching the original business data. Another advantage of reading data directly from the DB: when the data changes, it is easier to automate the ML model re-training process.
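
A minimal sketch of that idea, assuming a local SQLite file and illustrative table and column names rather than anything from the article:

    import sqlite3
    import pandas as pd

    conn = sqlite3.connect("business.db")  # placeholder database file

    # Pre-processing (encoding a categorical column, deriving a label)
    # happens right in the SQL while fetching the data.
    query = """
        SELECT amount,
               CASE WHEN status = 'paid' THEN 1 ELSE 0 END AS label
        FROM invoices
    """
    df = pd.read_sql(query, conn)
    conn.close()
    print(df.head())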


Practical Deep Learning for Coders, v3

#artificialintelligence

Looking for the older 2018 courses? This site covers the new 2019 deep learning course; the 2018 courses have been moved to course18.fast.ai. Note that the 2019 edition of part 2 (Cutting Edge Deep Learning) is not yet available, so you'll need to use the 2018 course for now (the 2019 edition will be available in June 2019). If you're new to all this deep learning stuff, then don't worry -- we'll take you through it all step by step. We do, however, assume that you've been coding for at least a year, and also that (if you haven't used Python before) you'll be putting in the extra time to learn whatever Python you need as you go.


7 Steps to Mastering Basic Machine Learning with Python -- 2019 Edition

#artificialintelligence

Then read Michael J. Garbade's Understanding K-means Clustering in Machine Learning and implement k-means for yourself. Next, take a look at Gabriel Pierobon's DBSCAN clustering for data shapes k-means can't handle well (in Python) to implement a density-based clustering model. Now that we have sampled a few approaches, let's switch gears back to classification and check out a more complex algorithm.
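
A minimal sketch of the contrast those two tutorials draw, using scikit-learn and a synthetic two-moons dataset (an assumption for illustration, not the tutorials' own data):

    from sklearn.datasets import make_moons
    from sklearn.cluster import KMeans, DBSCAN

    # Two interleaving half-circles: a shape k-means handles poorly.
    X, _ = make_moons(n_samples=300, noise=0.05, random_state=42)

    # k-means splits the moons with a straight boundary.
    kmeans_labels = KMeans(n_clusters=2, random_state=42).fit_predict(X)

    # DBSCAN follows the density of the points instead.
    dbscan_labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(X)

    print("k-means clusters:", set(kmeans_labels))
    print("DBSCAN clusters:", set(dbscan_labels))  # -1 marks noise points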


Apache Kafka + KSQL + TensorFlow for Data Scientists via Python + Jupyter Notebook

#artificialintelligence

There is an impedance mismatch between model development using Python and its machine learning tool stack and a scalable, reliable data platform. The former is what you need for quick and easy prototyping to build analytic models. The latter is what you need for data ingestion, preprocessing, model deployment and monitoring at scale, with low-latency, high-throughput, zero-data-loss and 24/7 availability requirements. This is the main reason I see in the field why companies struggle to bring analytic models into production to add business value.
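
As a rough sketch of bridging that gap from the Python side, the snippet below consumes events from a Kafka topic for scoring; the broker address, topic name and the scoring step itself are illustrative assumptions, not the article's actual pipeline:

    import json
    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",  # placeholder broker
        "group.id": "notebook-scoring",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["preprocessed-events"])  # placeholder topic

    try:
        while True:
            msg = consumer.poll(1.0)
            if msg is None or msg.error():
                continue
            features = json.loads(msg.value())
            # model.predict(features) would go here, using the
            # TensorFlow model trained in the notebook.
            print("received features for scoring:", features)
    finally:
        consumer.close()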



Build and Deploy a Machine Learning Model with Azure ML Service - The New Stack

#artificialintelligence

This article is a post in a series on bringing continuous integration and deployment (CI/CD) practices to machine learning. Check back to The New Stack for future installments. For background and context, we strongly recommend reading the previous article on the rise of ML PaaS, followed by the article giving an overview of the Azure ML service. In this tutorial, we will build and deploy a machine learning model to predict salaries from the Stack Overflow dataset. By the end, you will be able to invoke a RESTful web service to get predictions.
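
A minimal sketch of what that final invocation could look like; the scoring URI and the payload schema are placeholders, not the tutorial's actual service:

    import json
    import requests

    scoring_uri = "http://<your-service>.azurecontainer.io/score"  # placeholder
    headers = {"Content-Type": "application/json"}

    # Hypothetical feature layout, e.g. years of experience, education level, age.
    payload = {"data": [[5, 2, 30]]}

    response = requests.post(scoring_uri, data=json.dumps(payload), headers=headers)
    print("Predicted salary:", response.json())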


How to build an API for a machine learning model in 5 minutes using Flask – Codementor

#artificialintelligence

As a data science consultant, I want to make an impact with my machine learning models. However, this is easier said than done. A new project starts with playing around with the data in a Jupyter notebook. Once you've got a full understanding of the data you're dealing with and have aligned with the client on the steps to take, one of the outcomes can be to create a predictive model. You get excited and go back to your notebook to make the best model possible.
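
A minimal sketch of such a Flask API, assuming a model has already been pickled to model.pkl (the file name and JSON schema are illustrative assumptions):

    import pickle

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # Load the trained model once at startup.
    with open("model.pkl", "rb") as f:
        model = pickle.load(f)

    @app.route("/predict", methods=["POST"])
    def predict():
        # Expects a JSON body like {"features": [1.0, 2.0, 3.0]}.
        features = request.get_json()["features"]
        prediction = model.predict([features])
        return jsonify({"prediction": prediction.tolist()})

    if __name__ == "__main__":
        app.run(port=5000)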