
Data Citizens: Why We All Care About Data Ethics

#artificialintelligence

I'm not a data scientist, yet I still care about ethics in data science. I care about it for the same reason I care about civics: I'm not a lawyer or a legislator, but laws impact my life in a way that I want to understand well enough that I know how to navigate the civic landscape effectively. By analogy, data citizens are impacted by the models, methods, and algorithms created by data scientists, but they have limited agency to affect them. Data citizens must appeal to data scientists in order to ensure that their data will be treated ethically. Data science ethics is a new field, and it may seem as though we need to invent all of its tools and methods from scratch.


Position Based Attribution vs. Data Driven Attribution: Where Machine Learning Fits in

#artificialintelligence



Apple Has Released Core ML 2

#artificialintelligence

At WWDC 2018 Apple released Core ML 2, a new version of its machine learning SDK for iOS devices. The new release of Core ML, whose first version shipped in June 2017, should deliver an inference-time speedup of 30% for apps developed using Core ML 2. Apple achieves this using two techniques called "batch prediction" and "quantization". Batch prediction is the practice of running inference on multiple inputs at the same time rather than one by one. Quantization is the practice of representing weights and activations in fewer bits during inference than during training: training typically uses floating-point numbers for weights and activations, but these slow down inference considerably on non-GPU devices.
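The idea behind quantization can be illustrated with a minimal, framework-free sketch. This is generic affine (linear) quantization, not Apple's actual implementation; the helper names and weight values are hypothetical:

```python
# Sketch of affine weight quantization: map float weights onto small
# integer codes plus a scale and offset, then reconstruct approximations.
# Illustrative only -- not Core ML's API.

def quantize(weights, bits=8):
    """Map float weights onto integer codes in [0, 2**bits - 1]."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (2**bits - 1)
    codes = [round((w - lo) / scale) for w in weights]
    return codes, scale, lo

def dequantize(codes, scale, lo):
    """Recover approximate float weights from the integer codes."""
    return [c * scale + lo for c in codes]

weights = [-0.51, 0.03, 0.42, 1.2, -1.3]          # hypothetical weights
codes, scale, lo = quantize(weights)
approx = dequantize(codes, scale, lo)

# Each weight is now an 8-bit code; rounding bounds the reconstruction
# error by half a quantization step (scale / 2).
max_err = max(abs(w - a) for w, a in zip(weights, approx))
assert max_err <= scale / 2 + 1e-9
```

At inference time only the integer codes need to be stored and moved around, which is what makes quantized models smaller and faster on CPU-bound devices.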


Google Brings Machine Learning to Firebase with ML KIT

#artificialintelligence

Google recently introduced ML Kit, a machine-learning module fully integrated into its Firebase mobile development platform and available for both iOS and Android. With this new Firebase module, Google simplifies the creation of machine-learning-powered applications on mobile phones and addresses some of the challenges raised by running computationally intensive features on mobile devices. ML Kit lets mobile developers build machine-learning features based on some of the models available in Google's deep-learning Vision API, such as image labeling, OCR, and face detection. It is available directly within the Firebase platform alongside other Google Cloud based modules such as authentication and storage.


Google Upgrades Its Speech-to-Text Service with Tailored Deep-Learning Models

#artificialintelligence

A month after Google announced breakthroughs in Text-to-Speech generation technologies stemming from the Magenta project, the company followed through with a major upgrade of its Speech-to-Text API cloud service. The updated service leverages deep-learning models for speech transcription that are tailored to specific use cases: short voice commands, phone calls, and video, with a default model for all other contexts. The upgraded service now handles 120 languages and variants, with model availability and feature levels varying by language. Business applications range from over-the-phone meetings to call centers and video transcription. Transcription accuracy is improved in the presence of multiple speakers and significant background noise.
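In the Speech-to-Text REST API, the tailored models are selected via a `model` field in the recognition config. A minimal sketch of the request body follows; the bucket URI is a hypothetical placeholder, and the exact field set is an assumption based on the v1 API:

```python
import json

def recognition_request(model="default", language="en-US"):
    """Build a Cloud Speech-to-Text recognition request body.

    Tailored model names include "command_and_search" (short voice
    commands), "phone_call", "video", and "default".
    """
    return {
        "config": {
            "languageCode": language,
            "model": model,
        },
        # The audio URI below is a hypothetical placeholder.
        "audio": {"uri": "gs://my-bucket/call-recording.wav"},
    }

# Select the phone-call model for call-center audio.
body = json.dumps(recognition_request(model="phone_call"))
```

Choosing the model that matches the audio source (e.g. `phone_call` for 8 kHz telephony audio) is what the tailoring buys you; the `default` model remains the fallback for everything else.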


What's New in Azure Machine Learning?

#artificialintelligence

Matt Winkler delivered a talk at Microsoft Build 2018 explaining what is new in Azure Machine Learning. The Azure Machine Learning platform is built up from the hardware level and is open to whatever tools and frameworks you choose: if it runs on Python, you can run it within the platform. Services come in three flavors: conversational, pre-trained, and custom AI.


How Booking.com Uses Kubernetes for Machine Learning

#artificialintelligence

Sahil Dua, developer at Booking.com, explained at this year's QCon London conference how they have been able to scale machine learning (ML) models for recommending destinations and accommodation to their customers using Kubernetes. In particular, he stressed how Kubernetes' elasticity and its avoidance of resource starvation between containers help them run computationally and data-intensive, hard-to-parallelize machine learning models. Four Kubernetes properties are key for Booking.com to run a large number of ML models at their scale (around 1.5 million room nights booked daily and 400 million monthly visitors): isolation (processes do not have to compete for resources), elasticity (auto-scaling up or down based on resource consumption), flexibility (being able to quickly try out new libraries or frameworks), and GPU support (although Kubernetes support for NVIDIA GPUs is still in alpha, it allows 20x to 50x speed improvements). Each model runs as a stateless app inside a container. The container image does not include the model itself; the model is retrieved from Hadoop at startup time.
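The stateless-container pattern described above can be sketched as follows. The HDFS path, class name, and stubbed fetch are hypothetical stand-ins for illustration, not Booking.com's actual code:

```python
# Sketch of a stateless model-serving app: the container image ships
# without the model, and every replica pulls it at startup. The path
# and the fetch stub below are hypothetical.
MODEL_PATH = "hdfs://models/destination-recommender/latest"

class ModelServer:
    def __init__(self, path):
        # In the real setup this would copy the serialized model out of
        # Hadoop at container startup; stubbed here to stay self-contained.
        self.model = self._fetch(path)

    def _fetch(self, path):
        return {"source": path, "weights": [0.1, 0.9]}  # stub payload

    def predict(self, features):
        # No per-request state is kept, so the orchestrator can freely
        # scale replicas up or down without losing anything.
        return sum(f * w for f, w in zip(features, self.model["weights"]))

server = ModelServer(MODEL_PATH)
score = server.predict([1.0, 2.0])
```

Keeping the model out of the image means one generic serving image can run any model version, and rolling out a new model is just a restart that fetches a different artifact.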


Microsoft Embeds Artificial Intelligence in Windows 10 Update

#artificialintelligence

The next Windows 10 update opens the way for the integration of artificial intelligence within Windows applications, directly impacting hundreds of millions of devices, from Windows PCs and tablets to IoT Edge devices. The new version of the Windows ML platform allows developers to integrate pre-trained deep-learning models within their applications directly in Visual Studio. The models must be converted into the Open Neural Network Exchange (ONNX) format before being imported into VS tools. ONNX is an open-source machine-learning format launched by Microsoft and Facebook in September 2017 and later joined by AWS. ONNX enables portability between neural-network frameworks, making it possible for models trained with tools like PyTorch, Apache MXNet, Caffe2 or Microsoft Cognitive Toolkit (CNTK) to be translated to ONNX and later used in Windows applications.


There's No AI (Artificial Intelligence) without IA (Information Architecture)

#artificialintelligence

This article first appeared in IEEE Software magazine. Artificial intelligence (AI) is increasingly hyped by vendors of all shapes and sizes, from well-funded startups to the well-known software brands. Financial organizations are building AI-driven investment advisors. Chat bots provide everything from customer service to sales assistance.


Popular Python Data Science Platform Anaconda Now Shipping With Microsoft VS Code

@machinelearnbot

Release 5.1 of Anaconda, the data science and machine learning platform, now includes Visual Studio Code as an IDE. This is part of a wider collaborative effort between Anaconda Inc. and Microsoft. Anaconda is a distribution of the Python and R languages aimed at data scientists. It bundles language runtimes, utilities, and tools in one distribution managed by its own package-management system, called conda. The Anaconda distribution is open source, though Anaconda Inc., the company behind it, also offers a commercial Enterprise product.