Model Development


IBM's Watson Studio AutoAI automates enterprise AI model development

#artificialintelligence

Deploying AI-imbued apps and services isn't as challenging as it used to be, thanks to offerings like IBM's Watson Studio (previously Data Science Experience). Watson Studio, which debuted in 2017 after a 12-month beta period, provides an environment and tools that help to analyze, visualize, cleanse, and shape data; to ingest streaming data; and to train and optimize machine learning models in real time. And today, it's becoming even more capable with the launch of AutoAI, a set of features designed to automate tasks associated with orchestrating AI in enterprise environments. "IBM has been working closely with clients as they chart their paths to AI, and one of the first challenges many face is data prep -- a foundational step in AI," said general manager of IBM Data and AI Rob Thomas in a statement. "We have seen that the complexity of data infrastructures can be daunting to the most sophisticated companies, but it can be overwhelming for those with little to no technical resources. The automation capabilities we're putting into Watson Studio are designed to smooth the process and help clients start building machine learning models and experiments faster."


What is The Role Of A Machine Learning Engineer?

#artificialintelligence

Machine learning engineers play a key role in machine learning model development. They are responsible for multiple tasks, from coding and deployment to testing and troubleshooting the issues that arise while developing such models. To understand what exactly a machine learning engineer does, it helps to look at the roles, duties, and actual tasks performed by these professionals. Machine learning engineers are sometimes also called data scientists, as they study and transform data science prototypes and algorithms while working on ML-based models. Machine learning engineers carry out many other responsibilities as well.


Artificial Intelligence Creates a New Generation of Machine Learning

#artificialintelligence

Headquartered in Silicon Valley with offices in Shanghai and Hangzhou in China, R2.ai Inc. is growing rapidly. We sat down with the company's Founder and CEO to talk about AI that creates AI, and how automation is going to affect jobs in the future. Originally a chemist, Yiwen Huang, PhD, ended up working in Artificial Intelligence (AI) and Machine Learning (ML) 23 years ago when doing research using AI to identify molecular structures in chemicals. "I found the world of machine learning and computing so fascinating that I decided to switch into computer science. Since then, I've worked for 20 years in this space with data and data management, machine learning, and enterprise software development," he tells Interesting Engineering.


Three Machine Learning Tasks Every AI Team Should Automate

#artificialintelligence

With the war on AI talent heating up, the new "unicorns" of Silicon Valley are high-performing data scientists. Although as recently as 2015 there was a surplus of data scientists, in the most recent quarter there was a deficit of 150,000. This quant crunch will only deepen as demand for experts who can develop machine learning models continues to outpace the supply from graduate programs. How do leading companies take steps to mitigate the damage of the quant crunch on their ability to earn a return on machine learning and AI investments? They empower the experts they do have with a combination of tools and techniques that automate as much of the tedious components of the modeling process as possible.
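
One of the tedious modeling tasks commonly automated is hyperparameter tuning. The sketch below shows this with scikit-learn's grid search; the dataset and parameter grid are hypothetical stand-ins, not taken from the article.

```python
# Illustrative automation of hyperparameter search with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Synthetic dataset standing in for real training data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Exhaustively try regularization strengths instead of tuning by hand.
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_)
```

The same pattern extends to feature selection and model comparison, which is how teams stretch scarce data-science hours further.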


Getting AI/ML and DevOps working better together

#artificialintelligence

Artificial Intelligence (AI) and machine learning (ML) technologies extend the capabilities of software applications that are now found throughout our daily life: digital assistants, facial recognition, photo captioning, banking services, and product recommendations. The difficult part about integrating AI or ML into an application is not the technology, or the math, or the science or the algorithms. The challenge is getting the model deployed into a production environment and keeping it operational and supportable. Software development teams know how to deliver business applications and cloud services. AI/ML teams know how to develop models that can transform a business.
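
A minimal sketch of the hand-off the article describes: the ML team trains and serializes a model artifact, and the software team loads that artifact and exposes a scoring entry point in production. All names and the pickle-based transfer are illustrative assumptions, not a specific team's process.

```python
import pickle

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# --- ML team: train a model and export an artifact ---
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)
artifact = pickle.dumps(model)  # in practice, written to shared storage

# --- App/DevOps team: load the artifact and serve predictions ---
served_model = pickle.loads(artifact)

def predict(features):
    """Production-side scoring entry point (hypothetical)."""
    return int(served_model.predict([features])[0])

print(predict(X[0].tolist()))
```

Keeping the artifact format and the scoring interface stable is what lets the two teams iterate independently.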


Data Scientists, Not AutoML, Driving Model Development

#artificialintelligence

As machine learning technology matures and moves into the mainstream, analysts are attempting to figure out who within organizations is building the machine learning models used in production, and what tools and methodologies they are using. It turns out that those furthest down the learning curve have well-established data science teams building machine learning models for production. Meanwhile, much-ballyhooed AutoML services from Google and other cloud vendors are so far gaining little traction among enterprises, according to a recent survey of the state of machine learning adoption by O'Reilly Media. About half of survey respondents said their data science teams build machine learning models, indicating that those companies were further along in development and have adopted processes used for software development to build machine learning models. About three-quarters characterized their models as "sophisticated" while two-thirds described their data science teams as "early adopters" of machine learning technology.


PipelineAI Leverages Docker to Simplify AI Model Development - Container Journal

#artificialintelligence

PipelineAI is taking advantage of the community edition of Docker to help organizations develop artificial intelligence (AI) applications faster at less cost. Company CEO Chris Fregly says PipelineAI Community Edition is a free, publicly hosted edition of PipelineAI Enterprise Edition, with which developers can employ Apache Kafka streaming software to drive data in real time into AI models built using Spark ML, Scikit-Learn, Xgboost, R, TensorFlow, Keras or PyTorch frameworks. The PipelineAI platform makes use of graphics processing units (GPUs) and traditional x86 processors to host an instance of Docker Community Edition that makes available various AI frameworks that need to access data in real time, says Fregly. Over time, most AI applications will need to access multiple AI models to automate a process. PipelineAI aims to reduce the cost of creating those AI models by making it less expensive for developers to determine which AI framework will work best during the life cycle of their AI application.
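
The streaming-scoring pattern described above can be sketched as follows: JSON messages, of the kind a Kafka consumer would yield, are parsed and scored by a Scikit-Learn model. The in-memory message list stands in for a real Kafka topic, and the payload field names are hypothetical.

```python
import json

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a toy model standing in for one built in any supported framework.
X, y = make_classification(n_samples=200, n_features=4, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Stand-in for messages consumed from a Kafka topic.
messages = [json.dumps({"features": row.tolist()}) for row in X[:3]]

def score_stream(raw_messages):
    """Parse each streamed message and yield a model prediction."""
    for raw in raw_messages:
        payload = json.loads(raw)
        yield int(model.predict([payload["features"]])[0])

print(list(score_stream(messages)))
```

In a real deployment the loop body stays the same; only the message source changes from a list to a Kafka consumer.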


What is Microsoft Azure Machine Learning? - Definition from WhatIs.com

#artificialintelligence

Azure Machine Learning Workbench: Workbench is an end-user Windows/MacOS application that handles primary tasks for a machine learning project, including data import and preparation, model development, experiment management and model deployment in multiple environments. Azure Machine Learning Experimentation Service: This service interoperates with Workbench to provide project management, access control and version control (through Git). It helps support the execution of machine learning experiments to build and train models. Experimentation also focuses on the construction of virtualized environments, which enables developers to properly isolate and operate models, and records details of each run to aid in model development. Azure Machine Learning Model Management: This service helps developers track and manage model versions; register and store models; process models and dependencies into Docker image files; register those images in their own Docker registry in Azure; and deploy those container images to a wide assortment of computing environments, including IoT edge devices.
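
To make the register/version/retrieve workflow of a model-management service concrete, here is a toy in-memory registry. This is a hypothetical stand-in for illustration only, not Azure Machine Learning Model Management's actual API.

```python
import pickle

class ModelRegistry:
    """Toy model registry: stores serialized versions per model name."""

    def __init__(self):
        self._store = {}  # name -> list of serialized model versions

    def register(self, name, model):
        """Store a new version of a model; returns its 1-based version id."""
        versions = self._store.setdefault(name, [])
        versions.append(pickle.dumps(model))
        return len(versions)

    def load(self, name, version=None):
        """Load a specific version, or the latest if none is given."""
        versions = self._store[name]
        idx = (version or len(versions)) - 1
        return pickle.loads(versions[idx])

registry = ModelRegistry()
v1 = registry.register("churn", {"weights": [0.1, 0.2]})
v2 = registry.register("churn", {"weights": [0.3, 0.4]})
print(v1, v2, registry.load("churn")["weights"])
```

A production service additionally packages each version with its dependencies (e.g., as a Docker image) and tracks deployment targets, as the article describes.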


Logistic Ensemble Models

arXiv.org Machine Learning

Predictive models that are developed in a regulated industry or a regulated application, like determination of credit worthiness, must be interpretable and rational (e.g., meaningful improvements in basic credit behavior must result in improved credit worthiness scores). Machine Learning technologies provide very good performance with minimal analyst intervention, making them well suited to a high volume analytic environment, but the majority are black box tools that provide very limited insight or interpretability into key drivers of model performance or predicted model output values. This paper presents a methodology that blends one of the most popular predictive statistical modeling methods for binary classification with a core model enhancement strategy found in machine learning. The resulting prediction methodology provides solid performance, from minimal analyst effort, while providing the interpretability and rationality required in regulated industries, as well as in other environments where interpretation of model parameters is required (e.g. businesses that require interpretation of models, to take action on them).
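
The general idea the abstract describes, pairing an interpretable binary classifier with an ensemble strategy from machine learning, can be sketched as below. Bagging over logistic regressions is used here as an illustrative stand-in for the paper's method, not its exact procedure; each base model's coefficients remain inspectable.

```python
import numpy as np

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic binary-classification data standing in for, e.g., credit data.
X, y = make_classification(n_samples=400, n_features=6, random_state=0)

# Ensemble of interpretable base models (bagged logistic regressions).
ensemble = BaggingClassifier(
    LogisticRegression(max_iter=1000),
    n_estimators=10,
    random_state=0,
).fit(X, y)

# Interpretability: inspect coefficients averaged across the bagged models.
mean_coef = np.mean([m.coef_[0] for m in ensemble.estimators_], axis=0)
print(mean_coef.shape)
```

Averaged (or per-model) coefficients give the kind of driver-level insight that regulated applications require, which a pure black-box ensemble would not.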


Azure Machine Learning services ease data science struggles

@machinelearnbot

At a high level, a data science workflow consists of model development, experimentation and tuning, which then culminate in model deployment and management. Azure Machine Learning Workbench addresses the first step in that process. Unlike the existing Azure Machine Learning development interface, Workbench is a packaged client application for Windows or macOS. Workbench provides a data science integrated development environment that facilitates data ingestion and preparation, model development and testing, and deployment to various runtime environments. Much like Visual Studio, Workbench uses a project metaphor to manage development via a logical container for model code, raw and processed data, Model Metrics and Run History.