Telstra has used open source machine learning technology to answer the age-old question that plagues every marketer: how effective is my ad spend? The telco wields one of the biggest marketing budgets in Australia, but that doesn't stop Telstra from wanting to track the performance of every dollar spent. The company previously faced a six-month lag to get visibility into the effectiveness of its marketing spend; that is now down to five weeks using new marketing mix modelling developed in partnership with Accenture, Deakin University and Servian. The telco previously used a traditional econometric model to assess the performance of its marketing spend, pulling together 800 variables – which took two-and-a-half months to assemble – and then modelling this using regression techniques. "Six months after the marketing period had ended I could tell the CMO [chief marketing officer] and the marketers how effective their marketing was... six months ago," Telstra's director of research, insights & analytics Liz Moore told the recent Big Data & Analytics Innovation Summit in Sydney.
The point of difference was that one was optimized for the CPU while the other was optimized for the GPU. The reason is that during inference the CPU can sometimes be faster than the GPU, whereas during training the GPU is almost always faster. These multiple frameworks created a lot of confusion among developers, and since they sat quite close to the hardware (for high performance), they were difficult to program against.
That is to say, if you have a memory-heavy task that involves dealing with text (natural language processing), CoreML will automatically run it on the CPU, whereas if you have a compute-heavy task like image classification, it will use the GPU. To make the conversion process simple, Apple designed its own open format for representing cross-framework machine learning models, called mlmodel. The SMS Spam Collection v.1 is a public set of labeled SMS messages that have been collected for mobile phone spam research. Every time we run our app, Xcode will compile our machine learning model so that it can be used for making predictions.
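To make the spam task concrete, here is a minimal sketch of the kind of classifier one might train on a labeled SMS corpus before converting it to mlmodel: a word-level naive Bayes written in plain Python. The four toy messages are made-up stand-ins for the SMS Spam Collection, not actual entries from it.

```python
import math
from collections import Counter

def train_naive_bayes(messages):
    """Train word-level naive Bayes on (text, label) pairs, label in {"spam", "ham"}."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    label_counts = Counter()
    for text, label in messages:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the highest Laplace-smoothed log posterior."""
    vocab = set(word_counts["spam"]) | set(word_counts["ham"])
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in ("spam", "ham"):
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical miniature stand-in for the SMS Spam Collection.
toy_data = [
    ("win a free prize now", "spam"),
    ("claim your free cash prize", "spam"),
    ("are we still on for lunch", "ham"),
    ("see you at home tonight", "ham"),
]
model = train_naive_bayes(toy_data)
```

In a real pipeline one would train on the full corpus and then convert the resulting model to the mlmodel format for use in an app.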
Google's TensorFlow team released TensorFlow Lattice today to help developers ensure that their machine learning models adhere to global trends even when training data is noisy. Lattice draws on the concept of lookup tables to simplify the process of defining macro rules that constrain models. A lookup table is a representation of data that pairs inputs (keys) with outputs (values). Roughly speaking, the TensorFlow team's approach is to train the lookup table values on training data to maximize accuracy subject to constraints. This is really just a fancy way of saying that it allows developers to ensure that as an input moves in a single direction, the output moves in the same direction.
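The monotonicity idea can be illustrated with a toy lookup table in plain Python. This is not TensorFlow Lattice's actual API; enforcing a running maximum over fitted values is just a crude stand-in for training under a monotonicity constraint, and the numbers are hypothetical.

```python
from bisect import bisect_right

def make_monotonic(values):
    """Enforce a non-decreasing constraint on lookup table values
    by taking a running maximum (a crude stand-in for constrained training)."""
    out, best = [], float("-inf")
    for v in values:
        best = max(best, v)
        out.append(best)
    return out

def lookup(keys, values, x):
    """Piecewise-linear interpolation between table entries; clamp at the ends."""
    if x <= keys[0]:
        return values[0]
    if x >= keys[-1]:
        return values[-1]
    i = bisect_right(keys, x)
    k0, k1 = keys[i - 1], keys[i]
    v0, v1 = values[i - 1], values[i]
    return v0 + (v1 - v0) * (x - k0) / (k1 - k0)

# Noisy fitted values dip at key 2; the constraint smooths that out,
# so the output never decreases as the input increases.
keys = [0, 1, 2, 3]
fitted = [0.0, 0.5, 0.4, 0.9]        # hypothetical values learned from noisy data
constrained = make_monotonic(fitted)  # [0.0, 0.5, 0.5, 0.9]
```

The constrained table respects the global trend even though the noisy fit did not, which is the guarantee Lattice aims to give at scale.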
Nowadays it is pretty common to pack such a server and all its dependencies into a package, configure it, and deploy the package as a whole. We need to create a so-called Docker image, create a container from that image, and run it. In our case, we should test that the container runs and that the server provided by TensorFlow Serving starts successfully, accepts requests to our model, and responds to them. TensorFlow Serving provides Docker images, so we can clone the repository and use them.
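Such a smoke test might look like the sketch below, assuming a container started from a TensorFlow Serving image with its REST port (8501 by default) published, and assuming TensorFlow Serving's REST predict endpoint (`/v1/models/<name>:predict`, taking a JSON body with an `instances` list). The model name `my_model` is hypothetical.

```python
import json
import urllib.request

def build_predict_request(host, model_name, instances, port=8501):
    """Build a request for TensorFlow Serving's REST predict endpoint."""
    url = f"http://{host}:{port}/v1/models/{model_name}:predict"
    body = json.dumps({"instances": instances}).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

def smoke_test(host, model_name, instances):
    """Send one request and check that the running container answers with predictions."""
    req = build_predict_request(host, model_name, instances)
    with urllib.request.urlopen(req, timeout=5) as resp:
        payload = json.loads(resp.read())
    assert "predictions" in payload, payload
    return payload["predictions"]
```

Running `smoke_test("localhost", "my_model", [[1.0, 2.0]])` against a live container checks exactly the three things we care about: the server is up, it accepts requests to our model, and it responds to them.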
Machine learning models must be trained on historical data, which demands the creation of a prediction data pipeline, an activity comprising multiple tasks including data processing, feature engineering, and tuning. This progress is achieved when both teams collaborate on the same automated machine learning platform, one that offers different deployment options supporting the needs of the business as identified by the IT team. Success in this initiative requires companies to manage AI as a business initiative, to have hundreds of machine learning models in production, and to move models from development to production in ways that are simple, robust, fast, and repeatable. Automated machine learning platforms allow business people to develop the models they need to transform operations while collaborating with specialists, including data scientists and IT professionals.
Most of the time, the real use of a machine learning model lies at the heart of a product; it may be a small component of an automated mailer system or a chatbot. These are the times when the barriers seem insurmountable. For example, the majority of ML folks use R or Python for their experiments, but the consumers of those ML models are software engineers who use a completely different stack.
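One common way to lower that barrier is to put the model behind a language-neutral interface such as HTTP plus JSON, so engineers on any stack can call it without touching R or Python internals. A minimal sketch using only the Python standard library, with a hypothetical weighted-sum function standing in for a real trained model:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    """Toy stand-in for a trained model: a fixed weighted sum."""
    weights = [0.4, 0.6]
    return sum(w * x for w, x in zip(weights, features))

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Expect a JSON body like {"features": [1.0, 2.0]}.
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"prediction": predict(payload["features"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

def serve(port=8000):
    """Block forever, answering prediction requests over HTTP."""
    HTTPServer(("", port), PredictHandler).serve_forever()
```

A consumer written in Java, Go, or anything else then just POSTs JSON to the endpoint; the product team never needs to embed the data scientists' stack.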
A group of researchers from the University of Washington, the University of Michigan, Stony Brook University, and the University of California, Berkeley found that adding a few stickers or a bit of spray paint to road signs caused deep neural network-based classifiers to mistake them for other types of signs -- understandably, a big cause for concern. In a paper titled "Robust Physical-World Attacks on Machine Learning Models," the researchers describe how they developed a new "attack algorithm" capable of creating "adversarial perturbations" -- visually altering signs in a number of real-world ways so that computer vision technology will misclassify them, regardless of distance or viewing angle. The team found that with this approach they were able to fool a machine 100 percent of the time into classifying a stop sign as a 45-mile-per-hour speed limit sign, and a right-turn sign as a stop sign. While the dataset of a few thousand training examples was relatively small, the results plainly show the potential vulnerabilities of the deep neural networks used in autonomous driving systems when real objects are modified.
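The core idea behind adversarial perturbations can be shown on a toy model. This is not the paper's attack algorithm; it is a minimal gradient-sign-style sketch on a two-feature linear classifier with made-up numbers, showing how a small, deliberately structured nudge to the input flips the predicted class.

```python
def classify(w, b, x):
    """Toy linear sign classifier: returns +1 or -1."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

def perturb(w, x, eps):
    """Gradient-sign-style perturbation for a linear model: nudge each
    feature by eps in the direction that increases the score."""
    sign = lambda v: 1 if v > 0 else (-1 if v < 0 else 0)
    return [xi + eps * sign(wi) for wi, xi in zip(w, x)]

# Hypothetical numbers: the clean input is classified -1 (say, "stop sign"),
# but a small structured perturbation flips it to +1 ("speed limit sign").
w, b = [1.0, -2.0], 0.0
x = [0.5, 0.4]                   # clean: score = 0.5 - 0.8 = -0.3 -> class -1
x_adv = perturb(w, x, eps=0.5)   # perturbed: score = 1.0 + 0.2 = 1.2 -> class +1
```

The stickers and spray paint in the study play the role of `perturb` here: a change small enough to leave the sign obviously readable to a human, yet aimed precisely where the model is most sensitive.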
Microsoft and Facebook have announced a joint project to make it easier for data analysts to exchange trained models between different machine learning frameworks. The Open Neural Network Exchange (ONNX) format is meant to provide a common way to represent the data used by neural networks. Caffe2 and PyTorch (both Facebook projects) and Cognitive Toolkit (Microsoft's project) will add support sometime in September. This story, "ONNX makes machine learning models portable, shareable," was originally published by InfoWorld.
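The interchange idea itself is simple to sketch. ONNX's real format is far richer (a serialized graph of operators), but the toy below captures the principle: one "framework" exports a model into a neutral document, and a completely different "framework" rebuilds a working predictor from it. The JSON schema here is invented for illustration only.

```python
import json

def export_model(weights, bias):
    """Serialize a toy linear model into a framework-neutral document
    (a stand-in for the role ONNX plays for real neural networks)."""
    return json.dumps({"op": "linear", "weights": weights, "bias": bias})

def import_model(doc):
    """Rebuild a callable predictor from the exchanged document."""
    spec = json.loads(doc)
    assert spec["op"] == "linear"
    return lambda x: sum(w * xi for w, xi in zip(spec["weights"], x)) + spec["bias"]

# "Framework A" exports; "framework B" imports and runs the same model.
doc = export_model([0.5, -1.0], 2.0)
predict = import_model(doc)
```

Because both sides agree only on the document, neither needs to know anything about the other's internals; that is the portability ONNX aims to provide for real networks.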