How predictive APIs are used at Upwork, Microsoft and BigML (and how they could be standardized) -- PAPIs stories


PAPIs '15, the 2nd International Conference on Predictive APIs and Applications, took place in Sydney, Australia and featured 4 research presentations. The corresponding papers were compiled into proceedings published in the Journal of Machine Learning Research (Volume 50 of the Workshop & Conference Proceedings series; the whole proceedings can also be downloaded as a single pdf).

The first paper of the proceedings gives a behind-the-scenes look at Microsoft Azure ML, an MLaaS environment for authoring predictive models, experimenting with them, running them on a cloud infrastructure, and publishing them as web APIs. The Azure ML team presents the design principles, challenges encountered, and lessons learnt while building the platform.

While ML practitioners commonly measure a model's performance by its prediction accuracy, the second paper, by Brian Gawalt of Upwork, focuses on the concerns of the software engineers in charge of deploying models in production and scaling them: a model's throughput and response time.
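To make the throughput/latency distinction concrete, here is a minimal sketch of how an engineer might benchmark a deployed model's prediction call. The `predict` function is a hypothetical stand-in (not from the paper); the timing approach using the standard library is the point.

```python
import time
import statistics

def predict(features):
    # Hypothetical stand-in for a real model's prediction call.
    return sum(features) > 1.0

def benchmark(predict_fn, requests, warmup=10):
    """Measure per-request latency and overall throughput for a predictor."""
    for features in requests[:warmup]:
        predict_fn(features)  # warm up before timing

    latencies = []
    start = time.perf_counter()
    for features in requests:
        t0 = time.perf_counter()
        predict_fn(features)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start

    latencies.sort()
    return {
        "throughput_rps": len(requests) / elapsed,          # requests per second
        "median_latency_s": statistics.median(latencies),   # typical response time
        "p95_latency_s": latencies[int(0.95 * len(latencies))],  # tail latency
    }

requests = [[0.2, 0.9], [1.5, 0.1]] * 500
stats = benchmark(predict, requests)
print(stats)
```

Accuracy tells you whether the model is worth serving; numbers like these tell you whether the service will hold up under production load.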