Cloud Functions


Tech predictions for 2023 - AI/ML, cyber security

#artificialintelligence

As Steve Jobs said – 'Let's go invent tomorrow instead of worrying about what happened yesterday.' Advancements, inventions, and improvements in the technology world are progressing at breakneck speed. It takes a lot for our modern workforce and enterprises to keep up with the pace of technology. Geopolitical and macroeconomic factors have an impact on tech trends as well. At this very moment, certain economies around the world are either in recession or on a path toward slowdown.


Content moderation using machine learning: a dual approach -- The TensorFlow Blog

#artificialintelligence

I've often wondered why anonymity drives people to say things that they'd never dare say in person, and it's unfortunate that comment sections for videos and articles are so often toxic! If you're interested in content moderation, you can use machine learning to help detect toxic posts that you can then consider for removal. Machine learning is a powerful tool for all sorts of natural language processing tasks, including translation, sentiment analysis, and predictive text. But perhaps it feels outside the scope of your work. After all, when you're building a website in JavaScript, you don't have time to collect and validate data, train a model using Python, and then implement some backend in Python on which to run said model.
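The moderation decision the excerpt describes can be sketched independently of the model itself: score a comment, then publish it or flag it for human review. This is a minimal sketch, assuming a score in [0, 1]; the keyword list and the names `toxicity_score` and `moderate` are illustrative placeholders for a real trained classifier, not part of the article's TensorFlow example.

```python
# Toy stand-in for a learned toxicity model; a real system would call an
# ML model here instead of matching a fixed keyword list.
TOXIC_TERMS = {"idiot", "stupid", "hate"}

def toxicity_score(comment):
    """Return a score in [0, 1]; higher means more likely toxic."""
    words = comment.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in TOXIC_TERMS)
    return min(1.0, hits / len(words) * 5)

def moderate(comment, threshold=0.5):
    """Route a comment: publish it, or flag it for human review."""
    return "flag_for_review" if toxicity_score(comment) >= threshold else "publish"
```

Thresholding rather than auto-deleting keeps a human in the loop, which matters because any classifier will produce false positives.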


What Serverless Computing Is and Should Become

Communications of the ACM

Let us examine an illustrative example from big data processing. Consider a simple query that might arise in an ecommerce setting: computing an average over 10 billion records using weights derived from one million categories. This workload has the potential for a lot of parallelism, so it benefits from the serverless illusion of infinite resources. We present two application-specific serverless offerings that cater to this example and illustrate how the category affords multiple approaches. One could use the AWS Athena big data query engine, a tool programmed using SQL (Structured Query Language), to execute queries against data in object storage.
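The query's parallelism can be made concrete with a map-reduce sketch: each worker computes partial weighted sums over its shard of records, and a reducer combines them. This is a minimal stand-in for what Athena (or a fleet of serverless functions) would do; the data shapes and function names are assumptions for illustration.

```python
def partial_sums(shard, weights):
    """Map step: weighted sums over one shard of (category_id, value) records."""
    num = den = 0.0
    for category, value in shard:
        w = weights.get(category, 0.0)
        num += w * value
        den += w
    return num, den

def combine(parts):
    """Reduce step: merge partial sums into the final weighted average."""
    num = sum(p[0] for p in parts)
    den = sum(p[1] for p in parts)
    return num / den if den else 0.0
```

Because `partial_sums` touches only its own shard, the 10 billion records can be split across as many workers as the platform will grant, which is exactly the "infinite resources" illusion the article refers to.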


AI in practice: Identify defective components with AutoML in GCP

#artificialintelligence

Until recently, the use of artificial intelligence (AI) was only possible with great effort and by building one's own neural networks. Today, the barrier to entering the world of AI through cloud computing services has fallen dramatically. Thus, one can immediately use current AI technology for the (partial) automation of the quality control of components without having to invest heavily in AI research. In this article, we show, by way of example, how such an AI system can be implemented on the Google Cloud Platform (GCP). For this purpose, we train a model using AutoML and then integrate it, using Cloud Functions and App Engine, into a process that allows manual corrections in quality control.
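The "partial automation with manual corrections" idea reduces to a confidence gate: accept high-confidence predictions automatically, and route the rest to a human inspector. This is an illustrative sketch; `classify` is a stub standing in for a call to a deployed AutoML Vision endpoint, and the threshold value is an assumption.

```python
def classify(image_id):
    """Stub for a call to a deployed AutoML Vision model.

    A real implementation would send the image to the prediction endpoint
    and return its top label and confidence.
    """
    fake_results = {"part-001": ("ok", 0.97), "part-002": ("defect", 0.62)}
    return fake_results.get(image_id, ("unknown", 0.0))

def route(image_id, auto_threshold=0.9):
    """Accept confident predictions automatically; queue the rest for review."""
    label, confidence = classify(image_id)
    if confidence >= auto_threshold:
        return {"image": image_id, "label": label, "decision": "automatic"}
    return {"image": image_id, "label": label, "decision": "manual_review"}
```

Tuning `auto_threshold` trades inspector workload against the risk of a defective part slipping through automatically.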


Integrating artificial intelligence into your IoT solutions

#artificialintelligence

In this article, you learn how to use artificial intelligence, or at least machine learning, to raise the alarm when there are changes in a supposedly static environment, such as a hay barn while the hay is drying after the harvest. I use two methods to achieve this: visual recognition and image comparison. Visual recognition requires more processing than can be done easily on a Raspberry Pi. The solution here is to upload pictures to the IBM Cloud, and ask IBM Watson Visual Recognition to identify the objects in them. If a new object appears, or if an expected object disappears (and doesn't show for a whole day, because objects may only be identifiable under certain lighting conditions), this AI system raises an alarm. Because object recognition using IBM Watson Visual Recognition requires significant bandwidth to upload pictures to the IBM Cloud, I also designed an AI system that can work on a low-bandwidth network connection, such as a LoRa connection. To detect changes in such an environment, the second part of this article uses image comparison. Images are taken every ten minutes, and each time the image is compared to the image taken 24 hours prior. This way, the changes in lighting conditions will hopefully be minor enough to prevent false alarms. To implement visual recognition in the cloud, we will base our architecture on the short-range architecture in the first article. The devices in the hay barn use WiFi to communicate with an access point in the farmhouse, which is connected to the Internet.
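The low-bandwidth comparison step can be sketched with images modeled as flat lists of grayscale pixel values. Comparing against the frame from 24 hours earlier (rather than the previous frame) keeps lighting roughly comparable, as the article notes; the metric and threshold below are assumptions for this sketch, not the article's exact implementation.

```python
def mean_abs_diff(img_a, img_b):
    """Average absolute per-pixel difference between two same-size images."""
    assert len(img_a) == len(img_b), "images must have the same dimensions"
    return sum(abs(a - b) for a, b in zip(img_a, img_b)) / len(img_a)

def changed(current, same_time_yesterday, threshold=10.0):
    """Raise an alarm when the scene differs markedly from 24 h ago."""
    return mean_abs_diff(current, same_time_yesterday) > threshold
```

Since only a boolean (or a short alarm message) needs to leave the device, this fits within a LoRa link's tiny payload budget.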


Leverage deep learning in IBM Cloud Functions

#artificialintelligence

Based on Apache OpenWhisk, IBM Cloud Functions is a Functions as a Service (FaaS) platform that makes it easy to build and deploy serverless applications. In this tutorial, you'll build a serverless application using IBM Cloud Functions that monitors the content of a Cloud Object Storage bucket and analyzes the content of images that are uploaded to the bucket by a human or an automated process. For illustrative purposes, analysis is performed by a deep learning microservice from the Model Asset eXchange and analysis results are stored as JSON files in the same bucket. You can easily adapt the outlined approach to take advantage of hosted cognitive services, such as those provided by IBM Watson, and to store results in a NoSQL datastore like Cloudant or a relational database. By completing this introductory tutorial, you learn how to monitor a Cloud Object Storage bucket for changes (new objects, updated objects, or deleted objects) using Cloud Functions and how to use deep learning microservices from the Model Asset eXchange to automatically analyze those objects in near real time.
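The core of such an application is an OpenWhisk-style Python action: a `main` function that receives a dict of parameters and returns a JSON-serializable dict. The sketch below is illustrative, not the tutorial's code: the trigger payload fields (`bucket`, `key`, `status`) and the `analyze` stub are assumptions, and a real action would invoke a Model Asset eXchange microservice over HTTP and write the result back to the bucket.

```python
def analyze(bucket, key):
    """Stub for calling a MAX deep-learning microservice on the object."""
    return {"predictions": [{"label": "example", "probability": 0.9}]}

def main(params):
    """OpenWhisk action entry point, invoked once per bucket-change event."""
    key = params.get("key", "")
    # Skip deletions and non-image objects; nothing to analyze.
    if params.get("status") == "deleted" or not key.endswith((".jpg", ".png")):
        return {"skipped": key}
    result = analyze(params.get("bucket", ""), key)
    # In the tutorial, the result would be stored as a JSON file in the bucket.
    return {"object": key, "analysis": result}
```

Guarding on the event type matters: the same trigger fires for new, updated, and deleted objects, and writing the JSON result back into the watched bucket would itself fire the trigger again if results were not filtered out.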


Tips for a cost-effective machine learning project - KDnuggets

#artificialintelligence

You just released a machine learning project. It can be a new product at your start-up, a proof of concept for a client demo, or a personal project that enriches your portfolio. You are not looking for a production-grade site; you want to get the job done so that a few users can test your product. This post is a follow-up and an update to this previous post, where I introduced raplyrics.eu.


Models as Serverless Functions

#artificialintelligence

I recently published Chapter 3 of my book-in-progress on Leanpub. The goal with this chapter is to empower data scientists to leverage managed services to deploy models to production and own more of DevOps. Serverless technologies enable developers to write and deploy code without needing to worry about provisioning and maintaining servers. One of the most common uses of this technology is serverless functions, which make it much easier to author code that can scale to match variable workloads. With serverless function environments, you write a function that the runtime supports, specify a list of dependencies, and then deploy the function to production. The cloud platform is responsible for provisioning servers, scaling up more machines to match demand, managing load balancers, and handling versioning. Since we've already explored hosting models as web endpoints, serverless functions are an excellent tool to utilize when you want to rapidly move from prototype to production for your predictive models. Serverless functions were first introduced on AWS in 2015 and GCP in 2016. Both of these systems provide a variety of triggers that can invoke functions and a number of outputs that the functions can trigger in response. While it's possible to use serverless functions to avoid writing complex code for gluing different components together in a cloud platform, we'll explore a much narrower use case in this chapter.
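A common pattern when serving models this way is to load the model once per container (at cold start) and reuse it across warm invocations. This is a minimal sketch of that pattern under stated assumptions: the "model" is a trivial linear stand-in, and `handler` plays the role of the entry point a FaaS runtime would invoke per request.

```python
_MODEL = None  # module-level state survives across warm invocations

def load_model():
    """Expensive one-time setup, e.g. reading weights from object storage."""
    return {"weights": [0.5, -0.25], "bias": 1.0}

def handler(event):
    """Per-request entry point; pays the model-load cost only on cold start."""
    global _MODEL
    if _MODEL is None:
        _MODEL = load_model()
    x = event["features"]
    score = sum(w * xi for w, xi in zip(_MODEL["weights"], x)) + _MODEL["bias"]
    return {"prediction": score}
```

Keeping the load out of the per-request path is what makes serverless serving latency-competitive once a container is warm; the cold-start penalty is paid only when the platform spins up a new instance.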


Crowdsourcing ML training data with the AutoML API and Firebase

#artificialintelligence

Want to build an ML model but don't have enough training data? In this post I'll show you how I built an ML pipeline that gathers labeled, crowdsourced training data, uploads it to an AutoML dataset, and then trains a model. I'll be showing an image classification model using AutoML Vision in this example, but the same pipeline could easily be adapted to AutoML Natural Language. Here's an overview of how it works. Want to jump to the code? The full example is available on GitHub.
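The pipeline's shape can be sketched with every cloud call stubbed out: in the real post these steps would use Firebase (to collect labeled images) and the AutoML API (to create a dataset, import the data, and start training). All function names and parameters here are illustrative stubs, not the post's actual API calls.

```python
def collect_labeled_examples():
    """Stub for reading crowdsourced (image, label) pairs from Firebase."""
    return [("img1.jpg", "cat"), ("img2.jpg", "dog")]

def upload_to_dataset(examples, dataset_id):
    """Stub for importing examples into an AutoML Vision dataset."""
    return {"dataset": dataset_id, "imported": len(examples)}

def run_pipeline(dataset_id="demo-dataset", min_examples=2):
    """Gather crowdsourced labels, import them, then kick off training."""
    examples = collect_labeled_examples()
    if len(examples) < min_examples:
        return {"status": "waiting_for_more_data"}
    status = upload_to_dataset(examples, dataset_id)
    status["status"] = "training_started"  # stub for an AutoML training call
    return status
```

Gating on a minimum example count reflects a practical constraint of this kind of pipeline: training runs cost money, so you only trigger one once enough crowdsourced data has accumulated.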