Cloud Computing: AI-Alerts

How the public clouds are innovating on AI


The three big cloud providers, Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), want developers and data scientists to develop, test, and deploy machine learning models on their clouds. It's a lucrative business for them: testing models often requires bursts of infrastructure, and models in production often require high availability. But the providers don't want to compete for your business only on infrastructure, service levels, and pricing, so they build versatile on-ramps that make it easier for customers to use their machine learning capabilities. Each public cloud offers multiple data storage options, including serverless databases, data warehouses, data lakes, and NoSQL datastores, making it likely that you will develop models close to where your data resides.

Huawei: Uighur surveillance fears lead PR exec to quit

BBC News - Technology

The report referenced an "interoperability test [in which] Huawei and Megvii jointly provided a face-recognition solution based on Huawei's video cloud solution. In the solution, Huawei provided servers, storage, network equipment, its FusionSphere cloud platform, cameras and other software and hardware, [while] Megvii provided its dynamic facial-recognition system software".

IBM open-sources Kubeflow Pipelines on Tekton for portable machine learning models - SiliconANGLE


IBM Corp. said today it's hoping to provide a standardized way for developers to create and deploy machine learning models in production and make them portable to any cloud platform. To do so, it said it's open-sourcing Kubeflow Pipelines, part of the Kubeflow machine learning platform, on Tekton, a continuous integration/continuous delivery (CI/CD) platform developed by Google LLC. Tekton is popular with developers who use Kubernetes to manage containerized applications, which can run unchanged across many computing environments. IBM said it created Kubeflow Pipelines on Tekton in response to the need for a more reliable solution for deploying, monitoring and governing machine learning models in production on any cloud platform. That's important, IBM says, because hybrid cloud models are rapidly becoming the norm for many enterprises that want to take advantage of the benefits of running their most critical business applications across distributed computing environments.
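What makes pipelines built on Tekton portable is that Tekton expresses workflows as Kubernetes custom resources. As a rough illustration (the pipeline and task names below are hypothetical, not taken from IBM's release), a minimal Tekton Pipeline chaining a training step to a deployment step might look like this:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: ml-train-and-deploy    # hypothetical pipeline name
spec:
  tasks:
    - name: train-model
      taskRef:
        name: train-task       # a Task that runs the training container
    - name: deploy-model
      runAfter:
        - train-model          # deployment waits for training to finish
      taskRef:
        name: deploy-task      # a Task that pushes the model to serving
```

Because the pipeline is just a Kubernetes object, any cluster running Tekton, on premises or on any public cloud, can execute it unchanged, which is the portability Kubeflow Pipelines on Tekton targets.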

ASIC Clouds

Communications of the ACM

Specialized replicated compute accelerators (RCAs) are scaled out multiplicatively: multiple copies per ASIC, multiple ASICs per server, multiple servers per rack, and multiple racks per datacenter. The server controller can be an FPGA, a microcontroller, or a Xeon processor. Power delivery and cooling systems are customized to the ASIC's needs, and, if required, DRAM is placed on the PCB as well. Each ASIC interconnects its RCAs using a customized on-chip network.

Raises $8M to Advance Auto-Adaptive Machine Learning


The platform offers continual learning to build and automate ML pipelines from research to production, automatically retraining models in production on incoming data, with advanced monitoring to ensure that models stay accurate, healthy and performing well. Its machine learning management standardizes the full ML process in a collaborative environment, supporting management of models, experiments, data and research for "100% reproducible data science". It is an open platform that works with any framework or programming language, and its connectivity to any compute resource (cloud or on-premises) lets companies utilize on-premises infrastructure, including Kubernetes, data lakes, Hadoop, and more, as well as scale to any cloud service.
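The "retrain when monitoring detects degradation" pattern that such platforms automate can be sketched in a few lines. The metric and threshold below are illustrative assumptions, not details from the announcement:

```python
# Minimal sketch of auto-retraining: watch a production metric and
# trigger a retraining run when it drops below a threshold (values assumed).

ACCURACY_THRESHOLD = 0.90  # hypothetical acceptable accuracy


def should_retrain(recent_accuracy: float) -> bool:
    """Decide whether the monitored model needs retraining."""
    return recent_accuracy < ACCURACY_THRESHOLD


def monitoring_step(recent_accuracy: float) -> str:
    # In a real platform this would kick off a training pipeline run;
    # here we just report the decision.
    if should_retrain(recent_accuracy):
        return "retrain"
    return "healthy"


print(monitoring_step(0.95))  # healthy
print(monitoring_step(0.82))  # retrain
```

A production system would compute `recent_accuracy` from a sliding window of labeled feedback rather than take it as an argument, but the decision loop is the same.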

How the Google Coral Edge Platform Brings the Power of AI to Devices - The New Stack


The rise of the industrial Internet of Things (IoT) and artificial intelligence (AI) is making edge computing significant for enterprises. Many industry verticals, such as manufacturing, healthcare, automotive, transportation, and aviation, are considering investments in edge computing. Edge computing is fast becoming the conduit between the devices that generate data and the public cloud that processes it. In the context of machine learning and artificial intelligence, the public cloud is used for training the models and the edge is utilized for inference. To accelerate ML training in the cloud, public cloud vendors such as AWS, Azure, and the Google Cloud Platform (GCP) offer GPU-backed virtual machines.
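The cloud-training/edge-inference split can be made concrete with a toy example. Here "training" fits a trivial linear model, standing in for GPU-backed training in the cloud, and the edge device runs only the cheap inference step; the data and model form are illustrative assumptions:

```python
# Sketch of the train-in-cloud, infer-at-edge split. The expensive fitting
# step happens once (in the cloud); only the learned weight ships to the
# device, which does a single multiply per prediction.

def cloud_train(samples):
    """Fit y = w * x by least squares through the origin (toy 'training')."""
    num = sum(x * y for x, y in samples)
    den = sum(x * x for x, _ in samples)
    return num / den  # the learned weight, shipped to the edge


def edge_infer(weight, x):
    """Inference at the edge: one multiply, cheap enough for a device."""
    return weight * x


w = cloud_train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
print(edge_infer(w, 5.0))  # 10.0 (w == 2.0 for this toy data)
```

Real deployments replace the single weight with a compiled, often quantized, model artifact, but the asymmetry is the same: heavy compute in the cloud, lightweight inference at the edge.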

The Digital Transformation: How Legal is Changing from Paper to Hybrid-Cloud - Legaltech News


AI and machine learning technologies are emerging from the beta phase and taking center stage to transform traditional information governance solutions into key enablers of the intelligent connected law firm.

The Tricky Ethics of Google's Cloud Ambitions


Google's attempt to wrest more cloud computing dollars from market leaders Amazon and Microsoft got a new boss late last year. Next week, Thomas Kurian is expected to lay out his vision for the business at the company's cloud computing conference, building on his predecessor's strategy of emphasizing Google's strength in artificial intelligence. That strategy is complicated by controversies over how Google and its clients use the powerful technology. After employee protests over a Pentagon contract in which Google trained algorithms to interpret drone imagery, the cloud unit now subjects its own AI projects, and those of its customers, to ethical reviews. Those reviews have caused Google to turn away some business.

An Executive's Guide To Understanding Cloud-based Machine Learning Services


Amazon SageMaker, Microsoft Azure ML Services, Google Cloud ML Engine, and IBM Watson Knowledge Studio are examples of ML PaaS in the cloud. If your business wants to bring agility to machine learning model development and deployment, consider ML PaaS: it combines the proven techniques of CI/CD with ML model management.
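The CI/CD-plus-model-management combination is, at its core, a promotion gate: a candidate model is deployed only if it measurably beats the current production model on held-out metrics. A minimal sketch, with scores and thresholds as assumptions rather than any vendor's API:

```python
# Sketch of a CI/CD promotion gate for ML models: promote the candidate
# only if it improves on the production model's held-out score.

def promote(candidate_score: float, production_score: float,
            min_improvement: float = 0.01) -> bool:
    """Gate check: require a measurable improvement before deployment."""
    return candidate_score >= production_score + min_improvement


# Usage: a candidate scoring 0.91 against a 0.89 production model passes;
# one scoring 0.895 does not clear the improvement threshold.
print(promote(0.91, 0.89))   # True
print(promote(0.895, 0.89))  # False
```

In an ML PaaS this check would run automatically in the pipeline, alongside the usual CI steps of building, testing, and versioning the model artifact.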

Amazon Brings Machine Learning Smarts To Edge Computing Through AWS Greengrass


Amazon has been investing in all three key areas: IoT, edge computing, and machine learning. AWS IoT is a mature connected-devices platform that can deliver scalable M2M communication, bulk device onboarding, digital twins and analytics, along with tight integration with AWS Lambda for dynamic rules. AWS Greengrass extends AWS IoT to the edge by delivering local M2M messaging, a rules engine, and routing capabilities. The most recent addition, Amazon SageMaker, brought a scalable machine learning service to AWS; customers can use it for training and evolving models based on popular algorithms.
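The local rules-engine-and-routing capability that Greengrass brings to the edge can be illustrated with a toy version. The topics and handler names below are made up for illustration and are not AWS APIs:

```python
# Toy edge rules engine: match incoming device messages by topic prefix and
# route them to local handlers; unmatched topics fall through to the cloud.

def make_router(rules):
    """rules: list of (topic_prefix, handler_name) pairs, checked in order."""
    def route(topic):
        for prefix, handler in rules:
            if topic.startswith(prefix):
                return handler
        return "cloud-forward"  # no local rule: forward to the cloud
    return route


route = make_router([
    ("sensors/temperature", "local-hvac-control"),
    ("sensors/vibration", "local-anomaly-model"),
])

print(route("sensors/temperature/room1"))  # local-hvac-control
print(route("camera/frontdoor"))           # cloud-forward
```

The point of doing this matching locally, as Greengrass does, is that devices keep reacting to each other even when the link to the cloud is slow or down, and only unhandled traffic leaves the site.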