State of the Edge report projects edge computing will reach $800B by 2028

#artificialintelligence

A battle for control over edge computing environments is expected to drive a total of $800 billion in spending through 2028, according to a report published today by LF Edge, an arm of the Linux Foundation. The State of the Edge report is based on a bottom-up analysis of potential edge-infrastructure growth across multiple sectors, modeled by Tolaga Research. The forecast evaluates 43 use cases spanning 11 vertical industries. What these use cases have in common is a growing need to process and analyze data at the point where it is created and consumed. Historically, by contrast, IT organizations have deployed applications that process data in batch mode overnight.


NVIDIA Launches EGX - An Edge Computing Platform With Multi-Cloud And AI Capabilities

#artificialintelligence

At the Computex event in Taiwan, NVIDIA unveiled EGX, a multi-cloud, AI-enabled edge computing platform for enterprises. NVIDIA EGX is a unified edge computing stack that spans from the tiny Jetson Nano to a full rack of T4 servers, so customers can start small and gradually scale up to full-blown GPU servers. NVIDIA is optimizing the software stack to power everything from drones to dedicated servers that handle AI inferencing at scale. NVIDIA Edge Stack is an optimized software platform comprising NVIDIA drivers, a CUDA Kubernetes plugin, a CUDA container runtime, CUDA-X libraries, and containerized AI frameworks and applications such as TensorRT, TensorRT Inference Server, and the DeepStream SDK.
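The Kubernetes piece of the stack works through NVIDIA's device plugin, which advertises GPUs to the scheduler as an `nvidia.com/gpu` resource that pods can request like CPU or memory. A minimal illustrative pod spec might look like the following (the pod name is a placeholder, and the image tag is left generic rather than pinned to a real release):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: trt-inference          # placeholder name
spec:
  restartPolicy: OnFailure
  containers:
  - name: inference
    # Placeholder image; in practice a containerized NVIDIA framework
    # such as a TensorRT or DeepStream image from NGC.
    image: nvcr.io/nvidia/tensorrt:xx.xx-py3
    resources:
      limits:
        nvidia.com/gpu: 1      # GPU exposed by the NVIDIA device plugin
```

Because the GPU is requested declaratively, the same spec can schedule onto any EGX node with a free GPU, whether that node is a Jetson-class device or a T4 server.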


A Deep Dive on AWS DeepLens - The New Stack

#artificialintelligence

Last week at Amazon Web Services' re:Invent conference, AWS and Intel introduced AWS DeepLens, a video camera that can run deep learning inference on captured images in real time. The key difference between DeepLens and other AI-powered cameras lies in the horsepower that makes it possible to run machine learning inference models locally, without ever sending video frames to the cloud. Developers and non-developers alike rushed to attend the AWS workshop on DeepLens to walk away with a device. There, they were enticed with a hot dog to perform the infamous "Hot Dog or Not Hot Dog" experiment. I managed to attend one of the repeat sessions and carefully ferried the device back home.
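The "Hot Dog or Not Hot Dog" demo boils down to binary image classification running entirely on the device. Here is a minimal sketch of that inference step; `stub_model` is a made-up placeholder standing in for the real trained network (which DeepLens runs locally), and the "frame" is simulated rather than captured from the camera:

```python
import numpy as np

LABELS = ["not hot dog", "hot dog"]

def stub_model(frame):
    """Placeholder for a real classifier: returns two class logits.
    This stub fakes a score from the frame's mean red intensity."""
    red = frame[..., 0].mean() / 255.0
    return np.array([1.0 - red, red])

def classify(frame):
    """Run one local inference step: logits -> softmax -> top label."""
    logits = stub_model(frame)
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax
    return LABELS[int(probs.argmax())], float(probs.max())

# Simulated 64x64 RGB frame, strongly red, which the stub scores as "hot dog".
frame = np.zeros((64, 64, 3), dtype=np.uint8)
frame[..., 0] = 220
label, conf = classify(frame)
print(label, round(conf, 2))
```

The point of the sketch is the shape of the loop, not the model: every frame is classified on the device itself, so nothing leaves the camera unless the application chooses to publish a result.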