Centec Networks Unveils TsingMa Ethernet Switching Silicon for 5G Transport and Edge Computing Networks

#artificialintelligence

Centec Networks, a leading innovator of Ethernet switching silicon and SDN white box solutions, today announced the TsingMa CTC7132 system-on-chip (SoC) device, its sixth-generation switching silicon, designed to help fuel the transformation from 4G to 5G deployment and from traditional cloud computing to edge computing with optimized cost, power, performance and features.

"Built upon our proven Transwarp switch architecture, the TsingMa chip is a complete SoC device that is purpose-built to address the growing demand for extremely low latency, comprehensive end-to-end tunnel security and rich network telemetry in the era of 5G and edge computing," said Tao Gu, vice president of business development at Centec Networks. "TsingMa is the first of a series of new chips we are rolling out for OEMs and ODMs so they can build a new class of network equipment for 5G transport and edge computing networks."

"Powered by artificial intelligence, network automation can effectively solve the operation and maintenance challenges of massive numbers of nodes in 5G deployment," said Tang Xiongyan, chief scientist of the Network Technology Research Institute at China Unicom. "TsingMa's capability of collecting comprehensive metadata on network flows, deep forwarding states and perceived behavior is a powerful tool for the development of intelligent network automation."


This Tiny Supercomputer Is the New Wave of Artificial Intelligence (AI)

#artificialintelligence

NVIDIA Corporation (NASDAQ:NVDA) has advanced its GPU business beyond powering gaming computers, applying its technology to advanced machine learning systems. The NVIDIA DGX-1 is billed as the world's first commercially available supercomputer designed specifically for deep learning. NVIDIA claims that the DGX-1 delivers the computing power of 250 2-socket servers in a box. The company states on its website that its NVIDIA NVLink implementation delivers a massive increase in GPU memory capacity, giving you a system that can learn, see, and simulate our world -- a world with an infinite appetite for computing. NVIDIA also claims the DGX-1 can train models for tasks like image recognition significantly faster than other servers.


Networking technology: Where it is now -- and where it's headed

#artificialintelligence

As corporate bandwidth requirements continue to surge with every passing year, it becomes clear that bandwidth demands, as well as the business requirements of the modern digital workspace, are setting the stage for the implementation of new, advanced technologies. These technologies open up fresh possibilities and further fuel the demand for intelligent systems in our daily lives and a greater reliance on tech support, both at home and at work. With software trends emerging regularly in the IT scene, digital services and people are becoming further intertwined, characterizing everything that's new in the world of network technology this year. These recent advancements are more than likely to disrupt existing operations and foster an era of digitization and intelligence throughout the business sector. Let's see what's hot right now in networking technology -- and how it is likely to evolve by the end of the year.


How can serverless computing be cost-justified?

ZDNet

What a serverless deployment costs depends on a range of variables. The real question is whether it is more cost-effective than traditional means of software deployment. The key issues to bear in mind when considering the suitability of the serverless model for a software deployment are the nature of the application and the degree to which you draw upon third parties for code and for services such as hosting. Serverless computing is a highly modular deployment methodology, with code consisting of functions that each behave in a particular way in response to a particular input. Among its core cost benefits is its fast spin-up and spin-down time. A function is invoked, does its thing, and spins down again, so billing can be highly granular: you pay only for the time the function is working and for the data it outputs.
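To make the model concrete, here is a minimal sketch of a serverless-style function in Python, modeled on the common event-in, response-out handler convention (as used by platforms such as AWS Lambda). The event shape and field names are illustrative assumptions, not tied to any specific provider:

```python
import json

def handler(event, context=None):
    """Respond to a single input event, then spin down.

    The platform invokes this function per event; you are billed
    only for the time it runs. 'event' is a plain dict here -- a
    hypothetical payload shape chosen for illustration.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally, the function can be exercised with a plain dict:
print(handler({"name": "serverless"}))
```

Because the unit of deployment is a single stateless function like this, providers can meter execution at millisecond granularity, which is what makes the pay-per-invocation billing model possible.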


Deep Learning meets Deep Deployment

#artificialintelligence

We now have a deep learning model that is able to deliver valuable results, but how can we apply it easily to new data where and when we need to?