AI and ML deployments are well underway, but for CXOs the biggest issue will be managing these initiatives, figuring out where the data science team fits in, and deciding which algorithms to buy versus build. Penguin Computing is rolling out four high-performance computing stacks for workloads such as artificial intelligence, analytics, data science and cloud. The move by Penguin Computing, a division of SMART Global Holdings, highlights how many HPC vendors are targeting commercial adoption and workloads that are becoming more mainstream. For instance, Dell Technologies launched HPC stacks aimed at specific industries and use cases. The takeaway is that HPC is moving beyond supercomputing rankings toward broader commercialization as workloads for AI, simulation and analytics go enterprise mainstream.
Cloud Bigtable has long been Google Cloud's fully managed NoSQL database for massive, petabyte-scale analytical and operational workloads. At $0.65 per node per hour, it was never a cheap service to run, especially because Google Cloud enforced a minimum of three nodes per cluster for production workloads. Today, however, it is changing that, and you can now run Bigtable production workloads on just a single node. "We want Bigtable to be an excellent home for all of your key-value and wide-column use cases, both large and small," Google Cloud Bigtable product manager Sandy Ghai said in today's announcement. "That's true whether you're a developer just getting started, or an established enterprise looking for a landing place for your self-managed HBase or Cassandra clusters."
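To see what dropping the three-node floor means in practice, here is a back-of-the-envelope cost comparison at the quoted $0.65 per node per hour. The 30-day month and the compute-only framing are simplifying assumptions; real bills also include storage and network charges.

```python
# Rough monthly compute cost for a Bigtable cluster at the quoted
# $0.65 per node per hour. Storage and network are billed separately
# and are not included here.
HOURLY_RATE_PER_NODE = 0.65
HOURS_PER_MONTH = 24 * 30  # assuming a 30-day month for simplicity

def monthly_node_cost(nodes: int) -> float:
    """Compute-only monthly cost for a cluster with the given node count."""
    return nodes * HOURLY_RATE_PER_NODE * HOURS_PER_MONTH

old_minimum = monthly_node_cost(3)  # previous 3-node production floor
new_minimum = monthly_node_cost(1)  # new single-node production cluster

print(f"3-node minimum: ${old_minimum:,.2f}/month")
print(f"1-node minimum: ${new_minimum:,.2f}/month")
```

Under these assumptions, the entry price for a production cluster drops from roughly $1,404 to $468 per month, which is the point of the announcement for smaller workloads.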
Businesses are migrating to cloud architectures at a rapid clip, and by 2020 cloud traffic will account for 92 percent of total data center traffic globally, according to Cisco's Global Cloud Index report. The networking giant predicts that cloud traffic will rise 3.7-fold, from 3.9 zettabytes (ZB) per year in 2015 to 14.1 ZB per year by 2020. "The IT industry has taken cloud computing from an emerging technology to an essential scalable and flexible networking solution. With large global cloud deployments, operators are optimizing their data center strategies to meet the growing needs of businesses and consumers," said Doug Webster, VP of service provider marketing for Cisco, in a press release. "We anticipate all types of data center operators continuing to invest in cloud-based innovations that streamline infrastructures and help them more profitably deliver web-based services to a wide range of end users."
Edge computing is a form of computing where the processing occurs close to the source of activity and data. Working close to the edge reduces the latency of transporting data from the source to the processing units, and is ideal for use cases that require rapid responses, such as the internet of things. The concept of edge computing is complementary to cloud computing, which typically means centralized processing residing far from the source of data. In edge-based systems, which some call the "near cloud," the goal is to extend the boundary of the cloud closer to the edge. It's easy to think edge computing magically solves problems that cloud computing can't, but there's a trade-off due to the highly distributed nature of edge systems.
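The latency argument above can be sketched with a toy model: total response time is the network round trip plus processing time, so moving processing near the source shrinks the dominant network term. All the millisecond figures below are invented for illustration, not measurements.

```python
# Toy model of the edge-vs-cloud latency trade-off. Every number here
# is an illustrative assumption, not a benchmark.
EDGE_RTT_MS = 5      # assumed round trip to a nearby edge node
CLOUD_RTT_MS = 120   # assumed round trip to a distant cloud region
PROCESSING_MS = 10   # assumed compute time, the same in both cases

def response_time(rtt_ms: float, processing_ms: float = PROCESSING_MS) -> float:
    """Total time for one request: network round trip plus processing."""
    return rtt_ms + processing_ms

edge_total = response_time(EDGE_RTT_MS)    # 15 ms
cloud_total = response_time(CLOUD_RTT_MS)  # 130 ms
print(f"edge: {edge_total} ms, cloud: {cloud_total} ms")
```

With these made-up numbers the edge path is dominated by processing while the cloud path is dominated by transport, which is exactly why latency-sensitive IoT workloads favor the edge even though each edge node has far less capacity than a centralized data center.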
This blog will look at an area of the business that might cause some people's eyes to automatically glaze over, but my challenge is to take this potentially boring topic and flip it on its head. What am I talking about? Cost seems to drive most conversations around cloud adoption, even though we all tend to pretend it doesn't. Each cloud is different; everyone knows that. But here's the news flash: the way cloud providers charge for their clouds is every bit as different, and it can be a dangerous conversation to enter if you're not equipped with the knowledge you need to navigate it well.