
Hyperconvergence: What it is, how it works, and why it matters


Hyperconvergence is a marketing term for a style of data center architecture that focuses the attention of IT operators and administrators on the operating conditions of workloads rather than of systems. The main objective of hyperconverged infrastructure (HCI) has been to simplify the management of data centers by recasting them as transportation systems for software and transactions, rather than as networks of processors with storage devices and memory caches dangling from them. Since the dawn of information technology, the key task of computer operators has been to monitor and maintain the health of their machines. The "convergence" that HCI makes feasible in the data center comes from the following:

What Kubernetes really is, and how orchestration redefines the data center


"You'll see that Kubernetes doesn't provide all these things," said Red Hat's Gracely. "They're all areas where the community is, through different vendors, through open source add-on projects, giving the marketplace a lot of options, giving them choice, giving them pluggability for these different elements, and allowing companies to ultimately decide, within this broader framework, how do I build the best platform for what we want to do, pick the best pieces that make sense for us, but still have it all be interoperable and supportable?"

Yet as Gracely's comment itself demonstrates, since the product of any of these collections is indisputably a platform, with Kubernetes as the facilitator at its center, all of these results should be "Kubernetes platforms." Red Hat's OpenShift is one prominent example, as is the latest 2.0 edition of Rancher.

Whether data center managers and CIOs perceive Kubernetes as a platform or as an engine is neither an esoteric nor a trivial matter. If the orchestrator is to continue making headway in the enterprise, it can't afford to be treated as a lab experiment, or as one of those crazy tools the developers love but no one else understands. "Engine" implies the need for a complete chassis (or, to borrow a phrase from my other gig, a "new stack"), and thus gives some evaluators the impression that Kubernetes is incomplete by design. A platform, by contrast, offers the enterprise the hope that it could soon host all of its applications, not just the funky ones with the curly-Qs and the microservices. For this reason, the CNCF has been presenting Kubernetes as a platform capable of hosting both old and new applications by way of containerization, even though the benefits of moving old applications from virtual machines to containers have yet to be assessed.

Kubernetes will rule the hyperscale data center in 2018


Half of the story of hyperscale is about the construction of vast, new infrastructure where there was none before. The other half is about the elimination of walls and barriers that would make one think the world was smaller than it actually was. Kubernetes is a series of open source projects for automating the deployment, scaling, and management of containerized applications. "[If] a workload goes to Amazon, you lose," VMware CEO Pat Gelsinger was quoted as telling his company's Partner Exchange Conference in 2013.
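That definition -- automating the deployment, scaling, and management of containerized applications -- comes down to declarative reconciliation: an operator states a desired number of replicas, and a control loop converges the actual state toward it. Here is a minimal, purely illustrative sketch of that pattern; the names (`Orchestrator`, `reconcile`, and so on) are invented for this example and are not part of any Kubernetes API.

```python
# Toy reconciliation loop, illustrating (not implementing) the pattern
# Kubernetes controllers follow: converge actual state toward desired state.
# All names here (Orchestrator, reconcile, etc.) are illustrative only.

import itertools


class Orchestrator:
    def __init__(self):
        self.pods = {}              # pod name -> app label: the "actual" state
        self._ids = itertools.count(1)

    def reconcile(self, app, desired_replicas):
        """One pass of the control loop for a single app."""
        running = [name for name, label in self.pods.items() if label == app]
        # Scale up: create pods until actual matches desired.
        while len(running) < desired_replicas:
            name = f"{app}-{next(self._ids)}"
            self.pods[name] = app
            running.append(name)
        # Scale down: delete surplus pods.
        while len(running) > desired_replicas:
            del self.pods[running.pop()]
        return sorted(running)


orchestrator = Orchestrator()
orchestrator.reconcile("web", 3)              # scales up from 0 to 3 replicas
survivors = orchestrator.reconcile("web", 2)  # scales back down to 2
print(len(survivors))                         # 2
```

The point of the sketch is the declarative shape of the interface: the caller never says "create a pod" or "delete a pod," only what the end state should be, and the loop works out the difference.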

Service mesh: What it is and why it matters so much now


A service mesh is an emerging architecture for dynamically linking the chunks of server-side applications -- most notably, the microservices -- that collectively form an application. These can be components that were intentionally composed as parts of the same application, as well as components from different sources altogether that may benefit from sharing workloads with one another. Perhaps the oldest effort in this field -- one which, through its development, revealed the need for a service mesh in the first place -- is an open source project called Linkerd (pronounced "linker-dee"), now maintained by the Cloud Native Computing Foundation. Born as an offshoot of a Twitter project, Linkerd popularized the notion of devising a proxy for each service, capable of communicating with similar proxies over a purpose-built network. Its commercial steward, Buoyant, has recently merged a similar effort called Conduit into the project to form Linkerd 2.0.
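The per-service proxy idea Linkerd popularized can be sketched in miniature: each service is fronted by a proxy that absorbs cross-cutting concerns such as retries and telemetry, and the proxies route traffic to one another so the services never talk to the network directly. The following Python sketch is purely illustrative; none of these names (`ServiceProxy`, `Mesh`) come from Linkerd or any real mesh.

```python
# Illustrative sketch of the sidecar-proxy idea behind a service mesh:
# each service gets a proxy handling retries and metrics, and proxies
# communicate with each other over a shared routing layer.
# All names here are invented for illustration; this is not Linkerd's API.


class ServiceProxy:
    def __init__(self, name, handler, max_retries=2):
        self.name = name
        self.handler = handler          # the service's actual business logic
        self.max_retries = max_retries
        self.requests = 0               # proxy-level telemetry
        self.failures = 0

    def call(self, payload):
        """Invoke the local service, retrying on failure."""
        self.requests += 1
        for _attempt in range(self.max_retries + 1):
            try:
                return self.handler(payload)
            except RuntimeError:
                self.failures += 1      # record the failure, then retry
        return None                     # exhausted retries


class Mesh:
    """The 'purpose-built network': routes requests proxy-to-proxy."""
    def __init__(self):
        self.proxies = {}

    def register(self, proxy):
        self.proxies[proxy.name] = proxy

    def send(self, target, payload):
        return self.proxies[target].call(payload)


mesh = Mesh()
mesh.register(ServiceProxy("greeter", lambda p: f"hello, {p}"))
print(mesh.send("greeter", "world"))    # hello, world
```

What makes this shape attractive is visible even in the toy: the `greeter` service is a plain function with no knowledge of retries, routing, or metrics -- all of that lives in the proxy layer, which is exactly the separation a service mesh sells.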

The drive-thru data center: Where an appliance runs your local cloud


It took the Norwegian company contracted to build fiber-optic connectivity for Brazil 37 days just to carry the optical cable for the project from the Atlantic Ocean, up the Amazon River, to the city of Manaus. Connectivity for this city's two million-plus inhabitants has never been a certainty. In 2014, cloud services analyst CloudHarmony ranked South America as the continent with by far the greatest service latency of any region in the world; the following year, it rated Brazil as the world's least reliable region for Amazon AWS service. By now, the people of Manaus are tired of the "Amazon" puns. Meanwhile, they consume fast food just like the rest of the world.