While much of the discussion around Open Compute revolves around its hardware innovations for large-scale datacenters, the infrastructure software element is equally important. Would Open Compute even exist if Facebook hadn’t written tailor-made software to make all their hardware run smoothly? That poses an important question for the rest of us: If you’re not Facebook or Google and you don’t have resources to write your own software, what can you do to ensure that scalable computing works for you?
It turns out that the right supporting software is available to almost any business looking to embrace scalable computing. But adopting the principles of Open Compute successfully requires a holistic approach.
A healthy data ecosystem
The ultimate goal of any organization that embraces scalable computing must be optimization at every layer, with the ability to enhance each layer without creating imbalance elsewhere. For example, increasing efficiency in Open Compute storage hardware shifts greater responsibility for availability and durability onto the software layer. Only with this symbiosis between hardware and software can administrators and users reap the rewards of scalable computing: efficiency at scale, always-on stability, and relief for budgets and staff.
But that symbiosis can be elusive. Facebook wrote Haystack, its own storage infrastructure software, to manage the efficient retrieval of photos and other user data, but other organizations typically have different workloads. And although their software needs will differ, their pressures are just as acute, since users have high expectations when it comes to availability, file retrieval, and latency.
Here are the major considerations for ensuring that your software will enable your scalable computing platform:
Think about the whole farm, not just the individual cattle. A software solution should accommodate and orchestrate hundreds or thousands of hardware assets, the massive volumes of data moving among them, and the applications and end users on top. HPE infrastructure and Scality's software platform are designed for this kind of scale.
Always-on availability and consistent latency are not optional. Despite potentially enormous numbers of nodes and pathways, scalable computing must deliver always-on performance regardless of individual point failures or unpredictable demand peaks. Scaling capacity or performance should mean simply adding more hardware, with no interruption of service, and software updates and patches should behave the same way. End users expect this 24/7; your infrastructure team can't reap the benefits of scalable computing without it.
Remember your network. The goal for networking mirrors the goal for the entire system: ensure interoperability between layers and contain faults so they don't cascade into wide disruption. Because networking is the glue that binds all the layers together, it's crucial to design the stack to remain stable through every conceivable interruption.
It’s about staying competitive
Whether you’re a service provider or operate a large datacenter for internal customers, the open-infrastructure model is crucial for remaining competitive. However, it isn’t just about the hardware. Scalable computing offers greater efficiency and economies of scale, but it takes the right software and a holistic approach to bring out all the benefits of the model.
We look forward to discussing this with you at Open Compute Summit in San Jose through March 10.