The Multi-cloud Approach: The Future of Storage

I recently read with interest an article on the future of hybrid architectures by Mike Uzan. To put it bluntly, Mike states that hybrid architectures are always a bad choice. He suggests that hybrid is a compromise between a legacy concept and a new disruptive one, and that in the long run, the newer concept wins 100% of the market, leaving any hybrid solution at a disadvantage.

I couldn’t agree more about the inadequacy of hybrid architectures and would add hybrid cloud to his list, as I recently stated in a previous post.

Where my opinion, and Scality’s, differs from Mike’s is on whether the world will become entirely all-flash anytime soon.

We are moving to a digital world where most business value will be created in digital form. Businesses therefore need to store more and more data: IoT data, user-generated content, videos, documents, backups, archives of everything… Storing all of this on all-flash, even with dedupe – and much unstructured data cannot be compressed or deduplicated further – is prohibitively expensive.

At Scality, we see a world with two types of storage leveraging a multi-cloud (not hybrid cloud) approach.

The first type is super-low-latency (sub-millisecond) storage for relational databases, virtual desktops, home directories, virtual machine farms, and big data analytics. At such latencies, a request does not have time to hop from machine to machine over an Ethernet network; the data needs to be available then and there. All-flash solutions, like those from Kaminario, 3PAR or Pure Storage, and hyperconverged systems, like those from SimpliVity, are the right architectures for this.

The second type meets the need for capacity storage: storing the massive volumes of content generated by the digital era, starting with plain old backups, videos, and data lakes. For Global 2000 enterprises, media & entertainment companies, the healthcare industry, and service providers, storing this mass of data on all-flash or hyperconverged systems is not economically viable. The right architecture for this is a distributed system, where the data is stored on cheap industry-standard servers using very large hard drives, and where self-healing software guarantees that the data is safe and the load is balanced automatically. This is what each of the cloud giants is doing, and it is what Scality delivers with Scality RING for the enterprise, with all the necessary enterprise bells and whistles, starting with authentication and security.
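To make that concrete, here is a toy sketch, in Python, of how a distributed object store can spread object replicas across commodity servers with consistent hashing so that load balances automatically. This is purely illustrative, not Scality’s actual RING implementation; the node names and the replica count of three are assumptions made for the example.

```python
# Illustrative toy only: place object replicas on commodity storage nodes
# with a consistent-hash ring. NOT Scality's actual implementation.
import hashlib
from bisect import bisect

class ToyRing:
    def __init__(self, nodes, vnodes=64, replicas=3):
        self.replicas = replicas
        # Each physical node gets many virtual points on the ring,
        # which spreads load evenly across servers.
        self._ring = sorted(
            (self._hash(f"{node}:{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def locate(self, object_key: str):
        """Return the distinct nodes holding replicas of object_key."""
        idx = bisect(self._keys, self._hash(object_key)) % len(self._ring)
        nodes, seen = [], set()
        while len(nodes) < self.replicas:
            node = self._ring[idx % len(self._ring)][1]
            if node not in seen:
                seen.add(node)
                nodes.append(node)
            idx += 1  # walk the ring until we have enough distinct nodes
        return nodes

ring = ToyRing(nodes=[f"server-{n}" for n in range(6)])
print(ring.locate("videos/master/episode-042.mov"))
```

A real system layers failure detection and re-replication on top of this placement, which is the “self-healing” part: when a server dies, the objects it held are rebuilt on the next servers around the ring.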

We don’t believe in hybrid cloud. We do believe in multi-cloud. We don’t think Global 2000 enterprises will move 100% of their IT to the public cloud; again, this is just not economically viable. When data is active, and when a compute instance is used 100% of the time, public cloud is much more expensive than a well-designed private cloud infrastructure (provided you have the skill set in-house).
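As a back-of-the-envelope illustration of that utilization argument, consider the break-even point between pay-per-use and a fixed monthly cost. All numbers here are hypothetical placeholders, not real quotes:

```python
# Hypothetical prices, purely to illustrate the utilization argument.
on_demand_per_hour = 0.40     # assumed public-cloud price for one instance
private_monthly_cost = 120.0  # assumed amortized cost of a private equivalent
hours_per_month = 730         # average hours in a month

# Public cloud wins only if the instance runs less than this fraction of the time.
breakeven = private_monthly_cost / (on_demand_per_hour * hours_per_month)
print(f"Break-even utilization: {breakeven:.0%}")  # ~41% with these numbers
```

Below that break-even utilization, pay-per-use wins; at 100% utilization, the private infrastructure is far cheaper, which is exactly the case for active data.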

The Future is Multi-cloud

What we see for the future of Global 2000 IT is an infrastructure composed of multiple clouds. Large enterprises won’t be able to sustain the digital transformation with their traditional IT infrastructure, even if it is virtualized. Simply stated: it is not going to be competitive. In an era where most of our global economic value is being created digitally, if your costs for digital infrastructure are higher than your competitors’ costs, you cannot win the race. Global 2000 organizations will need to transform their IT into “Cloud IT,” with a mix of private clouds and public clouds where it is cost-efficient. They will need to move data from one cloud to another programmatically based on workloads. For example, if you produce videos, you probably want your main repository to be in-house, and Scality RING is the perfect companion for that.

When you want to distribute a video over the Internet around the world, you copy it to a public cloud, use burst compute to transcode it into multiple formats, and use the cloud CDN to distribute it efficiently. When your distribution campaign is over, you simply delete the video from the cloud, halting further storage costs. And when the video becomes obsolete and you want to archive it, you move it to an archival cloud like Amazon Glacier. It is indeed data movement, but it is not tiering the way your grandpa used to do it. Scality’s newly launched Zenko product provides exactly this: managing unstructured data in a multi-cloud world.
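As a minimal sketch of that workflow, here is what the cloud-side steps might look like with boto3 against Amazon S3. The bucket name, object keys, local path, and lifecycle timing are all hypothetical, and the transcoding and CDN steps are left as comments because they depend on services configured outside the script:

```python
# Illustrative only: a hand-rolled version of the copy / distribute /
# archive / delete workflow described above. Names and paths are made up.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-video-distribution"   # hypothetical bucket
KEY = "campaigns/launch-video.mp4"      # hypothetical object key

# 1. Copy the master from the in-house repository to the public cloud.
with open("/mnt/ring/masters/launch-video.mp4", "rb") as master:
    s3.upload_fileobj(master, BUCKET, KEY)

# 2. Transcode into multiple formats with burst compute, then distribute
#    via the cloud CDN (e.g. a transcoding service plus CloudFront).

# 3. Let anything parked under archive/ age out to Glacier automatically.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-campaigns",
            "Filter": {"Prefix": "archive/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }]
    },
)

# 4. When the campaign is over, delete the copy and stop paying for it.
s3.delete_object(Bucket=BUCKET, Key=KEY)
```

Zenko’s role, as described above, is to manage this kind of movement across clouds through one interface instead of leaving it to hand-written, per-provider scripts like this one.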

Follow me on Twitter @jlecat

2 thoughts on “The Multi-cloud Approach: The Future of Storage”

  1. Roy White says:

    I get the multi-cloud aspect, but as for the claim that public cloud can’t match the storage needs of on-premises, you are completely wrong. I architect very large solutions with storage needs that require 1000+ IOPS per TB, and the private solutions (EMC, HDS, NetApp) are unable to compete. The fact that you can provide 100,000 IOPS to a single instance and then turn it off is a feature no on-premises solution can deal with (competitively priced; yes, it ‘CAN’ be done).
    As for the comment that ‘We don’t think that Global 2000 enterprises will move 100% of their IT to the public cloud’, I agree, and I think we are another five years or so away from this happening, as application developers need to catch up and separate software from hardware; but it will happen (that said, all cloud is, is somebody else’s server on a shared principle).
    On the point that ‘public cloud is much more expensive than a well-designed private cloud infrastructure’ – again, this is very arguable. If you are comparing like for like, apples to apples, then possibly yes, I would agree. However, if you look at how the solution could be adapted to use serverless components as ‘part’ of the overall solution, plus other features that public cloud can give you, then no.
    The ‘way your grandpa used to do it’ comment did make me laugh out loud; he used to chip coal from the coal face and use a mangle for the clothes washing, and if we stuck to that principle we would never move forward. Storage tiering came about as part of storage economics, to drive down cost while ensuring performance was not degraded and bottlenecks were avoided. Times have moved on, and the main bottleneck, the network, has for the most part been removed. So if you have solutions that pass data east-west within an application environment, then as long as the storage is nearby, high performance can be maintained; if the storage flow is north-south, it becomes an issue, and there I agree that private can be better placed.

  2. Tim Wessels says:

    Well, I don’t see a definition of hybrid cloud or multi-cloud here, so you can’t really have a conversation about their relative advantages and disadvantages. First define your terms, and then offer examples that either make or refute your case, depending on what you are promoting as a solution. From my perspective, hybrid cloud storage is a much more useful term than multi-cloud. A hybrid “thing” is made by combining two different elements, and a hybrid cloud can be created from various combinations of private, public, and community cloud storage. For example, an organization that wants to keep warm data in its private cloud and park cold or archive data in a public cloud can combine two different storage clouds to create hybrid storage. An organization with a private cloud can also create hybrid storage by using a community cloud located in its region, which could be very useful if public cloud storage is not an option.

    Multi-cloud is a vendor-defined word that typically means you can send your data to all of the “Big 3” public cloud storage providers using some “fabric” or “policy” to move data to where you want to store it. The implication is that your data is not “locked in” to a single public provider, which supposedly makes you immune to a storage service outage. If that is accurate, then you are actually paying to have your data replicated in two different public clouds so you are insured against losing contact with it. It might be less expensive to keep it on premises in a private cloud and replicate it to just one public cloud provider.

    Let’s face it: data has gravity and it is sticky. Keeping all of your data in a public cloud is a mistake because you have zero control over the infrastructure. Using hybrid cloud storage that combines a private cloud with public clouds or a community cloud is sensible. So-called multi-cloud storage is mostly just a new way to avoid saying hybrid storage, and everyone knows how important it is to describe something you want to sell with a new name instead of the old name for what it is already doing.
