Announcing enhanced Scality RING support for OpenStack Kilo

The OpenStack cloud framework has been growing rapidly in popularity over the last few years, as witnessed by the incredible attendance growth at the last few OpenStack summits. Last fall we attended the Paris OpenStack summit for the Juno release and saw the growing momentum and interest firsthand, including real mainstream customer use of this next-generation cloud framework. We’re excited to see Kilo, the next OpenStack release, officially announced at the Vancouver Summit.

Scality has been busy augmenting the OpenStack storage management capabilities of the RING, with the goal of providing a simplified solution for managing all of OpenStack’s storage repositories. As many of you know, OpenStack provides several persistent data storage services, as follows (a brief client-side sketch appears after the list):

  • Swift – for object-based data accessed over REST from Nova compute instances
  • Cinder – for managing block data volumes from Nova instances
  • Glance – for managing image storage repositories
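
To make the division of labor concrete, here is a minimal client-side sketch touching each of the three services. This is illustrative only: it assumes Keystone v2 credentials, and all endpoints, user names, and object names are placeholders.

    # A brief sketch of the three persistent-storage clients; all
    # endpoints, credentials, and names below are placeholders.
    from swiftclient import client as swift_client
    from cinderclient import client as cinder_client
    from glanceclient import Client as GlanceClient

    AUTH_URL = 'http://keystone.example.com:5000/v2.0'

    # Swift: object storage accessed over REST.
    swift = swift_client.Connection(authurl=AUTH_URL, user='demo', key='secret',
                                    tenant_name='demo', auth_version='2.0')
    swift.put_container('backups')
    swift.put_object('backups', 'db.dump', contents=b'nightly dump')

    # Cinder: block volumes that attach to Nova instances (size in GB).
    cinder = cinder_client.Client('2', 'demo', 'secret', 'demo', AUTH_URL)
    volume = cinder.volumes.create(size=10, name='data-vol')

    # Glance: the image repository; reuse the token from Swift's auth.
    glance = GlanceClient('2', endpoint='http://glance.example.com:9292',
                          token=swift.get_auth()[1])
    images = list(glance.images.list())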

OpenStack also has a shared file service, termed Manila, under design, which we expect to come to fruition later in 2015. Scality has supported the Cinder API with a RING driver since 2013 (originally accepted for the OpenStack Grizzly release), and we announced a RING OpenStack Swift driver during the Juno summit last fall.

We continue to work with OpenStack adopters to enhance and extend the RING’s capabilities for innovative cloud use cases. Today, Scality is announcing support for OpenStack Swift storage policies, as well as a native OpenStack Glance storage API.

The ability to exploit Swift’s storage policies with the RING’s Swift driver opens up powerful new distributed storage use cases. Storage policies give applications a way to instruct the storage system on how an object should be protected, and even which location (data center region) an object should be stored in. As a simple example, the RING has since its inception supported durable storage for small objects through a variable replication mechanism (one to six copies of an object), and for the last several years we have also offered a robust erasure coding mechanism that is more efficient than replication for large objects (such as megabyte- to gigabyte-sized media data). In this way, applications need not incur the storage overhead of multiple copies (replication) for these bigger objects, while still enjoying very high levels of data protection.
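
Selecting a policy happens per container through the standard Swift API. Below is a minimal sketch; the policy names ("replicated-3x", "ec-large") are hypothetical, since the operator defines the actual policies server-side, and the endpoint and credentials are placeholders.

    # Minimal sketch of per-container storage policy selection.
    from swiftclient import client as swift_client

    conn = swift_client.Connection(
        authurl='http://keystone.example.com:5000/v2.0',  # placeholder
        user='demo', key='secret', tenant_name='demo', auth_version='2.0')

    # Small objects: a replication-based policy (hypothetical name).
    conn.put_container('thumbnails',
                       headers={'X-Storage-Policy': 'replicated-3x'})

    # Large media objects: an erasure-coded policy (hypothetical name)
    # avoids the capacity overhead of full copies.
    conn.put_container('videos', headers={'X-Storage-Policy': 'ec-large'})

    # Objects inherit the policy of the container they are written to.
    with open('movie.mp4', 'rb') as f:
        conn.put_object('videos', 'movie.mp4', contents=f)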

Storage policies are attached to OpenStack Swift containers, and the RING Swift driver passes the desired protection type, and even the placement, down to the RING, enabling OpenStack Nova instances to exploit these powerful schemes. Moreover, we are now hearing that customers are deploying OpenStack-based applications across multiple geographies, such as data centers in Asia, Europe, and North America, and these applications need control over which data centers are used to store particular objects.

For example, for regulatory compliance in the financial industry, a data object may need to be stored only in a specific data center region and never cross country borders, as with UK banking regulations.

[Figure: object data confined to a UK data center region]

In other use cases, such as media workflows, an object may need to be replicated three times, with two copies stored locally in China and one copy stored remotely in North America. This fine-grained control over object placement is now possible with support for OpenStack storage policies in the Scality RING; a client-side sketch of both scenarios follows the figure below.

[Figure: three replicas, two stored in China and one in North America]
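
Both placement scenarios reduce to the same client-side mechanics. Here is a sketch with hypothetical policy names ("uk-only", "cn2-na1"); the mapping of each policy to specific regions would be defined by the operator server-side.

    # Sketch: pinning containers to regions via storage policies.
    from swiftclient import client as swift_client

    conn = swift_client.Connection(
        authurl='http://keystone.example.com:5000/v2.0',  # placeholder
        user='demo', key='secret', tenant_name='demo', auth_version='2.0')

    # Regulatory case: a policy whose devices live only in a UK region.
    conn.put_container('uk-trades', headers={'X-Storage-Policy': 'uk-only'})

    # Media workflow: three replicas, two in China, one in North America.
    conn.put_container('cn-media', headers={'X-Storage-Policy': 'cn2-na1'})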

In addition, the RING now supports a native OpenStack Glance API, so it is now possible to converge image storage onto the RING. This provides even more powerful storage convergence capabilities for OpenStack, by enabling three storage “silos” to coexist (peacefully) on the same storage system:

  • Swift – object data
  • Cinder – block volumes
  • Glance – images
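
Because the backend choice is made server-side, applications keep using the standard Glance API unchanged. Below is a minimal Glance v2 upload sketch; the endpoint, token, and image file are placeholders.

    # Sketch: registering and uploading an image via the Glance v2 API.
    from glanceclient import Client

    glance = Client('2', endpoint='http://glance.example.com:9292',
                    token='KEYSTONE_TOKEN')  # placeholder token

    # Register image metadata, then upload the bits; where the bits land
    # (e.g., on the RING) is determined by the Glance store configuration.
    image = glance.images.create(name='ubuntu-14.04', disk_format='qcow2',
                                 container_format='bare')
    with open('ubuntu-14.04.qcow2', 'rb') as f:
        glance.images.upload(image.id, f)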

These Scality storage advancements promise to further enhance the use cases for OpenStack storage, as well as to simplify OpenStack storage management overall. This can lead to greatly reduced capital expenditures compared with managing several purpose-built storage systems and, perhaps more importantly, reduced operational expenses, since there are fewer discrete storage systems to manage.
