With RING Organic Storage 4.0, Scality sets a new milestone for file storage at scale.

San Francisco, CA – April 2, 2012 – Scality, the established leader in object-based storage, today unveils a new version of its flagship software, RING Organic Storage, which addresses the full range of needs for file storage at scale.

While file storage has traditionally been deployed on dual-controller NAS architectures, the need for scale driven by big data applications and the move to cloud infrastructures requires a far more scalable design. With RING Organic Storage 4.0, Scality delivers a solution that handles both the primary storage and the long-term preservation of files at petabyte scale, with a density of up to 1.8 PB of protected data per rack. Scality’s RING is more cost-effective than traditional NAS or scale-out NAS because it leverages commodity hardware and reduces operating costs.

RING Organic Storage 4.0, a major release, comes with four essential new features:

A complete scale-out file system
Advanced Resilience Configuration (ARC) for efficient long-term storage
Performance unsurpassed by any object-based storage to date
A new management system that simplifies the administration of hundreds of servers

And the 4.0 release continues to leverage key properties of Scality’s design, such as complete parallelism, self-healing, hardware-agnostic organic upgrades, and multiple-datacenter replication for disaster recovery.

Industry analysts agree that the storage of files, or unstructured data, now represents the bulk of the storage required by service providers and large enterprises. At the current pace, unstructured data will soon account for 80% of all storage requirements. Traditional NAS storage has had to resort to overly complex architectures to handle this data volume, which has led to unsustainable system sprawl.

“Scality RING Organic Storage is a revolutionary step in the storage of files at scale,” said Jérôme Lecat, Scality CEO. “For the first time, storage architects can select a solution that delivers performance for hot data, a cost of ownership comparable with webscale infrastructures, and simplicity of management, all at the same time.”

“Scality RING has seen an accelerating rate of adoption over the last few quarters, particularly for primary storage applications. We expect the new RING Organic Storage 4.0 to appeal to our historical customers, as well as to web hosting service providers and large enterprises that require more scalable file storage,” added Lecat.

Scale-out file system

Scality RING 4.0 introduces standard file-based access thanks to a scale-out file system. Unlike other storage vendors, who deliver multiple interfaces by placing a gateway in front of their underlying storage technology, Scality delivers file access with its own completely parallel design. As a result, the Scality file system benefits from the same performance, fault tolerance, self-healing, and organic growth characteristics intrinsic to its pioneering RING platform. In particular, the ability to deploy a large number of connectors with shared metadata in a global namespace ensures virtually unlimited IOPS for file applications.

“By integrating as a userland agent within the Linux FUSE framework, our scale-out file system provides 100% POSIX compliance. Additionally, by using a low-level FUSE interface, we’re able to serve highly parallel workloads very efficiently, as concurrent operations are spread across all RING nodes. Sparse files and random writes at any offset are made possible thanks to MESA, the RING’s unique distributed transactional database,” said Giorgio Regni, Scality’s CTO.

Advanced Resilience Configuration

ARC, unveiled today as part of the RING 4.0 release, represents a new way to protect data against failures. The feature is based on erasure code technology, which has been used in the telecommunications industry for years. At very large scale, RAID 5 and RAID 6 present serious vulnerabilities in terms of risk exposure and cost, making them unviable for such use cases.

Scality introduces a solution in which the original data is preserved and coupled with multiple checksums from which the data can be derived. The immediate benefit is that the original data can be read directly rather than calculated from the checksums. Reads are therefore very fast, without the “penalty-on-read” typically associated with the additional CPU cycles needed for data reconstruction.

Scality chooses a (16,4) combination by default, where 16 is the number of original data fragments and 4 the number of tolerated failures. In this configuration, Scality ARC accommodates 4 simultaneous failures, whether of disks, servers, networks, racks, or sites. In terms of storage consumption, ARC (16,4) adds only 25% to the raw data footprint, a drastic reduction from the 2 or 3 full copies required in a replication environment.
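As a back-of-the-envelope check, the 25% figure follows directly from the (16,4) fragment counts. The sketch below (an illustration; the fragment counts come from this release, the helper names are ours) compares the overhead of a systematic (k, m) erasure code with plain replication:

```python
# Storage overhead of a systematic (k, m) erasure code vs. replication.
# k = number of original data fragments, m = number of parity fragments
# (equal to the number of tolerated simultaneous failures).

def erasure_overhead(k: int, m: int) -> float:
    """Extra storage as a fraction of raw data: m parity fragments per k data fragments."""
    return m / k

def replication_overhead(copies: int) -> float:
    """Extra storage as a fraction of raw data when keeping `copies` total copies."""
    return copies - 1

arc = erasure_overhead(16, 4)           # ARC (16,4): 4/16 = 0.25 -> 25% extra
two_copies = replication_overhead(2)    # 2 copies: 100% extra
three_copies = replication_overhead(3)  # 3 copies: 200% extra

print(f"ARC (16,4) overhead: {arc:.0%}")
print(f"2-copy replication:  {two_copies:.0%}")
print(f"3-copy replication:  {three_copies:.0%}")
```

The comparison shows why erasure coding wins at scale: protecting 1 PB of data against 4 failures costs 0.25 PB of parity with ARC (16,4), versus 1 or 2 additional PB with replication.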

ARC also delivers at least 99.9999% durability¹ in a local configuration, and reaches 99.99999999999% when ARC is replicated to another data center. “At petabyte scale, it is cost prohibitive to deploy a replication-based infrastructure. Scality ARC represents the perfect solution for such environments. Delivering high performance with direct I/O operations, ARC provides high levels of protection with a substantial economy,” confirms Brad King, Director of Customer Architecture at Scality.
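The shape of such a durability calculation can be sketched as follows. With ARC (16,4) there are 20 fragments, and the data survives as long as at most 4 are lost. Assuming, purely for illustration, that each fragment is lost independently with some annual probability p (the release does not state Scality’s actual hardware failure assumptions), durability is a binomial tail sum:

```python
from math import comb

def durability(n: int, m: int, p: float) -> float:
    """Probability that at most m of n fragments are lost in a year,
    assuming each fragment is lost independently with annual probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(m + 1))

# ARC (16,4): n = 20 fragments total, survives up to m = 4 losses.
# p = 0.01 is a purely hypothetical per-fragment annual loss probability.
print(f"{durability(20, 4, 0.01):.10f}")
```

The real figures depend on disk failure rates, repair times, and failure correlation, but the sketch shows why tolerating 4 losses out of 20 fragments drives the loss probability down so sharply.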

ARC is also the solution for space-constrained environments. Using hardware from leading manufacturers that holds 72 HDDs of 3 TB each in a 4-rack-unit (4U) chassis, Scality is able to deliver over 1.8 PB of protected storage in a single rack².
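The rack-density arithmetic can be checked roughly as follows. The release gives the per-server figures (72 drives of 3 TB in 4U); the number of servers per rack and how parity is counted toward the “protected” total are our assumptions, made only to show the calculation:

```python
# Back-of-the-envelope rack density using the per-server figures from the
# release. The rack layout below is an assumption, not a stated configuration.
DRIVES_PER_SERVER = 72
DRIVE_TB = 3
SERVERS_PER_RACK = 10  # assumption: ten 4U chassis fit a standard rack

raw_tb = DRIVES_PER_SERVER * DRIVE_TB * SERVERS_PER_RACK  # 2160 TB raw per rack

# With ARC (16,4), 16 of every 20 fragments are original data.
k, m = 16, 4
usable_tb = raw_tb * k / (k + m)  # 1728 TB of original data, ~1.7 PB

print(f"raw: {raw_tb} TB, usable under ARC (16,4): {usable_tb:.0f} TB")
```

Depending on the exact layout and on whether parity fragments are counted as “protected storage,” this lands in the neighborhood of the 1.8 PB per rack quoted above.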

Unsurpassed performance

Scality’s reference architecture is a two-tier design, with a first tier handling active data and file-system metadata and a second tier delivering webscale cost-effectiveness for long-term data storage. This two-tier architecture is completely transparent to client applications, which see only a virtually unlimited storage pool.

In a report published today, ESG Lab evaluated the performance of Scality’s first tier when deployed with SSDs and found that it delivers no more than 7 ms of latency for complete read or write operations on 4 KB objects, and throughput comparable with high-performance computing systems.

In the conclusion of its report, ESG Lab writes “Until now, object storage could deliver the scalability, but not the performance. Standard response times for object-based storage are in the hundreds of milliseconds to full seconds, much too slow for today’s cloud-based services. In comparison, ESG testing demonstrated that Scality object storage on Intel Xeon servers equipped with Intel Enterprise SSDs and low-latency 10GbE network environment delivers 4-10 millisecond performance—10 times faster than other scale-out systems. The Scality solution delivers scalable object storage in which performance is never a barrier. Users can solve any performance problem with this architecture just by selecting the right number and types of server nodes and disk.”

Daniel Binsfeld, VP of Customer Services at Scality, notes, “In practice, performance is not an issue for our customers, because we deliver much more than they typically require for their file-based use cases. Our two-tier architecture delivers an average of 40 ms for 100 KB object writes and reads when deployed solely on HDDs, and under 20 ms when deployed on a mix of SSDs and HDDs.”

New Supervisor features

Leveraging the experience gathered from existing customer deployments, Scality has redesigned its management system, known as the Supervisor. This management platform, which has its own web GUI, can also be integrated into existing customer management systems through a REST interface, SNMP, or command-line functions and scripting. The new Supervisor considerably simplifies the management of large numbers of servers, up to thousands.

Furthermore, the new management platform keeps track of many more indicators on the health and operations of each storage node, greatly simplifying performance tuning, troubleshooting and capacity planning operations.

Availability and pricing

Scality RING version 4.0 has been in beta deployment since January 2012 and is available immediately. It continues to be priced per capacity on a “pay-as-you-go” basis.

Jérôme Lecat concludes, “Scality RING 4.0 is a major evolution for Scality, but it also represents a redefinition of object storage. We fully expect this new development to influence the way that end users view the use case for object storage. It further demonstrates our capacity to develop truly innovative technologies, perfectly aligned with the growing market for file-based storage at large scale.”

Learn more

¹ Durability: according to the Amazon web site, “durability (with respect to an object stored in S3) is defined as the probability that the object will remain intact and accessible after a period of one year. 100% durability would mean that there’s no possible way for the object to be lost, 90% durability would mean that there’s a 1-in-10 chance, and so forth.”

² With the combination of Scality’s ARC feature and new designs from leading hardware manufacturers handling up to 70 large-capacity disks per server, a 500-node RING can contain up to 80 PB of original data.

Additional information, including a datasheet for Advanced Resilience Configuration (ARC), is posted on our website at scality.com.