Scality RING delivers advanced capabilities and innovative solutions to real world problems:
Full Scale-Out Model
The Scality RING comprises multiple servers. Hundreds or even thousands of servers are connected to one another, aggregating capacity so as to deliver a gigantic storage pool to the application. There is no limit to the number of servers that can be used, no limit to the storage capacity and no single point of failure.
3rd Generation Object Storage
Distinct from traditional file and block protocols, object storage provides for scalability, independence from the client application and flexibility regarding the choice of hardware components.
All of the logic is controlled within the Scality RING software layer, thus avoiding hardware vendor lock-ins while providing for simple application integration.
Developers have the choice of a native object interface, a file system interface (with the SOFS pack), or a block interface, and they have their pick of storage gateways.
2nd Generation Peer-to-Peer (P2P)
Scality RING has a distributed structure and logical organization based on Distributed Hash Tables (DHT), with no centralized intelligence, catalog or hierarchy across servers. This eliminates common bottlenecks and ensures that there is no central point of failure.
Furthermore, the direct mapping between the key and the object itself creates a consistently linear relationship between an expanding number of available nodes and the usable capacity—without compromising performance.
Scality also uses a version of the Chord protocol to provide efficient internode routing and boost data delivery. Together, these characteristics make it possible to reliably support extremely demanding service level agreements (SLAs).
Scality’s Key/Value store affords an elegant and powerful solution to object location and access. It also delivers linear performance as capacity is increased. A key of 160 bits explicitly determines the placement of objects, as well as redundancy and SLA levels.
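As a rough illustration of key-based placement on a DHT, the toy sketch below hashes node names and object keys into a shared 160-bit space and assigns each key to its successor node on the ring, in the spirit of Chord. The node names, the use of SHA-1, and the successor rule are illustrative assumptions, not Scality's actual implementation.

```python
import bisect
import hashlib

class Ring:
    """Toy DHT: each node owns the arc of the 160-bit keyspace
    ending at its own position (Chord-style successor lookup)."""

    def __init__(self, node_names):
        # Place each node on the ring at the SHA-1 hash of its name.
        self.nodes = sorted(
            (int(hashlib.sha1(n.encode()).hexdigest(), 16), n)
            for n in node_names
        )
        self.positions = [pos for pos, _ in self.nodes]

    def lookup(self, key):
        """Return the node responsible for `key`: the first node at or
        after the key's position, wrapping around the ring."""
        pos = int(hashlib.sha1(key.encode()).hexdigest(), 16)
        i = bisect.bisect_left(self.positions, pos) % len(self.nodes)
        return self.nodes[i][1]

ring = Ring(["node-a", "node-b", "node-c", "node-d"])
print(ring.lookup("videos/cat.mp4"))  # the same key always maps to the same node
```

Because placement is a pure function of the key, no central catalog is consulted on lookup, which is what keeps the capacity/performance relationship linear.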
End-to-End Parallel Design
Scality RING affords a fully parallel architecture, including parallel data transfer from the application to the RING via connectors, thus maximizing resource utilization and delivering high performance. In addition, the relationship between storage nodes and IO daemons on servers gives RING a further element of parallelism, all the way to the physical storage.
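The parallel fan-out described above can be pictured with a minimal sketch, assuming a hypothetical `put_chunk` connector call (not a real Scality API) that writes one chunk to one storage node:

```python
from concurrent.futures import ThreadPoolExecutor

def put_chunk(node: str, chunk: bytes) -> str:
    """Placeholder for a connector write to one storage node."""
    return f"{node}: stored {len(chunk)} bytes"

chunks = [b"a" * 1024, b"b" * 1024, b"c" * 1024]
nodes = ["node-1", "node-2", "node-3"]

# Fan the chunks out to the nodes in parallel rather than serially.
with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
    results = list(pool.map(put_chunk, nodes, chunks))
print(results)
```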
Unlimited Objects and Files
Whether the RING uses object or file storage modes, there is no limit to capacity. Because everything is fully virtualized within the RING, the only limiting factor is the object key size. However, a key size of just 20 bytes (160 bits) provides sufficient space to store an object for every grain of sand on the planet.
There is also no limit to the size of the stored object, and objects can grow or become smaller during their lifetime. Objects exceeding a specifiable size can be automatically split based on application requirements and configuration.
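A minimal sketch of size-based splitting, using an assumed 4 MiB threshold (the real split size is set by application requirements and configuration):

```python
def split_object(data: bytes, max_chunk: int = 4 * 1024 * 1024):
    """Split an object into fixed-size chunks once it exceeds max_chunk.
    Illustrative only; the threshold here is an assumption."""
    if len(data) <= max_chunk:
        return [data]
    return [data[i:i + max_chunk] for i in range(0, len(data), max_chunk)]

obj = b"x" * (10 * 1024 * 1024)    # a 10 MiB object
chunks = split_object(obj)         # three chunks: 4 + 4 + 2 MiB
assert b"".join(chunks) == obj     # reassembly is lossless
```

For scale, a 160-bit keyspace holds 2**160 (about 1.46e48) distinct keys, which is why key size is never the practical limit.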
No Data Sharding
Scality RING requires no manual sharding. As a result, administrative complexity and deployment times are greatly reduced, and no manual process is required when there is a change in the size of the RING.
Data Redundancy and Protection
Three mechanisms are available to protect data: Replication, Geographic Distribution (Multi-Geo) and ARC (Erasure Code technology).
- Replication is the default method used to protect objects. Six copies (five replicas) is the default configuration on the RING. However, these settings are fully configurable on a per-object basis, and the parameters can be controlled from the Supervisor console. The copy mechanism for replication is managed at the connector level.
- Geo Distribution (aka Multi-Geo) is an option that enables RING-to-RING copies, providing for geo-redundancy and greater disaster recovery capabilities.
- ARC is Scality’s Erasure Coding technology, providing protection against simultaneous failures at the disk, server and network levels. This feature is built into the RING and offers a default configuration of 14 data fragments and 4 checksum (parity) fragments, though it is fully configurable. ARC (14,4) requires only 30% storage overhead, far less than a replication solution, while providing protection against four simultaneous failures. For more information about ARC, please see the ARC data sheet.
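The overhead arithmetic behind these numbers can be checked directly: with 14 data fragments and 4 parity fragments, the extra raw capacity is 4/14, roughly 29% (rounded to 30% above), versus 200% for keeping three full copies.

```python
def storage_overhead_ec(data_frags: int, parity_frags: int) -> float:
    """Extra raw capacity needed per byte of user data with erasure coding."""
    return parity_frags / data_frags

def storage_overhead_replication(copies: int) -> float:
    """Extra raw capacity with N total copies (N - 1 replicas)."""
    return float(copies - 1)

print(f"ARC(14,4): {storage_overhead_ec(14, 4):.0%} overhead, "
      f"tolerates 4 simultaneous failures")
print(f"3 copies : {storage_overhead_replication(3):.0%} overhead, "
      f"tolerates 2 simultaneous failures")
```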
Applications access their data through connectors deployed on application servers, dedicated machines or RING servers. Multiple generic connectors exist, including REST/HTTP, SRWS, RS2 Light, full S3-compatible RS2 (with indexing, accounting and authentication), the Scale Out File System (SOFS) file interface, and block storage integration via a Scality partner offering. The SOFS connector delivers very fast and parallel file data access, as well as easy application integration. For more information about SOFS, see Scale Out File System in the Product>Features section of this website.
Scality RING includes a comprehensive, fully automated mechanism that detects failures at the object or node level and rebuilds objects without manual intervention. This guards against silent data corruption due to disk errors. This functionality is fully configurable, allowing the administrator to accelerate data resynchronization or to give priority to a particular application. This capability serves to satisfy even the most stringent SLAs.
In response to configuration or topology changes, such as the loss or addition of nodes, RING automatically rebalances the keyspace across the storage nodes. This feature is configurable through the Supervisor, allowing the administrator to tailor specific parameters to boost or limit the balancing function.
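To see why a topology change only relocates a small share of the keyspace, the sketch below uses rendezvous (highest-random-weight) hashing, a stand-in for RING's actual DHT rebalancing chosen purely because it makes minimal movement easy to measure: adding a fifth node moves roughly one key in five, while the rest stay put.

```python
import hashlib

def owner(key: str, nodes: list) -> str:
    """Rendezvous hashing: each key goes to the node with the highest
    hash score, so a membership change only moves the keys the
    new node 'wins'. Node names here are illustrative."""
    return max(nodes, key=lambda n: hashlib.sha1(f"{n}/{key}".encode()).digest())

keys = [f"obj-{i}" for i in range(10_000)]
before = {k: owner(k, ["n1", "n2", "n3", "n4"]) for k in keys}
after = {k: owner(k, ["n1", "n2", "n3", "n4", "n5"]) for k in keys}

moved = sum(before[k] != after[k] for k in keys)
print(f"{moved / len(keys):.1%} of keys moved")  # close to 1/5 of the keys
```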
RING is elastic and can scale up or down with complete transparency. The RING engine was developed to be fully autonomous, and to react to changes in load due to disk or network failures, hardware additions and configuration adjustments.
Scality allows for the prioritization of objects within a RING or across multiple RINGs. By hierarchically arranging objects across tiers, more active data can be kept on faster primary storage (e.g., Flash/SSD), while less active data is migrated to the slower, less expensive, capacity-oriented hardware (e.g., SATA) of secondary tiers. This allows service providers to align the value of customer data with the cost of the storage, delivering on their SLAs while reducing the cost of doing so.
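A hypothetical demotion policy gives the flavor of such tiering; the 30-day threshold and tier names below are illustrative assumptions, not Scality parameters.

```python
import time

FLASH_TIER, SATA_TIER = "flash", "sata"

def choose_tier(last_access_ts: float, now: float,
                demotion_age_days: float = 30.0) -> str:
    """Hypothetical policy: keep recently accessed objects on flash,
    demote objects untouched for `demotion_age_days` to capacity disks."""
    age_days = (now - last_access_ts) / 86_400
    return SATA_TIER if age_days > demotion_age_days else FLASH_TIER

now = time.time()
print(choose_tier(now - 1 * 86_400, now))    # hot object stays on flash
print(choose_tier(now - 90 * 86_400, now))   # cold object demoted to SATA
```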
Multiple connectors in stateless mode are configured to maintain access to the data infrastructure. Parallel access to multiple storage nodes and multiple copies of the data itself allow for consistent and continuous service to applications, and allow the service provider to reliably deliver on very demanding SLAs.
The supervisor node provides administrators with granular control of the RING platform. It is usually configured on a dedicated machine that runs the Web GUI with a dashboard. It is also possible to manage the platform with the Command Line Interface (CLI) called RingSH.
Scality RING runs on standard Linux distributions (CentOS, Red Hat, Ubuntu and Debian) and commodity x86 servers. It is completely hardware agnostic, affording the freedom to select hardware based on price and convenience. There are no special constraints with regard to CPU, memory, network or disk type. Mix-and-match (mixed-model) deployment is also permitted, enabling you to leverage existing legacy hardware. Please contact Scality customer service for recommendations and guidelines with regard to optimal hardware and configuration settings.
SecludIT has certified the security excellence of Scality RING. These independent security experts validated the security of RING with the StaaS connector. For more information and a full description of the test and audit results, please refer to the Security whitepaper in the Library>Analyst Reports section (http://www.scality.com/research_reports/) of this website.