Continuous video and film resolution enhancement from 1080p to 2K, 4K, 8K, and 3D is fundamentally increasing storage requirements for nearline content archives and, especially, deep content archives. A completed film today often exceeds 1 PB of usable capacity, but that figure does not account for all of the files of the uncompleted film (i.e., material that will be utilized in director's cuts, commentary, documentaries, and more). Total material data can easily exceed 400 times the usable storage capacity of the finished film, resulting in geometric data storage growth that overwhelms traditional storage systems.
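A quick back-of-the-envelope calculation puts that growth in perspective. The 1 PB finished-film size and 400x material multiplier come from the figures above; the project count is purely an illustrative assumption:

```python
# Back-of-the-envelope archive sizing using the figures cited above.
# The 1 PB finished-film size and the 400x material multiplier come from
# the text; the number of projects is an illustrative assumption.
FINISHED_FILM_PB = 1          # usable capacity of one completed film, in PB
MATERIAL_MULTIPLIER = 400     # total raw material relative to finished film

material_per_film_pb = FINISHED_FILM_PB * MATERIAL_MULTIPLIER
print(f"One project: {material_per_film_pb} PB of total material")

# Even a modest slate of projects already pushes past the exabyte mark.
projects = 3
total_pb = projects * material_per_film_pb
print(f"{projects} projects: {total_pb} PB ({total_pb / 1000:.1f} EB)")
```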
This presents an exceedingly difficult problem even when all of the talent working on a project is co-located. The problem becomes crushing when the talent is distributed across different sites, and even continents, which is more and more often the case.
Key Rich Media Market Challenges
Digital preservation and digital conversion archive requirements are rapidly outpacing traditional storage solutions, which currently rely on massively complicated workarounds that frequently break, leading to lost productivity and lost data, and further complicating file availability. Rich Media storage systems must overcome the following obstacles to meet modern-day storage challenges:
- Incapacity to scale to the levels required by Rich Media's exponential deep archive data growth: Traditional solutions lead to storage system sprawl, with excessive expenditures of time and money on management, infrastructure, and ongoing data migrations, all in an ultimately futile effort to chase down the root causes of data storage and retrieval problems.
- Incompatibility with distributed and shared geographic access: Storage solutions must support a wide variety of workgroups, contractors, or customers located in dispersed geographic locations, with concurrent local access to verify, edit, change, add, convert, rework, or manipulate content.
- Excessive HA and DR costs and complications: With traditional storage, high availability (HA) and disaster recovery (DR) are predicated on storing multiple copies of data. To accommodate the explosion in data, expensive hardware and data center additions increase storage system complexity and ultimately lead to operator and customer frustration.
- Exceedingly high TCO: Total cost of ownership (TCO) is based on a model that does not work for Rich Media organizations, where content is king.
The sun never sets on working hours in a geographically distributed workforce; the era of 9 to 5 is over. Traditional storage cannot maintain that level of availability without incurring costs that are unfeasible for production and broadcast companies. During a tech refresh, data migration alone severely impacts storage uptime and availability. And in the corner cases where traditional storage can meet the challenges of scaling, geographic distribution, and availability, it does so at such an exceedingly high cost that it becomes financially untenable.
Essential Rich Media Requirements
Geographically dispersed talent makes uptime, all the time, a mandatory requirement: no downtime is allowed for scheduled or unscheduled events. Storage must be available 7 x 24 x 365 for all the demands placed on it, in a world that never sleeps. Decreasing margins and increasing competition have put added pressure on all Rich Media organizations to meet or exceed the following storage requirements without breaking the bank:
- Ability to scale to billions of objects while maintaining performance for all users without disruption: Rich Media storage must be able to provision PBs to EBs of capacity for billions of objects or files, and provide excellent performance regardless of where users are located.
- Ability to share files across geographically distributed locations: Files and data must be movable, based on policy, to where they are required, when they are required.
- Always available and online: Five nines (99.999%) availability, all the time.
- Competitive TCO: Tightening margins and rising technology costs are placing ever-increasing pressure on storage budgets.
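The five-nines figure above translates into a concrete annual downtime budget, which a quick calculation makes plain (using a 365-day year):

```python
# Downtime budget implied by "five nines" (99.999%) availability.
availability = 0.99999
minutes_per_year = 365 * 24 * 60              # 525,600 minutes in a year
downtime_minutes = minutes_per_year * (1 - availability)
print(f"Allowed downtime: {downtime_minutes:.2f} minutes per year")
```

In other words, five nines leaves barely five minutes of total downtime per year for scheduled and unscheduled events combined.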
Storage is clearly one of Rich Media’s largest cost components, with an outsized effect on whether or not requirements can be met. Rich Media nearline and deep archive content is already in the PBs, and quickly moving to EBs. It doesn’t take all that many projects, users, or files to rapidly consume available storage capacity. With the recognition that content is king, finding a cost-effective way to store, search, manipulate, and distribute content over the long-term is an absolute must for the production and broadcast industries.
This means storage must be adaptive, flexible, and always online, providing for transparent tech refreshes and enabling all scheduled and unscheduled maintenance to be performed without user disruptions—and these requirements must be delivered at the lowest possible cost. Given the industries’ broad revenue variability, a storage system with pay-by-the-drink (pay-per-use) pricing would seem to be a much better fit than the upfront pricing of traditional storage (pricing for fixed, preset storage capacity).
The Solution: Scality RING™ Organic Storage
Scality RING Organic Storage is architected from the ground up to meet and exceed all Rich Media nearline and deep archive requirements. It scales capacity into the exabytes, files or objects into the billions, and can do so over a geographically dispersed area. The scalability of the RING solution is a direct result of its unique Distributed Hash Table (DHT), an extraordinarily efficient lookup mechanism that enables storage and retrieval of very large numbers of files or objects at a very high level of performance.
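The general idea behind DHT-based lookup can be sketched with a minimal consistent-hashing ring. This is an illustrative sketch only, not Scality's actual implementation; the node names, key names, and choice of MD5 as the ring hash are all assumptions:

```python
import bisect
import hashlib

def ring_hash(value: str) -> int:
    """Map a string to a position on the hash ring (illustrative choice of MD5)."""
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    """Minimal consistent-hashing ring: each key is owned by the first
    node whose ring position follows the key's position, wrapping around."""
    def __init__(self, nodes):
        self.ring = sorted((ring_hash(n), n) for n in nodes)

    def lookup(self, key: str) -> str:
        positions = [pos for pos, _ in self.ring]
        i = bisect.bisect(positions, ring_hash(key)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["node-a", "node-b", "node-c", "node-d"])
print(ring.lookup("film_scene_42.dpx"))  # the same key always maps to the same node
```

Because lookup is a pure function of the key's hash and the node positions, any node can locate any object without consulting a central index, which is what lets this style of system scale to billions of objects.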
Scality RING Organic Storage provides unparalleled data, nodal, and system availability by leveraging its distinctive industry-hardened, carrier-grade peer-to-peer technology. The RING also comes with unequalled built-in system data resilience similar to an organic immune system. Every node constantly monitors a limited number of its peers, automatically rebalancing replicas and load to make the system fully self-healing without human intervention. Consistent hashing guarantees that only a small subset of keys is ever affected by a node failure or removal.
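The claim that only a small subset of keys is affected by a node failure can be demonstrated with a minimal consistent-hashing sketch (again illustrative only; node and object names are assumptions, and with many nodes or virtual nodes roughly 1/N of keys move on average):

```python
import bisect
import hashlib

def ring_hash(value: str) -> int:
    """Map a string to a position on the hash ring (illustrative MD5 hash)."""
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

def owner(key: str, nodes) -> str:
    """Find the node responsible for a key on a consistent-hash ring."""
    ring = sorted((ring_hash(n), n) for n in nodes)
    positions = [pos for pos, _ in ring]
    i = bisect.bisect(positions, ring_hash(key)) % len(ring)
    return ring[i][1]

nodes = [f"node-{i}" for i in range(10)]
keys = [f"object-{i}" for i in range(10_000)]
before = {k: owner(k, nodes) for k in keys}

# Simulate one node failing: only the keys it owned are reassigned,
# and they move to the next node on the ring. All other keys stay put.
survivors = nodes[1:]
moved = sum(1 for k in keys if owner(k, survivors) != before[k])
print(f"{moved} of {len(keys)} keys changed owner after one node failure")
```

The keys that move are exactly those the failed node owned; the rest of the system is untouched, which is what makes self-healing rebalancing tractable at scale.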
The RING also rebalances the data load automatically when a node fails, is removed or upgraded, or when new nodes are added. RING makes technology refresh a simple, online process with no application disruptions, eliminating data migration, long nights, and sleepless weekends. The result is a very high level of fault tolerance because the system stays reliable even with nodes joining or leaving the ring. Scality RING keeps costs low by enabling the use of standard off-the-shelf commodity server nodes, and through the use of a paradigm-shifting pay-by-the-drink pricing model. Unlike traditional storage, Scality RING charges are based on used capacity, not raw storage capacity, thereby assuring the lowest possible storage TCO.
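The difference between billing on used capacity and billing on raw capacity is easy to quantify. The per-TB rate and capacity figures below are hypothetical, chosen only to illustrate the billing model, and are not actual Scality pricing:

```python
# Hypothetical comparison of used-capacity vs. raw-capacity billing.
# All dollar figures and capacities are illustrative assumptions.
price_per_tb_month = 20.0   # hypothetical $/TB/month rate
raw_capacity_tb = 1000      # provisioned raw capacity
used_capacity_tb = 350      # capacity actually consumed

raw_billing = raw_capacity_tb * price_per_tb_month    # traditional: pay for raw
used_billing = used_capacity_tb * price_per_tb_month  # pay-per-use: pay for used

print(f"Raw-capacity billing:  ${raw_billing:,.0f}/month")
print(f"Used-capacity billing: ${used_billing:,.0f}/month")
print(f"Monthly difference:    ${raw_billing - used_billing:,.0f}")
```

Under these illustrative numbers, an organization using a third of its provisioned capacity pays roughly a third of what raw-capacity billing would charge, which is why the model suits industries with highly variable revenue.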
© 2012 Scality. All rights reserved. Specifications are subject to change without notice. Scality, the Scality logo, Organic Storage, RING, and RING Organic Storage are trademarks or registered trademarks of Scality in the United States and/or other countries.