To eliminate the bottleneck of moving data between storage and processing, Scality has introduced native support for storing and processing Hadoop data in place: the same cluster both stores the data and runs the compute, so nothing needs to be shuttled between separate storage and processing tiers. With Scality SOFS, files can be written using standard file protocols such as NFS and then processed with Hadoop MapReduce without first being copied into HDFS. Hadoop users can now take advantage of Scality's enterprise-scale RING, a scale-out storage infrastructure designed for petabyte- and exabyte-scale deployments. With data volumes doubling roughly every eighteen months, Hadoop-based analytics demands exactly this kind of shift in storage paradigm.
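To make the "process in place" idea concrete, the sketch below shows how a Hadoop cluster is typically pointed at an alternative Hadoop-compatible filesystem via `core-site.xml`. The URI scheme (`sofs`), property names, and connector class shown here are illustrative assumptions, not Scality's documented values; consult the RING connector documentation for the actual settings.

```xml
<!-- Illustrative core-site.xml fragment: registers a hypothetical
     SOFS filesystem connector so MapReduce jobs can read and write
     sofs:// paths directly, with no copy into HDFS. -->
<configuration>
  <property>
    <!-- Assumed property name/class; the real connector's values
         may differ. -->
    <name>fs.sofs.impl</name>
    <value>com.example.hadoop.SofsFileSystem</value>
  </property>
  <property>
    <!-- Optionally make SOFS the default filesystem, so bare paths
         resolve against the RING instead of HDFS. -->
    <name>fs.defaultFS</name>
    <value>sofs://ring/</value>
  </property>
</configuration>
```

With a configuration like this in place, a stock job such as `hadoop jar hadoop-mapreduce-examples.jar wordcount /input /output` would operate on RING-resident data directly, while the same files remain accessible over NFS.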
Scality RING for Hadoop