A behind-the-scenes look at Scality’s six-year run as a Leader.
At Scality, we’re extremely grateful to be named a Leader in the Gartner Magic Quadrant for the sixth year running. We’re humbled to be among the very best vendors in our class – an honor bestowed on only a few.
Gartner is a gold standard for IT vendor evaluation. The Magic Quadrant is a deep dive into our value proposition and our business and product health. As vendors, we gear up annually for this analysis of our business. For me, that means a lot of espresso (Peet’s Cafe Latte is a personal fav) and burning the midnight oil.
Examining Gartner methodology
Gartner analysts use many metrics and measures to evaluate vendors, but the two primary criteria are ability to execute and completeness of vision.
The important question is what’s behind this recognition and why it matters for Scality customers. We believe it provides our customers with third-party validation of, and confidence in, the technology investments they’re making with us. It’s also a way to understand different vendors’ strengths and weaknesses and to gain guidance for the future.
Execution: Real-world customer deployments and growth
We see execution as the Gartner measure of a vendor’s business: performance in terms of revenue growth and customer acquisition, and the overall functioning of the business from an operations perspective. Gartner also evaluates product execution. To provide a better understanding of how our products are being used, we shared real-world customer stories, including deployments across the globe that validate our ability to sell and support solutions in at least three major regions.
Vision: Keeping customers at the center
The other key axis of the Magic Quadrant is vision, which focuses more on the areas where the company will center its products and solutions in the coming years.
We keep our focus on three core customer data challenges:
- Data growth: If customers only had to manage a few terabytes, the scale of both the problems and the solutions would be far smaller. Most larger companies today already manage petabytes of data in aggregate, and much of that data is valuable enough to warrant retaining for the long term (months to years). This growth curve shows no sign of slowing down. The era of individual enterprises managing exabytes of data is coming; for the largest institutions, it’s already arrived.
- Data everywhere: Historically, data was created and consumed mainly in large IT data centers. Over the past 10+ years, data has become increasingly distributed across mobile devices, public clouds and data centers. The variety of new edge locations that create and consume data is also increasing – from stadiums and airports to cruise ships, autonomous vehicles, hospitals and more. New data will largely come from machine-generated sources rather than human ones. From a data perspective, we’re in a hyper-distributed world.
- New cloud-native workloads: We’ve seen new data-intensive workloads come online in big data analytics, AI/ML and media content distribution. We expect new cloud-native applications to further drive new content and data capacities as companies undergo application modernization. Kubernetes will also fundamentally change how storage is provisioned and consumed, so that storage fits more naturally into this paradigm.
So how has our vision shifted over the past six years?
To make sure a vision sticks, you have to demonstrate a focused effort that shows steady progression. Our vision has evolved significantly over the past six years, a reflection of the changing customer needs and market trends we saw forming at each stage. Vision is the culmination of countless discussions with analysts, partners, internal stakeholders and of course, our customers. The future is never certain and things will inevitably change, but our vision is shaped by the needs of our customers, both today and in the future. Let’s look at how our vision and our solutions have adapted to changing market needs.
Data is bigger than ever
Around 2014, it became clear that enterprises had petabytes of unstructured data they preferred to store in-house versus the public cloud. Regulatory compliance and the value gained from historical insights required data retention for years. Enterprise customers were concurrently establishing their own “private” cloud environments to gain agility, and they needed a storage service layer that fit into them. For Scality RING, this meant:
- Integration into private cloud environments
- Ability to deploy and operate in secure, offline (and sometimes dark) environments
- Simplified, UI-based management capabilities
- Security integrations with enterprise services and encryption (AD, LDAP, Kerberos & KMS)
The emergence of multi-cloud data
By 2016/2017, many of our customers required multi-cloud data capabilities, a need sparked by data living on-premises and in some combination of service provider, AWS, Azure and Google clouds. The need for data mobility started with requirements for data migration and evolved into more sophisticated policies for data collaboration, data protection and sovereignty. Our customers were dealing with heterogeneous storage environments, using a mix of NAS, scale-out NAS, SAN (with file systems), object storage, backup devices, clouds and tape to store unstructured data. The need for a storage-agnostic multi-cloud data management solution led to Zenko; the need for hybrid cloud data management capabilities was delivered in RING8. We introduced features to solve three key use-case patterns (hybrid-cloud data archiving, data bursting and data disaster recovery) and we added integrated search capabilities.
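To make the hybrid-cloud archiving pattern concrete: with an S3-compatible endpoint that supports lifecycle configuration, an archiving policy can be expressed as a standard S3 lifecycle rule. The sketch below is illustrative only; the bucket name, endpoint and transition details are hypothetical examples, not Scality-specific settings.

```python
import json

def archive_policy(days: int, storage_class: str = "GLACIER") -> dict:
    """Build an S3 lifecycle configuration that transitions objects to a
    colder tier after `days` days -- the hybrid-cloud archiving pattern."""
    return {
        "Rules": [
            {
                "ID": f"archive-after-{days}-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects in the bucket
                "Transitions": [
                    {"Days": days, "StorageClass": storage_class}
                ],
            }
        ]
    }

policy = archive_policy(90)
print(json.dumps(policy, indent=2))

# With boto3, the policy would be applied to a bucket roughly like this
# (endpoint URL is a hypothetical example):
#   s3 = boto3.client("s3", endpoint_url="https://s3.ring.example.com")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="archive-bucket", LifecycleConfiguration=policy)
```

Expressing the policy as plain S3 lifecycle JSON keeps it portable across any S3-compatible backend, which is the point of a storage-agnostic approach.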
Our customers now use our solutions to manage data across their private and public clouds. There’s also a growing need for data management to/from new edge locations. Data is everywhere, and solutions must be able to store and manage that data where it lives.
New cloud-native application workloads
Cloud-native applications and the emergence of a new class of data-heavy application workloads in AI/ML and big data analytics are having a major impact. They are changing demands on storage dramatically, from both a consumption (access) and a management (provisioning) perspective. The natural direction here is toward more automation and, therefore, fully API-driven actions. Automation and management simplification will be a central aspect of our vision moving forward.
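A minimal sketch of what “fully API-driven” provisioning can look like, assuming an S3-compatible endpoint: a pipeline creates a bucket and enables versioning with no manual steps. The function takes the client as a parameter so any S3-compatible backend can be plugged in; the endpoint and bucket names are hypothetical.

```python
def provision_bucket(s3_client, name: str) -> None:
    """Provision a versioned bucket entirely through API calls --
    no console clicks, so the step can run inside CI/CD automation."""
    s3_client.create_bucket(Bucket=name)
    s3_client.put_bucket_versioning(
        Bucket=name,
        VersioningConfiguration={"Status": "Enabled"},
    )

# Usage with boto3 against a hypothetical S3-compatible endpoint:
#   import boto3
#   s3 = boto3.client("s3", endpoint_url="https://s3.ring.example.com")
#   provision_bucket(s3, "analytics-data")
```

Because the client is injected rather than hard-coded, the same provisioning step works against a private deployment or a public cloud, which is what makes storage fit naturally into automated, Kubernetes-style workflows.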
Expectations of storage system capacity will be stretched even further, along with the number of applications, users and data objects and the demands for concurrency and uptime. This will require data storage systems that are more adaptive, able to solve more and bigger problems and help reduce heterogeneous silos. They also need to be more sustainable, so they can maintain data for the long term.
In the end, this look back on six years of Gartner vision reminds us that the only constant in our industry is change. We look forward to many years of solving these emerging data problems and challenges.