Scality: Storing the Information Age

“I’m Jerome Lecat, CEO of Scality. I am extremely excited to share with you a massive transformation that will affect every one of us, in our personal lives, and in every business in every industry. This change is how data storage is done. In the next couple of minutes, our customers, our partners, our shareholders, and our leadership team will talk to you about this massive change, and about how Scality is at the core of it.”

“Software is really taking over, I think, every area of infrastructure. In the past, great companies have been built that developed proprietary hardware – and that includes storage, it also includes networking, it includes compute. And if you look at the trend we’ve seen over the last several years, it started in compute, with virtualization, where the physical hardware that you’re running on is really irrelevant, and the added value is all in software. We’ve now seen that go to networking and go to storage, so it’s been a great commoditization of the hardware involved. And the companies that are the next generation of leaders in the storage area are going to be companies whose core value is really in the software that they build, and that software is going to be able to run on almost any type of commodity hardware,” Mark Siegel, Managing Partner at Menlo Ventures.

“It used to be that when you bought storage, you were buying, you know, a million-dollar piece of hardware that rolled out of a big company’s factory. It was very complex, it had a lot of moving parts, and it scaled sort of vertically. Today, when you buy large-scale storage, you’re buying a distributed system that’s probably built on top of white box technology, and then you’re putting, you know, intelligent software on top of that distributed hardware. And, all of the intelligence, all of the “smarts” of the system, have really moved into the software layer,” Mark Muehl, Senior Vice President of Platform Technologies, Comcast.

“We think that, especially in the storage part, there will be more and more software-defined things coming up. Because people don’t want to have black boxes as hardware or anything. They want to use standard IT components. That’s what we also see in the broadcast environment,” Hans-Josef Lauer, IT Manager at RTL II.

“Petabyte-scale infrastructures tend to break everything. It’s interesting – the challenges people face when going to scale that size. And it’s not just the big guys anymore. You look at what data growth rates are across the board. Sooner or later everyone’s going to be a petabyte-sized data shop. It’s doubling, you know, almost every year. And with that doubling comes some really difficult challenges. First of all, just, how do you store it? And once it’s stored, how do you access it in a timely enough fashion to have, you know, value to the company, right? And when you do access it, is it accurate? Those are all really difficult things to do that only get amplified as the size gets increasingly larger,” Jim Dawson, Operating Partner at Menlo Ventures.

“Certainly storage needs to scale. The more storage you give users, the more they will use. The more they use, the more storage you need to give them. So, storage needs to scale to very, very large sizes and very large update rates, and storage also needs to be shared between users who are spread worldwide. Therefore geographical replication – geographical span – is important,” Marc Shapiro, Research at INRIA and distributed systems expert.

“The applications are changing. They’re evolving to be more cloudlike. Data points are being generated all over the place. We are seeing a need for an evolution in the storage space, and object storage represents that. And the vision that HP has for where we want to go with object storage on servers aligns very well with Scality’s vision. That sort of reliability, that sort of flexibility, is what our customers are looking for. And I agree with it – I believe that’s where our customers are, and I believe that’s HP’s vision as well,” Joseph George, Director of Big Data Servers and Solutions, HP.

“Changing the world takes investment. We are changing the data storage world and we are very excited to announce a new $45 million round of funding, bringing the total invested in the company since the beginning to over $80 million,” Jerome Lecat.

“We’re now six years old, and we’ve made sure our go-to-market principles rest on three simple pillars: first, software-only; second, indirect sales – hence the alliances with the likes of HP or Dell; and third, an initial focus on service providers that we’re now broadening to large enterprises and targeted verticals. What the funding allows us to do is fuel and accelerate that strategy – reaching out to new geographies, and reaching out to more partners who are targeting the same large enterprises and verticals we focus on,” Erwan Menard, COO, Scality.

“The funding is going to allow us to accelerate existing development: advanced file system features, with redundancy and hardware performance. Scality does a lot of research internally today, and with the funding, we are going to be able to migrate some of those projects to engineering. Among them are file and object interoperability, multi-geo services, as well as security,” Giorgio Regni, CTO, Scality.

“The world is changing. In the information age, the businesses who own the data will win. To continue winning, businesses will need to harvest, store, mine and distribute ever more data for – and about – their customers, as well as about their production processes. Our job at Scality is to bring to the world the software to securely store all this data and make it readily available at all times from everywhere,” Jerome Lecat.

“The main technical benefit we experienced from implementing Scality RING was the ease of use on off-the-shelf hardware. There were near-to-no issues during the implementation of the project, because everything worked as advertised,” Pierre-Yves Karembellec, Head of Architecture, Dailymotion.

“Really, we wanted a solution that could evolve over time. We wanted to be able to continue to grow, to be able to add more nodes, and with software-defined storage we’re able to do that – and evolve across technologies while using commodity hardware underneath. It’s easily expandable that way as well. You’re not tied down to one particular piece of hardware; you can move to a new technology over time and keep adding new nodes, and your data is still preserved. One of the biggest problems is, you know, once you get petabytes and petabytes of data, it’s not easy to move or copy off. And so, you really have to pick a long-term solution,” Allan Lamkin, SVP and CTO, Deluxe OnDemand.
