A key goal for version 5.0 of the Scality RING was to simplify the installation and configuration. We wanted to make it much easier for customers and partners to get started with the RING. This article will show you how the installation of our petabyte-scale distributed system now takes a matter of minutes (vs. hours).
For this article, we used a small lab built on KVM virtual machines. In production, the RING is typically deployed on density-optimized servers with a mix of high-capacity spinning disks and a few SSDs to accelerate metadata operations.
We’ve integrated our installer with Salt (aka SaltStack), a popular configuration management tool, and you can see here that I have a number of “minions” ready to go. We provide a single script that you execute across however many nodes you’d like.
We execute this from the server that will be used to manage the environment. This server is called the “supervisor,” and it provides a user-friendly GUI as well as an advanced CLI for centralized management.
Now, you can see that the installer discovered a couple of different hardware configurations. We have nodes that we will use as connectors and nodes that will be used as back-end storage servers. I’m going to break these out into groups to make it easier to install the packages I want. This comes in really handy when you have tens, hundreds, or even thousands of servers to install to.
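Since the installer is built on Salt, one way to picture this grouping is Salt’s own nodegroups feature in the master configuration. The group names and minion IDs below are hypothetical lab values, not something the installer generates for you:

```yaml
# /etc/salt/master -- hypothetical node groups for a small lab
nodegroups:
  connectors: 'L@conn01,conn02'          # nodes that will serve client traffic
  storage: 'L@store01,store02,store03'   # back-end storage servers
```

With groups defined, a command such as `salt -N storage test.ping` targets only the storage servers, which is exactly the kind of shorthand that pays off at hundreds of nodes.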
I’m going to apply a couple of roles to the storage servers here. This is because we can actually use the storage servers themselves as entry points into the system, directly accessible to clients. Because this is a demo system, I also want to be able to show a separate access layer that can be scaled independently.
I can select which data protection mechanisms I’d like to use. I’m going to choose a combination of replication and erasure coding. I can set a policy threshold to automatically choose which method to use based on file size, and the installer will pick an erasure coding schema for me based on the size of the system and Scality’s recommended best practices.
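The size-based policy amounts to a simple dispatch rule: small objects are replicated, large ones are erasure coded. Here is a minimal sketch of the idea; the threshold value, the 9+3 schema, and the function name are illustrative assumptions, not Scality’s API or defaults:

```python
# Illustrative sketch of a size-based data-protection policy.
# The 60 KB threshold and the 9+3 schema are hypothetical examples.

POLICY_THRESHOLD = 60 * 1024  # bytes; objects below this are replicated

def protection_for(object_size: int) -> str:
    """Pick a protection mechanism based on object size."""
    if object_size < POLICY_THRESHOLD:
        return "replication(3)"       # full copies: fast and cheap for small objects
    return "erasure_coding(9+3)"      # 9 data + 3 parity fragments: space-efficient

print(protection_for(4 * 1024))          # a small file
print(protection_for(10 * 1024 * 1024))  # a large file
```

The trade-off the policy automates: replication keeps small-object reads fast, while erasure coding stores large objects with far less overhead than three full copies.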
Now, I create the first ring in the system. If these servers had SSDs, we would also create a second ring for metadata.
We can use standard Linux methods to update the packages in the future, so I’ll put in my credentials for the yum repository here.
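Those credentials end up in an ordinary yum repository definition, which is why later updates are just a `yum update` away. A hypothetical repo file might look like this; the URL, repo name, and credential form below are placeholders, and the real repository details may differ:

```ini
# /etc/yum.repos.d/scality.repo -- hypothetical example
[scality]
name=Scality RING packages
baseurl=https://user:password@repo.example.com/scality/el7/
enabled=1
gpgcheck=1
```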
That’s all the input the installer needs from me. Now it completes everything else on its own: calculating and distributing the keyspace, formatting drives, and installing all the packages.
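Distributing a keyspace is conceptually similar to consistent hashing: each storage node owns a slice of a fixed key range, and an object’s key determines which node holds it. The sketch below illustrates that general idea only; it is not Scality’s actual algorithm, and the node names are made up:

```python
import hashlib

# Minimal consistent-hashing sketch: nodes are placed on a ring of
# 2**16 positions, and a key belongs to the first node at or after
# the key's hash position (wrapping around at the end).

RING_SIZE = 2 ** 16

def position(name: str) -> int:
    """Map a node name or object key to a position on the ring."""
    return int(hashlib.md5(name.encode()).hexdigest(), 16) % RING_SIZE

def build_ring(nodes):
    """Sort nodes by their ring position."""
    return sorted((position(n), n) for n in nodes)

def owner(ring, key: str) -> str:
    """Find the node responsible for a given key."""
    pos = position(key)
    for node_pos, node in ring:
        if node_pos >= pos:
            return node
    return ring[0][1]  # wrapped past the last node

ring = build_ring(["store01", "store02", "store03"])
print(owner(ring, "object-42"))
```

The appeal of this family of schemes is that adding or removing a node only moves the keys in the affected slice, rather than reshuffling the whole keyspace.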
And that’s it. From here, we can log into the RING Supervisor GUI and manage the running system!