It has become a cliché, but the pace of change in our world of technology continues to accelerate at a bewildering rate. In business, the common adage is that people and data are the most valuable assets, and it is clear that information and knowledge are key creators of value.
In some areas this is obvious, such as the digital realms of movies and music; for proof, just scan your monthly credit card bill for streaming service subscriptions. Also, take a moment to think of truly impactful digital content: the MRI that aids a doctor in early disease detection, the genome data that helps unlock a cure, and the convenience of planning our daily lives online for work, family, travel and entertainment.
Guiding the future: A shift in how data is stored and consumed
Behind the scenes of all of this business data, we see a fundamental shift in how data is stored and consumed. Data is now created practically everywhere. People generate data in both their business and personal lives, but we also see machine-generated data being created at a massive pace in manufacturing plants, utilities, vehicles and more. Data lives in our homes and cars, on cruise ships and airplanes, in hospitals and sports stadiums, and in many more places than we can list.
For enterprises, this reality means planning must include infrastructure to store, protect, consume and manage data anywhere, in all of these places. For technology vendors like ourselves, this translates into data everywhere, from the data center to the cloud and to the emerging “edge” — and this edge is a dramatically growing area of technology innovation and consumption.
A shift in user persona: The democratization of data storage
Ten to twenty years ago, there was a classical IT persona who managed storage within the enterprise data center: the storage administrator. These deeply knowledgeable technical professionals understood that protecting data was key to their business's success, and that making it consumable to the right people (and only the right people!) was the primary objective of their jobs. Understanding how data is stored, its formats, and how it is accessed and consumed gave rise to a specialized class of users who knew the speeds and feeds of storage and spoke fluently in the language of technical data storage acronyms.
Increasingly, the process of capturing, protecting and giving access to data storage no longer rests solely in the hands of enterprise IT. It has become the domain of a broad range of application owners and technical architects, and has elevated the role of development operations, or “DevOps,” teams. These people now make critical decisions within enterprises for solutions — which encompass applications, people, processes and infrastructure — and they make those decisions more independently than before.
A shift in workloads: The rise of cloud-native
The rise of new business applications in recent years is clear. Whereas we used to hear about enterprise resource planning (ERP) and business process re-engineering (BPR), we now hear about data lakes, big data analytics, artificial intelligence and machine learning. These workloads are driving major changes in data: how much of it needs to be stored and how it gets consumed.
From a technical perspective, such workloads embrace new principles and methodologies in application design, development and deployment. This new wave, termed cloud-native, includes distributed software services packaged and deployed as containers and orchestrated on Kubernetes. The promises of this new approach include efficiency, scalability and — very importantly — portability. Portability allows software applications and infrastructure to support the new dynamic we described earlier: data is created and lives everywhere.
From a storage perspective, cloud-native applications will also change how storage is accessed, provisioned and managed. This is a world of software services that interact through well-defined interfaces, or APIs. Storage has historically been an area where standard interfaces are adopted; in the realm of file systems, for example, we have the well-known SMB and NFS protocols. For cloud-native applications, API-based access to storage is a natural fit, and object storage supports it directly through its RESTful APIs. The popular Amazon S3 API is now fully embraced by independent software vendors (ISVs) and storage vendors alike for the cloud, the data center and the edge. APIs also apply to storage management and monitoring, and we see API-based automation as another central theme in this cloud-native wave.
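To make the "RESTful API" point concrete, here is a minimal sketch of how an S3-compatible GET request can be authorized with a presigned URL using AWS Signature Version 4 — the same request shape works against any S3-compatible object store, whether in the cloud, the data center or at the edge. The endpoint, bucket, object key and credentials below are hypothetical placeholders, and the code is an illustrative sketch rather than a production signer.

```python
import hashlib
import hmac
from urllib.parse import quote


def _hmac_sha256(key: bytes, msg: str) -> bytes:
    """One step of the SigV4 key-derivation chain."""
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()


def presigned_get_url(endpoint, bucket, obj_key, access_key, secret_key,
                      region="us-east-1", amz_date="20240101T000000Z",
                      expires=3600):
    """Build a presigned GET URL for an S3-compatible object store (sketch)."""
    datestamp = amz_date[:8]
    scope = f"{datestamp}/{region}/s3/aws4_request"
    host = endpoint.split("//", 1)[1]

    # Query parameters that declare the signing algorithm and credential scope.
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    canonical_query = "&".join(
        f"{quote(k, safe='')}={quote(v, safe='')}"
        for k, v in sorted(params.items())
    )

    # Canonical request: method, path, query, headers, signed headers, payload.
    canonical_request = "\n".join([
        "GET", f"/{bucket}/{obj_key}", canonical_query,
        f"host:{host}\n", "host", "UNSIGNED-PAYLOAD",
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])

    # Derive the signing key: secret -> date -> region -> service -> request.
    key = _hmac_sha256(
        _hmac_sha256(
            _hmac_sha256(
                _hmac_sha256(b"AWS4" + secret_key.encode(), datestamp),
                region),
            "s3"),
        "aws4_request")
    signature = hmac.new(key, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()
    return (f"{endpoint}/{bucket}/{obj_key}?"
            f"{canonical_query}&X-Amz-Signature={signature}")


# Hypothetical endpoint and credentials for illustration only.
url = presigned_get_url("https://s3.example.com", "scans", "mri-0042.dcm",
                        "AKIDEXAMPLE", "wJalrXUtnFEMI/K7MDENG")
```

Because the authorization travels in the URL itself, any HTTP client — a browser, `curl`, or a container running in a Kubernetes pod — can fetch the object with a plain GET, which is exactly the kind of simple, API-based access cloud-native services are built around.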
A future-proof solution for a new world: Lightweight, cloud-native object storage
Offering portability, API-based access, automation, and scalability to effectively unbounded levels, object storage brings all of the right ingredients together to be the optimal storage model for the new cloud-native world. Next-generation object storage solutions can and will go further in providing higher levels of performance for new applications and workloads, and will also provide simplicity of operations to ensure that wider ranges of users will be able to fully exploit them.
With that, we are excited to announce the launch of Scality ARTESCA, lightweight, cloud-native object storage software. Our efforts are aligned with the new dynamics, forces and trends of the last five years, and the innovation itself has been in development for four years, during which we have gained deep experience in cloud-native methods and the new world of Kubernetes. As such, ARTESCA is cloud-native software: distributed services running in containers, deployed and orchestrated on Kubernetes. This means it lives in the same ecosystem and environment that applications and users engage with, and fits naturally into the consumption and management style that Kubernetes users expect.
Key Application Characteristics
- S3 object API access
- Data-intensive processing and I/O workloads
- High concurrency and performance (throughput, low latency and S3 ops/sec)
- Container and Kubernetes-based
Also unique to ARTESCA is the fact that it’s both lightweight and enterprise-grade — a challenging combination of capabilities to bring together. ARTESCA can start very small on a single VM or server and grow easily, one server at a time. Moreover, it brings to the table the full complement of enterprise-grade multi-tenancy, identity management, security, data durability, geo-replication, multi-cloud data management and systems monitoring.
In an accelerated, app-centric world of increasing complexity and shifting demands, a new approach to data management and delivery is essential. ARTESCA will enable a new generation of cloud-native applications through their entire lifecycle, from development and test to production deployments, wherever your data lives.