OpenDrives’ Atlas 2.1 Enables Massive Enterprise Scalability
General Availability of Atlas 2.1
To finish off an incredible (and incredibly productive) year, we at OpenDrives are announcing one more significant market release: the next version of our software platform and file system that powers all OpenDrives storage solutions. Slated for release this week, on October 7, 2020, Atlas 2.1 brings enormous power and flexibility to our storage lineup. Our focus with Atlas 2.1 has been to bring out the real potential of our hardware solutions; that potential lies in what we can enable through software capabilities rather than in hardware advances alone.
Through software-defined capabilities, we're now able to overcome significant constraints in a number of functional areas, especially deployment architecture and scalability. The thornier issues of scaling our storage solutions (both up and out) pushed us along an evolutionary path toward extreme scalability and modularity. A little background helps explain how much of an accomplishment this has been for OpenDrives.
OpenDrives and the path to massive scalability
OpenDrives didn't set out to become an enterprise software vendor. Our founders weren't trying to develop an enterprise storage solution for general consumption, at least not at the outset. In fact, we were all professionals immersed in the media and entertainment (M&E) industry who had run into serious storage-based impediments to getting our jobs done effectively and efficiently. Because all of our operations were running highly demanding, resource-intensive workflows, our founders discovered the overlooked and unaddressed bottleneck in our overall IT infrastructures: storage.
Because nobody else in the storage industry was addressing this performance bottleneck, our founders designed a new storage solution from the ground up for our own purposes. Our solution was fully dedicated to running the types of demanding workflows found in media and entertainment. But we also wanted to find ways wherever possible to reduce the complexities found in some of the market-leading storage solutions at the time, which took a scale-out approach to storage infrastructures, one that required more nodes, more moving parts, and lots of interconnected network components stitching it all together.
Our founders came at the problem from the opposite direction, creating a simpler, more streamlined scale-up NAS solution without all the complex, interconnected storage area network (SAN) components but with the extreme performance required to run M&E workflows. That first ultra-high-performance scale-up offering catalyzed our industry reputation for ultra-high performance and simplicity in a single device. Over time, we were handling 4K and 8K workflows well before our competition, and customers took notice.
With initial success also came the ability to tap into what our customers were seeing and experiencing with our solutions in production environments. What we began to hear was that, while our scale-up solutions enabled extreme performance and throughput, they were hitting a barrier at the upper threshold of capacity and performance. We discovered that scale-up performance and capacity don't climb forever without repercussions, and that adding a new scale-up "silo" wasn't necessarily the best answer either, especially when factoring in ever-growing capacity requirements. In fact, new storage silos created additional deployment and management issues. Our customers began telling us that scale-out capabilities were what they really needed, but without sacrificing the scale-up performance they had come to expect from us. Performance reductions were simply not negotiable.
For the past several years, resolving that customer need has been our north-star goal. We wanted OpenDrives storage solutions to incorporate an architecture with the endless scalability potential of scale-out storage while retaining scale-up performance and ease of use (adding complexity to the user and management experience was equally non-negotiable). Our Atlas software platform and filesystem are largely behind our ability to achieve this goal.
Is this hyperscale, or something else entirely?
Atlas 2.1 offers an abundance of new features and functions, but core to our ability to scale massively outward while retaining scale-up performance is our storage clustering architecture. Atlas enables the creation of clusters of individual scale-up devices: a single, independent scale-up device can be coupled with others (or aggregated with many) to create scale-out clusters. This parallel distributed architecture balances workloads among the cluster nodes without incurring performance hits such as increased latency, because Atlas ensures that individual files remain intact – they are not distributed in parts across different nodes – and never have to be reaggregated. Each node contains all the compute, storage, and network resources it needs to act independently of every other node in the cluster. Scalability and centralized management are achieved without sacrificing performance.
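To make the whole-file placement idea concrete, here is a minimal illustrative sketch (the node names, fields, and `place_file` function are hypothetical, not the Atlas API) of routing an entire file to the least-busy node rather than striping it across nodes:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One self-sufficient scale-up node with its own compute, storage, and network."""
    name: str
    active_jobs: int = 0
    files: list = field(default_factory=list)

def place_file(cluster, filename):
    """Whole-file placement: pick the least-busy node and keep the file
    intact on that one node (no striping, so no reassembly on read)."""
    target = min(cluster, key=lambda n: n.active_jobs)
    target.files.append(filename)
    target.active_jobs += 1
    return target

cluster = [Node("node-a"), Node("node-b"), Node("node-c")]
place_file(cluster, "feature_reel_8k.mov")   # lands whole on node-a
place_file(cluster, "color_pass_4k.mov")     # lands whole on node-b
```

Because each file lives whole on exactly one node, a read never waits on other nodes to reassemble parts, which is why this style of distribution avoids the latency penalty of striped layouts.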
In the larger IT infrastructure industry, this notion of scaling out underpins the concept of hyperscale. The term connotes resources (compute devices such as workstations and servers, for example) with lower performance but a much cheaper price point, making broader scale the more cost-effective fix, as long as extreme performance isn't the goal. While our Atlas software enables parallel distributed functionality, it also delivers the ultra-high-performance characteristics our customers have come to expect. As such, hyperscale is not the best way to characterize our storage solutions: we achieve massive enterprise scale while retaining all the high-performance characteristics of our scale-up solutions.
Cluster nodes operate like individual kitchens
Nothing clarifies a technical concept like massive enterprise scale better than an analogy. A nice one I use to describe how nodes within our clusters process tasks and share a distributed load involves commercial kitchens. Imagine a single restaurant kitchen with all the ingredients (files) to cook specific dishes (projects/workflows). That kitchen can handle X number of customers and is finely tuned (scaled up) for an optimal workload. This high-performance kitchen represents a single node in a storage cluster: self-sufficient and not dependent on anything outside that kitchen alone.
But maybe your restaurant needs to grow to handle many more patrons. So you add a second complete kitchen just like the first, containing all the same ingredients and capable of producing the same meals. Now increase that to five discrete kitchens. How about ten, or more? You can certainly produce a lot of the meals you're intending to serve with those resources!
In comes a new order. Because any of your kitchens can handle it (every kitchen can produce any dish on the menu), an orchestrator can route that order to whichever kitchen is most idle and can best handle the new workload. As orders are completed, kitchens free up, while new orders keep coming in. With intelligent orchestration and routing – one of the roles Atlas plays in clustering – every kitchen hums along, serving all orders without backlog or latency in completing those tasks. Performance and scale are both achieved. Want to handle more? Just keep adding kitchens, without limits!
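The kitchen analogy maps fairly directly onto a least-busy scheduler. This toy sketch (hypothetical names and logic, not the Atlas implementation) routes each incoming order to the kitchen that is idlest at that moment and frees the kitchen once the order completes:

```python
import heapq

def route_orders(num_kitchens, orders):
    """Route each order (name, duration) to the kitchen that frees up
    soonest -- the 'most idle' kitchen when the order arrives."""
    # Min-heap of (time_kitchen_becomes_free, kitchen_id); all start idle at t=0.
    kitchens = [(0, k) for k in range(num_kitchens)]
    heapq.heapify(kitchens)
    schedule = []
    for name, duration in orders:
        free_at, k = heapq.heappop(kitchens)       # idlest kitchen right now
        schedule.append((name, k, free_at))         # order starts when kitchen is free
        heapq.heappush(kitchens, (free_at + duration, k))  # busy until done
    return schedule

orders = [("risotto", 20), ("steak", 15), ("salad", 5), ("pasta", 10)]
for dish, kitchen, start in route_orders(3, orders):
    print(f"{dish}: kitchen {kitchen}, starts at t={start}")
```

With three kitchens, the first three orders each land on an idle kitchen, and the fourth goes to whichever kitchen finishes first (the salad kitchen at t=5): work stays balanced and no order queues behind a busy kitchen while another sits idle.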
The road ahead for OpenDrives
Massive scalability like this, though, doesn't exist without the new capabilities built into Atlas 2.1. Moving forward, OpenDrives will continue to enrich the set of software-defined capabilities that unlock the real utility in our storage solutions. As you can see, we're moving beyond boutique, high-performance storage solutions tailored to the media and entertainment industry while still retaining all the differentiators we developed because of, and for, that market: ultra-high performance, throughput, and resiliency. Now you can add massive enterprise scalability to that list.