What is the most important factor when building a data platform that delivers performance at scale? Making sure it is actually scalable! And that is exactly what we do at WEKA.

When it comes to managing performance and capacity, you must keep the platform from getting bogged down by legacy storage rebalancing methods. That’s why we do things a little differently at WEKA with our dynamic cluster data rebalancing.

In this blog, we will discuss the traditional methods of storage rebalancing and how WEKA approaches the problem in a new and innovative way.

What is Rebalancing?

In legacy storage systems, data is spread across multiple nodes in a cluster. This helps balance things out and make sure everything is running smoothly. The goal is to make the most of the system’s capacity and optimize its performance based on available data, usage, and resources.

But over time, usage patterns change, and the original data placement is no longer optimal. That’s where storage rebalancing comes in. It’s the process of moving data to different storage nodes so that the load is balanced and resources are used as efficiently as possible.

In other words, storage rebalancing helps keep your storage system running like a well-oiled machine!
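To make this concrete, here is a minimal sketch in Python, deliberately simplified and not any vendor’s actual algorithm: measure how full each node is, then plan moves from the most-loaded node to the least-loaded one until the spread falls within a tolerance.

```python
# A minimal sketch (not WEKA's implementation) of the basic rebalancing idea:
# compute per-node utilization and plan moves from the most-loaded node to the
# least-loaded one until the spread falls within a tolerance.

def plan_rebalance(node_usage_gb, tolerance_gb=10):
    """node_usage_gb: dict mapping node name -> used capacity in GB."""
    usage = dict(node_usage_gb)
    moves = []
    while True:
        hot = max(usage, key=usage.get)    # most-loaded node
        cold = min(usage, key=usage.get)   # least-loaded node
        gap = usage[hot] - usage[cold]
        if gap <= tolerance_gb:
            break                          # balanced enough, stop planning
        amount = gap / 2                   # move half the gap each step
        usage[hot] -= amount
        usage[cold] += amount
        moves.append((hot, cold, amount))
    return moves

# Example: node-c joined recently and is nearly empty.
print(plan_rebalance({"node-a": 800, "node-b": 760, "node-c": 40}))
```

Real systems have to plan and execute moves like these while continuing to serve I/O, which is where most of the complexity comes from.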

Why is it Needed in the First Place?

First, let’s talk about some different storage systems and how they handle the rebalancing problem.

For the most part, centralized storage systems, such as traditional file servers or Storage Area Networks (SANs), typically do not require storage rebalancing. This is because all data is stored on a single centralized device, and storage capacity and performance are scaled by adding additional storage devices, rather than distributing data across multiple nodes.

NOTE: There are some “scale out” NAS and SAN vendors whose models require rebalancing both within an individual node and across different nodes in the cluster, because they isolate the capacity functions of each node from the rest of the cluster.

It is worth noting that some systems cannot rebalance data at all: their design limits the number of nodes in the system and therefore restricts their ability to redistribute data.

On the other hand, some cloud storage systems, such as Amazon S3, use a global namespace and automatically distribute data across multiple nodes, eliminating the need for manual rebalancing. These systems are designed to handle changes in data distribution and node failures automatically, without the need for manual intervention. For example, Amazon S3 may rebalance data behind the scenes for better cross-data center resiliency, although the user does not directly manage this.

In a distributed storage system, however, storage rebalancing is a crucial process that ensures the balanced distribution of data across multiple nodes. This helps prevent any single node from becoming a bottleneck or “hot spot” for read and write operations and optimizes resource utilization, leading to improved performance and availability as the system scales or experiences changes in usage patterns.

Storage rebalancing can also be used in case of failures, although this is not its primary use case. For instance, when a node fails, rebalancing may kick in to redistribute the data so workloads are less impacted, and it may need to run again once the failed node has been replaced. It can also be used when adding or removing nodes from the system, ensuring that data is properly redistributed and protected.

There are several scenarios in which the addition of nodes to a cluster might be necessary. For instance, if the amount of data in your application has increased significantly, you may need to add a node to prevent performance degradation. The same goes for situations where the analytic or compute workload has increased significantly.

Sometimes, you may also need to swap out a node for maintenance, upgrading, or replacement. And although it’s less common, you may need to remove a node if the cluster is over-provisioned or if you need to divert the hardware for another purpose.

In some scale-out distributed storage systems, a dedicated (single) metadata server architecture is commonly used to manage metadata. While this approach simplifies data placement and rebalancing decisions, it also introduces a significant bottleneck in terms of scalability and reliability as it constitutes a single point of failure. Taking that into consideration, many scale-out distributed storage systems employ the use of many metadata servers (MDSs), or a distributed metadata management approach instead.

A distributed metadata management approach (multiple MDSs) is necessary for ensuring metadata consistency at scale. However, it also adds complexity to the rebalancing process, as the file system must be dynamically rebalanced to distribute data evenly across all metadata servers in the cluster during expansion or, if supported, contraction. This helps maintain high performance and prevent overloading specific MDSs within the cluster, but it comes at the cost of more complex rebalancing across multiple nodes. GPFS is one example of a file system that uses MDSs for metadata management.
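As a rough illustration of why multiple MDSs complicate rebalancing, consider a toy scheme, hypothetical and not how GPFS or any other specific product works, that maps each path to a metadata server by hashing it. The moment the MDS pool grows, many paths map to a different server, and that metadata has to move.

```python
# A toy illustration (hypothetical, not how GPFS or WEKA works) of metadata
# spread across several metadata servers by hashing each path. Growing the MDS
# pool changes which server "owns" many paths, and that metadata has to move.

import hashlib

def mds_for(path, num_mds):
    """Map a file path to one of num_mds metadata servers by hashing the path."""
    digest = hashlib.sha256(path.encode()).hexdigest()
    return int(digest, 16) % num_mds

paths = ["/data/a.bin", "/data/b.bin", "/logs/app.log"]

before = {p: mds_for(p, 4) for p in paths}   # 4 metadata servers
after = {p: mds_for(p, 6) for p in paths}    # expand to 6: many entries remap
print(before)
print(after)
```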

At WEKA, we adopt a unique approach to metadata management by distributing metadata alongside the data it describes. This design decision results in a more efficient system that can operate with a higher level of parallelism, which supports faster dynamic rebalancing operations, leading to improved overall cluster performance.

In short, storage rebalancing is an essential process that helps maintain a balanced distribution of data in a distributed storage system, leading to improved performance, availability, and data protection.

Rebalancing Sounds Great – is it Always?

Yes, it does, and it is essential to a healthy scaling solution. However, there is more to unpack when it comes to the topic of rebalancing. Rebalancing a large-scale distributed storage system (think petabyte scale) can have significant ramifications if not effectively managed. The performance impact of rebalancing will depend on several factors, such as the size and complexity of the system, and the specific use case. The rebalancing process can be resource-intensive, requiring high utilization of CPU, memory, network, and storage, as data is transferred from node to node and disk to disk.
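A simple back-of-envelope model shows why scale matters here. In an evenly balanced cluster, bringing in one extra node means roughly 1/(N+1) of all data has to relocate; the sketch below ignores replication and data protection overhead, so treat it as a rough estimate rather than a sizing tool.

```python
# A back-of-envelope estimate, ignoring replication and data protection
# overhead: adding one node to an evenly balanced N-node cluster relocates
# roughly 1/(N+1) of all data.

def data_to_move_tb(total_data_tb, current_nodes):
    return total_data_tb / (current_nodes + 1)

# 1 PB spread across 10 nodes: adding an 11th node relocates roughly 91 TB.
print(round(data_to_move_tb(1000, 10), 1), "TB to move")
```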

During rebalancing, it is common to experience performance degradation as the process taxes the system’s resources. Network and disk usage increase, leading to higher latency for read and write operations, and there is an increased risk of data loss if a network or node failure occurs mid-rebalance.

However, proper architecture can minimize the negative impact of rebalancing. For example, with the WEKA Data Platform, our dynamic cluster data rebalancing operates in a controlled fashion, running as a background effort and thereby reducing its impact on system performance. Compared to traditional storage systems, WEKA’s approach ensures that performance remains consistent as the system scales to support greater data density.

Why is WEKA Better?

WekaFS, the foundational file system of the WEKA® Data Platform, leverages a unique combination of parallel I/O and metadata management to provide high-performance and scalable distributed file system capabilities. This is accomplished by evenly writing data across all servers in the cluster when possible. Essentially, how we write data on the cluster pretty much negates the need for any rebalancing, except for when a physical capacity change event occurs. And in those cases, the platform implements automated, dynamic cluster data rebalancing by continuously monitoring the resource utilization of each node in the system. This allows the system to calculate utilization levels and redistribute data automatically and transparently to prevent hot spots while maintaining performance and availability.
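As a rough sketch of the “write evenly everywhere” idea, imagine chunking each file and spreading the chunks across every server as it is written. The round-robin placement below is an illustrative stand-in, not WekaFS’s actual placement logic.

```python
# A rough sketch, assuming a simplified round-robin model of "write evenly
# across all servers": each 1 MB chunk of a file is placed on the next server
# in turn, so new data is spread across the whole cluster as it is written.

servers = ["srv-1", "srv-2", "srv-3", "srv-4"]
CHUNK_MB = 1

def place_chunks(file_size_mb):
    """Return a chunk-index -> server mapping that round-robins across servers."""
    return {i: servers[i % len(servers)] for i in range(file_size_mb // CHUNK_MB)}

# A 10 MB file lands on every server, not just on one "owner" node.
print(place_chunks(10))
```

Because new data already lands everywhere, a wholesale rebalance is only needed when the set of servers itself changes.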

Initially, data is distributed evenly, and rebalancing is only performed as required or under special circumstances, such as expanding or shrinking a cluster. The rebalancing process is controlled by WekaFS, allowing the rebalancing speed to adapt to the requirements of the situation. For example, during the expansion of a cloud-based cluster, data is gradually rebalanced onto the new instances so the cluster can begin benefiting from the increased resources right away, while during the shrink process, rebalancing is accelerated to shorten the shrink time and reduce costs.
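That rate adaptation can be pictured with a small sketch; the throughput numbers below are invented for illustration and are not WEKA settings or defaults.

```python
# A hedged sketch of adapting the rebalance rate to the operation: trickle data
# onto new instances during expansion, but move it quickly off departing
# instances during a shrink. The rates are illustrative values only.

import time

def rebalance(segment_sizes_mb, operation):
    """Move segments at a rate chosen for the operation (illustrative values)."""
    rate_mbps = 50 if operation == "expand" else 500   # shrink runs much hotter
    for size_mb in segment_sizes_mb:
        # ... actual data movement would happen here ...
        time.sleep(size_mb / rate_mbps)                 # throttle to the budget
    return rate_mbps

print("expansion rate:", rebalance([1, 1, 1], "expand"), "MB/s")
print("shrink rate:   ", rebalance([1, 1, 1], "shrink"), "MB/s")
```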

WEKA also employs a sophisticated data placement algorithm to optimize data placement and minimize the impact of rebalancing on performance. This algorithm considers network speed, disk performance, and available capacity to ensure that rebalancing is performed efficiently and effectively.
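One way to picture such a placement algorithm is as a weighted score per node. The weights and field names below are hypothetical stand-ins for the network, disk, and capacity factors mentioned above, not WEKA’s actual algorithm.

```python
# A hedged sketch of utilization-aware placement: score each node on free
# capacity, disk throughput headroom, and network headroom, then place the
# next chunk on the best-scoring node. Weights and field names are
# illustrative, not WEKA's actual placement algorithm.

def placement_score(node, w_capacity=0.5, w_disk=0.3, w_net=0.2):
    return (w_capacity * node["free_capacity_pct"]
            + w_disk * node["disk_headroom_pct"]
            + w_net * node["net_headroom_pct"])

nodes = [
    {"name": "srv-1", "free_capacity_pct": 20, "disk_headroom_pct": 60, "net_headroom_pct": 70},
    {"name": "srv-2", "free_capacity_pct": 55, "disk_headroom_pct": 40, "net_headroom_pct": 50},
]

best = max(nodes, key=placement_score)
print("place next chunk on", best["name"])
```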

The utilization of WEKA’s rebalancing feature confers several advantages. Firstly, it ensures optimal cluster performance and data protection even as capacity and usage fluctuate. Secondly, as the cluster expands through the addition of new nodes or SSDs, WEKA’s rebalancing mechanism facilitates performance enhancement, increased resiliency, and heightened capacity without the need for downtime-inducing data migration processes. Furthermore, the absence of a requirement for matched-capacity SSDs enables the deployment of cost-effective, cutting-edge SSD technology.

It is important to note that the rebalancing process also enables a balanced wear level on NVMe drives, thereby extending their lifespan. Furthermore, the rebalancing of data across the entire cluster leverages more CPUs, RAM, networking, and NVMe drives in parallel, resulting in linear performance for the process.

Ok, So What Are the “Gotchas” With This Approach?

WEKA implements a rebalancing mechanism to optimize its storage capabilities. However, the process has some minor side effects.

One potential side effect of rebalancing is the temporary utilization of additional storage capacity, as small, redundant copies of data are created during the rebalancing process. This is extremely minor: it only involves keeping a second copy of a 1MB segment for a split second, until we get the commit that it has been written elsewhere, at which point it gets evicted. So it’s really an ephemeral process that consumes no lasting additional capacity and has no real long-term impact.

This may temporarily increase storage consumption until the redundant copies are evicted. Not a significant disadvantage but fair to point out.
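The copy-then-evict sequence described above can be sketched as follows; the class and method names are hypothetical and are only meant to show the ordering: write the new copy, wait for the acknowledgment, then release the old one.

```python
# A rough sketch of the copy-then-evict ordering: during a move, a segment
# briefly exists on both the source and the destination, and the source copy
# is released only once the destination acknowledges the write. All class and
# method names here are hypothetical.

class Node:
    def __init__(self, name):
        self.name = name
        self.segments = set()

    def write(self, segment):
        self.segments.add(segment)
        return True                     # pretend the write was acknowledged

    def evict(self, segment):
        self.segments.discard(segment)

def move_segment(segment, src, dst):
    committed = dst.write(segment)      # temporary second copy now exists
    if committed:                       # only after the acknowledgment...
        src.evict(segment)              # ...is the original copy released

src, dst = Node("srv-1"), Node("srv-2")
src.segments.add("segment-0001")        # a 1 MB segment to relocate
move_segment("segment-0001", src, dst)
print(src.segments, dst.segments)
```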

It is worth noting that with the WEKA Data Platform, you have the option to adjust the rebalancing rate to meet the specific requirements of the use case and to minimize the impact on system performance and availability.

The TL;DR

The design objective of distributed storage is to eliminate limitations in terms of scalability and growth. The WEKA platform’s dynamic cluster data rebalancing feature addresses this challenge by enabling seamless and rapid integration of new distributed storage resources. This results in a solution that can accommodate the storage demands of even the most storage and compute-intensive users and applications.

Learn More About the WEKA Data Platform