Heroes Are for Zeros. Show Me
David Hiatt. February 8, 2019
Several of my previous blog posts discussed the need for, and the many aspects of, benchmarking storage system performance. It's such an important topic that I thought it worth additional perspective. There is no shortage of benchmarks, or of the "hero" numbers that stem from testing with them. All too often, though, vendors and customers alike focus on the numbers themselves rather than on what they mean, the conditions under which they were achieved, or the cost of the infrastructure it takes to get them. Furthermore, each benchmark is relevant to a different audience, and end customers should determine which is most appropriate for their environment and use cases. Example benchmark organizations include STAC, SPEC, IO500, MLPerf, and Deep500. WekaIO is a strong believer that the proof is in the pudding and is an active member in these organizations.
A trip to Missouri
An interesting competition has emerged lately between WekaIO and NetApp for the top spot on the SPEC SFS2014 software build benchmark. Currently, WekaIO's Matrix file system holds the number one position. I say currently because I fully expect NetApp to answer our latest figures with better results, until they reach the limits of their scale-up architecture, which can only scale out to 12 high-availability pairs (24 nodes). Matrix, on the other hand, can scale to thousands of nodes and will continue to scale performance because it doesn't use a traditional controller-based architecture, nor is it constrained by the performance limitations of the NFS protocol, which was designed for file sharing, not performance. At best, you can squeeze about 1.5 GB/sec of bandwidth through a single NFS link, so you need many clients to push more bandwidth out of the storage system. Given modern workloads that can easily demand tens of gigabytes per second per client, the problem of scaling performance with NFS is obvious. In the most recent benchmark, NetApp required 48 clients versus 19 for Matrix, two and a half times as many clients to drive considerably less performance.
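The client-count arithmetic above can be sketched in a few lines. This is a back-of-the-envelope illustration, not benchmark data: the ~1.5 GB/sec NFS per-link figure comes from the text, while the 30 GB/sec aggregate target and the ~12 GB/sec figure for a saturated 100GbE link are illustrative assumptions.

```python
import math

def clients_needed(target_gb_per_sec: float, per_client_gb_per_sec: float) -> int:
    """Clients required to reach an aggregate bandwidth target,
    given the per-client throughput ceiling."""
    return math.ceil(target_gb_per_sec / per_client_gb_per_sec)

# ~1.5 GB/sec is the practical ceiling of a single NFS link (per the text).
# 30 GB/sec is a hypothetical aggregate target for a modern workload.
nfs_clients = clients_needed(30, 1.5)   # 20 clients just to feed the workload

# A client driver that can saturate a 100GbE link (~12 GB/sec usable)
# reaches the same target with far fewer clients.
fast_clients = clients_needed(30, 12)   # 3 clients

print(nfs_clients, fast_clients)
```

The point of the sketch: when the bottleneck is per-client protocol throughput rather than the storage backend, the only lever left is adding clients, which is exactly the pattern visible in the benchmark submissions.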
WekaIO's Matrix uses a much more efficient client driver that can easily saturate multiple 100GbE network links to a single client. The benchmark summary report below shows how Matrix achieved 241% more performance per client than NetApp, with 30% fewer NVMe drives. Table 1 below compares the latest results for WekaIO and NetApp from the SPEC website.
Numbers are numbers, right?
Wrong. Looking beyond the hero numbers in red, there are two important indicators of a superior design (besides performance) that yield real business value: lower cost and greater efficiency, which lead to a higher ROI.
Lower Cost — the cost of the NetApp gear used to achieve these results is estimated at more than 4x that of a comparable Matrix solution, yet WekaIO's solution delivers 36% higher performance at one-third the latency with 60% fewer clients. It is apparent from the recent NetApp submissions in the table below, where they doubled the number of storage nodes from 4 to 8 and doubled the number of clients from 24 to 48, that they are playing the game of "throw enough hardware at the benchmark and you will get the number you want." This costly approach simply demonstrates that per-client performance is limited by the NFS protocol, achieving less than a third of what Matrix can deliver per client.
Greater Efficiency, Higher ROI — The net-net is that with a modern scalable infrastructure, it is possible to deliver much higher performance to a single compute client at a dramatically lower cost, improving ROI and downstream worker efficiency. The NetApp benchmark highlights the inability of NFS to deliver scalable performance to a single compute client, leaving applications starved of data. Of course, NetApp still has room to expand performance, up to its limit of 24 storage nodes. Then what? Matrix will continue to scale.