What is the STAC M-3 Benchmark and Why Should You Care?

Barbara Murphy, June 1, 2020

Barbara Murphy, Vice President of Marketing at WekaIO, addresses how WekaFS overcomes the performance and latency challenges in the Financial Services industry in this second installment of a three-part series, titled “What is the STAC M-3 Benchmark and Why Should You Care?”

At Weka we are resolute in backing up our claim to be the world’s fastest file system, and we do that by validating our software on as many independent benchmarks as possible. We are big fans of independent benchmarking because it lets customers verify vendor claims and lets vendors stand behind those claims rather than relying on internal “benchmarketing”.

When I look at other vendors’ performance claims I am often reminded of the crucifixion scene in Monty Python’s Life of Brian, where everyone claimed to be Brian of Nazareth. Who is the real Brian? Independent, audited benchmarks provide the proof points that let customers evaluate a storage vendor’s claims, and the really good ones provide full disclosure down to the server and infrastructure component level.

So before answering what the STAC M-3 benchmark is and why you should care, I first want to explain what STAC is. STAC (Securities Technology Analysis Center) is a company that coordinates the STAC Benchmark Council, a community made up of banks, brokerage houses, hedge funds, proprietary trading companies, and stock exchanges on one side, and the technology vendors who build products for the finance market on the other. Among the many hats that it wears, STAC helps cut through the vendor noise by facilitating the Council’s development of standardized benchmarks that are representative of finance workloads, and by applying rigorous auditing to ensure the validity of the benchmark results.

In total, STAC currently has ten different benchmarks relating to finance, covering areas such as trade execution, backtesting, risk computation, and tick analytics. The tick analytics benchmark suite, STAC-M3, measures performance for integrated solutions that enable high-speed financial analytics using time series databases. The benchmark captures the end-to-end solution, including database software, compute resources, networking, and storage systems, enabling Council members to critically compare results. The benchmark suite provides a way to articulate advancements in software and hardware that benefit the member companies.

The two most commonly referenced STAC-M3 benchmark suites are Antuco and Kanaga. The baseline suite, Antuco, uses a limited dataset size with constraints to simulate performance against a full-size dataset residing mostly on non-volatile media. It tests a wide range of compute-bound and storage-bound operations to probe the strengths and weaknesses of each stack. The scaling suite, Kanaga, uses a subset of Antuco queries without constraints against a significantly larger data set. For the same operation, the benchmarks vary the number of concurrent requests and the size of the data subset. For a scalable shared file system like Weka’s, this test is more relevant because it reflects users who need a shared file system. If data sets are small and can easily fit inside a single server’s locally attached storage, there is no need for scale-out storage.

The key metric for the STAC-M3 benchmark suite is query response time. Latency is the enemy of the finance market, and it is exacerbated by slow networking, slow storage media, slow compute processing, or even poor software execution. At the end of the day it is the combination of the entire solution that results in better performance, rather than any one individual piece, which is why the STAC-M3 benchmarks are representative of real life.

The Kanaga benchmark suite includes multi-year high-bid analytics that involve reading terabytes of data to answer a query, placing significant load on storage I/O, while other tests, such as Theoretical P&L in the Antuco suite, are computationally intense with less impact on the storage system.

In total there are 17 mean response time benchmarks in the Antuco suite and 24 mean response time benchmarks in the Kanaga suite, and they test the system in two dimensions: number of concurrent clients and quantity of data under analysis. This multidimensional model helps highlight the limit of one solution as well as articulating where another solution starts to make sense for the anticipated workloads. It also helps in understanding the amount of resources (compute, storage I/O, and networking) required to meet an organization’s performance demands.
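To illustrate the idea of a two-dimensional mean response time grid, here is a minimal Python sketch. The sample numbers and the `(clients, years)` cells are purely hypothetical placeholders, not actual STAC-M3 results or STAC tooling:

```python
import statistics

# Hypothetical response-time samples in seconds, keyed by
# (concurrent_clients, years_of_data). Illustrative numbers only --
# not real STAC-M3 measurements.
samples = {
    (1, 1):  [0.41, 0.39, 0.43],
    (10, 1): [0.52, 0.55, 0.50],
    (1, 3):  [1.10, 1.05, 1.12],
    (10, 3): [1.60, 1.58, 1.65],
}

def mean_response_times(samples):
    """Collapse each (clients, years) cell to its mean response time."""
    return {cell: statistics.mean(times) for cell, times in samples.items()}

means = mean_response_times(samples)
for (clients, years), m in sorted(means.items()):
    print(f"{clients:>3} client(s), {years} year(s) of data: {m:.2f} s mean")
```

Scanning down one column (more clients, same data) or across one row (same clients, more data) is what exposes where a given stack stops scaling.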

To learn more about STAC and its benchmark suites, go to the STAC website, or better still, become a member. It is a lot cheaper than buying the wrong infrastructure based on marketing claims.

Weka’s landing page on the STAC website can be accessed from this link.

Weka’s 2019 benchmark submission, which set 8 records in scale testing, can be accessed from the following link.

Also come and join us at Global STAC Live, where we will have more exciting news to share. This is an online conference running live from June 2nd to June 4th and available on demand through July 3rd.
