As WEKA returns to Supercomputing’s SC22 conference (booth #3223), I’d like to take a moment to look at some of the technologies and challenges I think we’ll see in play at the conference, what they mean for consumers of High Performance Computing (HPC) environments, and how WEKA is addressing them.

Right off the bat, let’s talk about hardware adoption. Over the past several years there have been massive improvements in CPU, GPU, RAM, and NIC performance. Many of the latest versions of these products have recently been released or are on the cusp of release. NVIDIA will continue to talk about its H100 GPU while still selling a TON of the existing A100s. Will AMD and Intel talk about their upcoming GPUs? Possibly, but I would definitely expect them to be showing off their next-gen CPUs and faster DDR5 RAM. I also expect to hear more about where the market is in adopting 400Gb networks, for both Ethernet and InfiniBand. Finally, last year saw a number of demonstrations of CXL gear. This year, I expect to see first-run products using CXL connections built on the PCIe Gen 5 bus; endpoints for RAM pooling, network cards, and data accelerators are just some of what I expect we’ll hear about.

WEKA is on the forefront of this trend. We are already testing the new CPUs with new SSD models, the fastest NICs, and more. And because WEKA is software-defined, we’re able to adopt these new technologies rapidly as they come out, allowing our customers to take advantage of them very quickly. For customers, the pace of hardware improvements is incredible; in fact, it has reached the point where you can now get supercomputer performance from a small rack in a closet.

But hardware is just a portion of the show! Doing High Performance Data Analytics (HPDA) alongside AI/ML image analysis and large simulation suites gets into software tuning and tools: everything from advanced software for pre-processing and normalizing data before training, to operational tools that regression-check the accuracy of a new AI training model against prior versions. I also expect new research on optimizing software to take advantage of different libraries that can produce faster results in areas as diverse as Monte Carlo simulations in finance, image recognition for manufacturing and life sciences, and decision support for climate and weather analysis.
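To make two of those pipeline steps concrete, here is a minimal sketch in plain Python of normalizing feature values before training and gating a new model against a prior version. The function names, tolerance, and accuracy figures are illustrative assumptions, not WEKA or vendor APIs.

```python
# Illustrative sketch only: normalize data before training, then
# regression-check a new model against the prior version's accuracy.

def min_max_normalize(values):
    """Scale raw feature values into the range [0, 1] before training."""
    lo, hi = min(values), max(values)
    span = hi - lo
    # Guard against a constant column, where every value maps to 0.0.
    return [(v - lo) / span for v in values] if span else [0.0 for _ in values]

def regression_check(new_accuracy, baseline_accuracy, tolerance=0.01):
    """Pass only if the new model is no worse than the prior version,
    allowing a small tolerance for run-to-run noise."""
    return new_accuracy >= baseline_accuracy - tolerance

print(min_max_normalize([2, 4, 6]))    # [0.0, 0.5, 1.0]
print(regression_check(0.912, 0.905))  # True: new model may be promoted
print(regression_check(0.850, 0.905))  # False: accuracy regressed
```

In a real pipeline the regression gate would run automatically after each training job, blocking deployment when a model regresses.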

One thing that is becoming more and more common in these environments is that all of the software and processes in a data pipeline create MASSIVE amounts of data, and in particular Lots Of Small Files (LOSF). This is happening because datasets have been changing from large streaming files into discrete smaller chunks, and lots of them. Big cryo-EM microscopes are creating 10,000 files an hour. Assembly lines with sensors for QA analysis are creating millions of files per day. Five years ago, the problem was how to provide fast access to 100 million files. Today, the number is 10 billion or more! The challenge, then, is how do you address all these small files as part of an analysis or AI/ML data pipeline?
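To see why LOSF is hard, note that each tiny file costs a full open/read/close round trip, so metadata operations dominate and pipelines typically issue them concurrently to hide latency. Here is a minimal, self-contained sketch of that pattern; the file counts, names, and payload sizes are illustrative assumptions.

```python
# Illustrative sketch: reading many small files concurrently so that
# per-file metadata latency overlaps instead of accumulating serially.
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def read_one(path):
    # One open/read/close round trip per file: the LOSF cost center.
    with open(path, "rb") as f:
        return f.read()

# Simulate a directory of many small files (e.g., per-frame captures).
tmpdir = tempfile.mkdtemp()
paths = []
for i in range(1000):
    p = os.path.join(tmpdir, f"frame_{i:05d}.dat")
    with open(p, "wb") as f:
        f.write(b"x" * 256)  # tiny payload; metadata dominates the cost
    paths.append(p)

# Issue the reads in parallel across a pool of worker threads.
with ThreadPoolExecutor(max_workers=32) as pool:
    blobs = list(pool.map(read_one, paths))

print(len(blobs))  # 1000
```

At 10 billion files, even heavy client-side concurrency is not enough on its own; the storage system’s metadata path has to scale too, which is where the next point comes in.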

WEKA has the ability to handle LOSF with extremely low latency. By distributing and parallelizing all of the metadata in the system, we can help accelerate this next generation of workloads, both on-prem and in the cloud. Combine that with our scaling capabilities (both up and down!) and our ability to work with the latest hardware, and you’ve got a powerful combination.

If you want to come talk to our experts about how to address these software and file challenges, check WEKA out in booth #3223 at SC22!