NYSE Floor Talk: WEKA’s Jonathan Martin Introduces WEKApod


NYSE Floor Talk with Judy Shaw: WEKA President Jonathan Martin talks about the launch of WEKApod™️ in partnership with NVIDIA

Watch to learn more about WEKA’s role as the official technology partner for the U2 experience at Sphere in Las Vegas, why the world needs solutions for sustainable AI, and the launch of WEKApod™️ in partnership with NVIDIA and what it means for enterprises.


Joining me today on NYSE Floor Talk is Jonathan Martin. He is the president at WEKA. Jonathan, it is wonderful to have you here. Thanks for joining me.

Great to be here. I’ve been a fan of this show since Dave Mustaine sat in this very seat.

Great to have you. So now let’s start off by talking about the company. Tell me a little bit about WEKA and what you do.

WEKA is an AI-native software data platform. It's used by 275 of the world's largest AI deployments, including 11 of the Fortune 50. Typically, people engage with WEKA because they've bought this very, very fast accelerated compute from companies like NVIDIA, and they need to serve huge volumes of data to it. So they're looking at building what we call data pipelines to serve these kinds of industrial AI factories that they're building.

So the WEKA Data Platform allows them to build a very, very fast, very, very performant environment, a very, very expansive environment, and also a very, very efficient environment for driving large volumes of data.

The benefit to them is that they're able to massively reduce the time it takes to do artificial intelligence. So what AI people would call the time it takes to do training of models or inference on models, they can compress that time by anywhere between 10 and 100 times. So if you can imagine being able to do 100 times more work in a single day, that's what the WEKA Data Platform gives you.

Okay. So Jonathan, beyond AI, you’ve supported some exciting projects in the media and entertainment space recently running huge workloads in the cloud at peak efficiency. Tell me a little bit about these projects.

That's right. We are super excited to be the official technology partner of the U2 at Sphere experience, if you're familiar with the new media and entertainment spectacle built by Madison Square Garden in the middle of Las Vegas. Twenty-five years ago, I think U2 redefined the live entertainment experience with the Zoo TV and Achtung Baby tours, and they wanted to reimagine that experience in this new landscape, with this new palette.

If you can imagine this for a second: it's 4 x 16K screens and 164,000 independent channels of audio, and we were streaming data from the WEKA Data Platform into this environment at over 402 gigabytes a second, for two and a half hours.

We're very deep in the media and entertainment space, so a lot of your favorite films, your favorite TV shows, and a lot of live broadcasts are all based on WEKA, and 15 of the Super Bowl commercials that you saw this year were also built on WEKA.

Outside of media and entertainment, segments like the federal space, life sciences, and financial services are all adopting AI technology very, very quickly. So they've been great customers for us, too. And then over the last year, we've seen an explosion in generative AI with the advent of ChatGPT and everything that followed from it, and a lot of these new GPU cloud companies are also based on WEKA. So companies like Iris, people like Stability AI, people like Midjourney, people like NexGen Cloud, all built on WEKA.

Now AI innovation, no signs of slowing down there. Tell me, what should companies be thinking about when scaling AI?

So a lot of people, they get religion on compute, right? And so they go out, they buy their first GPUs, they go and buy some fast network.

And, you know, since the dawn of time, infrastructure has really been a triangle, with compute on one side, network on the second side, and storage and data infrastructure on the third. In these AI projects, two corners of the triangle have taken a massive leap forward. All of a sudden, GPUs can process data faster than CPUs by anywhere between a hundred and a thousand times, and on the networking side, people are using, you know, 400 gigabit or 800 gigabit networking. But they're trying to use the same storage infrastructure that they built their CRM or ERP system on fifteen or twenty years ago.

So as people move into production, they realize that these very, very fast GPUs cannot be served enough data, and that's when they go on their journey to begin looking for a new storage supplier. We joke that a lot of the time these GPUs are kind of like sloths: they're asleep about seventy percent of the time. The challenge is that even though they're asleep, even though they're not processing data, they're still consuming electricity, and they consume a tremendous amount of electricity.

And so by pairing very fast compute, very fast networking, and a very fast data platform, like the WEKA Data Platform, they're able to really deliver on the promise of AI in a way that was previously unimaginable.

Now WEKA recently launched a Sustainable AI Initiative. Tell me about that.

Yeah, so as we mentioned, these AI environments are incredibly power intensive.

Typically, the thousand GPUs that you might be deploying consume about a megawatt of power. Last year, we saw people putting in orders for tens of thousands or hundreds of thousands of GPUs, so the power requirements are enormous. So enormous that in his speech at Davos earlier this year, Sam Altman said that the world doesn't truly appreciate the energy requirements of AI, and that he believes one of the biggest challenges for AI is really being able to supply enough power to these environments. So what you tend to find is that as people move into large-scale AI environments, they begin to realize that sustainability and more sustainable practices are increasingly important. Some of that is obviously buying the latest hardware, which is more energy efficient than previous generations.

Some people are looking at using software like WEKA to drive more utilization or more output out of the hardware that they buy. But increasingly, you're seeing people move to GPU clouds, so people like NexGen Cloud or Applied Digital or Iris, who are building very, very sustainable environments for large-scale GPU farms.

We partner with all of those companies. The reason that they work with us is that for every petabyte of WEKA that people buy, they save about 260 tons of carbon dioxide emissions. When you consider that a lot of these environments are now in the hundreds of petabytes, or even the exabyte range, a thousand petabytes, that is a tremendous amount of carbon dioxide savings from utilizing WEKA.

Now as AI goes mainstream, we’re hearing more about its massive environmental footprint and what you call AI’s ‘sustainability conundrum.’ What are your thoughts on how we start solving the problem?


So without a doubt, the AI sustainability conundrum is something that is at the forefront of the biggest AI deployments on the planet. And let me give you a little bit of context as to why. Last year, about three percent of the world's power was consumed by data centers. That number has doubled over about the last ten years. But because of the adoption of AI, the expectation is that in just the next two years, about eight percent of the world's power will be consumed by data centers.

In some countries that have really leaned into their data center economy, places like Ireland and Iceland, they found that last year more power was consumed by data centers than by their populations. So a lot of people are realizing that, you know, the appetite for AI is enormous, but the way that people are building these large AI farms has to change. And so earlier this year we launched something called our Sustainable AI Initiative to try and bring more focus and more attention to how to solve these problems.

When you look at, you know, the training piece of AI, take ChatGPT, for example. The speed at which these applications are consuming more and more power is quite mind-bending. Back in September or October 2022, ChatGPT-3 was released. It took about 1.3 gigawatt-hours of power to train that model.

That's about 4.6 million dollars of power to train the model. Just five months later, ChatGPT-4 came out, and that consumed 100 million dollars of power to train. And the expectation is that ChatGPT-5, whenever it comes in the next year or two, will be maybe half or a full order of magnitude beyond that, so maybe half a billion dollars of power just to train a model. You can imagine, with the amount of focus and attention that AI has today, if every large organization starts spinning up models similar to this, the power consumption requirements are gonna be absolutely ginormous.

We wanted to bring attention to how to do this differently, and by partnering with things like the GPU clouds, we can help massively reduce the amount of infrastructure that's required to deliver, ultimately, the same outcome with much, much less power consumption.

Now, WEKA has some exciting news to announce this week. Could you tell me about it and what it means for enterprises?

We do. So we have some great news: we have extended our partnership with NVIDIA.

NVIDIA was one of the original investors in WEKA back in the early days, and this week we have announced our certification of a new product called the WEKApod. The WEKApod is a preconfigured, prepackaged hardware appliance which embeds the latest cutting-edge hardware technology and the WEKA Data Platform into a single box. And we've certified the WEKApod with the NVIDIA DGX SuperPOD environment (a lot of pods going on here) to be able to bring customers the best in processing from NVIDIA and the data platform from WEKA. What this means is customers now have incredible choice: the same great capabilities from the same product.

They can get WEKA in their data centers as a reference architecture from Dell, HP, or Supermicro, and they can get it in the cloud, with the same capabilities available on AWS, Azure, GCP, or OCI.

They can also now get it as a third choice in this prepackaged appliance. We believe some customers just want to press the easy button for AI. They just want to roll in a box, plug it in, turn it on, and have all those great capabilities. So again, in conjunction with NVIDIA, we're super excited to be able to share that with the world today.

And finally, Jonathan, what’s next for WEKA?

So I believe we are in the very, very early days of the AI revolution. Last year, a lot of the early adopters were focused on industrializing their ingestion and training. This year, they're gonna move to inference. Inference is going to demand about ten times more infrastructure than training.

We're also starting to see that large enterprises, a lot of the banks, retail, and insurance companies, spent 2023 working on policy and governance. They're beginning to use discretionary budgets this year on their first AI pilots, their first AI projects, and we expect to see mainline budgets moving to those environments in 2025 and 2026. So it's a massive, massive amount of opportunity.

All of these organizations are going to struggle with how to serve enough data to these big GPU farms that they're building. So I think there's a tremendous amount of potential for WEKA to build beyond the 275 customers that we have today, and we will continue to try and raise awareness around sustainable AI, to be able to deliver on all of the promise of AI without melting the planet at the same time.

Alright, Jonathan, wonderful to talk with you. Thanks for joining me on Floor Talk today.

Thank you so much.