Nilesh Patel: Hi, everyone. This is Nilesh Patel, chief strategy officer at WEKA. I'm glad to be here with Charlie Boyle, VP of DGX Systems at NVIDIA. Charlie, welcome, and thanks for joining.

Charlie Boyle: Thanks, Nilesh.

Nilesh Patel: We're here today to discuss some of the key factors that enterprises have to consider in designing, building, and scaling their AI infrastructure, and the data infrastructure challenges associated with that. Charlie, give us an introduction to the AI factory. Why is it needed, and why are high-throughput, low-latency data infrastructures so important to AI factories?

Charlie Boyle: An AI factory is really built to create intelligence, and that intelligence is in the form of tokens. What our customers are looking for is one type of infrastructure that can produce the tokens they need today but also be ready for the future. New models are coming out, new reasoning models, new data models, every day. So they want a factory infrastructure that they can depend on, that they can grow, and know that as the latest AI technology comes out, the factory they've built today is going to work with the models of tomorrow. Building that flexible, adaptable AI factory is core to our customers today.

Nilesh Patel: That's great. AI factories generate tokens as an output, but data is the fuel for those AI factories. What's your experience of the importance of scalable, high-performance data infrastructure like WEKA's for those AI factories and deployments?

Charlie Boyle: You hit it exactly right: data is the fuel for the AI factory. It's the input, it's the raw material. That's what enterprise customers are really looking for today. They're sitting on decades' worth of enterprise data, and they know from everything they've heard in the media that they can extract value from it. But turning the enterprise data they have into intelligence, into tokens, really requires a robust AI infrastructure. And the data infrastructure behind it is so important because your model is only as smart as the data you put into it. You don't just want to give your model everything in your enterprise: some of it may be useful, but there are probably things that aren't. So curating the data, and having a data platform that can intelligently figure out what a customer needs to put into their model to get the best output and the most valuable tokens, is super important.

Nilesh Patel: That's a great point, and I believe that's where WEKA comes into play with WEKA's NeuralMesh. We've been able to deliver sub-millisecond latencies, tens of terabytes per second of throughput, and amazing scale across hundreds of storage nodes. That's helping customers achieve their outcomes, particularly in modern AI infrastructure with agentic workflows, where, as you pointed out, you need RAG at a scale not previously thought possible. Reasoning models and multi-model environments are driving data patterns that are extremely demanding, and being able to deploy at that scale is where I think NeuralMesh is delivering tremendous value to customers in getting their AI outcomes faster. Thank you so much for your time today.

Charlie Boyle: Thanks, Nilesh, and thanks for the great partnership. If you're expanding your AI factories or building new ones, come see how NVIDIA and WEKA are building AI factories together, and let us share our learnings with you.