Nilesh Patel (WEKA): Hi everyone, this is Nilesh Patel, Chief Strategy Officer at WEKA. I'm glad to be here with Charlie Boyle, VP of DGX Systems at NVIDIA. Charlie, welcome and thanks for joining.

Charlie Boyle (NVIDIA): Thanks, Nilesh.

Nilesh Patel: We're here today to discuss some of the key factors enterprises have to consider in designing, building, and scaling their AI infrastructure, and the data infrastructure challenges that come with it. I'll give you an example: a common customer of ours has been building large AI infrastructure, hosting modern model developers and the large AI factories they bring onto it. They recently published results with WEKA showing they now generate four to five times more token output while cutting time to first token in half. That kind of value is extremely important, particularly as you get into reasoning models, large context windows, and so on. So we'd love to hear your perspective and your experience with other customers on data requirements and how they scale.

Charlie Boyle: Enterprise customers can learn a lot from examples like that from very large-scale infrastructure because, to your point, people are looking at reasoning models. The whole point of a reasoning model is that when you ask it a question, it doesn't just give you one answer. It comes up with an answer and then goes back into the model, working through it recursively; it's thinking about it. When you and I talk, you ask me questions, I ask you questions; it's real time, it's natural. But a reasoning model works at supernatural speed, because it isn't waiting to hear words the way we are. It's constantly processing, thinking at an extremely fast rate to give you that answer.
But as a user, when you ask something a question, you have a certain expectation of when the answer will come back. Getting an answer back quickly matters: that first token, that first word out, signals, "Oh, the model understands what I'm asking." Then it has to get all the context right. For our enterprise customers looking at these types of solutions, so much comes down to user experience and user acceptance. When they deploy something internally and people love it, it explodes inside the company. They get one model out, one thing people start using, and it's, "Oh, this is so cool," and then every department, every business unit wants to do the same. So having scalable infrastructure, having an AI factory built for success, is super important in the enterprise, because they're going to see that success from AI.

Nilesh Patel: Thank you so much for your time today.

Charlie Boyle: Thanks, Nilesh, and thanks for the great partnership. If you're expanding your AI factories or building new ones, come see how NVIDIA and WEKA are building AI factories together, and let us share our learnings with you.