00:06
Nilesh Patel: Hi everyone. This is Nilesh Patel, chief strategy officer at WEKA. I’m glad to be here with Charlie Boyle, VP of DGX Systems at NVIDIA. Charlie, welcome and thanks for joining.
00:18
Nilesh Patel: We are here today to discuss some of the key factors that enterprises have to consider in designing, building, and scaling their AI infrastructure, and the data infrastructure challenges that come with that.
Charlie, WEKA and NVIDIA have been working together for several years, and over that time the needs of AI infrastructure have changed and evolved. What’s your experience in terms of how those demands have changed?
00:48
Charlie Boyle: Yeah, Nilesh, it’s evolved so much. We started this journey with DGX back in 2016, so we’re almost at the ten-year anniversary. Back then, people were just getting started with AI, and the real workload, even up until the past few years, has been AI training, because everyone needed to build a model.
Models didn’t exist in the wild – you couldn’t just download one. The way people leveraged AI was to build a model, get some results, then build a new model and refine it. But what we’ve really seen in the recent explosion of AI, especially in enterprises, is the availability of models that other people have trained, which you can adapt in the enterprise.
So the AI infrastructure that enterprises are using today – yes, they’re still doing training, because they need to personalize the model for themselves, for their customers, for their users. But on that same AI infrastructure, they want to be able to train, they want to be able to fine-tune, and, more importantly, they want to be able to deploy production applications. In that case, we’re really talking about inference.