This white paper explores the impact of storage I/O on the training phase of the deep learning workflow and on inference used for model validation during training. AI development is akin to software development: while reducing overall cycle time is desirable, the finished product must function correctly, predictably, and reliably. The results shared in this paper show how increasing storage performance helps avoid I/O bottlenecks during model validation, reducing overall model development time.
This paper was created in partnership with engineers from HPE, NVIDIA, WekaIO, and Mellanox.
Register for your free copy now.