Blog originally published on InsideBigData.
Charla Bunton-Johnson is an experienced leader with more than 25 years of marketing and sales experience in enterprise storage, security, networking, software, and graphics. Charla specializes in channel, alliance, field, and solutions marketing with a keen eye for partner strategy and communications.
AI is a game changer for industries today, but AI success hinges on two critical factors: time to value and time to insight. Time to value measures how long it takes to realize the value of a product, solution, or offering; time to insight measures how long it takes to derive actionable insights once that product, solution, or offering is in use.
The budget for AI infrastructure, personnel, and data acquisition is significant for any company, so it is critical that companies can monetize that investment quickly. WekaIO, HPE, NVIDIA, and Mellanox collaborated to create the “Accelerate time to value and AI insights” white paper, which delivers a key AI reference solution for data scientists, IT managers, and C-level executives.
The reference solution showcases how to accelerate time to value (i.e., how fast you can put the infrastructure to work) by tightly integrating compute, storage, and networking for best-of-breed AI performance. The acceleration comes from eliminating guesswork, experimentation, and hardware/software integration challenges, and from delivering assured, repeatable results. Time to value shrinks from weeks or months to days, creating a strong foundation for a single-node production solution that will scale with the business. Combining the HPE Apollo 6500 with NVIDIA Tesla V100 GPUs, the WekaIO Matrix filesystem on HPE DL360 servers, and the Mellanox SB7800 InfiniBand switch gives the user a high-performance AI solution, whether the workload is machine learning, deep learning, or even high-performance computing. This validated solution gives IT the purchasing confidence that time to value will be accelerated.
The performance results for the solution were achieved using NVIDIA NGC, which allows users to deploy optimized containers in minutes for the best possible performance. The containers also enable near-linear scaling, so the user can predictably accelerate time to insight by adding GPUs and, while not shown in this paper, by adding more compute nodes (e.g., Apollo 6500s). Lastly, the WekaIO Matrix filesystem enables I/O scaling for both throughput and capacity, so there is never a data bottleneck starving hungry GPUs. If you want to learn how to reduce the development cycle for AI, download the white paper.
NOTE: Join WekaIO at NVIDIA GTC 2019 to learn more about how Matrix unlocks the data bottleneck for accelerated compute: AI, HPC, and machine learning analytics.