Why Is Network File System (NFS) Not Suitable for AI Workloads?

Barbara Murphy. September 25, 2020

79% of executives agree that failing to extract value and insight from their data will cost them competitive advantage and could ultimately put them out of business, yet they don’t feel they have the tools and resources to extract that value.

Network File System (NFS) vs. Parallel File System

Network File System was introduced in 1984, when networks were slow and storage systems could deliver significantly more performance than the network link could carry. The software stack was inefficient, but that did not matter because the network was the bottleneck. NFS is now decades old and poorly suited to today’s demanding AI workloads. The NFS protocol can deliver up to about 1.2GB/second on a single network link, which outpaced the network up through Gigabit-era networking. Since then, compute, storage, and networking (both Ethernet and InfiniBand) have advanced enormously, and large network pipes have become very cost effective, yet the same old NFS software stack still manages communication between storage and compute.

Let’s look at the problem with the NFS software stack. The data moving between your storage and compute infrastructure is like a commuter train pulling into a station at rush hour with many carriages, where the train cars represent your NFS appliance nodes. But only one door on the train can open, and that door represents the NFS filer head (controller).

Passengers getting off the train have to walk through all the carriages to reach that single door, while passengers trying to board are held up by those exiting through the same opening.

That single door represents the rigid design of an NFS NAS. The passengers are your data, and with the NFS protocol design, the data bottlenecks in exactly the same way.

Now imagine a different scenario: the same train pulls into the station, but this time every door on every carriage opens simultaneously. Passengers can get on and off through their own carriage without funneling through a single door. That is the fundamental difference between NFS and a parallel file system: a parallel file system continues to scale as more nodes (train carriages) are added, while NFS is always limited to a single node.
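The train analogy can be sketched numerically. The figures below are illustrative assumptions, not benchmarks: each storage node is assumed to serve roughly 1.2 GB/s, matching the per-link NFS figure cited earlier.

```python
# Illustrative sketch (assumed numbers): aggregate read bandwidth as
# storage nodes are added, for an NFS-style single access point versus
# a parallel file system where every node serves clients directly.

PER_NODE_GBPS = 1.2  # assumed GB/s one node (or the NFS filer head) can serve


def nfs_bandwidth(num_nodes: int) -> float:
    """All traffic funnels through one filer head, so adding nodes
    adds capacity but not bandwidth."""
    return PER_NODE_GBPS


def parallel_fs_bandwidth(num_nodes: int) -> float:
    """Every node participates in serving data, so aggregate
    bandwidth scales with node count."""
    return PER_NODE_GBPS * num_nodes


for n in (1, 4, 8, 16):
    print(f"{n:2d} nodes: NFS {nfs_bandwidth(n):4.1f} GB/s | "
          f"parallel {parallel_fs_bandwidth(n):5.1f} GB/s")
```

The shape of the result, rather than the exact numbers, is the point: one curve is flat, the other grows linearly with the number of "carriages."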

Parallel File System & AI Workloads

Successful AI outcomes depend on three core technologies:

  • Compute accelerators like GPUs and FPGAs
  • Fast networks, like 100 Gbit Ethernet or 200 Gbit InfiniBand
  • A modern file system to move and manage the data in a highly parallel fashion

Different stages within AI data pipelines have distinct storage requirements: massive ingest bandwidth, mixed read/write handling, and ultra-low latency. This often results in a separate storage silo for each stage. It means business and IT leaders must reconsider how they architect their storage stacks and make purchasing decisions for these new workloads.

The latest generation of GPU-based servers supports up to 800 Gbit of networking to a single machine. Those eight network links can carry over 80 GBytes/second of bandwidth from the storage system to the GPU server, yet the NFS protocol is still limited to about 1.2 GBytes/second per network link, leaving more than 80% of the available bandwidth unused. Ultimately this translates into I/O starvation for GPU workloads that need to read large amounts of data.
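The arithmetic behind that gap can be checked directly. The inputs are the figures stated above (eight 100 Gbit links, ~1.2 GB/s of NFS throughput per link); the 10-bits-per-byte conversion is an assumption that folds in protocol overhead, consistent with 800 Gbit supporting "over 80 GBytes/second."

```python
# Back-of-the-envelope check of the bandwidth gap described above.
LINKS = 8                   # 8 x 100 Gbit links = 800 Gbit to one GPU server
LINK_GBIT = 100             # per-link speed in Gbit/s
NFS_GB_PER_LINK = 1.2       # cited NFS throughput per network link, GB/s

# ~10 bits per delivered byte is an assumed allowance for protocol overhead
available_gbytes = LINKS * LINK_GBIT / 10
nfs_gbytes = LINKS * NFS_GB_PER_LINK    # best case: one NFS stream per link

unused_pct = 100 * (1 - nfs_gbytes / available_gbytes)
print(f"available ~{available_gbytes:.0f} GB/s, NFS ~{nfs_gbytes:.1f} GB/s, "
      f"~{unused_pct:.0f}% of bandwidth unused")
```

Even granting NFS a full stream on every link, roughly seven-eighths of the pipe to the GPU server sits idle.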

The Requirements of a Modern File System

A modern file system needs to provide the performance required by the I/O- and metadata-intensive workloads found in emerging areas such as machine learning, visualization, inference, and real-time monitoring. The key characteristics of a well-designed modern file system are as follows:

  • Fully distributed data and metadata so that every node in the cluster can participate in feeding the data pipe (remember the train analogy)
  • The ability to saturate high speed networking including 100Gbit Ethernet and 200Gbit InfiniBand through massive parallelism
  • Support for modern protocols such as NVIDIA® GPUDirect® Storage that can fully saturate a GPU server
  • The ability to scale performance linearly as compute demands grow
  • The ability to scale capacity independently of performance so that exponential data growth can be easily accommodated
  • Cloud ready for compute elasticity in the public cloud

Related Resources

  • [Webinar] Accelerating Cryo-EM & Genomics Workflows
  • [Webinar] Accelerating AI Training Models
  • [White Paper] A Buyer’s Guide to Modern Storage