What is Network File System (NFS)?
WEKA. April 15, 2021
We are taught early in our lives that sharing is good. Network File System (NFS) was built on the principle of sharing. NFS is an Internet Standard client/server protocol, developed in 1984 by Sun Microsystems, that provides shared (and originally stateless) file access to LAN-attached network storage. As such, NFS enables a client to view, store, and update files on a remote computer as if they were stored locally. Behind the scenes, NFS client software translates the POSIX file access calls issued by applications into NFS server requests, which the server answers with metadata, data, and status. The main versions in deployment these days (client and server) are NFSv3, NFSv4, and NFSv4.1.
What is Network File System (NFS)?
Network File System (NFS) is a protocol invented in the 1980s to facilitate remote file sharing between servers. There are multiple versions of NFS; NFSv3 is the most common. NFS is easy to use and manage, and requires a kernel client that supports NFS mounting.
Benefits of Using NFS
Over the years, NFS has evolved to support more security, better file sharing (locking), and better (caching) performance. Moreover, it’s a relatively affordable and easy-to-use solution for network file sharing that uses existing internet protocol infrastructure.
At present, here are the benefits of the NFS service:
- Multiple clients can use the same files, which allows everyone on the network to use the same data, accessing it on remote hosts as if it were local.
- Computers share applications, which eliminates the need for local disk space and reduces storage costs.
- All users can read the same files, so data can remain up-to-date, and it’s consistent and reliable.
- Mounting the file system is transparent to all users.
- Support for heterogeneous environments allows you to run mixed technology from multiple vendors and use interoperable components.
- System admin overhead is reduced due to centralization of data.
- Fewer removable disks and drives lying around means fewer security concerns—which is always good!
How Does Network File System Work?
Fundamentally, the NFS client-server protocol begins with a “mount” command, which specifies client and server software options or attributes. NFSv4 is a stateful protocol with over 30 unique options that can be specified on the mount command, ranging from read/write block size to the transport protocol used. Security options control how client access to data files is validated, along with other data security settings.
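As a rough sketch of what such a mount looks like on a Linux client (the server name and export path here are placeholders, not real hosts), a handful of these options can be spelled out explicitly:

```shell
# Hypothetical example: mount an NFSv4.1 export with a few explicit options.
#   vers=4.1   - NFS protocol version
#   proto=tcp  - transport protocol
#   hard       - retry indefinitely on server outage (vs. soft)
#   sec=sys    - security flavor used to validate client access
#   actimeo=30 - attribute-cache timeout in seconds (a caching option)
sudo mount -t nfs4 \
  -o vers=4.1,proto=tcp,hard,sec=sys,actimeo=30 \
  nfs-server:/export/data /mnt/data

# Show the options the kernel actually negotiated for this mount:
grep /mnt/data /proc/mounts
```

This is only an illustrative CLI fragment; the exact option set and defaults vary by client OS and NFS version (see the nfs(5) man page on Linux).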
Some of the more interesting NFS protocol software options include caching options, shared file locking characteristics, and security support. File locking and caching interact, and both must be properly specified for shared file access to work. If file (read or write) data resides only in one host’s cache and some other host tries to access the same file, the data it reads could be stale, unless both (or rather, all) clients of the NFS storage server use the SAME locking and caching options for the mounted file system.
File locking was designed to support shared file access: that is, when a file is accessed by more than one application or (compute) thread. Shared file access could occur within a single host (with or without multi-core/multi-threading) or across different hosts accessing the same file over NFS.
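To make the locking idea concrete, here is a minimal sketch using the Linux flock(1) utility; the file paths are placeholders under /tmp, though on a real deployment they would sit on the NFS mount, where the client forwards the lock request to the server (via the NLM sideband protocol for NFSv3, or natively in NFSv4) so that different hosts can coordinate:

```shell
# Sketch: two concurrent writers serialize appends to a shared file
# with an exclusive advisory lock. Paths are hypothetical.
data=/tmp/shared_demo.dat
lock=/tmp/shared_demo.lock
: > "$data"                         # start with an empty file

for writer in A B; do
  (
    flock -x 9                      # block until the exclusive lock is held
    echo "writer $writer appending" >> "$data"
  ) 9> "$lock" &
done
wait

cat "$data"                         # both lines present, neither torn mid-write
```

The order of the two lines is not guaranteed; the lock only guarantees that each append completes without interleaving, which is exactly the property shared file access over NFS depends on.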
Disadvantages of Network File System
There are many challenges with the current NFS Internet Standard that may or may not be addressed in the future; for example, some reviews of NFSv4 and NFSv4.1 suggest that these versions have limited bandwidth and scalability (improved with NFSv4.2) and that NFS slows down during heavy network traffic. Here are some others:
- Security—First and foremost is a security concern, given that NFS is based on RPCs, which are inherently insecure and should only be used on a trusted network behind a firewall. Otherwise, NFS is vulnerable to internet threats.
- Protocol chattiness—The NFS client-server protocol requires a lot of request activity just to set up a data transfer. Many small interactions or steps are needed to read and write data, which adds up to significant overhead for today’s AI/ML/DL workloads that consume a tremendous number of small files.
- File sharing is highly complex—Configuring and setting up proper shared file access via file locking and caching is a daunting task at best. On the one hand, it adds a lot of protocol overhead, contributing to the chattiness mentioned above. On the other hand, it still leaves a lot to be desired, inasmuch as each host’s mount command for the same file system can easily go awry.
- Parallel file access—NFS was designed for sequential access to a shared network file, but these days applications are dealing with larger files, and non-sequential or parallel file access is required. Parallel access (pNFS) was added in NFSv4.1, but not a lot of clients support it yet.
- Block size limitations—The current NFS protocol standard allows for a maximum of 1MB of data to be transferred in a single read or write request. In 1984, 1MB was a lot of data, but that’s no longer the case. There are classes of applications that should be transferring GBs, not MBs, of data.
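On a Linux client, this transfer-size cap surfaces as the rsize/wsize mount options; asking for more than the client or server supports is silently clamped, so the granted values are worth checking. A hedged sketch (server name and paths are placeholders):

```shell
# Request the 1 MiB maximum per-call transfer size. The kernel clamps
# rsize/wsize to what both ends support, so confirm what was granted.
sudo mount -t nfs4 -o rsize=1048576,wsize=1048576 \
  nfs-server:/export/big /mnt/big

# The negotiated (possibly clamped) rsize/wsize appear in the mount options:
grep /mnt/big /proc/mounts
```

This is an illustrative CLI fragment only; actual limits depend on the server implementation and client kernel.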
There are other problems with NFS, but these are our top five. Yes, block size restrictions could easily be made larger, but then the timeouts would need to be adjusted and perhaps rethought. And yes, parallel file access is coming, but the protocol chattiness and file sharing (locking-caching) problems listed above are much more difficult to solve.
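The chattiness is observable on a live Linux client: the kernel keeps per-operation RPC counters, and on metadata-heavy small-file workloads the metadata calls typically dwarf the actual reads and writes. An illustrative diagnostic fragment (it only produces output on a machine with active NFS mounts):

```shell
# Per-operation RPC counts for the local NFS client; on small-file
# workloads, GETATTR/LOOKUP/ACCESS usually dominate READ/WRITE.
nfsstat -c

# Per-mount detail, including round-trip times for each operation:
cat /proc/self/mountstats
```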
NFS has worked well for over 35 years now. It’s unclear whether NFS can be salvaged in today’s small-file world. Yet another version of NFS could be pushed through the standards committee, but our view is that the chattiness problem is too endemic in the protocol definition to be eliminated entirely, AND NFS either needs to fully support shared files or not; doing both halfway is a prescription for failure.
How to Accelerate Network File System Performance
NFS offers limited performance and scalability for modern environments. For today’s networks and capabilities, it’s very limited: NFS delivers throughput of only about 1.5 Gb/s, while network cards can offer 100 Gb/s. NFS is also inefficient at managing metadata. Weka offers the simplicity of NFS with the performance and scalability of SAN, and the ability to saturate 100 Gb/s pipes.
Additional Helpful Resources
Why Network File System Performance Won’t Cut AI & Machine Learning
Network File System and AI Workloads
Lustre File System Explained
General Parallel File System (GPFS) Explained
BeeGFS Parallel File System Explained
FSx for Lustre
Block Storage vs. Object Storage
Introduction to Hybrid Cloud Storage
Learn About HPC Storage, HPC Storage Architecture and Use Cases
NAS vs. SAN vs. DAS
Redefining Scale for Modern Storage