Time required: 3 Hours
This lab will introduce you to the concepts of NFS v4 services from both the client and server perspective. By the end of this document, you will have built an Ubuntu 13.10 NFS server, created a client configuration, and tested your server configuration and any shared directories from the network client you choose to set up.
Although this lab focuses on the most common NFS server configuration (a single NFS server with multiple clients), there are a few different ways you can deploy NFS:
Single server: This is the most common scenario, where you deploy a single server that allows one or more individual clients or networks to access one or more folders that can be mounted locally.
For example: A single office running older Linux or Sun Unix workstations with limited disk space needs to share a large number of corporate documents; these can be located on a central server that is backed up.
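On the server side, a single-server deployment comes down to listing the shared directory and the permitted clients in `/etc/exports`. The directory path and network below are assumptions for illustration, not values this lab requires:

```
# /etc/exports -- hypothetical entry sharing /srv/docs read-write
# with every host on the 192.168.1.0/24 network
/srv/docs    192.168.1.0/24(rw,sync,no_subtree_check)
```

After editing the file, running `sudo exportfs -ra` re-reads the export table without restarting the NFS service.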
Failover server: A failover server is simply an identically configured NFS server that can be quickly activated in the event of the failure of a primary server. Its file shares are often synchronized via cron and rsync in order to keep the content current. The most common configuration uses a secondary network interface that has the same IP address as the primary server but is not active.
For example: A larger single office running older Linux or Sun Unix workstations with limited disk space needs to share a large number of corporate documents that team members update constantly as part of their daily work. Losing access to those documents would bring business to a halt; a failover server can quickly restore functionality so that work can continue.
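The cron-plus-rsync synchronization mentioned above can be sketched as a single crontab entry on the failover server. The host name `primary` and the path `/srv/docs` are assumptions for illustration:

```
# Hypothetical crontab entry: every 5 minutes, mirror the primary's
# share onto the failover server, deleting files removed at the source
*/5 * * * * rsync -a --delete primary:/srv/docs/ /srv/docs/
```

Note that `--delete` keeps the replica exact but means an accidental deletion on the primary propagates within one cron interval, which is one reason the failover share should still be backed up separately.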
Clustered: This is typically where one or more additional nodes exist in a clustered environment. All file systems are kept in sync using a synchronization daemon (such as GlusterFS) or via cron and rsync scripts that run periodically. The cluster shares a common "floating" IP address so that if any node fails, clients do not lose their connection to the share.
For example: A large cluster of web servers shares static content across servers and sites, both virtual and physical, in multiple geographic locations. The cluster can distribute the load across multiple NFS nodes and maintain basic disaster recovery protection by replicating file changes between disparate physical data centers.
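From the client's point of view, the floating IP is what makes node failure transparent: the client mounts the floating address rather than any individual node. A minimal `/etc/fstab` sketch, assuming a floating address of 192.168.1.100 and a share named `/srv/docs` (both hypothetical):

```
# /etc/fstab -- mount the cluster's floating IP, not a specific node
# _netdev delays the mount until the network is up at boot
192.168.1.100:/srv/docs  /mnt/docs  nfs4  _netdev,auto  0  0
```

If a node holding the floating IP fails, the address moves to a surviving node and the client's mount point stays valid.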