Let’s face it, we’re living in an increasingly distributed world. This holds true for storage as much as for any other component of your stack.
This is the first installment in a series of posts describing our research into using Parallel Distributed File Systems (PDFS) on cloud infrastructure. At CCC, we use Amazon Web Services (AWS) as the public component of our hybrid cloud computing platform, but the ideas we describe in this series should be applicable on most public cloud platforms.
What is a distributed file system?
A distributed computing system is simply a collection of different computers that work independently, at the local level, but perform work in a coordinated way, at the global level. A distributed file system is a specialized form of a distributed system that focuses on storage capacity.
Distributed File Systems (DFS) present a single namespace over multiple storage devices. They are sometimes referred to as “network file systems”. Modern DFSs are typically built from commodity disks connected by a network, which enables the single, unified view.
Unlike a Storage Area Network (SAN), the file system presented by a DFS does not require block-level (e.g., SCSI) shared access to the underlying storage. Instead, communication between a client and a server is accomplished via a network protocol, like TCP/IP.
What is a parallel distributed file system?
In Parallel Distributed File Systems (PDFS), parts of a single file are striped across multiple storage devices, which allows the data to be accessed in parallel. Because the combined bandwidth of several devices exceeds the bandwidth of any single device, striping increases the speed of access to a file. Striping also matters when a single storage device isn’t large enough to hold the entire file.
The diagram below shows striping of three files across three devices. File 1 is striped across devices Dev 1 and Dev 2; File 2 is striped across all three devices, and File 3 is not striped.
Most PDFSs allow the administrator to tweak a number of striping settings, including the stripe size and the minimum and maximum number of stripes into which a file may be divided.
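The striping scheme described above can be sketched in a few lines of code. This is an illustrative round-robin layout, not the actual placement algorithm of Lustre or BeeGFS; the names `STRIPE_SIZE`, `DEVICES`, and `locate` are our own.

```python
# Sketch: map a byte offset in a striped file to the device that holds
# it and to the offset within that device's portion of the file.
# A round-robin layout is assumed; real PDFSs use more elaborate schemes.

STRIPE_SIZE = 1 << 20          # 1 MiB stripes (a common default size)
DEVICES = ["dev1", "dev2", "dev3"]

def locate(offset: int) -> tuple[str, int]:
    """Return (device, offset within that device) for a file offset."""
    stripe_index = offset // STRIPE_SIZE           # which stripe overall
    device = DEVICES[stripe_index % len(DEVICES)]  # round-robin placement
    local_offset = ((stripe_index // len(DEVICES)) * STRIPE_SIZE
                    + offset % STRIPE_SIZE)
    return device, local_offset

# Consecutive 1 MiB chunks land on successive devices, so one large
# sequential read can fan out across all three devices in parallel.
```

Because each device serves only every third chunk, a client reading the file sequentially can issue requests to all three devices at once, which is the source of the bandwidth multiplication described above.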
What problem are they trying to solve?
Like most distributed systems, a DFS attempts to alleviate the problems associated with relying on a single machine:
limited storage capacity
limited I/O bandwidth
a single point of failure
Two PDFS of interest: BeeGFS & Lustre
In this series, we’ll compare two parallel distributed file systems: Lustre and BeeGFS.
Lustre is a very popular choice in large-scale scientific computing: most of the top 100 supercomputers on the planet use Lustre for storing files (see http://www.top500.org/). Lustre is open source software licensed under the GPL. BeeGFS is a relative newcomer to the field: it started as FhGFS at Fraunhofer ITWM in 2005 and gained a commercial entity (ThinkParQ) behind it in 2014. BeeGFS is provided free of charge under its own license (EULA).
We will examine the performance characteristics of these file systems on AWS EC2 infrastructure.
What separates different PDFSs?
One of the key differences among DFSs is how they handle metadata about the objects stored in the system. Metadata is stored either in a central metadata server (the approach Lustre takes) or in a distributed fashion (e.g., Gluster, which uses a hashing algorithm). The centralized model makes the metadata server a potential single point of failure.
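The distributed, hash-based approach can be sketched as follows. This is a simplified illustration of the idea, not Gluster’s actual algorithm; the server names and the choice of hash are assumptions made for the example.

```python
import hashlib

# Sketch: distributed metadata placement by hashing the file path.
# The server list and hash function are illustrative assumptions.

SERVERS = ["md-server-a", "md-server-b", "md-server-c"]

def metadata_server(path: str) -> str:
    """Deterministically pick a metadata server from a file path."""
    digest = hashlib.md5(path.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(SERVERS)
    return SERVERS[index]

# Every client computes the same server for a given path, so no central
# lookup service is needed -- and no single metadata server becomes a
# single point of failure for the whole namespace.
```

The trade-off is that a simple modulo scheme like this one reshuffles most placements when a server is added or removed, which is why production systems use more sophisticated consistent-hashing variants.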
Other areas where differences occur can be identified by asking the following questions:
does the file system require a special Linux kernel?
does the file system require specialized hardware?
is striping available at multiple levels (file, directory, file system, etc.)?
is stripe size configurable?
The table below summarizes some of the important differences between the two PDFSs we’ve investigated.
The next installment of this series will describe our work with the Lustre file system on AWS.