Note: I didn't make the /expX directories on any of the nodes; they are created automatically for you.
To start the volume:
$ gluster volume start test-volume
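If you want to double-check that the volume actually came up, the gluster CLI can report on it (a quick sketch; the volume name matches the one above, and `volume status` may not exist on older GlusterFS releases):

```shell
# Show the volume's configuration; Status should read "Started"
$ gluster volume info test-volume
```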
To mount the volume, we don't have to modprobe fuse, since FUSE support is built into the 2.6.32 kernel that ships with EL6. You can also mount gluster volumes over NFS, but I decided to use FUSE.
$ mkdir -p /mnt/glusterfs
$ mount -t glusterfs i-0000011a:/test-volume /mnt/glusterfs
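If you want the mount to survive a reboot, an /etc/fstab entry along these lines should work (a sketch; `_netdev` tells the init scripts to wait for networking before mounting):

```shell
# /etc/fstab entry for mounting the gluster volume at boot
i-0000011a:/test-volume  /mnt/glusterfs  glusterfs  defaults,_netdev  0 0
```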
YAY! Working GlusterFS. To confirm it was working, I copied in a test file, mounted test-volume on another node in the test cluster as well, and there was my file!
GlusterFS doesn't seem too advanced compared to Hadoop or Ceph: if I look in the /expX directories, I just see whole files sitting there. In the current release, I believe the closest volume configuration to Hadoop or Ceph is Striped Replicated Volumes, but that volume type is only supported for use as a MapReduce back end.
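For reference, creating a striped replicated volume looks roughly like this (a sketch based on the gluster CLI; the server and brick names are made up, and the brick count has to be a multiple of stripe × replica):

```shell
# Hypothetical 4-brick striped replicated volume: stripe 2 x replica 2
$ gluster volume create test-volume stripe 2 replica 2 transport tcp \
    server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
```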
I think GlusterFS would be really cool as an OpenStack back end, especially since it's so darn simple. It's easily recoverable since the data is stored as plain files on the bricks. Of course, you would probably want striping given how large those image files can get.
Overall, I feel this was the easiest of the file systems I've tried out. Ceph was a little scary with all the configuration needed; GlusterFS was as simple as issuing a command to add another server. Of course, does that mean it'll rebalance the files if a server goes away? I don't really know how that will work yet.
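For the record, adding a server really is about that simple: probe the peer, add its brick, and kick off a rebalance so existing files spread onto it (a sketch; `i-0000011b` and `/exp2` are hypothetical names):

```shell
# Join the new server to the trusted storage pool
$ gluster peer probe i-0000011b

# Add its brick to the existing volume
$ gluster volume add-brick test-volume i-0000011b:/exp2

# Redistribute existing files across the new brick
$ gluster volume rebalance test-volume start
```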