Setting up a simple HekaFS cluster

Hypervisor/Host

I'm using an 8-CPU, 24 GB RAM machine running F15, with a QLogic FC HBA connected to back-end NAS storage.

Brick node guests on host

I created four guests running F15 to use as brick nodes (servers):

  • F15node1 (192.168.122.21)
  • F15node2 (192.168.122.22)
  • F15node3 (192.168.122.23)
  • F15node4 (192.168.122.24)

(You may create as many as you want for your set-up.)

Client node guest(s) on host

I created a single guest as a client:

  • F15node5 (192.168.122.25)

(You may create more than one.)

Back-end storage

I provisioned eight 5G LUNs on a NAS device, two LUNs per brick node in this case; the actual size and number of LUNs you use is up to you. Because the LUNs are attached in an apparently random order on every boot, I used the Disk Utility to add a label to each LUN: guest1vol0, guest1vol1, guest2vol0, guest2vol1, guest3vol0, guest3vol1, guest4vol0, and guest4vol1. Add the LUNs to the guest brick VMs using Add Hardware->Storage in the guest detail window:

  • choose 'Select managed or other existing storage'
  • enter /dev/disk/by-label/guestXvolY
  • Device Type: SCSI disk
  • Cache mode: none
  • Storage format: raw

Use other Cache mode and Storage format options at your discretion.
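If you prefer the command line to virt-manager, virsh can attach the same storage; a minimal sketch for the first LUN of the first guest, using the guest name and label from above and making the disk persistent across guest reboots:

  • sudo virsh attach-disk F15node1 /dev/disk/by-label/guest1vol0 sda --persistent

Repeat for each guest/LUN pairing.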

The nitty gritty

  • On each server node, append root's .ssh/id_dsa.pub to root's .ssh/authorized_keys on every node (including itself). Ensure that root can ssh between nodes without being prompted for a password.
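    A minimal sketch, assuming DSA keys as above and the node addresses from this setup:
    • ssh-keygen -t dsa (run on each node; accept the defaults)
    • for node in 192.168.122.21 192.168.122.22 192.168.122.23 192.168.122.24 ; do ssh-copy-id -i ~/.ssh/id_dsa.pub root@$node ; done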
  • Download the glusterfs, glusterfs-server, glusterfs-fuse, and hekafs RPMs.
  • Install the RPMs on all server nodes and client nodes.
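    For example, from a directory holding the downloaded packages (a sketch; yum resolves any remaining dependencies):
    • sudo yum localinstall --nogpgcheck glusterfs*.rpm hekafs*.rpm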
  • On each server node, make brick file systems for each of the LUNs and mount them, e.g. using ext4:
    • for lun in /dev/sd? ; do sudo mkfs.ext4 $lun ; done
    • for dev in /dev/sd? ; do sudo mkdir -p /bricks/$(basename $dev) ; done
    • for dev in /dev/sd? ; do sudo mount $dev /bricks/$(basename $dev) ; done
    • optionally, add /etc/fstab entries for the mounts (see the example below)
    • Note: if you use xfs on iSCSI LUNs shared from the qemu/kvm host, the guests will not always probe and initialize the iSCSI LUNs correctly (perhaps not quickly enough), and the guests will usually require manual intervention during boot: unmount all bricks, run xfs_repair on each iSCSI device (e.g. /dev/sda), remount the bricks, and exit to continue the boot.
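    • Example /etc/fstab entry for one brick (a sketch; device and mount point follow the layout above):
      • /dev/sda  /bricks/sda  ext4  defaults  1 2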
  • On each server node, open ports in the firewall using the firewall admin utility:
    • open port 8080 tcp (Other Ports)
    • open ports 24007-24029 tcp (Other Ports)
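    • If you prefer the command line, equivalent iptables rules would look something like this (a sketch, assuming the stock iptables service):
      • sudo iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
      • sudo iptables -I INPUT -p tcp --dport 24007:24029 -j ACCEPT
      • sudo service iptables save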
  • Set up gluster on each brick node:
    1. on each server node enable glusterfsd, glusterd, and hekafsd:
      • chkconfig glusterfsd on
      • chkconfig glusterd on
      • chkconfig hekafsd on
    2. on each server node start glusterd and hekafsd:
      • service glusterd start
      • service hekafsd start
    3. open a browser window to the principal server node on port 8080 (http://192.168.122.21:8080).
    4. configure the nodes in your cluster
      • select Manage Servers
      • the IP address of the first or principal node is already listed
      • enter the IP address or node name and press Add
      • click Back to cluster configuration
      • repeat for each node in your cluster
      • press Done
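      To verify cluster membership from a shell on any server node, gluster's own CLI works here, since hekafsd manages glusterd underneath (an extra check, not part of the original steps):
      • sudo gluster peer status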
    5. configure one or more volumes in your cluster
      • select Manage Volumes
      • As described above, each node in my cluster has two LUNs: /dev/sda and /dev/sdb, mounted on /bricks/sda and /bricks/sdb
      • tick the checkbox for /bricks/sda on each node
      • leave Volume Type: set to Plain
      • leave Replica or Stripe count blank (unset)
      • enter testsda for Volume ID
      • press Provision
      • add_local(testsda) OK ... is displayed for all four nodes
      • click Back to volume configuration
      • testsda is now shown in the list of Existing Volumes
      • repeat as desired for additional volumes
      • press Done
    6. configure one or more tenants in your cluster
      • select Manage Tenants
      • enter bob as the Tenant Name
      • enter carol as the Tenant Password
      • enter 10000 as the Tenant UID Range: Low
      • enter 10999 as the Tenant UID Range: High
      • enter 10000 as the Tenant GID Range: Low
      • enter 10999 as the Tenant GID Range: High
      • press Add
      • add_local(bob) OK ... is displayed for all four nodes
      • click Back to tenant configuration
      • bob is now shown in the list of Existing Tenants
      • repeat as desired for additional tenants
      • click volumes in the entry for bob
      • testsda is shown in the Volume List
      • tick the Enabled checkbox for testsda
      • press Update
      • Volumes enabled for bob ... is displayed for all four nodes
      • click Back to tenant configuration
    7. start the volume(s)
      • press Done
      • select Manage Volumes
      • click start in the testsda entry in the list of Existing Volumes
      • start_local(testsda) returned 0 ... is displayed for all four nodes
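      As an extra check (not part of the original steps), the underlying gluster volume state can be inspected from any server node:
      • sudo gluster volume info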
  • Mount the volume(s) on the client(s); the mount point must exist first:
    • sudo mkdir -p /mnt/testsda
    • sudo hfs_mount 192.168.122.21 testsda bob carol /mnt/testsda
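    A quick sanity check that the mount succeeded (not part of the original steps):
    • df -h /mnt/testsda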