Setting up a simple HekaFS cluster

Hypervisor/Host

8 CPU, 24GB RAM, FC HBA to back-end storage, running F15.

Brick node guests on host

  • F15node01 (192.168.122.21)
  • F15node02 (192.168.122.22)
  • F15node03 (192.168.122.23)
  • F15node04 (192.168.122.24)

Client node guest(s) on host

  • F15node05 (192.168.122.25)

N.B. all guest nodes are running F15.

Back-end storage

40 5GB LUNs on QLogic Fibre Channel, provisioned as SCSI disks, 10 per brick node. N.B. the size and number of LUNs are arbitrary.
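
On each brick guest the LUNs show up as plain SCSI disks (/dev/sda ... /dev/sdj in this setup); a quick way to verify from a guest:

    cat /proc/partitions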

Important Links

  • cloudfs git repo: git://git.fedorahosted.org/CloudFS.git
  • F16 glusterfs RPMs: https://koji.fedoraproject.org/koji/buildinfo?buildID=259916
  • F16 hekafs RPM: https://koji.fedoraproject.org/koji/buildinfo?buildID=259462
  • HekaFS wiki page: https://fedoraproject.org/wiki/HekaFS
  • F16 HekaFS Feature wiki page: https://fedoraproject.org/wiki/Features/HekaFS

If you use the hekafs RPM on RHEL, change line 23 of /etc/init.d/hekafsd from python2.7 to python2.6 after installing.
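
For example, this sed one-liner makes the change (line 23 per the note above; it edits the file in place, so back it up first if in doubt):

    sudo sed -i '23s/python2\.7/python2.6/' /etc/init.d/hekafsd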

The nitty gritty

  • download the glusterfs, glusterfs-server, glusterfs-fuse, and hekafs RPMs
  • install the RPMs on all brick nodes and client nodes
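       for example, after downloading the RPMs into the current directory (exact file names vary with the build), something like:
           sudo yum localinstall --nogpgcheck glusterfs*.rpm hekafs*.rpm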
  • make filesystems on the iSCSI LUNs and mount them, e.g. using ext4:
       for lun in /dev/sd? ; do sudo mkfs.ext4 $lun ; done
       for dev in /dev/sd? ; do sudo mkdir -p /bricks/$(basename $dev) ; done
       for dev in /dev/sd? ; do sudo mount $dev /bricks/$(basename $dev) ; done
       optionally, add /etc/fstab entries for the mounts so they persist across reboots
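       for example, an entry per brick along these lines (device and mount point names from the layout above):
           /dev/sda    /bricks/sda    ext4    defaults    0 0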
       Note: if you use qemu/kvm guests as bricks and use xfs on iSCSI LUNs shared from the qemu/kvm host, the guests do not always probe and initialize the iSCSI LUNs quickly enough, and will usually require manual intervention to boot: unmount all bricks, run xfs_repair on each iSCSI device (e.g. /dev/sda), remount the bricks, and exit the emergency shell to continue booting.
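       a sketch of that manual recovery, run from the emergency shell (device names assumed from the layout above):
           for dev in /dev/sd? ; do umount /bricks/$(basename $dev) 2>/dev/null ; done
           for dev in /dev/sd? ; do xfs_repair $dev ; done
           for dev in /dev/sd? ; do mount $dev /bricks/$(basename $dev) ; done
           exit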
  • open ports in the firewall using the firewall admin utility:
       on all bricks, open port 8080 tcp (Other Ports)
       on all bricks, open ports 24007-24029 tcp (Other Ports)
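       if you prefer the command line, lokkit (the CLI behind the firewall admin utility) should open the same ports, e.g. (check the resulting config afterward):
           sudo lokkit --port=8080:tcp --port=24007-24029:tcp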
  • set up gluster on the brick nodes
       enable glusterfsd, glusterd, and hekafsd on each brick:
           chkconfig glusterfsd on
           chkconfig glusterd on
           chkconfig hekafsd on 
       start glusterd and hekafsd on each brick:
           service glusterd start
           service hekafsd start 
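       confirm the daemons are running on each brick:
           service glusterd status
           service hekafsd status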
       open a browser window to port 8080 on the principal node (e.g. http://192.168.122.21:8080/); we use Google Chrome, as Firefox seems not to work well
       configure nodes in your cluster
           select Manage Servers
           the IP address of the first or principal node is already listed
           enter the IP address or node name and press Add
           click 'Back to cluster configuration'
           repeat for each node in your cluster
           press Done
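       optionally, verify from the command line that the nodes see each other (assuming the UI has probed them into the gluster peer group, which is how HekaFS drives glusterd):
           sudo gluster peer status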
       configure one or more volumes in your cluster
           select Manage Volumes
            As per above, each node in my cluster has ten bricks, /dev/sda ... /dev/sdj, mounted on /bricks/sda ... /bricks/sdj
           tick the checkbox for /bricks/sda on each node
           leave Volume Type: set to Plain
            leave Replica or Stripe count unset
           enter testsda for Volume ID
           press Provision
           add_local(testsda) OK ... is displayed for each node
           click 'Back to volume configuration'
           testsda is now shown in the list of Existing Volumes
           repeat as desired for additional volumes
           use the Back button in your browser to return to the Configuration Main menu
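       for reference, provisioning a plain volume this way corresponds roughly to the following GlusterFS CLI command (a sketch only; on a HekaFS cluster use the web UI so tenant isolation is layered on correctly):
           gluster volume create testsda 192.168.122.21:/bricks/sda \
               192.168.122.22:/bricks/sda 192.168.122.23:/bricks/sda \
               192.168.122.24:/bricks/sda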
       configure one or more tenants in your cluster
           select Manage Tenants
           enter bob as the Tenant Name
           enter carol as the Tenant Password
           enter 10000 as the Tenant UID Range: Low
            enter 10999 as the Tenant UID Range: High
            enter 10000 as the Tenant GID Range: Low
            enter 10999 as the Tenant GID Range: High
           press Add
           add_local(bob) OK ... is displayed for each node
           click 'Back to tenant configuration'
           bob is now shown in the list of Existing Tenants
           repeat as desired for additional tenants
           click 'volumes' in the entry for bob
           testsda is shown in the Volume List
           tick the Enabled checkbox for testsda
           press Update
           Volumes enabled for bob ... is displayed for each node
           click 'Back to tenant configuration'
       start the volume(s)
           use the Back button in your browser to return to the Configuration Main menu
           select Manage Volumes
            click 'start' in the testsda entry in the list of Existing Volumes
           start_local(testsda) returned 0 ... is displayed for each node
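       for reference, the underlying GlusterFS CLI equivalents (again, prefer the web UI on a HekaFS cluster):
           gluster volume start testsda
           gluster volume info testsda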
  • mount the volume(s) on the client(s)
       sudo mkdir -p /mnt/testsda   # the mount point must exist
       sudo hfs_mount 192.168.122.21 testsda bob carol /mnt/testsda
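       sanity-check the mount with a quick df and a test write (the file name is arbitrary):
           df -h /mnt/testsda
           sudo touch /mnt/testsda/testfile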