Setting up a simple HekaFS cluster

===Hypervisor/Host===

I'm using an 8 CPU, 24GB RAM machine running F15, with a QLogic FC HBA to the back-end NAS.


===Brick node guests on host===

I created four guests running F15 to use as brick nodes (servers):

* F15node1 (192.168.122.21)
* F15node2 (192.168.122.22)
* F15node3 (192.168.122.23)
* F15node4 (192.168.122.24)

(You may create as many as you want for your set-up.)


===Client node guest(s) on host===

I created a single guest as a client:

* F15node5 (192.168.122.25)


(You may create more than one.)


===Back-end storage===

I provisioned eight 5G LUNs on a NAS device, allowing in this case for two LUNs per brick. The actual size and number of LUNs you use are up to you. Because the LUNs are attached in an apparently random order on every boot, I used the Disk Utility to add a label to each LUN: guest1vol0, guest1vol1, guest2vol0, guest2vol1, guest3vol0, guest3vol1, guest4vol0, and guest4vol1. Add the LUNs to the guest brick VMs using Add Hardware->Storage in the guest detail window: choose 'Select managed or other existing storage', enter /dev/disk/by-label/guestXvolY, and set Device Type: SCSI disk, Cache mode: none, Storage format: raw. Use other Cache mode and Storage format options at your discretion.
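
If you prefer the command line to virt-manager, something like the following should do the same attachment from the host. This is only a sketch: it assumes a reasonably recent virsh, the guest and label names used above, and sda/sdb as the in-guest target names.

<pre>
# Sketch of a command-line equivalent of the Add Hardware->Storage steps above,
# run on the host: attach guest1's two labeled LUNs to F15node1 as raw SCSI
# disks with cache mode none.
virsh attach-disk F15node1 /dev/disk/by-label/guest1vol0 sda \
      --targetbus scsi --driver qemu --subdriver raw --cache none --persistent
virsh attach-disk F15node1 /dev/disk/by-label/guest1vol1 sdb \
      --targetbus scsi --driver qemu --subdriver raw --cache none --persistent
</pre>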


===Important Links===

* HekaFS git repo is git://git.fedorahosted.org/CloudFS.git
* F16 glusterfs RPMs are at http://koji.fedoraproject.org/koji/packageinfo?packageID=5443
* F16 hekafs RPM is at http://koji.fedoraproject.org/koji/packageinfo?packageID=12428
* HekaFS wiki page is https://fedoraproject.org/wiki/HekaFS
* F16 HekaFS Feature wiki page is https://fedoraproject.org/wiki/Features/HekaFS
* Jeff Darcy's HekaFS.org blog is at http://hekafs.org/blog/
If you use the hekafs RPM on RHEL, change line 23 of /etc/init.d/hekafsd from python2.7 to python2.6 after installing.
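
One way to make that change non-interactively (a sketch; it assumes the interpreter is still referenced on line 23 of the packaged init script):

<pre>
# Rewrite python2.7 to python2.6 on line 23 of the hekafsd init script.
sudo sed -i '23s/python2\.7/python2.6/' /etc/init.d/hekafsd
</pre>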


===The nitty gritty===

* On each server node, copy root's .ssh/id_dsa.pub to (root's) .ssh/authorized_keys file. Ensure that root can ssh between nodes without being prompted for a password.
* download the glusterfs, glusterfs-server, glusterfs-fuse, and hekafs RPMs
* install the RPMs on all server nodes and client nodes
* on each server node, make brick file systems for each of the LUNs and mount them, e.g. using ext4:
** <code>for lun in /dev/sd? ; do sudo mkfs.ext4 $lun; done</code>
** <code>for dev in /dev/sd? ; do sudo mkdir -p /bricks/`basename $dev`; done</code>
** <code>for dev in /dev/sd? ; do sudo mount $dev /bricks/`basename $dev`; done</code>
** optionally, make /etc/fstab entries for the mounts
** Note: if you use xfs on iSCSI LUNs shared from the qemu/kvm host, the guests will not always probe and initialize the iSCSI LUNs correctly (or quickly enough), and they will usually require manual intervention during boot. Unmount all bricks, run xfs_repair on each iSCSI device (e.g. /dev/sda), remount the bricks, and <code>exit</code> to continue the boot.
* on each server node, open ports in the firewall using the firewall admin utility (or from the command line; see the sketch after this list):
** open port 8080 tcp (Other Ports)
** open ports 24007-24029 tcp (Other Ports)
* set up gluster on each brick node
*# on each server node, enable glusterfsd, glusterd, and hekafsd:
*#* <code>chkconfig glusterfsd on</code>
*#* <code>chkconfig glusterd on</code>
*#* <code>chkconfig hekafsd on</code>
*# on each server node, start glusterd and hekafsd:
*#* <code>service glusterd start</code>
*#* <code>service hekafsd start</code>
*# open a browser window to the principal server node on port 8080 (http://192.168.122.21:8080)
*# configure the nodes in your cluster
*#* select ''Manage Servers''
*#* the IP address of the first or principal node is already listed
*#* enter the IP address or node name of the next node and press '''Add'''
*#* click ''Back to cluster configuration''
*#* repeat for each node in your cluster
*#* press '''Done'''
*# configure one or more volumes in your cluster
*#* select ''Manage Volumes''
*#* as described above, each node in my cluster has two bricks, /dev/sda and /dev/sdb, mounted on /bricks/sda and /bricks/sdb
*#* tick the checkbox for /bricks/sda on each node
*#* leave Volume Type: set to Plain
*#* leave the Replica or Stripe count blank (unset)
*#* enter <code>testsda</code> for the Volume ID
*#* press '''Provision'''
*#* add_local(testsda) OK ... is displayed for all four nodes
*#* click ''Back to volume configuration''
*#* testsda is now shown in the list of Existing Volumes
*#* repeat as desired for additional volumes
*#* press '''Done'''
*# configure one or more tenants in your cluster
*#* select ''Manage Tenants''
*#* enter <code>bob</code> as the Tenant Name
*#* enter <code>carol</code> as the Tenant Password
*#* enter <code>10000</code> as the Tenant UID Range: Low
*#* enter <code>10999</code> as the Tenant UID Range: High
*#* enter <code>10000</code> as the Tenant GID Range: Low
*#* enter <code>10999</code> as the Tenant GID Range: High
*#* press '''Add'''
*#* add_local(bob) OK ... is displayed for all four nodes
*#* click ''Back to tenant configuration''
*#* bob is now shown in the list of Existing Tenants
*#* repeat as desired for additional tenants
*#* click ''volumes'' in the entry for bob
*#* testsda is shown in the Volume List
*#* tick the Enabled checkbox for testsda
*#* press '''Update'''
*#* Volumes enabled for bob ... is displayed for all four nodes
*#* click ''Back to tenant configuration''
*# start the volume(s)
*#* press '''Done''' to return to the main configuration menu
*#* select ''Manage Volumes''
*#* click ''start'' in the testsda entry in the list of Existing Volumes
*#* start_local(testsda) returned 0 ... is displayed for all four nodes
* mount the volume(s) on the client(s)
** <code>sudo hfs_mount 192.168.122.21 testsda bob carol /mnt/testsda</code>
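
For reference, here is a rough command-line sketch of the per-server-node preparation steps above (passwordless root ssh, firewall ports via iptables rather than the firewall admin utility, brick filesystems, and service enablement). It assumes the four node addresses used in this walkthrough, that it is run as root on the principal node, and that the LUNs are the only /dev/sd? devices in each guest; treat it as illustrative rather than authoritative.

<pre>
#!/bin/bash
# Sketch of the server-node prep steps, run as root on the principal node.
# Assumes the brick-node addresses used above.

NODES="192.168.122.21 192.168.122.22 192.168.122.23 192.168.122.24"

# Passwordless root ssh between nodes.
[ -f /root/.ssh/id_dsa ] || ssh-keygen -t dsa -N '' -f /root/.ssh/id_dsa
for node in $NODES; do
    ssh-copy-id -i /root/.ssh/id_dsa.pub root@$node
done

for node in $NODES; do
    ssh root@$node '
        # Open the hekafsd and glusterd ports (iptables equivalent of the
        # firewall admin utility steps).
        iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
        iptables -I INPUT -p tcp --dport 24007:24029 -j ACCEPT
        service iptables save

        # Make and mount an ext4 brick filesystem on each LUN.
        for dev in /dev/sd?; do
            mkfs.ext4 "$dev"
            mkdir -p /bricks/$(basename "$dev")
            mount "$dev" /bricks/$(basename "$dev")
        done

        # Enable and start the services.
        chkconfig glusterfsd on
        chkconfig glusterd on
        chkconfig hekafsd on
        service glusterd start
        service hekafsd start
    '
done
</pre>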
