Sep 16 2016

My notes on installing Son of Grid Engine (SGE) on a commodity cluster.


Intro

Grab the following RPM packages from the Son of Grid Engine site at https://arc.liv.ac.uk/downloads/SGE/releases/8.1.9/:

gridengine-8.1.9-1.el6.x86_64.rpm
gridengine-debuginfo-8.1.9-1.el6.x86_64.rpm
gridengine-devel-8.1.9-1.el6.noarch.rpm
gridengine-drmaa4ruby-8.1.9-1.el6.noarch.rpm
gridengine-execd-8.1.9-1.el6.x86_64.rpm
gridengine-guiinst-8.1.9-1.el6.noarch.rpm
gridengine-qmaster-8.1.9-1.el6.x86_64.rpm
gridengine-qmon-8.1.9-1.el6.x86_64.rpm

(at the time of writing, version 8.1.9).

For your convenience, the following one-liner should fetch these for you 🙂

cd /tmp; for i in gridengine-8.1.9-1.el6.x86_64.rpm gridengine-debuginfo-8.1.9-1.el6.x86_64.rpm gridengine-devel-8.1.9-1.el6.noarch.rpm gridengine-drmaa4ruby-8.1.9-1.el6.noarch.rpm gridengine-execd-8.1.9-1.el6.x86_64.rpm gridengine-guiinst-8.1.9-1.el6.noarch.rpm gridengine-qmaster-8.1.9-1.el6.x86_64.rpm gridengine-qmon-8.1.9-1.el6.x86_64.rpm; do wget https://arc.liv.ac.uk/downloads/SGE/releases/8.1.9/$i;done

Pick one server to act as the master node in your cluster, referred to later as the qmaster.
For smaller clusters it can happily run on a small VM (say 2 vCPUs, 2 GB RAM), maximising your resource usage.

Install EPEL on all nodes

rpm -Uvh http://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm

Install prerequisites on all nodes

yum install -y perl-Env.noarch perl-Exporter.noarch perl-File-BaseDir.noarch perl-Getopt-Long.noarch perl-libs perl-POSIX-strptime.x86_64 perl-XML-Simple.noarch jemalloc munge-libs hwloc lesstif csh ruby xorg-x11-fonts xterm java xorg-x11-fonts-ISO8859-1-100dpi xorg-x11-fonts-ISO8859-1-75dpi mailx

Install GridEngine packages on all nodes

cd /tmp/
yum localinstall gridengine-*

Install Qmaster

cd /opt/sge
./install_qmaster

Accepting the defaults should just work, though you might want to run it under a user other than root:

"Please enter a valid user name >> sgeadmin"

Make sure to add GridEngine to the global environment:

cp /opt/sge/default/common/settings.sh /etc/profile.d/sge.sh
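A quick sanity check from a fresh shell – SGE_ROOT should point at /opt/sge and qconf should be able to reach the qmaster:

source /etc/profile.d/sge.sh
echo $SGE_ROOT        # should print /opt/sge
qconf -sh             # should list the qmaster among the admin hosts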

NFS export SGE root to nodes in your cluster

vim /etc/exports

/opt/sge 10.10.80.0/255.255.255.0(rw,no_root_squash,sync,no_subtree_check,nohide)
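Reload the export table so the change takes effect without restarting NFS:

exportfs -ra
exportfs -v           # verify /opt/sge is now exported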

then mount the share on the exec nodes

vim /etc/fstab

qmaster:/opt/sge 	/opt/sge nfs	tcp,intr,noatime	0	0
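With the fstab entry in place, create the mount point and mount it (this assumes the hostname qmaster resolves from the exec nodes):

mkdir -p /opt/sge
mount /opt/sge
df -h /opt/sge        # should show qmaster:/opt/sge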

 

Installing exec nodes

cd /opt/sge
./install_execd

Just go with the flow here. Once done, you should be able to see your exec nodes:

# qhost 
HOSTNAME                ARCH         NCPU NSOC NCOR NTHR  LOAD  MEMTOT  MEMUSE  SWAPTO  SWAPUS
----------------------------------------------------------------------------------------------
global                  -               -    -    -    -     -       -       -       -       -
execnode01              lx-amd64        8    2    8    8  0.12   15.6G    5.2G   20.0G  104.9M
execnode02              lx-amd64        8    2    8    8  0.00   15.7G    1.3G   21.1G     0.0
execnode03              lx-amd64        8    2    8    8  0.00   15.7G    1.4G   21.1G   18.6M

That means you can start submitting jobs to your cluster, either interactively with qlogin or qrsh, or as batch jobs with qsub.
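For example, a minimal batch job – the sleep.sh script below is purely illustrative:

cat > sleep.sh <<'EOF'
#!/bin/sh
# trivial test job: report where we run, then idle for a minute
echo "Running on $(hostname)"
sleep 60
EOF

qsub -cwd -N sleeper sleep.sh
qstat                 # the job should go from qw (queued) to r (running)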

Adding queues (for FSL)

In most cases it's enough to have the default queue called all.q.

This example defines new queues with different priorities (nice levels):

# change defaults for all.q
qconf -sq all.q |\
    sed -e 's/bin\/csh/bin\/sh/' |\
    sed -e 's/posix_compliant/unix_behavior/' |\
    sed -e 's/priority              0/priority 20/' >\
    /tmp/q.tmp
qconf -Mq /tmp/q.tmp

# add other queues
sed -e 's/all.q/verylong.q/' /tmp/q.tmp >\
   /tmp/verylong.q
qconf -Aq /tmp/verylong.q

sed -e 's/all.q/long.q/' /tmp/q.tmp |\
   sed -e 's/priority *20/priority 15/' >\
   /tmp/long.q
qconf -Aq /tmp/long.q

sed -e 's/all.q/short.q/' /tmp/q.tmp |\
   sed -e 's/priority *20/priority 10/' >\
   /tmp/short.q
qconf -Aq /tmp/short.q

sed -e 's/all.q/veryshort.q/' /tmp/q.tmp |\
   sed -e 's/priority *20/priority 5/' >\
   /tmp/veryshort.q
qconf -Aq /tmp/veryshort.q
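Verify that all the queues took:

# qconf -sql
all.q
long.q
short.q
verylong.q
veryshort.q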

Monitoring your cluster

Use the qmon GUI or the following commands:

# qstat -f

queuename                      qtype resv/used/tot. load_avg arch          states
---------------------------------------------------------------------------------
all.q@execnode01               BIP   0/0/8          0.12     lx-amd64
---------------------------------------------------------------------------------
all.q@execnode02               BIP   0/0/8          0.00     lx-amd64
---------------------------------------------------------------------------------
all.q@execnode03               BIP   0/0/8          0.00     lx-amd64
---------------------------------------------------------------------------------
long.q@execnode01              BIP   0/0/8          0.12     lx-amd64
---------------------------------------------------------------------------------
long.q@execnode02              BIP   0/0/8          0.00     lx-amd64
---------------------------------------------------------------------------------
long.q@execnode03              BIP   0/0/8          0.00     lx-amd64
---------------------------------------------------------------------------------
short.q@execnode01             BIP   0/0/8          0.12     lx-amd64
---------------------------------------------------------------------------------
short.q@execnode02             BIP   0/0/8          0.00     lx-amd64
---------------------------------------------------------------------------------
short.q@execnode03             BIP   0/0/8          0.00     lx-amd64
---------------------------------------------------------------------------------
verylong.q@execnode01          BIP   0/0/8          0.12     lx-amd64
---------------------------------------------------------------------------------
verylong.q@execnode02          BIP   0/0/8          0.00     lx-amd64
---------------------------------------------------------------------------------
verylong.q@execnode03          BIP   0/0/8          0.00     lx-amd64
---------------------------------------------------------------------------------
veryshort.q@execnode01         BIP   0/0/8          0.12     lx-amd64
---------------------------------------------------------------------------------
veryshort.q@execnode02         BIP   0/0/8          0.00     lx-amd64
---------------------------------------------------------------------------------
veryshort.q@execnode03         BIP   0/0/8          0.00     lx-amd64

# qhost -q

HOSTNAME                ARCH         NCPU NSOC NCOR NTHR  LOAD  MEMTOT  MEMUSE  SWAPTO  SWAPUS
----------------------------------------------------------------------------------------------
global                  -               -    -    -    -     -       -       -       -       -
execnode01              lx-amd64        8    2    8    8  0.12   15.6G    5.2G   20.0G  104.9M
   all.q                BIP   0/0/8         
   long.q               BIP   0/0/8         
   short.q              BIP   0/0/8         
   veryshort.q          BIP   0/0/8         
   verylong.q           BIP   0/0/8         
execnode02              lx-amd64        8    2    8    8  0.00   15.7G    1.3G   21.1G     0.0
   all.q                BIP   0/0/8         
   long.q               BIP   0/0/8         
   short.q              BIP   0/0/8         
   veryshort.q          BIP   0/0/8         
   verylong.q           BIP   0/0/8         
execnode03              lx-amd64        8    2    8    8  0.00   15.7G    1.4G   21.1G   18.6M
   all.q                BIP   0/0/8         
   long.q               BIP   0/0/8         
   short.q              BIP   0/0/8         
   veryshort.q          BIP   0/0/8         
   verylong.q           BIP   0/0/8         
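For a poor man's live dashboard, wrapping these in watch works well enough, and qstat -j <job_id> gives the gory details of a single job:

watch -n 10 "qstat -f -u '*'"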

Sep 12 2016

Intro

This one is interesting. I've got a few HP BL260 blade servers, out of warranty but packed with RAM and CPU cores. I wanted to use them as compute nodes in my OpenStack cloud, but all (literally all! I mean every single one!) of their internal SFF SATA drives had died within 6 years.

Instead of replacing them I decided to get rid of the internal hard drives altogether and use CentOS's ability to put the root partition on a remote storage device, in a similar manner to VMware ESXi hosts booting from an iSCSI SAN – so no spinning disks inside the compute node, and no extra heat or energy consumption.

These cheap blades didn't have a fancy HBA capable of booting from iSCSI, so I used PXE booting instead. Essentially:

  • we set the blade to boot from its NIC
  • the blade gets an IP address and PXE boot server details via DHCP
  • the blade pulls the kernel and initrd from the PXE server
  • the blade uses an iSCSI target LUN as its R/W root device

iSCSI targets

iSCSI targets (one per blade) are created first on my ZFS server (NAS4FREE) – an added bonus is that we can zfs-snapshot each blade's LUN before applying critical updates.
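For example, before patching compute016 (dataset names as in the extents list below):

zfs snapshot tank/mielnet-compute016@pre-update
zfs rollback tank/mielnet-compute016@pre-update   # only if the update went wrong (power the blade off first)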

 

Extents (zvols):

Name                 Path
mielnet-compute016   /dev/zvol/tank/mielnet-compute016
mielnet-compute017   /dev/zvol/tank/mielnet-compute017
mielnet-compute018   /dev/zvol/tank/mielnet-compute018
mielnet-compute059   /dev/zvol/tank/mielnet-compute059

Targets:

Name                                              Flags LUNs                                     PG IG AG
iqn.2007-09.jp.ne.peach.istgt:mielnet-compute016  rw    LUN0=/dev/zvol/tank/mielnet-compute016  1  1  1
iqn.2007-09.jp.ne.peach.istgt:mielnet-compute017  rw    LUN0=/dev/zvol/tank/mielnet-compute017  1  3  3
iqn.2007-09.jp.ne.peach.istgt:mielnet-compute018  rw    LUN0=/dev/zvol/tank/mielnet-compute018  1  4  4
iqn.2007-09.jp.ne.peach.istgt:mielnet-compute059  rw    LUN0=/dev/zvol/tank/mielnet-compute059  1  2  2

Initiator Groups:

Tag Initiators Networks         Comment
1   ALL        10.10.100.16/32  mielnet-compute016 Initiator Group
2   ALL        10.10.100.59/32  mielnet-compute059 Initiator Group
3   ALL        10.10.100.17/32  mielnet-compute017 Initiator Group
4   ALL        10.10.100.18/32  mielnet-compute018 Initiator Group

 

OS installation

I used the standard CentOS installer with the advanced "Storage" option. Note that the installation wizard failed/got stuck at the GRUB installation phase; at that point I used the installer's second console (ALT+F2) to scp the kernel and initrd image out to my PXE server.
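Something along these lines from the installer's shell – /mnt/sysimage is where anaconda mounts the freshly installed system, and the destination path on the PXE server is of course yours to choose:

scp /mnt/sysimage/boot/vmlinuz-3.10.0-327.10.1.el7.x86_64 \
    /mnt/sysimage/boot/initramfs-3.10.0-327.10.1.el7.x86_64.img \
    root@10.10.100.57:/var/lib/tftpboot/images/mielnet-compute016/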

 

DHCP service

We need a DHCP service to make this work – just standard DHCP reservations for my blades, with the PXE server living at 10.10.100.57:

# cat /etc/dhcp/dhcpd.conf
#########################
deny unknown-clients;
authoritative;
option dhcp-max-message-size 2048;
use-host-decl-names on;
ddns-update-style none;
option domain-name "mielnet.pl";
option domain-name-servers 8.8.8.8, 8.8.4.4 ;
default-lease-time 86400;
max-lease-time 86400;
log-facility local7;
option time-servers ntp0.mielnet.pl,inti.mielnet.pl ;
option ntp-servers ntp0.mielnet.pl,inti.mielnet.pl ;
#########################

subnet 10.10.100.0 netmask 255.255.255.0 {
    option routers 10.10.100.254;
    next-server 10.10.100.57;
    filename "pxelinux.0";
    option tftp-server-name "10.10.100.57";
}
host mielnet-compute016 {hardware ethernet 00:24:81:cf:xx:xx;fixed-address mielnet-compute016;}
host mielnet-compute017 {hardware ethernet 00:24:81:cf:xx:yy;fixed-address mielnet-compute017;}
host mielnet-compute018 {hardware ethernet 00:24:81:cf:xx:xy;fixed-address mielnet-compute018;}
host mielnet-compute059 {hardware ethernet 00:0c:29:02:xx:yx;fixed-address mielnet-compute059;}
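Restart the service after editing (assuming the DHCP/PXE box runs CentOS 7 as well):

systemctl restart dhcpd
systemctl enable dhcpd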

PXE booting

The command gethostip 10.10.100.16 (from the syslinux package) translates the IP address into the hexadecimal format that pxelinux expects as a per-host config filename. Then:

vim /var/lib/tftpboot/pxelinux.cfg/86977610
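If you'd rather not paste the hex by hand, gethostip -x prints just the hexadecimal form, so the config filename can be generated directly:

CFG=$(gethostip -x 10.10.100.16)
vim /var/lib/tftpboot/pxelinux.cfg/$CFG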

 

# cat 86977610
DEFAULT menu
PROMPT 0
MENU TITLE MIELNET IT Services || Boot Server
TIMEOUT 20
TOTALTIMEOUT 200
ONTIMEOUT Centos7-mielnet-compute016

LABEL Centos7-mielnet-compute016
MENU LABEL Centos7-mielnet-compute016
kernel /images/mielnet-compute016/vmlinuz-3.10.0-327.10.1.el7.x86_64 root=/dev/sda1 ro netroot=iscsi:mielnet-compute016:xxxxxxxx@<iscsi-server-ip>::3260::iqn.2007-09.jp.ne.peach.istgt:mielnet-compute016 rd.iscsi.initiator=iqn.1994-05.com.redhat:4b7c6d70242b vconsole.font=latarcyrheb-sun16 vconsole.keymap=uk LANG=en_GB.UTF-8 console=tty0 ip=enp2s0f0:dhcp rhgb quiet
append initrd=/images/mielnet-compute016/initramfs-3.10.0-327.10.1.el7.x86_64.img

LABEL Centos7-mielnet-compute016-bridge
MENU LABEL Centos7-mielnet-compute016-bridge
kernel /images/mielnet-compute016/vmlinuz-3.10.0-327.10.1.el7.x86_64 root=/dev/sda1 ro netroot=iscsi:mielnet-compute016:xxxxxxxx@<iscsi-server-ip>::3260::iqn.2007-09.jp.ne.peach.istgt:mielnet-compute016 rd.iscsi.initiator=iqn.1994-05.com.redhat:4b7c6d70242b vconsole.font=latarcyrheb-sun16 vconsole.keymap=uk LANG=en_GB.UTF-8 bridge=br-ex:enp2s0f0 ip=br-ex:dhcp console=tty0 rd.shell rd.debug
append initrd=/images/mielnet-compute016/initramfs-3.10.0-327.10.1.el7.x86_64.img

LABEL Centos7-mielnet-compute016-rescue
MENU LABEL Centos7-mielnet-compute016-rescue
kernel /images/mielnet-compute016/vmlinuz-0-rescue-a8aafbe2565244fc8478818344af177d rescue vconsole.font=latarcyrheb-sun16 vconsole.keymap=uk LANG=en_GB.UTF-8 root=/dev/sda1 netroot=iscsi:mielnet-compute016:xxxxxxxx@<iscsi-server-ip>::3260::iqn.2007-09.jp.ne.peach.istgt:mielnet-compute016 ip=enp2s0f0:dhcp rd.iscsi.initiator=iqn.1994-05.com.redhat:4b7c6d70242b
append initrd=/images/mielnet-compute016/initramfs-0-rescue-a8aafbe2565244fc8478818344af177d.img

MENU end

Make sure to replace mielnet-compute016:xxxxxxxx with your iSCSI target's unique CHAP user and password, and <iscsi-server-ip> with the address of your iSCSI host.

Lastly, make sure the kernel and initrd.img are in place:

 # ls -l /var/lib/tftpboot/images/mielnet-compute016/
total 172068
-rw-r--r--. 1 root root   126426 Nov 19  2015 config-3.10.0-327.el7.x86_64
drwxr-xr-x. 2 root root       26 Mar 16 17:19 grub
drwx------. 3 root root       19 Mar 16 17:20 grub2
-rw-r--r--. 1 root root 41572738 Mar 16 17:21 initramfs-0-rescue-a8aafbe2565244fc8478818344af177d.img
-rw-r--r--. 1 root root 20945730 Mar 23 14:20 initramfs-3.10.0-327.10.1.el7.x86_64.img
-rw-r--r--. 1 root root 21417384 Mar 16 17:21 initramfs-3.10.0-327.el7.x86_64.img
-rw-r--r--. 1 root root 20945730 Mar 23 14:49 initramfs.img
-rw-r--r--. 1 root root 41572738 Mar 16 17:21 initramfs-rescue.img
-rw-r--r--. 1 root root   602670 Mar 16 17:20 initrd-plymouth.img
-rw-r--r--. 1 root root   252612 Nov 19  2015 symvers-3.10.0-327.el7.x86_64.gz
-rw-------. 1 root root  2963044 Nov 19  2015 System.map-3.10.0-327.el7.x86_64
-rwxr-xr-x. 1 root root  5155536 Mar 23 14:50 vmlinuz
-rwxr-xr-x. 1 root root  5156528 Mar 16 17:22 vmlinuz-0-rescue-a8aafbe2565244fc8478818344af177d
-rwxr-xr-x. 1 root root  5155536 Feb 16  2016 vmlinuz-3.10.0-327.10.1.el7.x86_64
-rwxr-xr-x. 1 root root  5156528 Nov 19  2015 vmlinuz-3.10.0-327.el7.x86_64
-rwxr-xr-x. 1 root root  5156528 Mar 16 17:22 vmlinuz-rescue

That should get you going. The only downside I can see is that after a Linux kernel upgrade you need to manually copy the new kernel/initrd to the PXE server and then update the kernel filename in the PXE config file. Fortunately, with CentOS that doesn't happen too often, so I can live with it.
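A rough sketch of that copy step, run on the blade after a yum update (the per-host directory layout follows the listing above; adjust to taste):

# pick out the newest installed kernel version, e.g. 3.10.0-327.10.1.el7.x86_64
KVER=$(rpm -q --last kernel | head -n1 | sed -e 's/^kernel-//' -e 's/ .*//')
scp /boot/vmlinuz-$KVER /boot/initramfs-$KVER.img \
    root@10.10.100.57:/var/lib/tftpboot/images/$(hostname -s)/
# ...then point the kernel/append lines in pxelinux.cfg at the new files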

Apart from that, I've been running these blades as compute nodes like this for a few months now with zero problems so far.