
Feb 16 2016
 

Intro


I needed a job scheduling system for a single machine, to allow a group of people to run some number-crunching scripts. I decided to try SLURM and was surprised that there were no RPM repos or packages available for CentOS – sadly it ain’t as easy as apt-get install slurm-llnl

But I managed to get it working in the end, and here you can find a journal of that journey.

 

We need the EPEL repo

rpm -Uvh http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

Installing required bits and bobs

yum install -y munge-devel munge-libs readline-devel perl-ExtUtils-MakeMaker openssl-devel pam-devel rpm-build perl-DBI perl-Switch munge mariadb-devel

Downloading the latest stable version of Slurm

From http://www.schedmd.com/#repos
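
At the time of writing the latest stable release was 15.08.7 (as used in the rpmbuild step below). Something along these lines should fetch it – the URL here is illustrative, copy the actual download link from the page above:

wget http://www.schedmd.com/download/latest/slurm-15.08.7.tar.bz2 # illustrative URL, grab the real link from schedmd.com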

Building rpm packages

rpmbuild -ta slurm-15.08.7.tar.bz2

Once done, install the RPMs

ls -l ~/rpmbuild/RPMS/x86_64/*.rpm
rpm -Uvh ~/rpmbuild/RPMS/x86_64/*.rpm

Or even better, upload them to your custom Spacewalk software channel. Not using a Spacewalk server yet? Check it out – if you have more than a couple of CentOS boxes, you are going to love it; it’s awesome.

We may also add a user for slurm at this stage; we are going to need it later.

useradd slurm
mkdir /var/log/slurm
chown slurm. /var/log/slurm

Install MariaDB

yum install mariadb-server -y
systemctl start mariadb
systemctl enable mariadb
mysql_secure_installation

# you can save the mysql root password in root's home dir -
# bad practice, but on the other hand,
# if someone can access root's home dir
# then we are in trouble anyway

vim ~/.my.cnf
[client]
password = aksjdlowjedjw34dwnknxpw93e9032edwxbsx
# now root gets a password-less mysql root shell.
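
A quick check that the password-less shell actually works:

mysql -e 'SELECT VERSION();' # should print the MariaDB version without prompting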

Create SQL database

Start the mysql shell and:

mysql> grant all on slurm_acct_db.* TO 'slurm'@'localhost'
-> identified by 'some_pass' with grant option;
mysql> create database slurm_acct_db;
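
It's worth verifying that the slurm user can reach its new database (using the placeholder password from the grant above):

mysql -u slurm -p'some_pass' -e 'show databases;' # slurm_acct_db should be listed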

Configure SLURM db backend

# egrep -v '^#|^$' /etc/slurm/slurmdbd.conf
AuthType=auth/munge
DbdAddr=localhost
DbdHost=localhost
SlurmUser=slurm
DebugLevel=4
LogFile=/var/log/slurm/slurmdbd.log
PidFile=/var/run/slurmdbd.pid
StorageType=accounting_storage/mysql
StorageHost=localhost
StoragePass=some_pass
StorageUser=slurm
StorageLoc=slurm_acct_db
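
Since slurmdbd.conf holds the database password in plain text, it's sensible to make it readable only by the slurm user:

chown slurm: /etc/slurm/slurmdbd.conf
chmod 600 /etc/slurm/slurmdbd.conf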

and enable the service

systemctl start slurmdbd
systemctl enable slurmdbd
systemctl status slurmdbd

After starting the service, your shiny new database should be populated with tables:

MariaDB [slurm_acct_db]> show tables;
+-------------------------+
| Tables_in_slurm_acct_db |
+-------------------------+
| acct_coord_table |
| acct_table |
| clus_res_table |
| cluster_table |
| qos_table |
| res_table |
| table_defs_table |
| tres_table |
| txn_table |
| user_table |
+-------------------------+
10 rows in set (0.01 sec)

 

Time to configure the Munge auth daemon

create-munge-key
systemctl start munge
systemctl status munge
systemctl enable munge
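
A quick sanity check – encode a test credential and decode it straight back:

munge -n | unmunge # should end with STATUS: Success (0)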

And finally the actual SLURM daemon

Stick something along these lines into your /etc/slurm/slurm.conf

# egrep -v '^#|^$' /etc/slurm/slurm.conf
ClusterName=efg
ControlMachine=efg01
SlurmUser=slurm
SlurmctldPort=6817
SlurmdPort=6818
AuthType=auth/munge
StateSaveLocation=/home/slurm/tmp
SlurmdSpoolDir=/tmp/slurmd
SwitchType=switch/none
MpiDefault=none
SlurmctldPidFile=/var/run/slurmctld.pid
SlurmdPidFile=/var/run/slurmd.pid
ProctrackType=proctrack/linuxproc
CacheGroups=0
ReturnToService=0
SlurmctldTimeout=300
SlurmdTimeout=300
InactiveLimit=0
MinJobAge=300
KillWait=30
Waittime=0
SchedulerType=sched/backfill
SelectType=select/linear
FastSchedule=1
SlurmctldDebug=3
SlurmdDebug=3
JobCompType=jobcomp/none
JobAcctGatherType=jobacct_gather/linux
JobAcctGatherFrequency=30
AccountingStorageType=accounting_storage/slurmdbd
NodeName=efg01 CPUs=16 State=UNKNOWN
PartitionName=debug Nodes=efg01 Default=YES MaxTime=INFINITE State=UP
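
Note that the config above points StateSaveLocation at /home/slurm/tmp – worth creating that directory up front and handing it to the slurm user, rather than relying on slurmctld to do it:

mkdir -p /home/slurm/tmp
chown slurm: /home/slurm/tmp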

and see if the service starts

systemctl start slurm
systemctl status slurm
systemctl enable slurm

 

 

Testing SLURM

 

scontrol show daemons # which slurm daemons should run on this host
srun --ntasks=16 --label /bin/hostname
sbatch # submit a batch script
salloc # create job alloc and start shell, interactive
srun # create job alloc and launch job step, MPI
sattach # attach to a running job step
sinfo # node and partition state
sinfo --Node # node-oriented view
sinfo -p debug # just the debug partition
squeue -i60 # refresh the queue view every 60 seconds
squeue -u dyzio -t all # jobs of user dyzio, in all states
squeue -s -p debug # job steps in the debug partition
smap # curses-based cluster view
sview # graphical cluster view
scontrol show partition
scontrol update PartitionName=debug MaxTime=60 # minutes
scontrol show config
sacct -u dyzio # accounting data for user dyzio
sacct -r debug # accounting data for the debug partition (-p would mean parsable output)
sstat # status of running jobs and steps
sreport # usage reports from the accounting data
sacctmgr # manage accounts, users and QOS
sprio # job priority factors
sshare # fair-share usage
sdiag # scheduler diagnostics
scancel --user=dyzio --state=pending
scancel 444445
strigger # manage event triggers
# Submit a job array with index values between 0 and 31
sbatch --array=0-31 -N1 tmp
# Submit a job array with index values of 1, 3, 5 and 7
sbatch --array=1,3,5,7 -N1 tmp
# Submit a job array with index values between 1 and 7
# with a step size of 2 (i.e. 1, 3, 5 and 7)
sbatch --array=1-7:2 -N1 tmp
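
For reference, tmp in the array examples above is just an ordinary batch script; a minimal sketch (the job name and output pattern are made up) could look like this:

#!/bin/bash
#SBATCH --job-name=array_test
#SBATCH --output=array_%A_%a.out
# %A expands to the master job ID, %a to the array index
echo "task ${SLURM_ARRAY_TASK_ID} running on $(hostname)"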

 

 

Any troubles?

Check out /var/log/messages, /var/log/slurm/slurmdbd.log and the output from

systemctl status slurm slurmdbd munge -l
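
On a systemd box the journal collects the same information in one place:

journalctl -u slurm -u slurmdbd -u munge --since today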

That should get you started. Drop a comment below if it did.

Feb 10 2016
 


I’ve got a CentOS 7 based Bacula installation with the storage daemon writing to file volumes located on a ZFS filesystem. Chown’ing the filesystem to user bacula was not enough – SELinux being SELinux, it didn’t particularly like bacula writing to a location of my choosing (/tank/backup), as it expects Bacula to write to /bacula by default.

Let’s identify the available Bacula contexts and re-label /tank/backup accordingly

# semanage fcontext -l | grep bacula
 /bacula(/.*)? all files system_u:object_r:bacula_store_t:s0
 /etc/bacula.* all files system_u:object_r:bacula_etc_t:s0
 /var/bacula(/.*)? all files system_u:object_r:bacula_store_t:s0
 /var/lib/bacula.* all files system_u:object_r:bacula_var_lib_t:s0
 /var/log/bacula.* all files system_u:object_r:bacula_log_t:s0
 /var/run/bacula.* regular file system_u:object_r:bacula_var_run_t:s0
 /usr/sbin/bacula.* regular file system_u:object_r:bacula_exec_t:s0
 /var/spool/bacula.* all files system_u:object_r:bacula_spool_t:s0
 /var/spool/bacula/log(/.*)? all files system_u:object_r:var_log_t:s0
 /etc/rc\.d/init\.d/bacula.* regular file system_u:object_r:bacula_initrc_exec_t:s0
 /usr/sbin/bat regular file system_u:object_r:bacula_admin_exec_t:s0
 /usr/sbin/bconsole regular file system_u:object_r:bacula_admin_exec_t:s0

Ah OK, so it’s called “system_u:object_r:bacula_store_t:s0” – let’s apply it

chcon system_u:object_r:bacula_store_t:s0 /tank/backup # one-off relabel, takes effect immediately
semanage fcontext -a -t bacula_store_t "/tank/backup(/.*)?" # store a permanent rule in the policy
restorecon -R -v /tank/backup # re-apply labels recursively from the stored policy
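
You can confirm that the label stuck:

ls -ldZ /tank/backup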

The same approach works if your CentOS 7 client refuses to restore data to /bacula-restores, with a message like this in the server log:

26-Sep 14:40 death-star JobId 24822: Error: mkpath.c:138 Cannot create directory /bacula-restores/backup: ERR=Permission denied

and this in the client’s audit log:

type=AVC msg=audit(1474897201.721:307): avc:  denied  { write } for  pid=26477 comm="bacula-fd" name="bacula-restores" dev="vda1" ino=159551617 scontext=system_u:system_r:bacula_t:s0 tcontext=unconfined_u:object_r:default_t:s0 tclass=dir

Simply run:

chcon system_u:object_r:bacula_store_t:s0 /bacula-restores
semanage fcontext -a -t bacula_store_t "/bacula-restores(/.*)?"
restorecon -R -v /bacula-restores
ls -lZ /

and now your restore job will run just fine. Magic.