Jul 17 2015

Intro

It took me a while to figure out the optimal configuration for a tape library with two streamers used with the Bacula backup software.

The exact model of the tape library in use is a Quantum Scalar i40 with two LTO-5 streamers. It is hooked up directly to the main NFS server (so heavy backup traffic goes via localhost only) – a server that runs only the bacula-sd and bacula-fd services. The Bacula director runs on a separate, dedicated backup server.

Currently around 20 other servers are connected to this system as clients, with various daily Incremental, weekly Differential and monthly Full level backup jobs scheduled for execution.

Some additional information about this setup can be found in a previous post. Config files below:


Relevant config files from Backup server


/etc/bacula/bacula-dir.conf

Director {  
  Name = prod-backup-dir
  QueryFile = "/etc/bacula/scripts/query.sql"
  WorkingDirectory = "/var/lib/bacula"
  PidDirectory = "/var/run/bacula"
  Password = "xxxxx"
  Messages = Daemon
  DirAddress = prod-backup.domain.com
  Maximum Concurrent Jobs = 20
}
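# "@/path/file" includes that file verbatim; @|"cmd" includes the command's stdout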
@/etc/bacula/JobDefs/JobDefs.conf
@|"sh -c 'cat /etc/bacula/Job/*'"
@|"sh -c 'cat /etc/bacula/FileSet/*'"
@|"sh -c 'cat /etc/bacula/Schedule/*'"
@|"sh -c 'cat /etc/bacula/Clients-enabled/*'"
@|"sh -c 'cat /etc/bacula/Storage/*'"
@|"sh -c 'cat /etc/bacula/Pool/*'"
Catalog {
  Name = MyCatalog
  dbaddress = prod-db.domain.com
  dbname = "bacula"; dbuser = "bacula"; dbpassword = "xxxxx"
}
Messages {
  Name = Standard
  mailcommand = "/usr/lib/bacula/bsmtp -h prod-mailhub.domain.com -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r"
  operatorcommand = "/usr/lib/bacula/bsmtp -h prod-mailhub.domain.com  -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r"
  mail = [email protected] = all, !skipped            
  operator = [email protected] = mount
  console = all, !skipped, !saved
  append = "/var/lib/bacula/log" = all, !skipped
  catalog = all
}
Messages {
  Name = Daemon
  mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula client %c job %n exit code %e  \" %r"
  mail = [email protected] = all, !skipped            
  console = all, !skipped, !saved
  append = "/var/lib/bacula/log" = all, !skipped
}
Console {
  Name = prod-backup-mon
  Password = "xxxxxxxxxxx"
  CommandACL = status, .status
}
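
The Clients-enabled/* include above pulls in one definition per client; none of those files are reproduced here, but each follows the standard Client resource shape. A minimal sketch, with hypothetical names and illustrative retention periods:

Client {
  Name = web01-fd                  # hypothetical client
  Address = web01.domain.com
  FDPort = 9102
  Catalog = MyCatalog
  Password = "xxxxx"               # must match the password in the client's bacula-fd.conf
  File Retention = 60 days
  Job Retention = 6 months
  AutoPrune = yes
}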

Example job definition /etc/bacula/Job/Studies2010-1.conf

#----------------------------------
Job {
  Name = Studies2010-1
  Type = Backup
  Client = nfs-prod-fd
  Schedule = MonthlyCycle
  Messages = Daemon
  FileSet = Studies2010-1
  Level = Full
  Pool = lto5-pool
  Priority = 12
  Max Run Time = 1555200 # default limit is 6 days, 518400sec. bumped 3x just in case
  Spool Data = yes
  Spool Attributes = yes

}
#----------------------------------

Example fileset, /etc/bacula/FileSet/Studies2010-1.conf

#-------------------------------------------
FileSet {
  Name = "Studies2010-1"
  Include {
    Options {
      signature = MD5
      compression = GZIP5
      noatime = yes
      aclsupport = yes
      wilddir = "/export/studies/201007*"
      wilddir = "/export/studies/201008*"
    }
    Options {
      RegexDir = ".*"
      exclude = yes
    }
    File = "/export/studies"
  }
}
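
The trick here is that the second Options block matches every directory (RegexDir = ".*") and excludes it, so only directories caught by the wilddir patterns in the first block survive. To confirm a FileSet selects what you expect before running a real job, estimate in bconsole is handy:

# in bconsole: list every file the job would back up, without writing anything
estimate job=Studies2010-1 listing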

Example Schedule, /etc/bacula/Schedule/MonthlyCycle3.conf

Schedule {
  Name = MonthlyCycle3
  Run = Level=Full Pool=lto5-pool 3rd fri at 23:30
}
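
The job above points at a MonthlyCycle schedule that isn't reproduced here; combining the monthly Full, weekly Differential and daily Incremental levels mentioned in the intro, such a schedule might look roughly like this (a sketch, not the exact one in use):

Schedule {
  Name = MonthlyCycle
  Run = Level=Full Pool=lto5-pool 1st fri at 23:30
  Run = Level=Differential Pool=lto5-pool 2nd-5th fri at 23:30
  Run = Level=Incremental Pool=lto5-pool mon-thu at 23:30
}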

Tape library, storage definition:

Storage {
  Name = TapeLibrary
  Address = prod-tapelib.domain.com
  SDPort = 9103
  Password = "xxxxxx"
  Device = QuantumScalar-I40
  Media Type = LTO-5
  Autochanger = yes
  Maximum Concurrent Jobs = 4 # 2 drives x 2 jobs each, see the SD Device resources below
}

Pool of tapes defined here:

Pool {
  Name = lto5-pool
  Pool Type = Backup
  Volume Retention = 6 months
  Recycle = yes
  AutoPrune = yes
  Label Format = LTO5
  Storage = TapeLibrary
}
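
New tapes can be labelled in bulk from bconsole using the library's barcode reader; a sketch (the slot range below is just an example):

# in bconsole: label all barcoded tapes in slots 1-25 into the pool
label barcodes slots=1-25 pool=lto5-pool storage=TapeLibrary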

 

Relevant config files from Tape Library server

/etc/bacula/bacula-sd.conf

Note that I spool data to disk before writing it to tape – without spooling, the slow trickle of an Incremental/Differential backup cannot keep the drive streaming, so it constantly stops, rewinds and restarts ("shoe-shining"), which wears out both the drive and the media.

Storage {
  Name = TapeLibrary
  WorkingDirectory = "/var/spool/bacula"
  Pid Directory = "/var/run"
}
Autochanger {
  Name = QuantumScalar-I40
  Device = Drive0
  Device = Drive1
  Changer Device = /dev/changer
  Changer Command = "/usr/libexec/bacula/mtx-changer %c %o %S %a %d"
}
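
The mtx-changer wrapper can be exercised by hand before Bacula ever touches it; its positional arguments follow the %c %o %S %a %d placeholders above (changer device, command, slot, archive device, drive index):

# ask the library for its inventory
/usr/libexec/bacula/mtx-changer /dev/changer list 0 /dev/nst0 0
# move the tape from slot 1 into drive 0, then back
/usr/libexec/bacula/mtx-changer /dev/changer load 1 /dev/nst0 0
/usr/libexec/bacula/mtx-changer /dev/changer unload 1 /dev/nst0 0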
Device {
  Name = Drive0
  Drive Index = 0
  Media Type = LTO-5
  Archive Device = /dev/nst0
  AutomaticMount = yes
  AlwaysOpen = yes
  RemovableMedia = yes
  RandomAccess = no
  AutoChanger = yes
  Alert Command = "sh -c 'smartctl -H -l error %c'"  
  Maximum Changer Wait = 600
  Maximum Rewind Wait = 600
  Maximum Open Wait = 600
  Spool Directory = /var/spool/bacula/Spool
  Maximum Spool Size = 45G
  Maximum Concurrent Jobs = 2
}
Device {
  Name = Drive1
  Drive Index = 1
  Media Type = LTO-5
  Archive Device = /dev/nst1
  AutomaticMount = yes
  AlwaysOpen = yes
  RemovableMedia = yes
  RandomAccess = no
  AutoChanger = yes
  Alert Command = "sh -c 'smartctl -H -l error %c'"
  Maximum Changer Wait = 600
  Maximum Rewind Wait = 600
  Maximum Open Wait = 600
  Spool Directory = /var/spool/bacula/Spool
  Maximum Spool Size = 45G
  Maximum Concurrent Jobs = 2
}
Messages {
  Name = Standard
  director = prod-backup-dir = all
}
Director {
  Name = prod-backup-dir
  Password = "xxxxxxxx"
}
Director {
  Name = prod-backup-mon
  Password = "xxxxxxxxxx"
  Monitor = yes
}
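
Before trusting the drives with real data, it is worth running Bacula's btape utility against each one; its test command exercises the drive and, with the autochanger configured, the changer as well. A typical smoke test using the paths from the config above:

# stop bacula-sd first – btape needs exclusive access to the drive
btape -c /etc/bacula/bacula-sd.conf /dev/nst0
# then, at the btape prompt, run: test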

Thoughts

Implementing a Bacula-driven backup solution requires some time and effort – but what you get in the end is a sophisticated, enterprise-grade backup system, capable of backing up TBs of data in an organised and efficient manner.

Used in conjunction with a monitoring system, it offers a fully automated backup solution with minimal operator effort required. Routine tasks boil down to:

Jul 02 2015

The time has come: I need to start switching my mentality from SysV to modern init systems. Turns out that systemd thingy is not so bad! Actually, it's pretty cool.

There is an interesting video from the Red Hat Summit 2015 (at the bottom of this page) which I wholeheartedly recommend, especially as it doesn't apply solely to Red Hat – Debian Jessie uses systemd too.

My notes for the impatient:

 

  • Slice (each gets CPUShares=1024)
    • user.slice
    • system.slice # services
    • machine.slice # VMs, containers, etc.
  • Scope
  • Service
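
These CPUShares weights aren't fixed; per-unit resource properties can be inspected and changed at runtime. A quick sketch (httpd.service is just an example name):

# give a busy service half the default weight, persistently
systemctl set-property httpd.service CPUShares=512
# inspect the resulting value
systemctl show -p CPUShares httpd.service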

 

systemctl list-unit-files --no-pager|grep lvm
systemctl -t service list-unit-files
systemctl -t service
systemctl -t socket
systemctl -t socket list-unit-files
systemctl get-default
systemctl set-default multi-user.target # runlevel 3, no GUI
systemctl set-default graphical.target # aka runlevel 5
systemctl list-timers # can be used to periodically run fstrim for example? (see the sketch below)
systemd-delta # what changed on the system compared to what the distributor originally shipped
systemctl rescue
systemctl emergency
systemd-cgtop # this is cool!
systemd-cgls
journalctl # logging. Fields can be trusted (added by journald itself, prefixed with _) or untrusted (supplied by the logging app)
journalctl -xn
journalctl -k -b -1
journalctl /dev/sda
journalctl /usr/bin/python-thinlinc
journalctl _SYSTEMD_UNIT=avahi-daemon.service
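
As a concrete example of what timers can replace cron for, here is a minimal sketch of an fstrim service/timer pair (many distributions now ship their own fstrim.timer, so treat this as illustrative):

# /etc/systemd/system/fstrim.service
[Unit]
Description=Discard unused blocks on all mounted filesystems

[Service]
Type=oneshot
ExecStart=/sbin/fstrim -av

# /etc/systemd/system/fstrim.timer
[Unit]
Description=Run fstrim once a week

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target

# enable and start the timer, then verify it is scheduled
systemctl enable fstrim.timer
systemctl start fstrim.timer
systemctl list-timers fstrim.timer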

Check out systemd-based containers! I didn't expect that.

man systemd-nspawn
mkdir /var/lib/container/debian-tree
debootstrap --arch=amd64 unstable /var/lib/container/debian-tree/
systemd-nspawn -D /var/lib/container/debian-tree/

Or, assuming you have a bridge:

systemd-nspawn --network-bridge=br-eth0 -D /var/lib/container/ka-lite

This installs a minimal Debian unstable distribution into the directory /var/lib/container/debian-tree/ and then spawns a shell in a namespace container inside it. Wow. I know you can use something like LXC (Linux Containers), but systemd-nspawn is already there, waiting to be used.

Have fun.