Octopress Code Snippets Theming

After some comments from a buddy asking why I chose the dark theme in Solarized (my answer: it was the default), I decided to do a little customization. While Solarized is a very nice theme and many enjoy it, it's just a little too much ‘blue’ for me. I looked at the light theme and really wasn't partial to it either.

In a previous blog post I talked about my terminal setup and my use of Noah Frederick's Peppermint Theme. I find the dark background pleasing along with eye-popping colors, so I decided to tweak the Solarized theme by editing sass/custom/_colors.scss.

Taking the color palette from a few projects, I created what I think is a more pleasing color theme and one that better fits the overall theme of my blog. Below are some snippets for setting a new code snippet theme, along with examples of what they look like.
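
If you want to try one of these palettes yourself, the workflow is just an edit followed by a site rebuild. Here's a minimal sketch, assuming a standard Octopress checkout where the rake generate and rake preview tasks are available:

# Back up the stock palette, drop in a new one, and rebuild the site
cd ~/octopress                          # assumes your Octopress checkout lives here
cp sass/custom/_colors.scss sass/custom/_colors.scss.bak
vim sass/custom/_colors.scss            # paste in one of the palettes below
rake generate                           # recompile the Sass and regenerate the site
rake preview                            # check the result at http://localhost:4000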

Peppermint Theme

// Peppermint Colors https://noahfrederick.com/log/lion-terminal-theme-peppermint/
// You must have $solarized: dark; set for this to work
$base03:          #181818;
$base02:          #282828;
$solar-yellow:    #FFDC72;
$solar-orange:    #F0B77D;
$solar-red:       #FF6685;
$solar-magenta:   #FE96FF;
$solar-violet:    #FF8FFF;
$solar-blue:      #5DC6F5;
$solar-cyan:      #86D1D7;
$solar-green:     #A6EBA6;

Tomorrow Theme

// Tomorrow Colors https://github.com/chriskempson/tomorrow-theme
// You must have $solarized: dark; set for this to work
//$base03:          #181818;
//$base02:          #282828;
//$solar-yellow:    #eab700;
//$solar-orange:    #f5871f;
//$solar-red:       #c82829;
//$solar-magenta:   #c828a9;
//$solar-violet:    #8959a8;
//$solar-blue:      #4271ae;
//$solar-cyan:      #3e999f;
//$solar-green:     #718c00;

Base16 Colors Theme

// Base16 Colors https://github.com/chriskempson/base16
// You must have $solarized: dark; set for this to work
$base03:          #181818; //darkest blue
//$base02:          #282828; //dark blue
//$solar-yellow:    #f7ca88;
//$solar-orange:    #dc9656;
//$solar-red:       #ab4642;
//$solar-magenta:   #a16946;
//$solar-violet:    #ba8baf;
//$solar-blue:      #7cafc2;
//$solar-cyan:      #86c1b9;
//$solar-green:     #a1b56c;

Icinga2 PagerDuty Configuration

After some stumbling around the web and finding a few nuggets of information on how to configure PagerDuty event reporting with Icinga2, I have what I believe to be a fully functional solution.

The Icinga Integration Guide is a good basis to start from, with a few gotchas for Icinga2:

  1. The format of the notification and command objects needed for Icinga2
  2. The Icinga2 environment variables need to be prefixed with “ICINGA_”, as the pagerduty_icinga.pl script is looking for them:
while ((my $k, my $v) = each %ENV) {
  next unless $k =~ /^ICINGA_(.*)$/;
  $event{$1} = $v;
}
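
You can sanity-check this behavior before wiring up the notification commands. Below is a hypothetical smoke test; the variable values are made up, and the script path matches where the manifest below installs it:

# Enqueue a fake host event using the ICINGA_-prefixed variables the script scans for,
# then flush the queue to PagerDuty
sudo -u icinga env \
  ICINGA_CONTACTPAGER=YOURAPIKEYGOESHERE \
  ICINGA_NOTIFICATIONTYPE=PROBLEM \
  ICINGA_HOSTNAME=test01 \
  ICINGA_HOSTSTATE=DOWN \
  ICINGA_HOSTOUTPUT="manual test event" \
  /etc/icinga2/scripts/pagerduty_icinga.pl enqueue -f pd_nagios_object=host
sudo -u icinga /etc/icinga2/scripts/pagerduty_icinga.pl flush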

Below is a manifest example that uses the Icinga2 Puppet Module with my committed objects to configure PagerDuty integration. Note: you'll need to download the pagerduty_icinga.pl script locally for this manifest to work correctly.

# Configure PagerDuty Alerting Service
#
# Template Examples:
# http://monitoring-portal.org/wbb/index.php?page=Thread&postID=204321
# https://lists.icinga.org/pipermail/icinga-users/2014-May/008201.html
class icinga2::arin::pagerduty (
  $pagerduty_service_apikey = undef,
) {

  include stdlib

  # Install Perl dependencies
  $pagerduty_dependencies_packages = [ 'perl-libwww-perl', 'perl-Crypt-SSLeay' ]
  ensure_packages ( $pagerduty_dependencies_packages )

  # Install PagerDuty Alerting Script
  file { 'pagerduty_icinga.pl':
    ensure  => file,
    path    => '/etc/icinga2/scripts/pagerduty_icinga.pl',
    owner   => 'root',
    group   => 'root',
    mode    => '0755',
    content => template('icinga2/pagerduty_icinga.pl.erb'),
  }

  # Create PagerDuty Icinga User
  icinga2::object::user { 'pagerduty':
    display_name => 'PagerDuty Notification User',
    pager        => $pagerduty_service_apikey,
    target_dir   => '/etc/icinga2/objects/users',
    states       => [ 'OK', 'Critical' ],
  }

  ## Configure Cron for Icinga User
  cron { 'icinga pagerduty':
    ensure   => present,
command  => '/etc/icinga2/scripts/pagerduty_icinga.pl flush',
    user     => 'icinga',
    minute   => '*',
    hour     => '*',
    monthday => '*',
    month    => '*',
    weekday  => '*',
  }

  ## Configure Icinga2 PagerDuty Notification Command for Service
  icinga2::object::notificationcommand { 'notify-service-by-pagerduty':
    command            => ['"/icinga2/scripts/pagerduty_icinga.pl"', '"enqueue"', '"-f"', '"pd_nagios_object=service"', '"--verbose"'],
    cmd_path           => 'SysconfDir',
    template_to_import => 'plugin-notification-command',
    env                => {
      '"ICINGA_CONTACTPAGER"'     => '"$user.pager$"',
      '"ICINGA_NOTIFICATIONTYPE"' => '"$notification.type$"',
      '"ICINGA_SERVICEDESC"'      => '"$service.name$"',
      '"ICINGA_HOSTNAME"'         => '"$host.name$"',
      '"ICINGA_HOSTALIAS"'        => '"$host.display_name$"',
      '"ICINGA_SERVICESTATE"'     => '"$service.state$"',
      '"ICINGA_SERVICEOUTPUT"'    => '"$service.output$"',
    },
  }

  ## Configure Icinga2 PagerDuty Notification Command for Hosts
  icinga2::object::notificationcommand { 'notify-host-by-pagerduty':
    command            => ['"/icinga2/scripts/pagerduty_icinga.pl"', '"enqueue"', '"-f"', '"pd_nagios_object=host"', '"--verbose"'],
    cmd_path           => 'SysconfDir',
    template_to_import => 'plugin-notification-command',
    env                => {
      '"ICINGA_CONTACTPAGER"'     => '"$user.pager$"',
      '"ICINGA_NOTIFICATIONTYPE"' => '"$notification.type$"',
      '"ICINGA_HOSTNAME"'         => '"$host.name$"',
      '"ICINGA_HOSTALIAS"'        => '"$host.display_name$"',
      '"ICINGA_HOSTSTATE"'        => '"$host.state$"',
      '"ICINGA_HOSTOUTPUT"'       => '"$host.output$"',
    },
  }

  ## Configure Apply Notification to Hosts
  icinga2::object::apply_notification_to_host { 'pagerduty-host':
    assign_where => 'host.vars.enable_pagerduty == "true"',
    command      => 'notify-host-by-pagerduty',
    users        => [ 'pagerduty' ],
    states       => [ 'Up', 'Down' ],
    types        => [ 'Problem', 'Acknowledgement', 'Recovery', 'Custom' ],
    period       => '24x7',
  }

  ## Configure Apply Notification to Services
  icinga2::object::apply_notification_to_service { 'pagerduty-service':
    assign_where => 'service.vars.enable_pagerduty == "true"',
    command      => 'notify-service-by-pagerduty',
    users        => [ 'pagerduty' ],
    states       => [ 'OK', 'Warning', 'Critical', 'Unknown' ],
    types        => [ 'Problem', 'Acknowledgement', 'Recovery', 'Custom' ],
    period       => '24x7',
  }
}
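
After Puppet applies this class, it's worth validating the rendered objects before Icinga2 picks them up. A quick sketch, assuming your Icinga2 build supports the -C config-validation flag:

# Validate the generated object files, then reload to activate the notifications
sudo icinga2 daemon -C
sudo service icinga2 reload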

For those who want to configure this manually, below are examples of the objects needed.

Icinga2 Apply Objects

apply Notification "pagerduty-host" to Host {
  assign where host.vars.enable_pagerduty == "true"
  command = "notify-host-by-pagerduty"
  users = [ "pagerduty" ]
  period = "24x7"
  types = [ Problem, Acknowledgement, Recovery, Custom ]
  states = [ Up, Down ]
}

apply Notification "pagerduty-service" to Service {
  assign where service.vars.enable_pagerduty == "true"
  command = "notify-service-by-pagerduty"
  users = [ "pagerduty" ]
  period = "24x7"
  types = [ Problem, Acknowledgement, Recovery, Custom ]
  states = [ OK, Warning, Critical, Unknown ]
}

Icinga2 Notification Command Objects

object NotificationCommand "notify-host-by-pagerduty" {

  import "plugin-notification-command"

  command = [ SysconfDir + "/icinga2/scripts/pagerduty_icinga.pl", "enqueue", "-f", "pd_nagios_object=host", "--verbose" ]

  env = {
    "ICINGA_HOSTNAME" = "$host.name$"
    "ICINGA_HOSTALIAS" = "$host.display_name$"
    "ICINGA_NOTIFICATIONTYPE" = "$notification.type$"
    "ICINGA_CONTACTPAGER" = "$user.pager$"
    "ICINGA_HOSTOUTPUT" = "$host.output$"
    "ICINGA_HOSTSTATE" = "$host.state$"
  }
}

object NotificationCommand "notify-service-by-pagerduty" {

  import "plugin-notification-command"

  command = [ SysconfDir + "/icinga2/scripts/pagerduty_icinga.pl", "enqueue", "-f", "pd_nagios_object=service", "--verbose" ]

  env = {
    "ICINGA_HOSTNAME" = "$host.name$"
    "ICINGA_HOSTALIAS" = "$host.display_name$"
    "ICINGA_NOTIFICATIONTYPE" = "$notification.type$"
    "ICINGA_CONTACTPAGER" = "$user.pager$"
    "ICINGA_SERVICEDESC" = "$service.name$"
    "ICINGA_SERVICESTATE" = "$service.state$"
    "ICINGA_SERVICEOUTPUT" = "$service.output$"
  }
}

Icinga2 User Object

object User "pagerduty" {
  display_name = "PagerDuty Notification User"
  pager = "YOURAPIKEYGOESHERE"
  states = [ OK, Critical ]
}

My OS X Terminal Setup

OS X 10.10 was released last week and I have made the jump from Mavericks 10.9. Along with the jump I've decided to attempt to stick to using native apps in favor of third-party ones, such as using Terminal instead of iTerm2.

One of the main reasons for using iTerm2 was its built-in support for splitting windows/panes and, more recently, its tmux integration (mouse support). Window management is now handled via tmux, so the main requirements were tmux integration and of course making it look good.

For tmux integration OS X Terminal does well, except that it does not support tmux mouse integration. To resolve this I installed SIMBL and MouseTerm.

Bingo! Mouse support… or semi mouse support. While I could select panes and windows via the mouse, I could not resize panes, which was my main reason for having mouse support. It's just MUCH MUCH quicker and more accurate.

Then, after some digging in Google, I stumbled upon MouseTerm Plus, a fork of the original MouseTerm project that provides the ability to resize window panes!

Next was to make the Terminal more readable and appealing to the eye. In steps Noah Frederick's Peppermint Theme.

I'll update this post or create a new one on getting the rest of tmux up and running.

Building Apache Tomcat Connector (mod_jk) RPM With FPM on CentOS 6.x

The Apache Tomcat Connector module (mod_jk) is not currently included in the CentOS or EPEL repositories.

To assist in the deployment and management of the Apache mod_jk module, an RPM package will be created from the built source using FPM. If you haven't stumbled upon or used FPM, I highly recommend taking a look.

The basic overview of the process is as follows:

  1. Obtain/Download the latest release source
  2. Verify you have the prerequisites installed for building the module
  3. Build the latest release
  4. Create FPM build root
    • Create configuration file directory structure (httpd.conf)
    • Create module directory structure
    • Create mod_jk.conf configuration file
    • Copy mod_jk.so into module directory structure
  5. Generate RPM package using FPM

Here is a more detailed article on the exact process listed above.
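
As a rough illustration of step 5, the FPM invocation ends up looking something like the sketch below. The version, paths, and build-root layout are placeholders, not values from the linked article:

# Package a built mod_jk.so plus its config file from an FPM build root
fpm -s dir -t rpm \
  -n mod_jk -v 1.2.40 \
  --description "Apache Tomcat Connector (mod_jk)" \
  -C ~/mod_jk-buildroot \
  etc/httpd/conf.d/mod_jk.conf usr/lib64/httpd/modules/mod_jk.so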

AppleTV Behind a Captive Portal

While on vacation I ran into an issue getting my AppleTV up and running behind a captive portal. While the AppleTV was getting an IP address from the DHCP server, it was not able to connect to the iTunes servers to load any of the apps. This is because there is no browser or method to agree to the terms of service (ToS) via the AppleTV itself.

In order to get around this issue I relied on MAC spoofing. I navigated the menus of the AppleTV to determine its MAC address, then disconnected the AppleTV from the WiFi network. I then spoofed the MAC address on my laptop, following the procedure in Spoofing MAC Address in OS X.
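
For reference, here's a condensed sketch of that procedure, assuming the laptop's WiFi interface is en0 and using a placeholder address:

# Note the laptop's real MAC so it can be restored later
ifconfig en0 | awk '/ether/ {print $2}'
# Spoof the AppleTV's MAC address (aa:bb:cc:dd:ee:ff is a placeholder)
sudo ifconfig en0 ether aa:bb:cc:dd:ee:ff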

After that I opened a web browser on my laptop, navigated to the captive portal ToS page, and authenticated with the spoofed MAC address. Once authentication was complete I reconnected the AppleTV to the WiFi network, and it could reach the iTunes servers. I then reset my laptop's wireless interface back to its original MAC address so I could continue to access the Internet.

Note that you may need to perform this multiple times during your stay, depending on the length of the lease for the IP address you obtain.

Revival

New Logo, New Look

After moving my focus to snozberry.org, I've decided to bring techtaco.org back to life with a new look and new feel. I'm migrating the useful posts and information off snozberry and will be setting up a redirect in the near future.

Along with a new modified logo, TechTaco has moved to using the Oct2 theme with a few mods (background, sitemap, search box, G+).

Leave a comment and let me know what you think.

Puppet 3.x Install Script

Overview

This script is used to install/configure a basic puppet master on a RedHat/CentOS system.

During the installation two repositories are added: EPEL and PUPPETLABS. The repositories are disabled during installation.

After installing the required packages, the includepkgs parameter is added and the specific repositories are re-enabled. This allows you to pull updates for these specific packages.

Please leave any suggestions or hints in the comments below.

Puppet Install
#!/bin/bash

# Install Puppet 3.x on Centos 6.x

#############
# Variables #
#############

elv=`cat /etc/redhat-release | gawk 'BEGIN {FS="release "} {print $2}' | gawk 'BEGIN {FS="."} {print $1}'`
arch=`uname -m`
fqdn=`hostname -f`

##############
# Functions  #
##############

disable_repo() {
        local conf=/etc/yum.repos.d/$1.repo
        if [ ! -e "$conf" ]; then
                echo "Yum repo config $conf not found -- exiting."
                exit 1
        else
                sudo sed -i -e 's/^enabled.*/enabled=0/g' $conf
        fi
}

include_repo_packages() {
        local conf=/etc/yum.repos.d/$1.repo
        if [ ! -e "$conf" ]; then
                echo "Yum repo config $conf not found -- exiting."
                exit 1
        else
                shift
                sudo sed -i -e "/\[$1\]/ a\includepkgs=$2" ${conf}
                sudo sed -i -e "/\[$1\]/,/\]/ s/^enabled.*/enabled=1/" ${conf}
        fi
}

enable_service() {
        sudo /sbin/chkconfig $1 on
        sudo /sbin/service $1 start
}

disable_service() {
        sudo /sbin/chkconfig $1 off
        sudo /sbin/service $1 stop
}


# Stop/Disable SELinux (Permissive Mode)
sudo /usr/sbin/setenforce 0

# Stop/Disable IPTables v4/v6
disable_service iptables
disable_service ip6tables

# Add Puppet Labs YUM repository
sudo tee /etc/yum.repos.d/puppetlabs.repo > /dev/null << EOF
[puppetlabs]
name=Puppet Labs Packages
baseurl=http://yum.puppetlabs.com/el/\$releasever/products/\$basearch/
enabled=1
gpgcheck=1
gpgkey=http://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs
EOF

# Disable Puppet Labs YUM repository
disable_repo puppetlabs

# Add EPEL YUM repository
epel_rpm_url=http://dl.fedoraproject.org/pub/epel/$elv/$arch
sudo wget -4 -r -l1 --no-parent -A 'epel-release*.rpm' $epel_rpm_url
sudo yum -y --nogpgcheck localinstall dl.fedoraproject.org/pub/epel/$elv/$arch/epel-*.rpm
sudo rm -rf dl.fedoraproject.org

# Disable EPEL YUM repository
disable_repo epel

# Install Ruby prerequisites
# Packages from EPEL: ruby-augeas rubygem-json
sudo yum --enablerepo=epel -y install ruby ruby-libs ruby-rdoc ruby-augeas ruby-irb ruby-shadow rubygem-json rubygems libselinux-ruby

# Install Puppet Server
# Packages from PUPPETLABS: puppet puppet-server facter hiera
sudo yum --enablerepo=puppetlabs --enablerepo=epel -y install puppet puppet-server

# Start the puppetmaster service to create SSL certificate
sudo /etc/init.d/puppetmaster start

# Stop/Disable the puppet master service as it will be controlled via Passenger.
disable_service puppetmaster

# Install Passenger Apache Module (because WEBrick... really?)
# Packages from EPEL: mod_passenger rubygem-passenger rubygem-passenger-native rubygem-passenger-native-libs libev rubygem-fastthread rubygem-rack
sudo yum --enablerepo=puppetlabs --enablerepo=epel -y install rubygem-passenger rubygem-passenger-native rubygem-passenger-native-libs mod_passenger

# Configure the Apache conf.d for passenger
sudo tee /etc/httpd/conf.d/puppetmaster.conf > /dev/null << EOF
# you probably want to tune these settings
PassengerHighPerformance on
PassengerMaxPoolSize 12
PassengerPoolIdleTime 1500
# PassengerMaxRequests 1000
PassengerStatThrottleRate 120
RackAutoDetect Off
RailsAutoDetect Off

Listen 8140

<VirtualHost *:8140>
        SSLEngine on
        SSLProtocol -ALL +SSLv3 +TLSv1
        SSLCipherSuite ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP

        SSLCertificateFile      /var/lib/puppet/ssl/certs/${fqdn}.pem
        SSLCertificateKeyFile   /var/lib/puppet/ssl/private_keys/${fqdn}.pem
        SSLCertificateChainFile /var/lib/puppet/ssl/ca/ca_crt.pem
        SSLCACertificateFile    /var/lib/puppet/ssl/ca/ca_crt.pem
        # If Apache complains about invalid signatures on the CRL, you can try disabling
        # CRL checking by commenting the next line, but this is not recommended.
        SSLCARevocationFile     /var/lib/puppet/ssl/ca/ca_crl.pem
        SSLVerifyClient optional
        SSLVerifyDepth  1
        SSLOptions +StdEnvVars

        RequestHeader set X-SSL-Subject %{SSL_CLIENT_S_DN}e
        RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
        RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e

        DocumentRoot /etc/puppet/rack/public/
        RackBaseURI /
        RailsEnv production
        <Directory /etc/puppet/rack/>
                Options None
                AllowOverride None
                Order allow,deny
                allow from all
        </Directory>
</VirtualHost>
EOF

# Create Ruby Rack within the Puppet directory structure for ease of management.
sudo mkdir /etc/puppet/rack
sudo mkdir /etc/puppet/rack/public
sudo mkdir /etc/puppet/rack/tmp
sudo cp /usr/share/puppet/ext/rack/files/config.ru /etc/puppet/rack
sudo chown puppet:root /etc/puppet/rack/config.ru
sudo chmod 644 /etc/puppet/rack/config.ru

# Install Apache SSL (mod_ssl)
sudo yum -y install mod_ssl

# Start/Enable apache service (httpd)
enable_service httpd

# Enable/Include just required packages from EPEL
# include_repo_packages <repo conf file> <repo name> <"package list">
include_repo_packages epel epel "mod_passenger rubygem-passenger rubygem-passenger-native rubygem-passenger-native-libs libev rubygem-fastthread rubygem-rack ruby-augeas rubygem-json"

# Enable/Include just required packages from PUPPETLABS
include_repo_packages puppetlabs puppetlabs "puppet puppet-server facter hiera"
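
Once the script finishes, a quick smoke test confirms the master is answering over Passenger. The script filename here is hypothetical:

# Run the installer, then poke the master on the Passenger-managed port
bash puppet_install.sh
puppet --version
curl -sk https://$(hostname -f):8140/ > /dev/null && echo "puppet master is answering on 8140"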

Corosync/Pacemaker on CentOS 6

Install Pacemaker/Corosync

From my reading online, you can also use Heartbeat 3.x alongside Pacemaker to achieve similar results. I've decided to go with Corosync as it's backed by RedHat and SUSE and looks to have more active development. Not to mention that the Pacemaker project says you should now use Corosync :)

There are packages included in the CentOS 6.x base/updates repositories, so we can just use yum to install the needed packages.

sudo yum install pacemaker corosync

Setup Corosync

Generate AuthKey

Corosync requires an authkey for communication within its cluster. This file must be copied to each of the nodes that you want to add to the cluster.

If an “Invalid digest” message appears from the corosync executive, the keys are not consistent between nodes.

To generate the authkey, Corosync provides the corosync-keygen utility. Invoke this command as the root user to generate the authkey. The key will be generated at /etc/corosync/authkey.

Grab a cup of coffee; this process takes a while to complete, as it pulls from the more secure /dev/random. You don't have to press anything on the keyboard; it will still generate the authkey.

sudo corosync-keygen 
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Press keys on your keyboard to generate entropy (bits = 128).
Press keys on your keyboard to generate entropy (bits = 192).
Press keys on your keyboard to generate entropy (bits = 256).
Press keys on your keyboard to generate entropy (bits = 328).
Press keys on your keyboard to generate entropy (bits = 392).
Press keys on your keyboard to generate entropy (bits = 456).
Press keys on your keyboard to generate entropy (bits = 520).
Press keys on your keyboard to generate entropy (bits = 592).
Press keys on your keyboard to generate entropy (bits = 656).
Press keys on your keyboard to generate entropy (bits = 720).
Press keys on your keyboard to generate entropy (bits = 784).
Press keys on your keyboard to generate entropy (bits = 848).
Press keys on your keyboard to generate entropy (bits = 912).
Press keys on your keyboard to generate entropy (bits = 976).
Writing corosync key to /etc/corosync/authkey.

Now you just need to copy this authkey to the other nodes in your cluster:

sudo scp /etc/corosync/authkey root@<node2>:/etc/corosync/

Configure corosync.conf

All changes listed below will need to be performed on ALL nodes in the cluster.

The first thing we'll need to do is copy the corosync.conf.example.udpu file to corosync.conf. I'll be using the udpu (unicast UDP) configuration here as we'll only have two nodes.

cp /etc/corosync/corosync.conf.example.udpu /etc/corosync/corosync.conf

Now we'll edit this file to set the user corosync will run as. This is necessary so that corosync can manage the Pacemaker resources.

sudo vim /etc/corosync/corosync.conf

Add the following to the top of the corosync.conf file.

aisexec {
        # Run as root - this is necessary to be able to manage resources with Pacemaker
        user:        root
        group:       root
}

Edit the totem section to include the members of your cluster and set the bindnetaddr that corosync will listen on. You can leave the other settings at their defaults for now.

Add cluster members:

interface {
                member {
                        memberaddr: 10.1.22.28
                }
                member {
                        memberaddr: 10.1.22.29
                }

Set bindnetaddr; this will be unique per node in the cluster:

bindnetaddr: 10.1.22.28

Create pcmk service.d file

Now we'll create a pacemaker service.d file to tell corosync to control/run the Pacemaker resources.

sudo vim /etc/corosync/service.d/pcmk

Add the following into the file you just created.

Changing ver: to 1 will allow you to start the pacemaker service manually for troubleshooting.

service {
        # Load the Pacemaker Cluster Resource Manager
        name: pacemaker
        ver: 0
}

Start/Verify Corosync is correctly configured

Now let's start corosync on the first node in the cluster:

sudo /etc/init.d/corosync start

Check to see if corosync is running as expected:

sudo /etc/init.d/corosync status
corosync (pid  18376) is running...

or

sudo crm_mon
#
# Output from crm_mon 
============
Last updated: Wed May  2 07:51:20 2012
Last change: 
Current DC: NONE
0 Nodes configured, unknown expected votes
0 Resources configured.
============
Online: [ pg1.stage.net ]

With the first node up and running, you can now start the second node.

Configure Active/Passive Cluster

The first step is to check the cluster configuration using crm_verify -L.

sudo crm_verify -L
crm_verify[19478]: 2012/05/02_07:52:37 ERROR: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
crm_verify[19478]: 2012/05/02_07:52:37 ERROR: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
crm_verify[19478]: 2012/05/02_07:52:37 ERROR: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid
  -V may provide more details

You'll notice a few errors; this is because by default Pacemaker is set to make use of STONITH (Shoot The Other Node In The Head). For now we can disable this for our basic configuration.

sudo crm configure property stonith-enabled=false

Running crm_verify -L again will now complete without any errors.

Adding ClusterIP Resource

The first thing we need to do for a cluster is add a resource like an IP address, so we can always contact and communicate with the cluster regardless of where the cluster services are running. This must be a NEW address not associated with ANY node.

In the example below you'll need to set ip and cidr_netmask to the address for your cluster. You can also set the monitor interval to a lower number if you want quicker failover. I have set mine to 1s so failover is almost instantaneous.

crm configure primitive ClusterIP ocf:heartbeat:IPaddr2 \
  params ip=172.25.3.20 cidr_netmask=21 \
  op monitor interval=30s
# Output:
crm_verify[19566]: 2012/05/02_08:04:21 WARN: cluster_status: We do not have quorum - fencing and resource management disabled

View/verify that the ClusterIP has been added:

sudo crm configure show
node pg1.stage.net
primitive ClusterIP ocf:heartbeat:IPaddr2 \
  params ip="172.25.3.20" cidr_netmask="21" \
  op monitor interval="30s"
property $id="cib-bootstrap-options" \
  dc-version="1.1.6-3.el6-a02c0f19a00c1eb2527ad38f146ebc0834814558" \
  cluster-infrastructure="openais" \
  expected-quorum-votes="2" \
  stonith-enabled="false"

Because we are setting up a 2-node cluster, which is mathematically unable to attain quorum, we need to tell Pacemaker to ignore it.

sudo crm configure property no-quorum-policy=ignore

Now verify that the no-quorum policy is set to ignore:

sudo crm configure show
node cloo.arin.net
primitive ClusterIP ocf:heartbeat:IPaddr2 \
  params ip="172.25.3.20" cidr_netmask="21" \
  op monitor interval="30s"
property $id="cib-bootstrap-options" \
  dc-version="1.1.6-3.el6-a02c0f19a00c1eb2527ad38f146ebc0834814558" \
  cluster-infrastructure="openais" \
  expected-quorum-votes="2" \
  stonith-enabled="false" \
  no-quorum-policy="ignore"
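
At this point you can test failover by hand. A sketch using crmsh, assuming you run it on the node currently holding the ClusterIP:

# Put the active node in standby and watch the ClusterIP move to its peer
sudo crm node standby
sudo crm_mon -1          # ClusterIP should now be running on the other node
# Bring the node back into the cluster
sudo crm node online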

Resources

Bits & Bytes of Life

Clusters from Scratch

Setup PGPool-II

Configure/Add Postgres Repo

  1. Download the CentOS 6 repo RPM from HERE
wget http://yum.pgrpms.org/9.1/redhat/rhel-6-x86_64/pgdg-redhat91-9.1-5.noarch.rpm
  2. Install the repo RPM
sudo rpm -Uvh pgdg-redhat91-9.1-5.noarch.rpm

Install PGPool-II

Now that you have the repo installed, you can use yum to install pgpool-II from the PG repo:

sudo yum install pgpool-II-91

Configure PGPool-II

PGPool-II stores its configuration files in /etc/pgpool-II-91/. Installing the RPM will create sample configuration files.

Configure pgpool.conf

Copy /etc/pgpool-II-91/pgpool.conf.sample to /etc/pgpool-II-91/pgpool.conf:

cp /etc/pgpool-II-91/pgpool.conf.sample /etc/pgpool-II-91/pgpool.conf

~ Connection Settings ~

By default PGPool-II only accepts connections from localhost on port 9999. If you wish to receive connections from other hosts, set listen_addresses to '*'.

From:
listen_addresses = 'localhost'

To:
listen_addresses = '*'

~ Backend Connection Settings ~

This section provides details about the nodes that PGPool-II is aware of and how it should interface with them. I'll show basic settings here for a 2-node “cluster”.

I'm assuming that PGPool-II is installed on the same host as your PostgreSQL server. To give a better example, I'll use the hostname of the first node that has both PostgreSQL and pgpool-II in place of “localhost”.

Notice below that backend_hostname“X” has an incrementing numerical identifier for each additional node (example: node1 = 0, node2 = 1, node3 = 2).

Also notice that the backend_port“X” setting is set to the default PostgreSQL port 5432 for every node.

backend_hostname0 = 'pg1.domain.net'
                                   # Host name or IP address to connect to for backend 0
backend_port0 = 5432
                                   # Port number for backend 0
backend_weight0 = 1
                                   # Weight for backend 0 (only in load balancing mode)
backend_data_directory0 = '/var/lib/pgsql/9.1/data'
                                   # Data directory for backend 0
backend_flag0 = 'ALLOW_TO_FAILOVER'
                                   # Controls various backend behavior
                                   # ALLOW_TO_FAILOVER or DISALLOW_TO_FAILOVER
backend_hostname1 = 'pg2.domain.net'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/var/lib/pgsql/9.1/data'
backend_flag1 = 'ALLOW_TO_FAILOVER'

~ REPLICATION MODE ~

In order to use Replication Mode with PGPool-II, you'll need to configure all settings in this section and all sections above the REPLICATION MODE section.

For a basic setup you just need to enable replication by setting replication_mode = on. By default this setting is off.

~ LOAD BALANCING MODE ~

When using Replication Mode with PGPool-II, you have the option to enable load balancing by setting load_balance_mode = on. By default this setting is off; a quick sketch of enabling both settings follows.
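
Here's a minimal sketch of flipping both settings in place, assuming they are still at the sample file's off defaults:

# Enable replication and load balancing; sed keeps a .bak backup copy
sudo sed -i.bak \
  -e "s/^replication_mode = off/replication_mode = on/" \
  -e "s/^load_balance_mode = off/load_balance_mode = on/" \
  /etc/pgpool-II-91/pgpool.conf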

Configure pcp.conf

PGPool-II has an interface for administration purposes, to retrieve information on database nodes, shut down PGPool-II, etc. via the network. To use PCP commands, user authentication is required. This authentication is different from PostgreSQL's user authentication. A username and password need to be defined in the pcp.conf file. In the file, a username and password are listed as a pair on each line, separated by a colon (:). Passwords are encrypted in md5 hash format.

postgres:e8a48653851e28c69d0506508fb27fc5

Copy /etc/pgpool-II-91/pcp.conf.sample to /etc/pgpool-II-91/pcp.conf:

cp /etc/pgpool-II-91/pcp.conf.sample /etc/pgpool-II-91/pcp.conf

To encrypt your password into md5 hash format, use the pg_md5 command, which is installed as part of the pgpool-II executables. pg_md5 takes text as a command line argument and displays its md5-hashed text.

For example, give “postgres” as the command line argument, and pg_md5 displays the md5-hashed text on standard output.

[smbambling@pg1 pgpool-II-91]$ /usr/bin/pg_md5 postgres
e8a48653851e28c69d0506508fb27fc5
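
You can combine the two steps and append the entry straight into pcp.conf; a small sketch:

# Append a postgres PCP user using the hash pg_md5 prints
echo "postgres:$(/usr/bin/pg_md5 postgres)" | sudo tee -a /etc/pgpool-II-91/pcp.conf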

PCP commands are executed via the network, so the port number must be configured with the pcp_port parameter in the pgpool.conf file.

pcp_port = 9898

Configure Postgres Client Access

By default PostgreSQL only allows local users/connections. You need to grant access to the PGPool-II server in pg_hba.conf.

Reminder: PGPool-II doesn't support replication over IPv6.

Grant access to the PGPool-II server (even if PostgreSQL is installed on the same box). In the example below 10.1.22.28/25 is the IP/netmask of the PGPool-II server and we are setting the method to trust.

host    all             all             10.1.22.28/25            trust
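
With the backend granting access, you can verify the whole path by connecting through PGPool-II's listener instead of straight to PostgreSQL. A quick check, reusing the hostnames from above:

# Connect through pgpool on port 9999 rather than PostgreSQL's 5432
psql -h pg1.domain.net -p 9999 -U postgres -c 'SELECT 1;'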

Install/Configure PostgreSQL on CentOS 6

Configure/Add Postgres Repo

  1. Download the CentOS 6 repo RPM from HERE
wget http://yum.pgrpms.org/9.1/redhat/rhel-6-x86_64/pgdg-redhat91-9.1-5.noarch.rpm
  2. Install the repo RPM
sudo rpm -Uvh pgdg-redhat91-9.1-5.noarch.rpm

Install PostgreSQL Server

Now that you have the repo installed, you can use yum to install PostgreSQL from the PG repo:

sudo yum install postgresql91-server

Initialize & Start Postgresql Server

You'll first need to initialize the database for PostgreSQL. If you attempt to start the service before initializing the database, you'll get an error like this:

[smbambling@pg1 ~]$ sudo /etc/init.d/postgresql-9.1 start
/var/lib/pgsql/9.1/data is missing. Use "service postgresql-9.1.3 initdb" to initialize the cluster first.
                                            [FAILED]

To initialize the database you can call initdb via the init script:

[smbambling@pg1 ~]$ sudo /etc/init.d/postgresql-9.1 initdb
Initializing database:                                     [  OK  ]

Once the initialization is successful, you'll see the database configuration files created in /var/lib/pgsql/9.1/data:

[root@pg1 data]# ls
base    pg_clog      pg_ident.conf  pg_multixact  pg_serial    pg_subtrans  pg_twophase  pg_xlog          postmaster.opts
global  pg_hba.conf  pg_log         pg_notify     pg_stat_tmp  pg_tblspc    PG_VERSION   postgresql.conf  postmaster.pid

Now you will be able to successfully start the database with the init script:

[smbambling@pg1 ~]$ sudo /etc/init.d/postgresql-9.1 start
Starting postgresql-9.1 service:                           [  OK  ]

To verify that the service is running, issue status via the init script:

[smbambling@pg1 ~]$ sudo /etc/init.d/postgresql-9.1 status
 (pid  22444) is running...

Configure PostgreSQL Access Permissions

Set the authentication method

When you called the initdb command above from RedHat's init script, it configured permissions on the database. These configuration settings are in pg_hba.conf.

RedHat calls initdb like this:

initdb --pgdata='$PGDATA' --auth='ident sameuser'

This uses the not-so-popular ident scheme to determine if a user is allowed to connect to the database.

ident: An authentication scheme that relies on the currently logged in user. If you've switched to postgres via su and then try to log in as another user, ident will fail (as it's not the currently logged in user).

This can be a sore spot if you're not aware of how it was configured, and it will give an error when trying to create a database as a user that is not currently logged into the system.

createdb: could not connect to database postgres: FATAL:  Ident authentication failed for user "myUser"

To get around this issue you can modify the pg_hba.conf file to move from the ident scheme to the md5 scheme:

From:
# IPv4 local connections:
host    all             all             127.0.0.1/32            ident
# IPv6 local connections:
host    all             all             ::1/128                 ident

To:
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
# IPv6 local connections:
host    all             all             ::1/128                 md5
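
PostgreSQL only reads pg_hba.conf at startup or on reload, so tell it to pick up the change:

# Reload so the new md5 rules take effect without a full restart
sudo /etc/init.d/postgresql-9.1 reload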

Create “Super User”

By default only the postgres user on the system can create databases and manage the server. This user is granted superuser privileges on the PostgreSQL database(s) and server.

To work around this we will create an additional user with superuser privileges for management.

It's advised NOT to grant an application user superuser privileges, for security.

To create users in PostgreSQL you can use CREATE ROLE, and privileges can be modified with ALTER ROLE. To assist with user creation PostgreSQL provides a wrapper script, createuser.

Only superusers and users with CREATEROLE privilege can create new users, so creating the initial user must be done from the postgres account.

  1. Become the postgres user
  2. Invoke the createuser script with the -P option. -P will issue a prompt for the password of the new user
    • Enter a username for the new user. We are entering root here.
    • When prompted, grant the user superuser privileges
[smbambling@pg1 ~]$ sudo su - postgres
-bash-4.1$ createuser -P
Enter name of role to add: root
Enter password for new role: 
Enter it again: 
Shall the new role be a superuser? (y/n) y
-bash-4.1$ exit
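
To confirm the new role exists with the superuser attribute, a quick check from the postgres account:

# List the new role and its attributes
sudo su - postgres -c "psql -c '\du root'"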