Unable to start Corosync Cluster Engine

Misters · May 2, 2016 · Viewed 6.9k times

I'm trying to create an HA OpenStack cluster for the controller nodes by following the OpenStack HA guide.
So I have three nodes in the cluster:
controller-0
controller-1
controller-2

I set a password for the hacluster user on each host.

[root@controller-0 ~]# yum install pacemaker pcs corosync libqb fence-agents-all resource-agents -y

Then I authenticated on all of the nodes that will make up the cluster, using that password:

[root@controller-0 ~]# pcs cluster auth controller-0 controller-1 controller-2 -u hacluster -p password --force  
controller-2: Authorized
controller-1: Authorized
controller-0: Authorized

After that, I created the cluster:

[root@controller-1 ~]# pcs cluster setup --force --name ha-controller controller-0 controller-1 controller-2
Redirecting to /bin/systemctl stop  pacemaker.service
Redirecting to /bin/systemctl stop  corosync.service
Killing any remaining services...
Removing all cluster configuration files...
controller-0: Succeeded
controller-1: Succeeded
controller-2: Succeeded
Synchronizing pcsd certificates on nodes controller-0, controller-1, controller-2...
controller-2: Success
controller-1: Success
controller-0: Success
Restarting pcsd on the nodes in order to reload the certificates...
controller-2: Success
controller-1: Success
controller-0: Success

Then I started the cluster:

[root@controller-0 ~]# pcs cluster start --all
controller-0:
controller-2:
controller-1:

But when I start corosync, I get:

[root@controller-0 ~]# systemctl start corosync
Job for corosync.service failed because the control process exited with error code. 
See "systemctl status corosync.service" and "journalctl -xe" for details.

In the message log:

controller-0 systemd: Starting Corosync Cluster Engine...
controller-0 corosync[23538]: [MAIN  ] Corosync Cluster Engine ('2.3.4'): started and ready to provide service.
controller-0 corosync[23538]: [MAIN  ] Corosync built-in features: dbus systemd xmlconf snmp pie relro bindnow
controller-0 corosync[23539]: [TOTEM ] Initializing transport (UDP/IP Unicast).
controller-0 corosync[23539]: [TOTEM ] Initializing transmit/receive security (NSS) crypto: none hash: none
controller-0 corosync: Starting Corosync Cluster Engine (corosync): [FAILED]
controller-0 systemd: corosync.service: control process exited, code=exited status=1
controller-0 systemd: Failed to start Corosync Cluster Engine.
controller-0 systemd: Unit corosync.service entered failed state.
controller-0 systemd: corosync.service failed.

My corosync config file:

[root@controller-0 ~]# cat /etc/corosync/corosync.conf    
totem {   
    version: 2    
    secauth: off    
    cluster_name: ha-controller    
    transport: udpu    
}    
nodelist {    
    node {    
        ring0_addr: controller-0    
        nodeid: 1     
    }
    node {
        ring0_addr: controller-1
        nodeid: 2
    }
    node {
        ring0_addr: controller-2
        nodeid: 3
    }
}
quorum {
    provider: corosync_votequorum
    expected_votes: 3
    wait_for_all: 1
    last_man_standing: 1
    last_man_standing_window: 10000
}
logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}

Also, all hostnames are resolvable.
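
For example, a quick sanity check (the addresses below are only placeholders for my management network) shows each short name resolving to a real interface address rather than loopback:

[root@controller-0 ~]# getent hosts controller-0 controller-1 controller-2
10.0.0.10       controller-0
10.0.0.11       controller-1
10.0.0.12       controller-2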

The OS is CentOS Linux release 7.2.1511 (Core):

[root@controller-0 ~]# uname -a
Linux controller-0 3.10.0-327.13.1.el7.x86_64 #1 SMP Thu Mar 31 16:04:38 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

Installed versions:

pacemaker.x86_64                1.1.13-10.el7_2.2   @updates
pacemaker-cli.x86_64            1.1.13-10.el7_2.2   @updates
pacemaker-cluster-libs.x86_64   1.1.13-10.el7_2.2   @updates
pacemaker-libs.x86_64           1.1.13-10.el7_2.2   @updates
corosync.x86_64                 2.3.4-7.el7_2.1     @updates
corosynclib.x86_64              2.3.4-7.el7_2.1     @updates
libqb.x86_64                    0.17.1-2.el7.1      @updates
fence-agents-all.x86_64         4.0.11-27.el7_2.7   @updates
resource-agents.x86_64          3.9.5-54.el7_2.9    @updates

Answer

zedix · Oct 3, 2016

I had the exact same problem: no error message or anything useful in the systemctl output, but corosync always failed to start.

Oct 03 11:24:43 jf-pacemaker-1 systemd[1]: Starting Corosync Cluster Engine...
Oct 03 11:24:43 jf-pacemaker-1 corosync[11468]:  [MAIN  ] Corosync Cluster Engine ('2.3.4'): started and ready to provide service.
Oct 03 11:24:43 jf-pacemaker-1 corosync[11468]:  [MAIN  ] Corosync built-in features: dbus systemd xmlconf snmp pie relro bindnow
Oct 03 11:24:44 jf-pacemaker-1 corosync[11469]:  [TOTEM ] Initializing transmit/receive security (NSS) crypto: none hash: none
Oct 03 11:25:44 jf-pacemaker-1 corosync[11461]: Starting Corosync Cluster Engine (corosync): [FAILED]
Oct 03 11:25:44 jf-pacemaker-1 systemd[1]: corosync.service: control process exited, code=exited status=1
Oct 03 11:25:44 jf-pacemaker-1 systemd[1]: Failed to start Corosync Cluster Engine.
Oct 03 11:25:44 jf-pacemaker-1 systemd[1]: Unit corosync.service entered failed state.
Oct 03 11:25:44 jf-pacemaker-1 systemd[1]: corosync.service failed.
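
As an aside, when the unit output is this sparse it can help to run corosync in the foreground, so the startup messages and any error land straight on the terminal instead of only in syslog / the corosync logfile:

$ sudo corosync -f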

It turned out my name resolution was a little messed up; if I pinged my short hostname, it resolved to localhost:

$ ping jf-pacemaker-1
PING jf-pacemaker-1.localdomain (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=1 ttl=64 time=0.017 ms

That was due to an IPv6 entry in my /etc/hosts file, introduced by cloud-init:

::1 jf-pacemaker-2.localdomain jf-pacemaker-2

Removing that line (and making sure I had a proper hostname <-> IP entry in /etc/hosts) made the hostname resolve to its real, non-localhost address:

$ ping jf-pacemaker-1
PING jf-pacemaker-1 (10.0.0.22) 56(84) bytes of data.
64 bytes from jf-pacemaker-1 (10.0.0.22): icmp_seq=1 ttl=64 time=0.039 ms
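
For reference, the relevant part of my /etc/hosts ended up looking roughly like this (the address for the second node is only illustrative):

127.0.0.1   localhost localhost.localdomain
::1         localhost6 localhost6.localdomain6
10.0.0.22   jf-pacemaker-1.localdomain jf-pacemaker-1
10.0.0.23   jf-pacemaker-2.localdomain jf-pacemaker-2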

And corosync now comes up fine.
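
If it's useful, once corosync is up you can sanity-check the ring and quorum state with the standard tools:

$ sudo corosync-cfgtool -s
$ sudo corosync-quorumtool -s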