Failed to allocate nodeid, error: 'Error: Could not alloc node id

anu · Sep 9, 2015

ndb_mgmd does not seem to correctly read the config file

This is part of my config file:

[ndbd]
# Options for data node "A":
                                # (one [ndbd] section per data node)
hostname=abhyas.db01            # Hostname or IP address
datadir=/usr/local/mysql/data   # Directory for this data node's data files

[ndbd]
# Options for data node "B":
hostname=abhyas.db02            # Hostname or IP address
datadir=/usr/local/mysql/data   # Directory for this data node's data files

[mysqld]
# SQL node options:
hostname=abhyas.dbmgr           # Hostname or IP address
                                # (additional mysqld connections can be
                                # specified for this node for various
                                # purposes such as running ndb_restore)
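
For reference, a complete config.ini for this topology would also carry an [ndbd default] section and an [ndb_mgmd] section. The sketch below is only an illustration: the hostnames come from the file above, while NoOfReplicas, the memory sizes, and the management node's datadir are assumed values.

[ndbd default]
NoOfReplicas=2                  # One copy of the data on each of the two data nodes
DataMemory=80M                  # Assumed value; size to the data nodes' available RAM
IndexMemory=18M                 # Assumed value; still read by ndb-7.4

[ndb_mgmd]
hostname=abhyas.dbmgr           # Management node's hostname
datadir=/home/abhyas_mgr        # Assumed; where ndb_mgmd keeps its logs and config cache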

but ndb_mgm shows something different:

[root@abhyas abhyas_mgr]# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> SHOW
Connected to Management Server at: abhyas.dbmgr:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 (not connected, accepting connect from abhyas.db01)
id=3 (not connected, accepting connect from abhyas.db01)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.102.134  (mysql-5.6.25 ndb-7.4.7)

[mysqld(API)]   1 node(s)
id=4 (not connected, accepting connect from abhyas.dbmgr)

ndb_mgm> EXIT

As you can see, in my config file I have abhyas.db01 and abhyas.db02 as the hosts.

But the cluster configuration shows two NDB nodes, both accepting connections from abhyas.db01 (which is not what I want, not right now at least):

[ndbd(NDB)] 2 node(s)
id=2 (not connected, accepting connect from abhyas.db01)
id=3 (not connected, accepting connect from abhyas.db01)

Now, I had made the mistake of starting ndb_mgmd while the config.ini file had both [ndbd] entries pointing to abhyas.db01, but I promptly shut down ndb_mgmd and changed the entries in the config file to what I have pasted above.

But for some reason, ndb_mgmd still uses the old configuration?

How do I fix this?

Thanks.

PS - No, this is not a firewall issue. iptables is off. Besides, ndbd from abhyas.db01 is able to connect successfully anyway.
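
Aside: a quick way to see which configuration the running management server is actually handing out is the ndb_config utility. The connect string below is the management host from the question; the exact flags and output format may differ between NDB versions.

ndb_config --type=ndbd --query=nodeid,host --ndb-connectstring=abhyas.dbmgr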

Answer

anu · Sep 9, 2015

Never mind, figured it out.

I just had to specify the --reload option when starting ndb_mgmd, i.e.

ndb_mgmd --reload --config-file /home/abhyas_mgr/config.ini 

[root@abhyas bin]# ndb_mgmd --reload --config-file /home/abhyas_mgr/config.ini 
MySQL Cluster Management Server mysql-5.6.25 ndb-7.4.7
[root@abhyas bin]# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: abhyas.dbmgr:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 (not connected, accepting connect from abhyas.db01)
id=3 (not connected, accepting connect from abhyas.db02)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.102.134  (mysql-5.6.25 ndb-7.4.7)

[mysqld(API)]   1 node(s)
id=4 (not connected, accepting connect from abhyas.dbmgr)

ndb_mgm> 

voilà!
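
For anyone hitting the same thing: ndb_mgmd caches the configuration in binary files (ndb_<nodeid>_config.bin.*) under its configuration directory and, by default, keeps serving that cache even after config.ini changes. --reload tells it to compare the file with the cache and adopt any differences. Two alternatives, shown here with the same paths as above, are to discard the cache or to bypass it entirely:

# Discard the cached binary configuration and rebuild it from config.ini
ndb_mgmd --initial --config-file /home/abhyas_mgr/config.ini

# Or never cache at all: config.ini is re-read on every start
ndb_mgmd --skip-config-cache --config-file /home/abhyas_mgr/config.ini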