HBASE ERROR: Failed setting up proxy interface org.apache.hadoop.hbase.ipc.HRegionInterface

devsri · Jan 11, 2012 · Viewed 8.2k times

I am currently working with HDFS and HBase. Hadoop and HBase are properly installed on one machine, and my application runs perfectly when hosted on that same machine.

But when the application is hosted on another machine, the first request to HBase fails with:

org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet [sitepulsewebsite] in context with path [/SitePulseWeb] threw exception [Request processing failed; nested exception is javax.jdo.JDODataStoreException
NestedThrowables:org.apache.hadoop.hbase.MasterNotRunningException: localhost:60000] with root cause
org.apache.hadoop.hbase.MasterNotRunningException: localhost:60000

On the second request I get the exception:

org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet [sitepulsewebsite] in context with path [/SitePulseWeb] threw exception [Request processing failed; nested exception is javax.jdo.JDODataStoreException: Failed setting up proxy interface org.apache.hadoop.hbase.ipc.HRegionInterface to localhost/127.0.0.1:60020 after attempts=1
NestedThrowables: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed setting up proxy interface org.apache.hadoop.hbase.ipc.HRegionInterface to localhost/127.0.0.1:60020 after attempts=1] with root cause
java.net.ConnectException: Connection refused

My hbase-site.xml reads as follows:

<configuration>
<property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:54310/hbase</value>
    <description>
        The directory shared by region servers. Should be
        fully-qualified to
        include the filesystem to use. E.g:
        hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR

    </description>

</property>

<property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be in. Possible values are
        false: standalone and pseudo-distributed setups with managed
        Zookeeper
        true: fully-distributed with unmanaged Zookeeper Quorum (see
        hbase-env.sh)
    </description>
</property>
<property>
    <name>hbase.zookeeper.quorum</name>
    <value>master</value>
    <description>Comma separated list of servers in the ZooKeeper Quorum.
        If HBASE_MANAGES_ZK is set in hbase-env.sh this is the list of
        servers which we will start/stop ZooKeeper on.
    </description>
</property>
<property>
    <name>hbase.master</name>
    <value>master:60010</value>
</property>
<property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
</property></configuration>

UPDATED LOGS

Looking into the logs (DEBUG level) created by my Java application, I found the following:

2012-01-12 17:12:13,328 DEBUG Thread-1320 org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to localhost/127.0.0.1:60020 from an unknown user: closed
2012-01-12 17:12:13,328 INFO Thread-1320 org.apache.hadoop.ipc.HbaseRPC - Server at localhost/127.0.0.1:60020 could not be reached after 1 tries, giving up.
2012-01-12 17:12:13,328 DEBUG Thread-1320 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation - locateRegionInMeta parentTable=-ROOT-, metaLocation=address: localhost:60020, regioninfo: -ROOT-,,0.70236052, attempt=0 of 10 failed; retrying after sleep of 1000 because: Failed setting up proxy interface org.apache.hadoop.hbase.ipc.HRegionInterface to localhost/127.0.0.1:60020 after attempts=1
2012-01-12 17:12:13,328 DEBUG Thread-1320 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation - Lookedup root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@9d1e83; hsa=localhost:60020
2012-01-12 17:12:13,736 DEBUG Thread-1268 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation - Lookedup root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@9d1e83; hsa=localhost:60020
2012-01-12 17:12:13,736 DEBUG Thread-1268 org.apache.hadoop.ipc.HBaseClient - Connecting to localhost/127.0.0.1:60020
2012-01-12 17:12:13,737 DEBUG Thread-1268 org.apache.hadoop.ipc.HBaseClient - closing ipc connection to localhost/127.0.0.1:60020: Connection refused
java.net.ConnectException: Connection refused

When the mapping in the /etc/hosts file was changed from

127.0.0.1 localhost

to

<my_server_IP> localhost

My application worked perfectly. Hence I need some way to tell the application to connect to the desired hostname rather than localhost.
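
One way to point the application at the right host without touching /etc/hosts is a client-side hbase-site.xml on the application's classpath. This is a sketch: the hostname master and the ports are taken from the server config above, and it only helps if master resolves to the server's real IP on the client machine (and the region server did not register itself in ZooKeeper as localhost):

```xml
<!-- client-side hbase-site.xml; "master" must resolve to the server's
     real network IP on the client machine, NOT to 127.0.0.1 -->
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```

If the logs still show localhost:60020 after this, the server side is advertising itself as localhost, which is what the answer below addresses.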

I have tried debugging it, without any success.

Answer

Robert J Berger · Jan 18, 2012

I don't know if this is your problem, but it generally is a problem to use localhost if you are not accessing everything from the same host.

So don't use localhost!

And in general don't change the definition of localhost. Localhost is 127.0.0.1 by definition.

You define hbase.rootdir as hdfs://master:54310/hbase and hbase.zookeeper.quorum as master.

What is master? It should really be the fully qualified domain name (FQDN) of your host's main Ethernet interface. The reverse DNS of that interface's IP address should resolve to the same FQDN that you put into these fields. (Or just use the raw IP address if you can't control the reverse DNS.)
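
As a quick sanity check of that forward/reverse agreement, a stdlib-only sketch like this can be run on each host (the class and method names are mine, and master is an assumed hostname; substitute your own):

```java
import java.net.InetAddress;

public class DnsCheck {
    // Forward-resolve a hostname to an IP, then reverse-resolve that IP
    // back to a name, so the round trip can be compared by eye.
    static String[] roundTrip(String hostname) throws Exception {
        InetAddress addr = InetAddress.getByName(hostname);      // forward lookup
        String ip = addr.getHostAddress();
        String reverse = InetAddress.getByAddress(addr.getAddress())
                                    .getCanonicalHostName();     // reverse lookup
        return new String[] { ip, reverse };
    }

    public static void main(String[] args) throws Exception {
        // "master" is the hostname from the question's hbase-site.xml.
        String host = args.length > 0 ? args[0] : "master";
        String[] r = roundTrip(host);
        System.out.println(host + " -> " + r[0] + " -> " + r[1]);
    }
}
```

If the final name does not match the hostname you started from (or the IP comes back as 127.0.0.1), HBase will likely advertise the wrong address.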

Make sure your HDFS configs also use the same FQDNs or IP addresses, or synchronized /etc/hosts files. Synchronized /etc/hosts files ensure that forward and reverse DNS agree, as long as every host (all the HDFS and HBase nodes and your clients) uses the same /etc/hosts and nothing in the OS overrides it. In general I don't like to do anything with /etc/hosts; it will eventually bite you.
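
For illustration, a synchronized /etc/hosts might look like this on every node and every client (the names and 192.168.x addresses here are made up, not from the question):

```
# identical on every HDFS/HBase node and on every client machine
127.0.0.1      localhost
192.168.1.10   master.example.com         master
192.168.1.11   regionserver1.example.com  regionserver1
```

Note that localhost stays mapped to 127.0.0.1; the cluster hostnames get their own lines with real network IPs.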

Your remote client should then access your HBase master via the same FQDN or IP address.

I have found that this kind of DNS issue can cause quite a bit of confusion.

If you need a reality check, just use IP addresses everywhere until it works. Then experiment with fully qualified domain names or synchronized /etc/hosts files.