Khaled Elmeleegy | 1 Nov 00:53 2014

s3n with hbase

Hi,

I am trying to use HBase with S3 via s3n, but I get the errors below when starting the master. I am testing
this in pseudo-distributed mode on my laptop.
I've set hbase.rootdir to s3n://kdiaa-hbase.s3-us-west-2.amazonaws.com:80/root, where the
corresponding bucket and directory have already been created on S3. I've also set fs.s3n.awsAccessKeyId
and fs.s3n.awsSecretAccessKey to the appropriate values in hbase-site.xml.
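
For reference, a sketch of those settings expressed programmatically (illustrative only; the key values are placeholders, and the bucket-only URI is an observation, not a confirmed fix). The NoSuchBucket error below reports the entire host name as the bucket, whereas an s3n:// URI normally names only the bucket:

    // Sketch of the configuration described above; credentials are placeholders.
    Configuration conf = HBaseConfiguration.create();       // org.apache.hadoop.hbase.HBaseConfiguration
    conf.set("hbase.rootdir", "s3n://kdiaa-hbase/root");     // bucket name only, not the regional endpoint host
    conf.set("fs.s3n.awsAccessKeyId", "<access key id>");
    conf.set("fs.s3n.awsSecretAccessKey", "<secret access key>");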

So, I must be missing something. Any advice is appreciated.

2014-10-31 16:47:15,312 WARN  [master:172.16.209.239:60000] httpclient.RestS3Service: Response
'/root' - Unexpected response code 404, expected 200
2014-10-31 16:47:15,349 WARN  [master:172.16.209.239:60000] httpclient.RestS3Service: Response
'/root_%24folder%24' - Unexpected response code 404, expected 200
2014-10-31 16:47:15,420 WARN  [master:172.16.209.239:60000] httpclient.RestS3Service: Response
'/' - Unexpected response code 404, expected 200
2014-10-31 16:47:15,420 WARN  [master:172.16.209.239:60000] httpclient.RestS3Service: Response
'/' - Received error response with XML message
2014-10-31 16:47:15,601 FATAL [master:172.16.209.239:60000] master.HMaster: Unhandled exception.
Starting shutdown.
org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException: S3 GET failed for
'/' XML Error Message: <?xml version="1.0"
encoding="UTF-8"?><Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist</Message><BucketName>kdiaa-hbase.s3-us-west-2.amazonaws.com</BucketName><RequestId>1589CC5DB70ED750</RequestId><HostId>cb2ZGGlNkxtf5fredweXt/wxJlAHLkioUJC86pkh0JxQfBJ1CMYoZuxHU1g+CnTB</HostId></Error>
        at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.handleServiceException(Jets3tNativeFileSystemStore.java:245)
        at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.list(Jets3tNativeFileSystemStore.java:181)
        at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.list(Jets3tNativeFileSystemStore.java:158)
        at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.list(Jets3tNativeFileSystemStore.java:151)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

Bora, Venu | 31 Oct 22:19 2014

How to find out whether an HBase PUT inserts a new row or updates an existing row

Hello,
We have a requirement to determine whether a PUT will create a new row or update an existing one. I looked at
using preBatchMutate in a co-processor and have the code below.

A few things I need to ask:
1) Is there a more efficient way of doing this?
2) Will region.getClosestRowBefore() add additional I/O to go to disk? Or will the row be in memory,
since the row lock was already acquired before preBatchMutate is called?
3) Will region.getClosestRowBefore() always give the correct result? Or are there scenarios where the
previous state will not be visible?

    @Override
    public void preBatchMutate(ObserverContext<RegionCoprocessorEnvironment> c,
            MiniBatchOperationInProgress<Mutation> miniBatchOp) throws IOException {
        for (int i = 0; i < miniBatchOp.size(); i++) {
            Mutation operation = miniBatchOp.getOperation(i);
            byte[] rowKey = operation.getRow();
            NavigableMap<byte[], List<Cell>> familyCellMap = operation.getFamilyCellMap();

            for (Entry<byte[], List<Cell>> entry : familyCellMap.entrySet()) {
                for (Iterator<Cell> iterator = entry.getValue().iterator(); iterator.hasNext();) {
                    Cell cell = iterator.next();
                    byte[] family = CellUtil.cloneFamily(cell);
                    Result closestRowBefore = c.getEnvironment().getRegion().getClosestRowBefore(rowKey, family);
                    // closestRowBefore is null if there is no record for the rowKey and family
                    if (closestRowBefore != null) {
                        // PUT is doing an update for the given rowKey, family
                    } else {
                        // PUT is doing an insert for the given rowKey, family
                    }
                }
            }
        }
    }
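
On question 1, one alternative worth sketching (purely illustrative, assuming the same 0.98 coprocessor context and variables as in the code above, and not measured for performance) is to probe the exact row with a narrow Get and treat a non-empty Result as an existing row:

    Get probe = new Get(rowKey);
    probe.addFamily(family);
    Result existing = c.getEnvironment().getRegion().get(probe);
    // Non-empty result: the row already has cells in this family, so the Put is an update;
    // empty result: the Put creates the row for this family.
    boolean isUpdate = !existing.isEmpty();

Whether such a probe stays in memory or goes to disk depends on the same memstore/StoreFile questions raised in 2) above.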

张桂林 | 31 Oct 17:38 2014

HBase 0.98.4 debug

Hello HBase group, happy to write to you.
Today my HBase cluster logged an error that looks the same as the one reported in
https://issues.apache.org/jira/browse/HBASE-12063:

wal.ProtobufLogWriter: Got IOException while writing trailer

More of the log is in my attachment, named logs.txt. I'm really worried, and I hope to receive your answer.

Here is my log message:

2014-10-31 22:33:52,197 DEBUG [regionserver60020-WAL.AsyncNotifier] wal.FSHLog:
regionserver60020-WAL.AsyncNotifier interrupted while waiting for  notification from AsyncSyncer thread

2014-10-31 22:33:52,197 INFO  [regionserver60020-WAL.AsyncNotifier] wal.FSHLog:
regionserver60020-WAL.AsyncNotifier exiting

2014-10-31 22:33:52,197 DEBUG [regionserver60020-WAL.AsyncSyncer0] wal.FSHLog:
regionserver60020-WAL.AsyncSyncer0 interrupted while waiting for notification from AsyncWriter thread
 By guilin.zhang




2,wgdata_jingdong.product_info:1169421832:2014-10-23,1414615885356.374c13df97145a9e89eb21f0bf2f76e3.
2014-10-31 22:33:50,789 DEBUG [RS_OPEN_REGION-datanode33:60020-0] zookeeper.ZKAssign:
regionserver:60020-0x5493865d4f4016a,
quorum=datanode33.shadoop.co
m:2181,datanode32.shadoop.com:2181,namenode31.shadoop.com:2181,namenode30.shadoop.com:2181,datanode34.shadoop.com:2181,
baseZNode=/hbase Transitioning 3
74c13df97145a9e89eb21f0bf2f76e3 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED
2014-10-31 22:33:50,792 DEBUG [RS_OPEN_REGION-datanode33:60020-0] zookeeper.ZKAssign:
regionserver:60020-0x5493865d4f4016a,
quorum=datanode33.shadoop.co
m:2181,datanode32.shadoop.com:2181,namenode31.shadoop.com:2181,namenode30.shadoop.com:2181,datanode34.shadoop.com:2181,
baseZNode=/hbase Transitioned no
de 374c13df97145a9e89eb21f0bf2f76e3 from RS_ZK_REGION_OPENING to RS_ZK_REGION_OPENED
2014-10-31 22:33:50,792 DEBUG [RS_OPEN_REGION-datanode33:60020-0] handler.OpenRegionHandler:
Transitioned 374c13df97145a9e89eb21f0bf2f76e3 to OPENED in
zk on datanode33.shadoop.com,60020,1414766007046
2014-10-31 22:33:50,792 DEBUG [RS_OPEN_REGION-datanode33:60020-0] handler.OpenRegionHandler:
Opened test2,wgdata_jingdong.product_info:1169421832:2014-1
0-23,1414615885356.374c13df97145a9e89eb21f0bf2f76e3. on datanode33.shadoop.com,60020,1414766007046
2014-10-31 22:33:50,809 INFO  [regionserver60020-EventThread]
replication.ReplicationTrackerZKImpl: /hbase/rs/datanode41.shadoop.com,60020,1414766007342
 znode expired, triggering replicatorRemoved event
2014-10-31 22:33:50,825 DEBUG [regionserver60020-EventThread] regionserver.SplitLogWorker:
tasks arrived or departed
2014-10-31 22:33:50,882 DEBUG [regionserver60020-EventThread] regionserver.SplitLogWorker:
tasks arrived or departed
2014-10-31 22:33:51,084 INFO  [regionserver60020-EventThread]
replication.ReplicationTrackerZKImpl: /hbase/rs/datanode45.shadoop.com,60020,1414766009327
 znode expired, triggering replicatorRemoved event
2014-10-31 22:33:51,102 DEBUG [regionserver60020-EventThread] regionserver.SplitLogWorker:
tasks arrived or departed
2014-10-31 22:33:51,158 DEBUG [regionserver60020-EventThread] regionserver.SplitLogWorker:
tasks arrived or departed
2014-10-31 22:33:51,170 INFO  [PriorityRpcServer.handler=3,queue=0,port=60020]
regionserver.HRegionServer: Open SYSTEM.CATALOG,,1414743520507.8c82c108a0
2f9cf385788430199b4e07.
2014-10-31 22:33:51,180 DEBUG [RS_OPEN_REGION-datanode33:60020-2] zookeeper.ZKAssign:
regionserver:60020-0x5493865d4f4016a,
quorum=datanode33.shadoop.co
m:2181,datanode32.shadoop.com:2181,namenode31.shadoop.com:2181,namenode30.shadoop.com:2181,datanode34.shadoop.com:2181,
baseZNode=/hbase Transitioning 8
c82c108a02f9cf385788430199b4e07 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING
2014-10-31 22:33:51,186 DEBUG [RS_OPEN_REGION-datanode33:60020-2] zookeeper.ZKAssign:
regionserver:60020-0x5493865d4f4016a,
quorum=datanode33.shadoop.co
m:2181,datanode32.shadoop.com:2181,namenode31.shadoop.com:2181,namenode30.shadoop.com:2181,datanode34.shadoop.com:2181,
baseZNode=/hbase Transitioned no
de 8c82c108a02f9cf385788430199b4e07 from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING
2014-10-31 22:33:51,187 DEBUG [RS_OPEN_REGION-datanode33:60020-2] regionserver.HRegion: Opening
region: {ENCODED => 8c82c108a02f9cf385788430199b4e07, NA
ME => 'SYSTEM.CATALOG,,1414743520507.8c82c108a02f9cf385788430199b4e07.', STARTKEY => '', ENDKEY
=> ''}
2014-10-31 22:33:51,197 DEBUG [RS_OPEN_REGION-datanode33:60020-2] coprocessor.CoprocessorHost:
Loading coprocessor class org.apache.phoenix.coprocessor.
MetaDataRegionObserver with path null and priority 2
2014-10-31 22:33:51,198 ERROR [RS_OPEN_REGION-datanode33:60020-2] coprocessor.CoprocessorHost:
The coprocessor org.apache.phoenix.coprocessor.MetaDataRe
gionObserver threw an unexpected exception
java.io.IOException: No jar path specified for org.apache.phoenix.coprocessor.MetaDataRegionObserver
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:200)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:622)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:529)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4208)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4519)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4492)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4448)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4399)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
2014-10-31 22:33:51,201 FATAL [RS_OPEN_REGION-datanode33:60020-2] regionserver.HRegionServer:
ABORTING region server datanode33.shadoop.com,60020,141476
6007046: The coprocessor org.apache.phoenix.coprocessor.MetaDataRegionObserver threw an
unexpected exception
java.io.IOException: No jar path specified for org.apache.phoenix.coprocessor.MetaDataRegionObserver
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:200)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:622)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:529)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4208)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4519)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4492)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4448)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4399)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
2014-10-31 22:33:51,202 FATAL [RS_OPEN_REGION-datanode33:60020-2] regionserver.HRegionServer:
RegionServer abort: loaded coprocessors are: []
2014-10-31 22:33:51,216 INFO  [RS_OPEN_REGION-datanode33:60020-2] regionserver.HRegionServer:
STOPPED: The coprocessor org.apache.phoenix.coprocessor.Me
taDataRegionObserver threw an unexpected exception
2014-10-31 22:33:51,216 INFO  [regionserver60020] ipc.RpcServer: Stopping server on 60020
2014-10-31 22:33:51,217 INFO  [RpcServer.listener,port=60020] ipc.RpcServer:
RpcServer.listener,port=60020: stopping
2014-10-31 22:33:51,217 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped
2014-10-31 22:33:51,221 DEBUG [RS_OPEN_REGION-datanode33:60020-2] coprocessor.CoprocessorHost:
Loading coprocessor class org.apache.phoenix.coprocessor.
MetaDataEndpointImpl with path null and priority 1
2014-10-31 22:33:51,217 INFO  [regionserver60020] regionserver.SplitLogWorker: Sending interrupt
to stop the worker thread
2014-10-31 22:33:51,418 INFO  [regionserver60020] regionserver.HRegionServer: Stopping infoServer
2014-10-31 22:33:51,396 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping
2014-10-31 22:33:51,418 ERROR [RS_OPEN_REGION-datanode33:60020-2] coprocessor.CoprocessorHost:
The coprocessor org.apache.phoenix.coprocessor.MetaDataEn
dpointImpl threw an unexpected exception
java.io.IOException: No jar path specified for org.apache.phoenix.coprocessor.MetaDataEndpointImpl
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:200)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:622)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:529)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4208)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4519)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4492)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4448)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4399)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
2014-10-31 22:33:51,631 FATAL [RS_OPEN_REGION-datanode33:60020-2] regionserver.HRegionServer:
ABORTING region server datanode33.shadoop.com,60020,141476
6007046: The coprocessor org.apache.phoenix.coprocessor.MetaDataEndpointImpl threw an unexpected exception
java.io.IOException: No jar path specified for org.apache.phoenix.coprocessor.MetaDataEndpointImpl
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:200)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:622)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:529)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4208)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4519)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4492)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4448)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4399)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
2014-10-31 22:33:51,418 INFO  [SplitLogWorker-datanode33.shadoop.com,60020,1414766007046]
regionserver.SplitLogWorker: SplitLogWorker interrupted while
waiting for task, exiting: java.lang.InterruptedException
2014-10-31 22:33:51,784 FATAL [RS_OPEN_REGION-datanode33:60020-2] regionserver.HRegionServer:
RegionServer abort: loaded coprocessors are: []
2014-10-31 22:33:51,784 INFO  [SplitLogWorker-datanode33.shadoop.com,60020,1414766007046]
regionserver.SplitLogWorker: SplitLogWorker datanode33.shadoop
.com,60020,1414766007046 exiting
2014-10-31 22:33:51,785 INFO  [regionserver60020] mortbay.log: Stopped SelectChannelConnector <at> 0.0.0.0:60030
2014-10-31 22:33:51,789 DEBUG [RS_OPEN_REGION-datanode33:60020-2] coprocessor.CoprocessorHost:
Loading coprocessor class org.apache.phoenix.coprocessor.
ServerCachingEndpointImpl with path null and priority 1
2014-10-31 22:33:51,791 ERROR [RS_OPEN_REGION-datanode33:60020-2] coprocessor.CoprocessorHost:
The coprocessor org.apache.phoenix.coprocessor.ServerCach
ingEndpointImpl threw an unexpected exception
java.io.IOException: No jar path specified for org.apache.phoenix.coprocessor.ServerCachingEndpointImpl
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:200)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:622)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:529)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4208)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4519)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4492)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4448)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4399)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
2014-10-31 22:33:51,791 FATAL [RS_OPEN_REGION-datanode33:60020-2] regionserver.HRegionServer:
ABORTING region server datanode33.shadoop.com,60020,141476
6007046: The coprocessor org.apache.phoenix.coprocessor.ServerCachingEndpointImpl threw an
unexpected exception
java.io.IOException: No jar path specified for org.apache.phoenix.coprocessor.ServerCachingEndpointImpl
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:200)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:622)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:529)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4208)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4519)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4492)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4448)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4399)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
2014-10-31 22:33:51,792 FATAL [RS_OPEN_REGION-datanode33:60020-2] regionserver.HRegionServer:
RegionServer abort: loaded coprocessors are: []
2014-10-31 22:33:51,796 DEBUG [RS_OPEN_REGION-datanode33:60020-2] coprocessor.CoprocessorHost:
Loading coprocessor class org.apache.phoenix.coprocessor.
GroupedAggregateRegionObserver with path null and priority 1
2014-10-31 22:33:51,797 ERROR [RS_OPEN_REGION-datanode33:60020-2] coprocessor.CoprocessorHost:
The coprocessor org.apache.phoenix.coprocessor.GroupedAgg
regateRegionObserver threw an unexpected exception
java.io.IOException: No jar path specified for org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:200)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:622)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:529)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4208)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4519)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4492)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4448)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4399)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
2014-10-31 22:33:51,797 FATAL [RS_OPEN_REGION-datanode33:60020-2] regionserver.HRegionServer:
ABORTING region server datanode33.shadoop.com,60020,141476
6007046: The coprocessor org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver threw an
unexpected exception
java.io.IOException: No jar path specified for org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:200)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:622)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:529)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4208)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4519)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4492)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4448)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4399)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
2014-10-31 22:33:51,798 FATAL [RS_OPEN_REGION-datanode33:60020-2] regionserver.HRegionServer:
RegionServer abort: loaded coprocessors are: []
2014-10-31 22:33:51,802 DEBUG [RS_OPEN_REGION-datanode33:60020-2] coprocessor.CoprocessorHost:
Loading coprocessor class org.apache.phoenix.coprocessor.
UngroupedAggregateRegionObserver with path null and priority 1
2014-10-31 22:33:51,803 ERROR [RS_OPEN_REGION-datanode33:60020-2] coprocessor.CoprocessorHost:
The coprocessor org.apache.phoenix.coprocessor.UngroupedA
ggregateRegionObserver threw an unexpected exception
java.io.IOException: No jar path specified for org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:200)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:622)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:529)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4208)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4519)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4492)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4448)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4399)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
2014-10-31 22:33:51,804 FATAL [RS_OPEN_REGION-datanode33:60020-2] regionserver.HRegionServer:
ABORTING region server datanode33.shadoop.com,60020,141476
6007046: The coprocessor org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver threw
an unexpected exception
java.io.IOException: No jar path specified for org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:200)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:622)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:529)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4208)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4519)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4492)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4448)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4399)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
2014-10-31 22:33:51,804 FATAL [RS_OPEN_REGION-datanode33:60020-2] regionserver.HRegionServer:
RegionServer abort: loaded coprocessors are: []
2014-10-31 22:33:51,809 DEBUG [RS_OPEN_REGION-datanode33:60020-2] coprocessor.CoprocessorHost:
Loading coprocessor class org.apache.phoenix.coprocessor.
ScanRegionObserver with path null and priority 1
2014-10-31 22:33:51,809 ERROR [RS_OPEN_REGION-datanode33:60020-2] coprocessor.CoprocessorHost:
The coprocessor org.apache.phoenix.coprocessor.ScanRegion
Observer threw an unexpected exception
java.io.IOException: No jar path specified for org.apache.phoenix.coprocessor.ScanRegionObserver
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:200)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:622)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:529)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4208)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4519)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4492)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4448)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4399)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
2014-10-31 22:33:51,810 FATAL [RS_OPEN_REGION-datanode33:60020-2] regionserver.HRegionServer:
ABORTING region server datanode33.shadoop.com,60020,141476
6007046: The coprocessor org.apache.phoenix.coprocessor.ScanRegionObserver threw an unexpected exception
java.io.IOException: No jar path specified for org.apache.phoenix.coprocessor.ScanRegionObserver
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:200)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:207)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:163)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:622)
        at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:529)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:4208)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4519)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4492)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4448)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4399)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:465)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:139)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
2014-10-31 22:33:51,811 FATAL [RS_OPEN_REGION-datanode33:60020-2] regionserver.HRegionServer:
RegionServer abort: loaded coprocessors are: []
2014-10-31 22:33:51,812 DEBUG [RS_OPEN_REGION-datanode33:60020-2]
regionserver.MetricsRegionSourceImpl: Creating new MetricsRegionSourceImpl for table S
YSTEM.CATALOG 8c82c108a02f9cf385788430199b4e07
2014-10-31 22:33:51,813 DEBUG [RS_OPEN_REGION-datanode33:60020-2] regionserver.HRegion:
Instantiated SYSTEM.CATALOG,,1414743520507.8c82c108a02f9cf385788
430199b4e07.
2014-10-31 22:33:51,818 INFO  [StoreOpener-8c82c108a02f9cf385788430199b4e07-1]
compactions.CompactionConfiguration: size [134217728, 9223372036854775807
); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; delete expired;
major period 604800000, major jitter 0.500000
2014-10-31 22:33:51,823 DEBUG [RS_OPEN_REGION-datanode33:60020-2] regionserver.HRegion: Found 0
recovered edits file(s) under hdfs://myhadoop/hbase/data
/default/SYSTEM.CATALOG/8c82c108a02f9cf385788430199b4e07
2014-10-31 22:33:51,825 INFO  [RS_OPEN_REGION-datanode33:60020-2] regionserver.HRegion: Onlined
8c82c108a02f9cf385788430199b4e07; next sequenceid=1
2014-10-31 22:33:51,825 DEBUG [RS_OPEN_REGION-datanode33:60020-2] zookeeper.ZKAssign:
regionserver:60020-0x5493865d4f4016a,
quorum=datanode33.shadoop.co
m:2181,datanode32.shadoop.com:2181,namenode31.shadoop.com:2181,namenode30.shadoop.com:2181,datanode34.shadoop.com:2181,
baseZNode=/hbase Attempting to r
etransition opening state of node 8c82c108a02f9cf385788430199b4e07
2014-10-31 22:33:51,827 DEBUG [RS_OPEN_REGION-datanode33:60020-2] regionserver.HRegion: Closing SYSTEM.CATALOG,,1414743520507.8c82c108a02f9cf38578843019
9b4e07.: disabling compactions & flushes
2014-10-31 22:33:51,827 DEBUG [RS_OPEN_REGION-datanode33:60020-2] regionserver.HRegion: Updates
disabled for region SYSTEM.CATALOG,,1414743520507.8c82c1
08a02f9cf385788430199b4e07.
2014-10-31 22:33:51,828 INFO 
[StoreCloserThread-SYSTEM.CATALOG,,1414743520507.8c82c108a02f9cf385788430199b4e07.-1]
regionserver.HStore: Closed 0
2014-10-31 22:33:51,830 INFO  [RS_OPEN_REGION-datanode33:60020-2] regionserver.HRegion: Closed SYSTEM.CATALOG,,1414743520507.8c82c108a02f9cf385788430199
b4e07.
2014-10-31 22:33:51,830 INFO  [RS_OPEN_REGION-datanode33:60020-2] handler.OpenRegionHandler:
Opening of region {ENCODED => 8c82c108a02f9cf385788430199b4
e07, NAME => 'SYSTEM.CATALOG,,1414743520507.8c82c108a02f9cf385788430199b4e07.', STARTKEY => '',
ENDKEY => ''} failed, transitioning from OPENING to FAIL
ED_OPEN in ZK, expecting version 1
2014-10-31 22:33:51,830 DEBUG [RS_OPEN_REGION-datanode33:60020-2] zookeeper.ZKAssign:
regionserver:60020-0x5493865d4f4016a,
quorum=datanode33.shadoop.co
m:2181,datanode32.shadoop.com:2181,namenode31.shadoop.com:2181,namenode30.shadoop.com:2181,datanode34.shadoop.com:2181,
baseZNode=/hbase Transitioning 8
c82c108a02f9cf385788430199b4e07 from RS_ZK_REGION_OPENING to RS_ZK_REGION_FAILED_OPEN
2014-10-31 22:33:51,841 DEBUG [RS_OPEN_REGION-datanode33:60020-2] zookeeper.ZKAssign:
regionserver:60020-0x5493865d4f4016a,
quorum=datanode33.shadoop.co
m:2181,datanode32.shadoop.com:2181,namenode31.shadoop.com:2181,namenode30.shadoop.com:2181,datanode34.shadoop.com:2181,
baseZNode=/hbase Transitioned no
de 8c82c108a02f9cf385788430199b4e07 from RS_ZK_REGION_OPENING to RS_ZK_REGION_FAILED_OPEN
2014-10-31 22:33:51,887 INFO  [regionserver60020] snapshot.RegionServerSnapshotManager: Stopping
RegionServerSnapshotManager abruptly.
2014-10-31 22:33:51,887 INFO  [MemStoreFlusher.0] regionserver.MemStoreFlusher:
MemStoreFlusher.0 exiting
2014-10-31 22:33:51,887 INFO  [regionserver60020.compactionChecker]
regionserver.HRegionServer$CompactionChecker: regionserver60020.compactionChecker ex
iting
2014-10-31 22:33:51,887 INFO  [regionserver60020.nonceCleaner]
regionserver.ServerNonceManager$1: regionserver60020.nonceCleaner exiting
2014-10-31 22:33:51,887 INFO  [MemStoreFlusher.1] regionserver.MemStoreFlusher:
MemStoreFlusher.1 exiting
2014-10-31 22:33:51,887 INFO  [regionserver60020.logRoller] regionserver.LogRoller: LogRoller exiting.
2014-10-31 22:33:51,889 DEBUG [RS_CLOSE_REGION-datanode33:60020-0] handler.CloseRegionHandler:
Processing close of test2,wgdata_jingdong.product_info:11
69421832:2014-10-23,1414615885356.374c13df97145a9e89eb21f0bf2f76e3.
2014-10-31 22:33:51,890 DEBUG [RS_CLOSE_REGION-datanode33:60020-1] handler.CloseRegionHandler:
Processing close of test2,wgdata_amazon.sort_product_list
:B00OCKSE66:2014-10-16,1414618497171.ddf80e37847ce808a5b19e1a8339288b.
2014-10-31 22:33:51,891 DEBUG [RS_CLOSE_REGION-datanode33:60020-2] handler.CloseRegionHandler:
Processing close of test2,wgdata_jingdong.sort_product_li
st:1296217225:2014-10-18,1414614046578.bb2c7118c10ac3ec23476a9fc2e9742b.
2014-10-31 22:33:51,891 INFO  [regionserver60020] regionserver.HRegionServer: aborting server datanode33.shadoop.com,60020,1414766007046
2014-10-31 22:33:51,891 DEBUG [RS_CLOSE_REGION-datanode33:60020-0] regionserver.HRegion:
Closing test2,wgdata_jingdong.product_info:1169421832:2014-10-2
3,1414615885356.374c13df97145a9e89eb21f0bf2f76e3.: disabling compactions & flushes
2014-10-31 22:33:51,891 DEBUG [RS_CLOSE_REGION-datanode33:60020-1] regionserver.HRegion:
Closing test2,wgdata_amazon.sort_product_list:B00OCKSE66:2014-1
0-16,1414618497171.ddf80e37847ce808a5b19e1a8339288b.: disabling compactions & flushes
2014-10-31 22:33:51,891 DEBUG [regionserver60020] catalog.CatalogTracker: Stopping catalog
tracker org.apache.hadoop.hbase.catalog.CatalogTracker <at> 6d8b74
5f
2014-10-31 22:33:51,892 DEBUG [RS_CLOSE_REGION-datanode33:60020-1] regionserver.HRegion:
Updates disabled for region test2,wgdata_amazon.sort_product_li
st:B00OCKSE66:2014-10-16,1414618497171.ddf80e37847ce808a5b19e1a8339288b.
2014-10-31 22:33:51,892 DEBUG [RS_CLOSE_REGION-datanode33:60020-2] regionserver.HRegion:
Closing test2,wgdata_jingdong.sort_product_list:1296217225:2014
-10-18,1414614046578.bb2c7118c10ac3ec23476a9fc2e9742b.: disabling compactions & flushes
2014-10-31 22:33:51,892 INFO  [regionserver60020]
client.HConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x24938657b810191
2014-10-31 22:33:51,891 DEBUG [RS_CLOSE_REGION-datanode33:60020-0] regionserver.HRegion:
Updates disabled for region test2,wgdata_jingdong.product_info:
1169421832:2014-10-23,1414615885356.374c13df97145a9e89eb21f0bf2f76e3.
2014-10-31 22:33:51,892 DEBUG [RS_CLOSE_REGION-datanode33:60020-2] regionserver.HRegion:
Updates disabled for region test2,wgdata_jingdong.sort_product_
list:1296217225:2014-10-18,1414614046578.bb2c7118c10ac3ec23476a9fc2e9742b.
2014-10-31 22:33:51,894 INFO  [regionserver60020] zookeeper.ZooKeeper: Session: 0x24938657b810191 closed
2014-10-31 22:33:51,992 INFO  [regionserver60020-EventThread] zookeeper.ClientCnxn: EventThread
shut down
2014-10-31 22:33:51,996 INFO  [regionserver60020] regionserver.HRegionServer: Waiting on 5 regions
to close
2014-10-31 22:33:51,996 DEBUG [regionserver60020] regionserver.HRegionServer: {374c13df97145a9e89eb21f0bf2f76e3=test2,wgdata_jingdong.product_info:11694
21832:2014-10-23,1414615885356.374c13df97145a9e89eb21f0bf2f76e3., ddf80e37847ce808a5b19e1a8339288b=test2,wgdata_amazon.sort_product_list:B00OCKSE66:2014
-10-16,1414618497171.ddf80e37847ce808a5b19e1a8339288b., bb2c7118c10ac3ec23476a9fc2e9742b=test2,wgdata_jingdong.sort_product_list:1296217225:2014-10-18,1
414614046578.bb2c7118c10ac3ec23476a9fc2e9742b., 6d2021fc1cdfbd1a9081b27d0eb66ccb=test2,wgdata_tmall.product_info:15646793963:2014-10-26,1414590389813.6d
2021fc1cdfbd1a9081b27d0eb66ccb., cc8dea54f876071db04340ade5b5a2ca=test2,wgdata_tmall.product_info:40248082087:2014-10-23,1414588652332.cc8dea54f876071db
04340ade5b5a2ca.}
2014-10-31 22:33:52,060 INFO  [StoreCloserThread-test2,wgdata_jingdong.sort_product_list:1296217225:2014-10-18,1414614046578.bb2c7118c10ac3ec23476a9fc2e
9742b.-1] regionserver.HStore: Closed in
2014-10-31 22:33:52,060 INFO  [StoreCloserThread-test2,wgdata_jingdong.product_info:1169421832:2014-10-23,1414615885356.374c13df97145a9e89eb21f0bf2f76e3
.-1] regionserver.HStore: Closed in
2014-10-31 22:33:52,060 INFO  [StoreCloserThread-test2,wgdata_amazon.sort_product_list:B00OCKSE66:2014-10-16,1414618497171.ddf80e37847ce808a5b19e1a83392
88b.-1] regionserver.HStore: Closed in
2014-10-31 22:33:52,061 INFO  [RS_CLOSE_REGION-datanode33:60020-2] regionserver.HRegion: Closed test2,wgdata_jingdong.sort_product_list:1296217225:2014-
10-18,1414614046578.bb2c7118c10ac3ec23476a9fc2e9742b.
2014-10-31 22:33:52,062 INFO  [RS_CLOSE_REGION-datanode33:60020-0] regionserver.HRegion: Closed test2,wgdata_jingdong.product_info:1169421832:2014-10-23
,1414615885356.374c13df97145a9e89eb21f0bf2f76e3.
2014-10-31 22:33:52,063 DEBUG [RS_CLOSE_REGION-datanode33:60020-2] handler.CloseRegionHandler:
Closed test2,wgdata_jingdong.sort_product_list:1296217225
:2014-10-18,1414614046578.bb2c7118c10ac3ec23476a9fc2e9742b.
2014-10-31 22:33:52,063 DEBUG [RS_CLOSE_REGION-datanode33:60020-0] handler.CloseRegionHandler:
Closed test2,wgdata_jingdong.product_info:1169421832:2014
-10-23,1414615885356.374c13df97145a9e89eb21f0bf2f76e3.
2014-10-31 22:33:52,063 INFO  [RS_CLOSE_REGION-datanode33:60020-1] regionserver.HRegion: Closed test2,wgdata_amazon.sort_product_list:B00OCKSE66:2014-10
-16,1414618497171.ddf80e37847ce808a5b19e1a8339288b.
2014-10-31 22:33:52,063 DEBUG [RS_CLOSE_REGION-datanode33:60020-0] handler.CloseRegionHandler:
Processing close of test2,wgdata_tmall.product_info:40248
082087:2014-10-23,1414588652332.cc8dea54f876071db04340ade5b5a2ca.
2014-10-31 22:33:52,063 DEBUG [RS_CLOSE_REGION-datanode33:60020-1] handler.CloseRegionHandler:
Closed test2,wgdata_amazon.sort_product_list:B00OCKSE66:2
014-10-16,1414618497171.ddf80e37847ce808a5b19e1a8339288b.
2014-10-31 22:33:52,063 DEBUG [RS_CLOSE_REGION-datanode33:60020-2] handler.CloseRegionHandler:
Processing close of test2,wgdata_tmall.product_info:15646
793963:2014-10-26,1414590389813.6d2021fc1cdfbd1a9081b27d0eb66ccb.
2014-10-31 22:33:52,063 DEBUG [RS_CLOSE_REGION-datanode33:60020-0] regionserver.HRegion:
Closing test2,wgdata_tmall.product_info:40248082087:2014-10-23,
1414588652332.cc8dea54f876071db04340ade5b5a2ca.: disabling compactions & flushes
2014-10-31 22:33:52,063 DEBUG [RS_CLOSE_REGION-datanode33:60020-0] regionserver.HRegion:
Updates disabled for region test2,wgdata_tmall.product_info:402
48082087:2014-10-23,1414588652332.cc8dea54f876071db04340ade5b5a2ca.
2014-10-31 22:33:52,064 DEBUG [RS_CLOSE_REGION-datanode33:60020-2] regionserver.HRegion:
Closing test2,wgdata_tmall.product_info:15646793963:2014-10-26,
1414590389813.6d2021fc1cdfbd1a9081b27d0eb66ccb.: disabling compactions & flushes
2014-10-31 22:33:52,064 DEBUG [RS_CLOSE_REGION-datanode33:60020-2] regionserver.HRegion:
Updates disabled for region test2,wgdata_tmall.product_info:156
46793963:2014-10-26,1414590389813.6d2021fc1cdfbd1a9081b27d0eb66ccb.
2014-10-31 22:33:52,066 INFO  [StoreCloserThread-test2,wgdata_tmall.product_info:40248082087:2014-10-23,1414588652332.cc8dea54f876071db04340ade5b5a2ca.-
1] regionserver.HStore: Closed in
2014-10-31 22:33:52,066 INFO  [StoreCloserThread-test2,wgdata_tmall.product_info:15646793963:2014-10-26,1414590389813.6d2021fc1cdfbd1a9081b27d0eb66ccb.-
1] regionserver.HStore: Closed in
2014-10-31 22:33:52,066 INFO  [RS_CLOSE_REGION-datanode33:60020-0] regionserver.HRegion: Closed test2,wgdata_tmall.product_info:40248082087:2014-10-23,1
414588652332.cc8dea54f876071db04340ade5b5a2ca.
2014-10-31 22:33:52,066 INFO  [RS_CLOSE_REGION-datanode33:60020-2] regionserver.HRegion: Closed test2,wgdata_tmall.product_info:15646793963:2014-10-26,1
414590389813.6d2021fc1cdfbd1a9081b27d0eb66ccb.
2014-10-31 22:33:52,067 DEBUG [RS_CLOSE_REGION-datanode33:60020-0] handler.CloseRegionHandler:
Closed test2,wgdata_tmall.product_info:40248082087:2014-1
0-23,1414588652332.cc8dea54f876071db04340ade5b5a2ca.
2014-10-31 22:33:52,067 DEBUG [RS_CLOSE_REGION-datanode33:60020-2] handler.CloseRegionHandler:
Closed test2,wgdata_tmall.product_info:15646793963:2014-1
0-26,1414590389813.6d2021fc1cdfbd1a9081b27d0eb66ccb.
2014-10-31 22:33:52,196 INFO  [regionserver60020] regionserver.HRegionServer: stopping server
datanode33.shadoop.com,60020,1414766007046; all regions cl
osed.
2014-10-31 22:33:52,197 DEBUG [regionserver60020-WAL.AsyncNotifier] wal.FSHLog:
regionserver60020-WAL.AsyncNotifier interrupted while waiting for  notif
ication from AsyncSyncer thread
2014-10-31 22:33:52,197 INFO  [regionserver60020-WAL.AsyncNotifier] wal.FSHLog:
regionserver60020-WAL.AsyncNotifier exiting
2014-10-31 22:33:52,197 DEBUG [regionserver60020-WAL.AsyncSyncer0] wal.FSHLog:
regionserver60020-WAL.AsyncSyncer0 interrupted while waiting for notifica
tion from AsyncWriter thread
2014-10-31 22:33:52,198 INFO  [regionserver60020-WAL.AsyncSyncer0] wal.FSHLog:
regionserver60020-WAL.AsyncSyncer0 exiting
2014-10-31 22:33:52,198 DEBUG [regionserver60020-WAL.AsyncSyncer1] wal.FSHLog:
regionserver60020-WAL.AsyncSyncer1 interrupted while waiting for notifica
tion from AsyncWriter thread
2014-10-31 22:33:52,198 INFO  [regionserver60020-WAL.AsyncSyncer1] wal.FSHLog:
regionserver60020-WAL.AsyncSyncer1 exiting
2014-10-31 22:33:52,198 DEBUG [regionserver60020-WAL.AsyncSyncer2] wal.FSHLog:
regionserver60020-WAL.AsyncSyncer2 interrupted while waiting for notifica
tion from AsyncWriter thread
2014-10-31 22:33:52,198 INFO  [regionserver60020-WAL.AsyncSyncer2] wal.FSHLog:
regionserver60020-WAL.AsyncSyncer2 exiting
2014-10-31 22:33:52,199 DEBUG [regionserver60020-WAL.AsyncSyncer3] wal.FSHLog:
regionserver60020-WAL.AsyncSyncer3 interrupted while waiting for notifica
tion from AsyncWriter thread
2014-10-31 22:33:52,199 INFO  [regionserver60020-WAL.AsyncSyncer3] wal.FSHLog:
regionserver60020-WAL.AsyncSyncer3 exiting
2014-10-31 22:33:52,199 DEBUG [regionserver60020-WAL.AsyncSyncer4] wal.FSHLog:
regionserver60020-WAL.AsyncSyncer4 interrupted while waiting for notifica
tion from AsyncWriter thread
2014-10-31 22:33:52,199 INFO  [regionserver60020-WAL.AsyncSyncer4] wal.FSHLog:
regionserver60020-WAL.AsyncSyncer4 exiting
2014-10-31 22:33:52,199 DEBUG [regionserver60020-WAL.AsyncWriter] wal.FSHLog:
regionserver60020-WAL.AsyncWriter interrupted while waiting for newer writ
es added to local buffer
2014-10-31 22:33:52,199 INFO  [regionserver60020-WAL.AsyncWriter] wal.FSHLog:
regionserver60020-WAL.AsyncWriter exiting
2014-10-31 22:33:52,200 DEBUG [regionserver60020] wal.FSHLog: Closing WAL writer in hdfs://myhadoop/hbase/WALs/datanode33.shadoop.com,60020,141476600704
6
2014-10-31 22:33:52,313 INFO  [regionserver60020] regionserver.Leases: regionserver60020 closing leases
2014-10-31 22:33:52,313 INFO  [regionserver60020] regionserver.Leases: regionserver60020 closed leases
2014-10-31 22:33:52,315 DEBUG [regionserver60020-EventThread] regionserver.SplitLogWorker:
tasks arrived or departed
2014-10-31 22:33:52,449 INFO  [regionserver60020.periodicFlusher]
regionserver.HRegionServer$PeriodicMemstoreFlusher: regionserver60020.periodicFlusher
exiting
2014-10-31 22:33:52,449 INFO  [regionserver60020] regionserver.CompactSplitThread: Waiting for
Split Thread to finish...
2014-10-31 22:33:52,449 INFO  [regionserver60020] regionserver.CompactSplitThread: Waiting for
Merge Thread to finish...
2014-10-31 22:33:52,449 INFO  [regionserver60020] regionserver.CompactSplitThread: Waiting for
Large Compaction Thread to finish...
2014-10-31 22:33:52,449 INFO  [regionserver60020] regionserver.CompactSplitThread: Waiting for
Small Compaction Thread to finish...
2014-10-31 22:33:52,450 INFO  [regionserver60020.leaseChecker] regionserver.Leases:
regionserver60020.leaseChecker closing leases
2014-10-31 22:33:52,452 INFO  [regionserver60020.leaseChecker] regionserver.Leases:
regionserver60020.leaseChecker closed leases
2014-10-31 22:33:52,455 INFO  [regionserver60020]
client.HConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x4493865871c019c
2014-10-31 22:33:52,456 INFO  [regionserver60020] zookeeper.ZooKeeper: Session: 0x4493865871c019c closed
2014-10-31 22:33:52,456 INFO  [regionserver60020-EventThread] zookeeper.ClientCnxn: EventThread
shut down
2014-10-31 22:33:52,461 INFO  [regionserver60020] zookeeper.ZooKeeper: Session: 0x5493865d4f4016a closed
2014-10-31 22:33:52,461 INFO  [regionserver60020] regionserver.HRegionServer: stopping server
datanode33.shadoop.com,60020,1414766007046; zookeeper conn
ection closed.
2014-10-31 22:33:52,461 INFO  [regionserver60020-EventThread] zookeeper.ClientCnxn: EventThread
shut down
2014-10-31 22:33:52,461 INFO  [regionserver60020] regionserver.HRegionServer: regionserver60020 exiting
2014-10-31 22:33:52,462 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: HRegionServer Aborted
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:66)
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:85)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2421)
2014-10-31 22:33:52,464 INFO  [Thread-9] regionserver.ShutdownHook: Shutdown hook starting;
hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.f
s.FileSystem$Cache$ClientFinalizer <at> 6c565c85
2014-10-31 22:33:52,465 INFO  [Thread-9] regionserver.ShutdownHook: Starting fs shutdown hook thread.
2014-10-31 22:33:52,466 INFO  [Thread-9] regionserver.ShutdownHook: Shutdown hook finished.
Gautam | 31 Oct 18:22 2014

Increasing write throughput..

I'm trying to increase the write throughput of our HBase cluster. We're currently doing around 7,500 messages per second per node, and I think we have room for improvement, especially since the heap is underutilized and the memstore size doesn't seem to fluctuate much between regular and peak ingestion loads.

We mainly have one large table that we write most of the data to. The other tables are mainly OpenTSDB and some relatively small summary tables. The large table is read in batch once a day but otherwise serves writes 99% of the time. It has 1 CF and gets flushed at around ~128M fairly regularly, like below:

{log}

2014-10-31 16:56:09,499 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~128.2 M/134459888, currentsize=879.5 K/900640 for region msg,00102014100515impression\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002014100515040200049358\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x004138647301\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0002e5a329d2171149bcc1e83ed129312b\x00\x00\x00\x00,1413909604591.828e03c0475b699278256d4b5b9638a2. in 640ms, sequenceid=16861176169, compaction requested=true

{log}

Here's a pastebin of my hbase-site.xml: http://pastebin.com/fEctQ3im

What I've tried:
- turned off major compactions, and am handling these manually
- bumped up heap Xmx from 24G to 48G
- hbase.hregion.memstore.flush.size = 512M
- lowerLimit/upperLimit on the memstore are defaults (0.38, 0.4), since the global heap has enough space to accommodate the default percentages
- currently running HBase 0.98.1 on an 8-node cluster that's scaled up to 128GB RAM


There hasn't been any appreciable increase in write performance; it's still hovering around the 7,500-per-node throughput number. The flushes still seem to be happening at 128M (instead of the expected 512M).
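
One cheap sanity check (a minimal sketch, not a diagnosis): dump the flush-related keys as the client-side Configuration resolves them, to confirm the 512M value and the memstore limits are actually present on the classpath in use. The property names are the standard 0.98 keys; the region server's own effective configuration is what ultimately matters, so the same check would need to be repeated with each server's conf directory.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Minimal sketch: print the memstore/flush settings this classpath resolves.
// This only shows the client-side view of the configuration.
public class ShowFlushConfig {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        System.out.println("hbase.hregion.memstore.flush.size = "
                + conf.getLong("hbase.hregion.memstore.flush.size", -1));
        System.out.println("hbase.hregion.memstore.block.multiplier = "
                + conf.get("hbase.hregion.memstore.block.multiplier"));
        System.out.println("hbase.regionserver.global.memstore.upperLimit = "
                + conf.get("hbase.regionserver.global.memstore.upperLimit"));
        System.out.println("hbase.regionserver.global.memstore.lowerLimit = "
                + conf.get("hbase.regionserver.global.memstore.lowerLimit"));
    }
}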

I've attached a snapshot of the memstore size vs. flushQueueLen. The block caches are utilizing the extra heap space but the memstore is not. The flush queue lengths have increased, which leads me to believe that it's flushing way too often without any increase in throughput.

Please let me know where I should dig further. That's a long email; thanks for reading through :-)



Cheers,
-Gautam.
Li Li | 31 Oct 06:16 2014

can't start hbase.

hi all,
   I am using HBase and also Phoenix (some tables are managed by myself and some are created by Phoenix).
   Last night the disk filled up. I killed the HBase and Hadoop related processes, but after that I can't start HBase anymore.
I am using Ubuntu 12.04, hadoop-1.2.1, and HBase 0.98.5 with Phoenix 4.1.0.
The region server prints errors like:

2014-10-31 12:33:49,835 INFO
[mobvoi-knowledge-graph-0,60020,1414729587199-recovery-writer--pool4-t2]
client.AsyncProcess: #9, waiting for some tasks to finish. Expected
max=0, tasksSent=31, tasksDone=30, currentTasksDone=30, retries=30
hasError=false, tableName=BAIDUMUSIC.BAIDUMUSIC_IDX

2014-10-31 12:33:49,842 INFO  [htable-pool9-t1] client.AsyncProcess:
#9, table=BAIDUMUSIC.BAIDUMUSIC_IDX, attempt=31/350 failed 1 ops, last
exception: org.apache.hadoop.hbase.exceptions.RegionOpeningException:
org.apache.hadoop.hbase.exceptions.RegionOpeningException: Region
BAIDUMUSIC.BAIDUMUSIC_IDX,\x0D\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1414488087936.69e9a7efb9ee00b1ecfe50f825e7cc5b.
is opening on mobvoi-knowledge-graph-0,60020,1414729587199

        at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2692)

        at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:4139)

        at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3363)

        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29593)

        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2026)

        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)

        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)

        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)

        at java.lang.Thread.run(Thread.java:745)

 on mobvoi-knowledge-graph-0,60020,1414484323072, tracking started Fri
Oct 31 12:26:39 CST 2014, retrying after 20119 ms, replay 1 ops.

2014-10-31 12:33:55,747 INFO
[mobvoi-knowledge-graph-0,60020,1414729587199-recovery-writer--pool4-t3]
client.AsyncProcess: #25, waiting for some tasks to finish. Expected
max=0, tasksSent=31, tasksDone=30, currentTasksDone=30, retries=30
hasError=false, tableName=BAIDUMUSIC.BAIDUMUSIC_IDX

2014-10-31 12:33:55,755 INFO  [htable-pool24-t1] client.AsyncProcess:
#25, table=BAIDUMUSIC.BAIDUMUSIC_IDX, attempt=31/350 failed 2 ops,
last exception:
org.apache.hadoop.hbase.exceptions.RegionOpeningException:
org.apache.hadoop.hbase.exceptions.RegionOpeningException: Region
BAIDUMUSIC.BAIDUMUSIC_IDX,\x0C\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1414488087936.378bef2d741a9c00761160670329fc7c.
is opening on mobvoi-knowledge-graph-0,60020,1414729587199

        at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2692)

        at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:4139)

        at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3363)

        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29593)

        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2026)

        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)

        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)

        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)

        at java.lang.Thread.run(Thread.java:745)

 on mobvoi-knowledge-graph-0,60020,1414484323072, tracking started Fri
Oct 31 12:26:45 CST 2014, retrying after 20001 ms, replay 2 ops.

I also find warnings like:
wal.FSHLog: HDFS pipeline error detected. Found 1 replicas but expecting no less than 3 replicas.  Requesting close of hlog.
But my HBase cluster is a pseudo-distributed cluster where all processes run on a single machine, and the HDFS is on that machine too (I have set dfs.replication=1 in hdfs-site.xml). I have also tried to set the replication factor to 1 with hadoop fs -setrep -R 1 /  but no luck.
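
For what it's worth, here is a minimal sketch that prints the default replication this classpath resolves and the replication HDFS actually reports for the WAL files. The "no less than 3 replicas" wording suggests the region server may still be resolving a default replication of 3 from the configuration on its own classpath, but that is an assumption, not a confirmed diagnosis; the /hbase/WALs path and the use of the default filesystem are also assumptions.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal sketch, assuming hbase.rootdir is /hbase on the default filesystem
// and that the WAL directory is /hbase/WALs (both assumptions).
public class CheckWalReplication {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        System.out.println("default replication seen by this classpath: "
                + fs.getDefaultReplication());
        for (FileStatus status : fs.listStatus(new Path("/hbase/WALs"))) {
            System.out.println(status.getPath() + " -> replication=" + status.getReplication());
        }
    }
}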

Patrick Dignan | 30 Oct 22:09 2014

No IPC Handlers showing up in Web UI

Hey all,

I'm seeing an issue where IPC handlers aren't showing up in the region server web UI, even though jstack shows that they are waiting/processing. This is on HBase 0.94.15-cdh4.7.0.

For example, when it's working, it looks something like this:
https://gist.github.com/bbeaudreault/aa72cf59e4d65d7ea385

And when it's not working it just looks like this:
https://gist.github.com/bbeaudreault/02d6186ee95ef6dacfe9

Note that it's not an issue with the JSON interface; the HTML version doesn't show tasks either. However, compactions DO show up in the interface, so it's possibly some difference in how MonitoredRPCHandlerImpl and MonitoredTaskImpl work.

Restarting does fix the issue, but it returns after a while.

Does anybody know what might be happening here?

Thanks!
Birdsall, Dave | 30 Oct 16:47 2014

[ANNOUNCE] Trafodion 0.9.0 is now available

Hello HBase Users,

I wanted to let you know that a new release 0.9.0 of the Trafodion project is now available.

Here is a sampling of its new features:

1. Rebase to HBase 0.98, support for Cloudera CDH 5.1 and Hortonworks HDP 2.1

2. Significant performance improvements on transactional and operational workloads 

3. Better integration of transaction management with HBase: use of coprocessors and HLOG 

4. Transaction recovery from catastrophic HBase failures 

5. Bulk loader 

6. Support for GRANT/REVOKE

To find out more, visit www.trafodion.org.

Dave Birdsall, on behalf of the Trafodion Team

Andrejs Dubovskis | 30 Oct 16:20 2014

OOM when fetching all versions of single row

Hi!

We have a bunch of rows in HBase which store varying sizes of data (1-50MB). We use HBase versioning and keep up to 10000 column versions. Typically each column has only a few versions, but in rare cases it may have thousands of versions.

The MapReduce job uses a full scan, and our algorithm requires all versions to produce the result, so we call scan.setMaxVersions().

In the worst case the region server returns only one row, but a huge one. Its size is unpredictable and cannot be controlled, because the scan parameters only control the row count. The MR task can throw an OOME even with a 50GB heap.

Is it possible to handle this situation? For example, the RS should not send the row to the client if the latter does not have enough memory to handle it; in that case the client could handle the error and fetch each row version in a separate Get request.
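
For context, one client-side lever that already exists in 0.94/0.98 is Scan.setBatch(), which caps the number of cells per Result so a very wide row comes back as several partial Results instead of one huge object. A minimal sketch follows; the table name and the processing hook are hypothetical, and in the MR job the same Scan would be handed to TableMapReduceUtil.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class PartialRowScan {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "mytable");   // hypothetical table name
        Scan scan = new Scan();
        scan.setMaxVersions();   // all versions, as the algorithm requires
        scan.setBatch(100);      // at most 100 cells per Result; a wide row spans several Results
        scan.setCaching(10);     // at most 10 Results buffered per RPC
        ResultScanner scanner = table.getScanner(scan);
        try {
            for (Result partial : scanner) {
                // With setBatch(), consecutive Results can share the same row key,
                // so the consumer stitches or streams them instead of holding one huge row.
                process(partial);
            }
        } finally {
            scanner.close();
            table.close();
        }
    }

    private static void process(Result partial) {
        // application-specific handling (hypothetical hook)
    }
}

The trade-off is that the consumer has to reassemble or stream rows itself, but no single Result has to hold every version of a wide row at once.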

Best wishes,
--
Andrejs Dubovskis

jeevi tesh | 30 Oct 11:56 2014

Hbase is crashing need help

Hi,

I'm using HBase 0.94.3, hadoop-2.2.0, and jdk 1.7.71 on a single-node machine (not yet made into a cluster), running Oracle Linux.

The table size in HBase is nearly 2GB (I checked the files under the directory pointed to by hbase.rootdir).

Now I want to count the number of rows of data in the above-mentioned table.

So I used the command below:

count '<tablename>', CACHE => 1000
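
(For reference, the shell count is essentially a client-side scan. A roughly equivalent minimal Java sketch, not necessarily identical to the shell's implementation, looks like this:)

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;

public class RowCountSketch {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, args[0]);     // table name passed on the command line
        Scan scan = new Scan();
        scan.setCaching(1000);                        // same idea as CACHE => 1000 in the shell
        scan.setFilter(new FirstKeyOnlyFilter());     // only the first cell of each row is returned
        long count = 0;
        ResultScanner scanner = table.getScanner(scan);
        try {
            for (Result r : scanner) {
                count++;
            }
        } finally {
            scanner.close();
            table.close();
        }
        System.out.println("rows: " + count);
    }
}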

Running the count started giving me this error:

ERROR zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 3
retries.

After this, something very weird happens in the network: all the nodes start throwing this same error.

If a node is already started (I mean I have already run the start-hbase.sh command), I won't have any of the above issues.

Note: I'm on a single-node cluster; even then, how is it affecting the entire network? Is it because ZooKeeper has awareness of the cluster?

I feel like I'm missing some configuration; please help me resolve this.

Thanks
jackie | 30 Oct 03:07 2014

Which tool can I choose to build HBase tables, like using the PowerDesigner tool to build Oracle database tables?

Hi,
    I'm using HBase 0.96.2 (based on Hadoop 2.2.0). Which tool can I choose to build HBase tables, like using the PowerDesigner tool to build Oracle database tables?
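
For reference, HBase tables are usually created either from the HBase shell or programmatically through the Java admin API rather than from a visual modeling tool. A minimal sketch against the 0.96 client API follows; the table and column family names are made up.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateTableSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        try {
            // Define the table and one column family, then create it.
            HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("example_table"));
            HColumnDescriptor family = new HColumnDescriptor("cf");
            family.setMaxVersions(3);   // illustrative column-family setting
            desc.addFamily(family);
            admin.createTable(desc);
        } finally {
            admin.close();
        }
    }
}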

Thank you very much!

jackie!

