lewis john mcgibbney | 20 Sep 21:25 2014

[ANNOUNCE] Apache Gora 0.5 Release

Hi Folks,
Apologies for cross posting.
The Apache Gora team are pleased to announce the immediate availability of
Apache Gora 0.5.

The Apache Gora open source framework provides an in-memory data model and
persistence for big data. Gora supports persisting to column stores, key
value stores, document stores and RDBMSs, and analyzing the data with
extensive Apache Hadoop™ MapReduce support. Gora uses the Apache Software
License v2.0.

This release addresses no fewer than 44 issues [0] with many being
improvements and new functionality. Most notably, the release includes the
addition of a new module for MongoDB, shim functionality to support
multiple Hadoop versions, improved authentication for Accumulo, better
documentation for many modules, and pluggable solrj implementations
supporting a default value of http for HttpSolrServer. Available options
include http (HttpSolrServer), cloud (CloudSolrServer), concurrent
(ConcurrentUpdateSolrServer) and loadbalance (LBHttpSolrServer).

Suggested Gora database support is as follows:

   - Apache Avro 1.7.6
   - Apache Hadoop 1.0.1 and 2.4.0
   - Apache HBase 0.94.14
   - Apache Cassandra 2.0.2
   - Apache Solr 4.8.1
   - MongoDB 2.6
   - Apache Accumulo 1.5.1

(Continue reading)

Santosh Gdr | 19 Sep 12:22 2014

Problem With Snapshot


         I enabled snapshots in the hbase-site.xml file. But when I go to the
hbase shell, the snapshot-related commands are not found:

           hbase(main):005:0> snapshot 'test', 'testsnapshot'

           NoMethodError: undefined method `snapshot' for

         Am I missing something?

                               Thank you
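In case it helps later readers: in the 0.94 line, the shell's snapshot commands only load when snapshot support is switched on in hbase-site.xml, followed by a restart. A sketch of the property involved (assuming a 0.94.6+ release, where snapshots first appeared):

```xml
<!-- hbase-site.xml: enable snapshot support (requires restart) -->
<property>
  <name>hbase.snapshot.enabled</name>
  <value>true</value>
</property>
```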

Hbase Ave Load work heavily ??

My Hadoop works very well except HBase.
It displays that the HBase Ave Load is heavy, but I can't find out which 
area is hot ...... 

dongyanhui <at> nnct-nsn.com 

Kiran Kumar.M.R | 19 Sep 16:46 2014

HTTPS WebUI in Trunk Version

We could have enabled it on 0.98.x, as it was based on the Hadoop HTTPServer (using hadoop.ssl.enabled).
I did not find any way to enable HTTPS for the WebUI in the trunk version, which uses its own HTTPServer.
Am I missing any configuration?
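For later readers, a sketch of the HBase-side property that gates HTTPS for the web UIs served by HBase's own HttpServer (assuming `hbase.ssl.enabled` is honored by your build; keystore details are configured separately through the Hadoop-style SSL config files):

```xml
<!-- hbase-site.xml: serve the master/regionserver web UIs over HTTPS -->
<property>
  <name>hbase.ssl.enabled</name>
  <value>true</value>
</property>
```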

This e-mail and its attachments contain confidential information from HUAWEI, which is intended only for
the person or entity whose address is listed above. Any use of the information contained herein in any way
(including, but not limited to, total or partial disclosure, reproduction, or dissemination) by
persons other than the intended recipient(s) is prohibited. If you receive this e-mail in error, please
notify the sender by phone or email immediately and delete it!

Dai, Kevin | 19 Sep 04:33 2014

How to let hbase just return value or subset of the value


The value of my table is a Map.

I want to know how I can get only the value (without any key sent from the region
server), or a subset of the value (Map), from HBase.

Shaun Elliott | 19 Sep 01:21 2014

view decoded thrift in hbase shell?

I have a column which has thrift encoded data and would like to view it in
hbase shell.

I have this so far:

import org.apache.thrift.TDeserializer;
import org.apache.thrift.TException;
import org.apache.thrift.protocol.TBinaryProtocol;

public class ThriftToString {
    // Assumes the records were written with the binary protocol;
    // substitute the matching TProtocolFactory if yours differs.
    private static final TDeserializer tDeSerializer =
            new TDeserializer(new TBinaryProtocol.Factory());

    public static String getStringFromThrift(byte[] value) {
        InventoryRecord inventoryRecord = new InventoryRecord();
        try {
            tDeSerializer.deserialize(inventoryRecord, value);
            return inventoryRecord.toString();
        } catch (TException e) {
            return null;
        }
    }
}
I've figured out that I can then jar this up and add it to my
HBASE_CLASSPATH, which lets me import it in to jruby. But... how do I use it
in a get or a scan? I'm stuck. This seems like it should be relatively easy.


(Continue reading)

Andrew Purtell | 18 Sep 20:40 2014

[ANNOUNCE] HBase is now available for download

Apache HBase is a patch release for 0.98.6 fixing a
regression in 0.98.6 involving non-superuser table creation when
security is active (HBASE-11972). Please use this release instead of
0.98.6. (We have removed 0.98.6 distribution artifacts in the
mirrors.) Get from an Apache mirror [1] or Maven repository.

The issues resolved in this release are:

    HBASE-11963 Synchronize peer cluster replication connection attempts
    HBASE-11972 The "doAs user" used in the update to hbase:acl table
RPC is incorrect

For other changes since 0.98.5, see http://s.apache.org/Ur4

The HBase Dev Team

1. http://www.apache.org/dyn/closer.cgi/hbase/

Tinte garcia, Miguel Angel | 18 Sep 12:16 2014

HBase Rest decoding responses

Hi everyone,
I am using the HBase REST Stargate service: http://wiki.apache.org/hadoop/Hbase/Stargate

I’ve been able to make requests to it, but it returns the data Base64-encoded, like this:
column: "c2ltdWxhdGlvbl9pbmZvOnBhcnRSZXN1bHQ="
timestamp: 1410192287778
$: "V2VsY29tZQ=="

Does anybody know how I can decode this Base64 text while keeping the JSON format?

Thanks in advance
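A minimal sketch of decoding those fields with Java 8's java.util.Base64 (the class and method names here are illustrative; only the two sample strings come from the response above). Walking the parsed JSON and applying this to each encoded field preserves the JSON structure:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DecodeStargate {
    // Stargate Base64-encodes column names and cell values in JSON responses
    public static String decode(String b64) {
        return new String(Base64.getDecoder().decode(b64), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // The column name and cell value from the response above
        System.out.println(decode("c2ltdWxhdGlvbl9pbmZvOnBhcnRSZXN1bHQ=")); // simulation_info:partResult
        System.out.println(decode("V2VsY29tZQ=="));                         // Welcome
    }
}
```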

This e-mail and the documents attached are confidential and intended solely for the addressee; it may also
be privileged. If you receive this e-mail in error, please notify the sender immediately and destroy it.
As its integrity cannot be secured on the Internet, the Atos group liability cannot be triggered for the
message content. Although the sender endeavors to maintain a computer virus-free network, the sender
does not warrant that this transmission is virus-free and will not be liable for any damages resulting
from any virus transmitted.

(Continue reading)

Kiran Kumar.M.R | 18 Sep 12:56 2014


Our customers were using Hbase-0.94 through thrift1 (C++ clients).
Now HBase is getting upgraded to 0.98.x

I see that thrift2 development is going on (https://issues.apache.org/jira/browse/HBASE-8818)
Customers are interested in continuing to use thrift1, as they are not interested in the new capabilities in
thrift2 and want to minimize their application changes as much as possible.

What should be our direction in using the thrift interface?

-        Shall we continue to use thrift1? (Will this continue to be supported? I see some mail threads about deprecating it.)

-        Or suggest that our customers switch to thrift2?


tobe | 18 Sep 10:50 2014

HBase establishes session with ZooKeeper and close the session immediately

I have found that our RegionServers connect to ZooKeeper frequently.
They seem to constantly establish a session, close it, and reconnect to
ZooKeeper. Here are the logs from both the server and client sides. I have
no idea why this happens or how to deal with it. We're using HBase 0.94.11
and ZooKeeper 3.4.4.

The log from HBase RegionServer:

2014-09-18,16:38:17,867 INFO org.apache.zookeeper.ZooKeeper: Initiating
client connection, connectString=,,,,
watcher=catalogtracker-on-org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation <at> 69d892a1
2014-09-18,16:38:17,868 INFO
org.apache.zookeeper.client.ZooKeeperSaslClient: Client will use GSSAPI as
SASL mechanism.
2014-09-18,16:38:17,868 INFO org.apache.zookeeper.ClientCnxn: Opening
socket connection to server lg-hadoop-srv-ct01.bj/ Will
attempt to SASL-authenticate using Login Context section 'Client'
2014-09-18,16:38:17,868 INFO
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of
this process is 11787@...
2014-09-18,16:38:17,868 INFO org.apache.zookeeper.ClientCnxn: Socket
connection established to lg-hadoop-srv-ct01.bj/,
initiating session
2014-09-18,16:38:17,870 INFO org.apache.zookeeper.ClientCnxn: Session
establishment complete on server lg-hadoop-srv-ct01.bj/,
sessionid = 0x248782700e52b3c, negotiated timeout = 30000
2014-09-18,16:38:17,876 INFO org.apache.zookeeper.ZooKeeper: Session:
0x248782700e52b3c closed
(Continue reading)

Josh Williams | 18 Sep 00:21 2014

Performance oddity between AWS instance sizes

Hi, everyone.  Here's a strange one, at least to me.

I'm doing some performance profiling, and as a rudimentary test I've
been using YCSB to drive HBase (originally 0.98.3, recently updated to
0.98.6.)  The problem happens on a few different instance sizes, but
this is probably the closest comparison...

On m3.2xlarge instances, it works as expected.
On c3.2xlarge instances, HBase barely responds at all during workloads
that involve read activity, falling silent for ~62 second intervals,
with the YCSB throughput output resembling:

 0 sec: 0 operations;
 2 sec: 918 operations; 459 current ops/sec; [UPDATE AverageLatency(us)=1252778.39] [READ AverageLatency(us)=1034496.26]
 4 sec: 918 operations; 0 current ops/sec;
 6 sec: 918 operations; 0 current ops/sec;
 62 sec: 918 operations; 0 current ops/sec;
 64 sec: 5302 operations; 2192 current ops/sec; [UPDATE AverageLatency(us)=7715321.77] [READ AverageLatency(us)=7117905.56]
 66 sec: 5302 operations; 0 current ops/sec;
 68 sec: 5302 operations; 0 current ops/sec;
(And so on...)

While that happens there's almost no activity on either side, the CPU's
and disks are idle, no iowait at all.

There isn't much that jumps out at me when digging through the Hadoop
and HBase logs, except that those 62-second intervals are often (but
not always) associated with ClosedChannelExceptions in the regionserver
logs.  But I believe that's just HBase finding that a TCP connection it
(Continue reading)