James Taylor | 2 Sep 03:23 2014

[ANNOUNCE] Apache Phoenix 3.1 and 4.1 released

Hello everyone,

On behalf of the Apache Phoenix [1] project, a SQL database on top of
HBase, I'm pleased to announce the immediate availability of our 3.1
and 4.1 releases [2].

These include many bug fixes along with support for nested/derived
tables, tracing, and local indexing. For details of the release,
please see our announcement [3].

Regards,
The Apache Phoenix team

[1] http://phoenix.apache.org/
[2] http://phoenix.apache.org/download.html
[3] https://blogs.apache.org/phoenix/entry/announcing_phoenix_3_1_and

Ayache Khettar | 1 Sep 14:05 2014

HBase 0.98 not able to connect to Hadoop 2.4 running on a VM

Hi

I have installed a Hadoop 2.4 cluster on a virtual machine and everything
is up and running. Here are my settings in core-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

HBase settings:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop:54310/hbase</value>
  </property>
  <property>
    <name>hbase.security.authentication</name>
    <value>simple</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
(Continue reading)
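
Note the mismatch above: core-site.xml declares the namenode at
hdfs://localhost:9000, while hbase.rootdir points at hdfs://hadoop:54310/hbase.
hbase.rootdir has to use the same HDFS authority (host and port) as
fs.default.name. Assuming the namenode really is at localhost:9000, a
matching value would be:

<property>
  <name>hbase.rootdir</name>
  <value>hdfs://localhost:9000/hbase</value>
</property>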

Otis Gospodnetic | 29 Aug 20:58 2014

HBase usage by HBase version?

Hi,

Does anyone know or have any guesses about which HBase versions are being
used the most?

I'd love to know what percentage of HBase clusters out there are still
on 0.94.x or below. Are there more users on 0.94.x and below, or on
0.96.x and above?

Any educated guesses?

Thanks,
Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/

Gary Helmling | 28 Aug 22:10 2014

[ANNOUNCE] Tephra 0.2.1 release

Hi all,

I'm happy to announce the 0.2.1 release of Tephra.

Tephra provides globally consistent transactions on top of Apache
HBase by leveraging HBase's native data versioning to provide
multi-version concurrency control (MVCC) for transactional reads and
writes. With MVCC, each transaction sees its own consistent
"snapshot" of the data, giving snapshot isolation between concurrent
transactions.
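
As a rough sketch of what this looks like in client code (the package and
class names below follow later Tephra examples and, given the renames in
this release, may differ in 0.2.1; the TransactionSystemClient is assumed
to be wired up as described in the Tephra setup docs):

import co.cask.tephra.TransactionContext;
import co.cask.tephra.TransactionSystemClient;
import co.cask.tephra.hbase.TransactionAwareHTable;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class TephraPutExample {
  // txClient is assumed to be obtained per the Tephra setup docs.
  static void transactionalPut(TransactionSystemClient txClient,
                               HTableInterface hTable) throws Exception {
    TransactionAwareHTable txTable = new TransactionAwareHTable(hTable);
    TransactionContext txContext = new TransactionContext(txClient, txTable);
    txContext.start();            // begin: snapshot the currently visible versions
    try {
      Put put = new Put(Bytes.toBytes("row1"));
      put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
      txTable.put(put);           // write is tagged with this transaction's pointer
      txContext.finish();         // commit: check conflicts, make the write visible
    } catch (Exception e) {
      txContext.abort();          // roll back so no partial writes are exposed
      throw e;
    }
  }
}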

This release fixes the following issues:

* TransactionProcessor use of FilterList on flush and compaction causes abort
* Support coverage report generation
* TransactionProcessor should use TransactionVisibilityFilter on flush and compact
* Clean up class and package names
* Use Configuration instead of HBaseConfiguration where possible
* TransactionVisibilityFilter should support additional filtering on visible cells
* Assigned transaction IDs should reflect current timestamp
* Remove use of HBase Bytes in non-HBase-related classes in tephra-core

Please note that a number of the Tephra packages and classes have been
renamed for clarity.  Any existing code will need to be updated.

Binary and source distributions of the release are available at:
https://github.com/continuuity/tephra/releases/tag/v0.2.1

(Continue reading)

Ted Tuttle | 28 Aug 20:19 2014

state-of-the-art method for merging regions on v0.94

Hello-

We recently realized our region size is 1G and need to increase it to get our region count under control.  I've
done some research on merging regions and have come away confused.

There is the ops handbook:

http://hbase.apache.org/book/ops.regionmgt.html

And then there is this horror story:

http://metabroadcast.com/blog/so-you-broke-hbase

Has anyone out there done a large-scale merge (i.e. a 10:1 reduction across
tens of thousands of regions) successfully on HBase 0.94? If so, how did you do it?
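
For reference, the handbook's mechanism on 0.94 is the offline Merge
utility, which must be run with the cluster fully shut down and merges only
one pair of regions per invocation, so a 10:1 reduction means scripting many
passes. A minimal sketch of driving it from Java (the table and region names
are placeholders; the real full region names come from the master web UI):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.util.Merge;
import org.apache.hadoop.util.ToolRunner;

public class OfflineMerge {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // The cluster must be down first; the tool refuses to run against a
    // live cluster. Arguments: <table-name> <full-region-name-1> <full-region-name-2>
    int rc = ToolRunner.run(conf, new Merge(), new String[] {
        "mytable",                                  // placeholder table name
        "mytable,,1375927284818.30289abc...",       // placeholder region name
        "mytable,row5000,1375927284819.dead07..."   // placeholder region name
    });
    System.exit(rc);
  }
}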

Thanks,
Ted

Guillermo Ortiz | 28 Aug 15:18 2014

How to know regions in a RegionServer?

How can I find out, from Java, which regions are served by each RegionServer?
I want to execute a parallel scan with one thread per RegionServer,
because I think that's better than one thread per region. Or is it not?
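
A minimal sketch of grouping a table's regions by hosting server with the
0.94/0.98-era client API (the table name is a placeholder):

import java.util.Map;
import java.util.NavigableMap;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.HTable;

public class RegionsByServer {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");   // placeholder table name
    try {
      // Maps every region of the table to the server currently hosting it.
      NavigableMap<HRegionInfo, ServerName> locations = table.getRegionLocations();
      for (Map.Entry<HRegionInfo, ServerName> e : locations.entrySet()) {
        System.out.println(e.getValue().getHostname() + " -> "
            + e.getKey().getRegionNameAsString());
      }
    } finally {
      table.close();
    }
  }
}
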
徐景辉 | 28 Aug 11:18 2014

HBase does not close dead sockets, resulting in thousands of CLOSE_WAIT sockets

Guys:

I hit a fatal problem: HBase does not close dead connections to the
datanode. I have reviewed the code of Hadoop 2.4.0/HBase 0.98.0 and found
that the patches from HBASE-9393
<https://issues.apache.org/jira/browse/HBASE-9393> and HDFS-5671
<https://issues.apache.org/jira/browse/HDFS-5671> were included in the
Hadoop 2.4.0/HBase 0.98.0 releases. That code is at the end of the
method getRemoteBlockReaderFromTcp() in class BlockReaderFactory:

      } finally {
        if (blockReader == null) {
          IOUtils.cleanup(LOG, peer);
        }
      }

So I suspect a new bug is causing my problem, and I have filed a new issue,
HBASE-11833 <https://issues.apache.org/jira/browse/HBASE-11833>. Have you
seen the same issue? If so, please let me know. Thanks a lot.

Problem Description:
This results in over 30K CLOSE_WAIT sockets, and at some point HBase cannot
connect to the datanode because there are too many sockets mapped from one
host to another on the same port (50010).
Even after I restart all RSs, the CLOSE_WAIT count keeps increasing.
$ netstat -an | grep CLOSE_WAIT | wc -l
(Continue reading)

Fahri Surucu | 27 Aug 21:09 2014

TableInputFormat and number of mappers == number of regions

Hi,

I would like to find out how I can reduce the number of mappers to fewer
than the number of regions in the HBase table.
Could someone please let me know how to do that in Pig while using the load
command as:

LOAD 'hbase://$HBASE_TABLE' USING
org.apache.pig.backend.hadoop.hbase.HBaseStorage
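
As far as I know HBaseStorage itself doesn't expose a knob for this; at the
MapReduce layer the usual trick is a custom InputFormat that coalesces
adjacent region splits, since a scan range spanning several consecutive
regions works fine client-side. A rough sketch against the 0.98-era API
(the class name and the regions-per-mapper constant are made up):

import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.hadoop.hbase.mapreduce.TableSplit;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;

public class CoalescingTableInputFormat extends TableInputFormat {
  private static final int REGIONS_PER_MAPPER = 4;  // made-up knob; tune to taste

  @Override
  public List<InputSplit> getSplits(JobContext context) throws IOException {
    List<InputSplit> perRegion = super.getSplits(context);
    // Order splits by start row so consecutive regions sit next to each other.
    Collections.sort(perRegion, new Comparator<InputSplit>() {
      public int compare(InputSplit a, InputSplit b) {
        return Bytes.compareTo(((TableSplit) a).getStartRow(),
                               ((TableSplit) b).getStartRow());
      }
    });
    List<InputSplit> merged = new ArrayList<InputSplit>();
    for (int i = 0; i < perRegion.size(); i += REGIONS_PER_MAPPER) {
      TableSplit first = (TableSplit) perRegion.get(i);
      TableSplit last = (TableSplit) perRegion.get(
          Math.min(i + REGIONS_PER_MAPPER, perRegion.size()) - 1);
      // One split covering the whole run of consecutive regions.
      merged.add(new TableSplit(first.getTable(), first.getStartRow(),
          last.getEndRow(), first.getRegionLocation()));
    }
    return merged;
  }
}

Wiring this into Pig would still mean subclassing HBaseStorage so it returns
the custom format, which is more involved than using it from a plain
MapReduce job.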

Regards,

Fahri Surucu

Sanjiv Singh | 27 Aug 12:09 2014

Writing a custom KeyComparator

Hi All,

As we know, all rows are always sorted lexicographically by their row key.
In lexicographic order, keys are compared at the binary level, byte by
byte, from left to right.

See the example below, where the row keys are integer values and the output
of scan shows the lexicographic order of the rows in the table.

hbase(main):001:0> scan 'table1'
ROW      COLUMN+CELL
1        column=cf1:, timestamp=1297073325971 ...
11       column=cf1:, timestamp=1297073337383 ...
11000    column=cf1:, timestamp=1297073340493 ...
2        column=cf1:, timestamp=1297073329851 ...
22       column=cf1:, timestamp=1297073344482 ...
22000    column=cf1:, timestamp=1297073333504 ...
23       column=cf1:, timestamp=1297073349875 ...

I want to see these rows ordered as integers, not the default way. I could
pad the keys with '0' to get a proper sort order, but I don't like that
approach.

I want these rows sorted as integers not only in the output of the scan
or get methods, but also so that rows with consecutive integer row keys are
stored in the same block.

Now the questions are:

   - Can we define our own custom KeyComparator?
   - If yes, can we enforce it for the put method, so that rows would be
(Continue reading)
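
For the ordering itself, a custom comparator usually isn't needed: as far as
I know HBase doesn't let you swap the row comparator on a user table anyway,
and encoding keys as fixed-width big-endian bytes already makes the default
lexicographic order coincide with numeric order. A small sketch (assumes
non-negative keys; negative values would additionally need the sign bit
flipped):

import java.util.Arrays;
import org.apache.hadoop.hbase.util.Bytes;

public class IntKeyOrdering {
  public static void main(String[] args) {
    int[] keys = {1, 11, 11000, 2, 22, 22000, 23};
    byte[][] rows = new byte[keys.length][];
    for (int i = 0; i < keys.length; i++) {
      rows[i] = Bytes.toBytes(keys[i]);        // 4-byte big-endian encoding
    }
    Arrays.sort(rows, Bytes.BYTES_COMPARATOR); // same ordering HBase applies
    for (byte[] row : rows) {
      System.out.println(Bytes.toInt(row));    // 1, 2, 11, 22, 23, 11000, 22000
    }
  }
}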


Re: Compilation error: HBASE 0.98.4 with Snappy

Hi,

Many thanks for your advice!

Finally, I managed to make it work.

I needed to add:
export JAVA_LIBRARY_PATH="$HBASE_HOME/lib/native/Linux-amd64-64"

then run:
bin/hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/snappy-test snappy
2014-08-27 15:51:39,459 INFO  [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/mnt/hadoop/hbase-0.98.4-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/mnt/hadoop/hadoop-2.4.1/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
2014-08-27 15:51:39,785 INFO  [main] util.ChecksumType: Checksum using org.apache.hadoop.util.PureJavaCrc32
2014-08-27 15:51:39,786 INFO  [main] util.ChecksumType: Checksum can use org.apache.hadoop.util.PureJavaCrc32C
2014-08-27 15:51:39,926 INFO  [main] compress.CodecPool: Got brand-new compressor [.snappy]
2014-08-27 15:51:39,930 INFO  [main] compress.CodecPool: Got brand-new compressor [.snappy]
2014-08-27 15:51:39,934 ERROR [main] hbase.KeyValue: Unexpected getShortMidpointKey result, fakeKey:testkey, firstKeyInBlock:testkey
2014-08-27 15:51:40,185 INFO  [main] compress.CodecPool: Got brand-new decompressor [.snappy]
SUCCESS

bin/hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/snappy-test gz
2014-08-27 15:57:18,633 INFO  [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
SLF4J: Class path contains multiple SLF4J bindings.
(Continue reading)

Praveen G | 27 Aug 08:56 2014

Getting "Table Namespace Manager not ready yet" while creating table in hbase

I tried to create a table but it is giving me the error below. Kindly check:

hbase(main):003:0> create 't1', 'f1'

ERROR: java.io.IOException: Table Namespace Manager not ready yet, try again later
        at org.apache.hadoop.hbase.master.HMaster.getNamespaceDescriptor(HMaster.java:3205)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1730)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1860)
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:38221)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
        at org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:73)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)

Here is some help for this command:
Creates a table. Pass a table name, and a set of column family
specifications (at least one), and, optionally, table configuration.
(Continue reading)

