Krishna Rao | 24 Apr 14:55 2014

HBase checksum vs HDFS checksum

Hi all,

I understand that there is a significant performance gain from turning on
short-circuit reads, and additionally from having HBase do checksums
rather than HDFS.

However, I'm a little confused by this: do I need to turn off checksums
in HDFS for the entire file system? We don't use our cluster only for
HBase, so that would seem like a bad idea, right?
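
For what it's worth, as I understand it, enabling HBase-level checksums does not
require disabling checksums for HDFS as a whole: HBase writes its own checksums
into its HFiles and merely skips HDFS checksum verification on its own reads, so
other users of the cluster are unaffected. A minimal hbase-site.xml sketch
(property names per HBASE-5074 and Hadoop 2 short-circuit reads; worth verifying
against your versions):

  <!-- hbase-site.xml (sketch) -->
  <property>
    <name>dfs.client.read.shortcircuit</name>
    <value>true</value>
  </property>
  <property>
    <!-- HBase verifies its own HFile checksums instead of asking HDFS to -->
    <name>hbase.regionserver.checksum.verify</name>
    <value>true</value>
  </property>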

Cheers,

Krishna
Li Li | 24 Apr 05:20 2014

how to split a region in the hbase shell?

I found that one of my 4 region servers is under heavier load than the
others, and I want to split a region manually.

From the web UI:

name: vc2.url_db,,1398174763371.35a8599a5eb457b9e0210f86d8b6d19f.
region server: app-hbase-1:60030
start key: (empty)
end key: \x1F\xFE\x9B\xFA\x95\x91\xB7\xF0\x9FX\x83\xC9\xBFw\xBD\xDE
requests: 107360630

I want to split this region. I tried:

hbase(main):004:0> split 'vc2.url_db', '\x1F\xFE\x9B\xFA\x95\x91\xB7\xF0\x9FX\x83\xC9\xBFw\xBD\xDE'
0 row(s) in 0.8130 seconds

but nothing happens.
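
One likely explanation (worth verifying): the HBase shell is JRuby, and
single-quoted strings do not interpret \x escapes, so the split point above was
passed as a literal 64-character string rather than the intended 16 bytes. Note
also that the value used is this region's end key, and splitting at an existing
boundary is a no-op; a point strictly inside the region is needed. A sketch (the
split key below is a placeholder, not a recommendation):

  # double quotes so JRuby decodes the \x escapes into real bytes;
  # pick a split point strictly inside the region, e.g. roughly halfway
  hbase(main):005:0> split 'vc2.url_db', "\x10\x00\x00\x00\x00\x00\x00\x00"

  # or target the region directly by the full name shown in the web UI
  # (with no explicit point, the regionserver picks the midpoint itself)
  hbase(main):006:0> split 'vc2.url_db,,1398174763371.35a8599a5eb457b9e0210f86d8b6d19f.'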

LEI Xiaofeng | 24 Apr 04:19 2014

hbase bulk load

Hi,
In my case, I need to bulk load large amounts of data into the same HBase table
from time to time. The speed is very low, only about 17 MB/s. There are 5 worker
nodes and a master node in the cluster. For certain reasons, I use MapReduce to
call a C++ Thrift client to communicate with HBase.

Does anyone have any suggestions for improving the write speed?
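
For comparison, the usual high-throughput route skips per-row RPCs entirely: the
MapReduce job writes HFiles and then hands them to the cluster in one step. A
minimal Java sketch of that path (table name, output path, and the mapper itself
are placeholders; API names per the 0.94/0.96-era org.apache.hadoop.hbase.mapreduce
package, worth checking against your version):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.KeyValue;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
  import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
  import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

  public class BulkLoadSketch {
      public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          Job job = new Job(conf, "hfile-prepare");
          job.setJarByClass(BulkLoadSketch.class);
          // the mapper (omitted) must emit ImmutableBytesWritable / KeyValue pairs
          job.setMapOutputKeyClass(ImmutableBytesWritable.class);
          job.setMapOutputValueClass(KeyValue.class);
          HTable table = new HTable(conf, "mytable");
          // sorts and partitions the output so each HFile matches one region
          HFileOutputFormat.configureIncrementalLoad(job, table);
          Path out = new Path("/tmp/hfiles");
          FileOutputFormat.setOutputPath(job, out);
          if (job.waitForCompletion(true)) {
              // moves the finished HFiles into the regions
              new LoadIncrementalHFiles(conf).doBulkLoad(out, table);
          }
      }
  }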

Thanks,
Xiaofeng

Lukáš Drbal | 23 Apr 21:15 2014

HBase 0.94 on hadoop 2.2.0 (2.4.0)

Hi,

I am trying to run HBase on Hadoop 2.2, but the master can't start.

I found this message in the master log (snipped):

2014-04-23 20:58:24,246 FATAL org.apache.hadoop.hbase.master.HMaster: HBase
is having a problem with its Hadoop jars.  You may need to recompile HBase
against Hadoop version 2.2.0 or change your hadoop jars to start properly
java.lang.NoClassDefFoundError: org/apache/hadoop/hdfs/protocol/FSConstants$SafeModeAction
        at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:240)
        at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:634)
        at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:423)
        at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:148)
        at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:133)
        at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:573)
        at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:433)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hdfs.protocol.FSConstants$SafeModeAction
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
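
For what it's worth, the FATAL line already names the fix: stock HBase 0.94
tarballs are built against Hadoop 1, and FSConstants was renamed in Hadoop 2
(HDFS-1620), hence the NoClassDefFoundError. Rebuilding 0.94 from source against
the Hadoop 2 profile should clear it; a sketch (flags per the 0.94 pom, worth
double-checking for your exact cluster version):

  # rebuild HBase 0.94 against Hadoop 2, then deploy the resulting jars
  mvn clean install -DskipTests -Dhadoop.profile=2.0 -Dhadoop.version=2.2.0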

Brian Jeltema | 23 Apr 20:01 2014

how to get the source table from MultiTableInputFormat

If I’m using MultiTableInputFormat to process input from several tables in
a map/reduce job, is there any way in the mapper to determine which table a given
Result came from?
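
One approach that should work (a sketch against the 0.94-era mapreduce API): the
input split handed to the mapper is a TableSplit, which carries the source
table's name:

  import org.apache.hadoop.hbase.mapreduce.TableSplit;
  import org.apache.hadoop.hbase.util.Bytes;

  // inside the mapper: recover the source table from the current split
  @Override
  protected void setup(Context context) {
      TableSplit split = (TableSplit) context.getInputSplit();
      String sourceTable = Bytes.toString(split.getTableName());
      // stash sourceTable in a field for use in map()
  }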

Brian
anil gupta | 23 Apr 02:12 2014

Date DataType broken when Migrating from Phoenix2.0.2 to Phoenix3.0.0

Hi All,

We have written data into our HBase tables using the PDataType of Phoenix 2.0.2.
We have custom MR loaders that use PDataType so that we can use Phoenix for
ad-hoc querying.
I am trying to migrate to Phoenix 3.0.0, but the Date-type column values are not
coming out correctly. These are big tables (TBs of data), and I would like to
avoid changing my tables just because of the upgrade to 3.0.0.
Is Phoenix 3.0.0 not backward compatible with Phoenix 2.0.2? Is there any easy
fix (without rewriting the Date columns)? Also, I am curious to know what was
broken in the Date DataType in Phoenix 2.0.2.

-- 
Thanks & Regards,
Anil Gupta
Srikanth Srungarapu | 23 Apr 01:44 2014

unable to access apache hbase blogs website

Hi,
I'm unable to access the http://blogs.apache.org/hbase/ page. Can someone
please let me know whether it's just me or whether the site is down?
Thanks,
Srikanth.
Jack Levin | 22 Apr 19:34 2014

unable to delete rows in some regions

Hey All, I was wondering if anyone has had this issue with HBase 0.90.5.
I have a table 'img611', and I issue deletes of keys like this:

hbase(main):004:0> describe 'img611'
DESCRIPTION                                                               ENABLED
 {NAME => 'img611', FAMILIES => [{NAME => 'att', BLOOMFILTER => 'ROW',    true
 VERSIONS => '1', COMPRESSION => 'NONE', TTL => '2147483647',
 BLOCKSIZE => '350000', IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}
1 row(s) in 3.3440 seconds

hbase(main):003:0> delete 'img611', '611:u4rpx.jpg', 'att:data'
0 row(s) in 0.0210 seconds

hbase(main):005:0> major_compact 'img611'

After compaction completed on a region, the cells are still there,
even though I am unable to 'get' them anymore.

hbase(main):005:0> get 'img611', '611:u4rpx.jpg'
COLUMN                                             CELL
0 row(s) in 0.0140 seconds

Here is the compaction log:

2014-04-22 10:19:38,266 INFO
org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
of 1 file(s), new
file=hdfs://namenode-rd.imageshack.us:9000/hbase/img611/cf0a557ff4030c238fc5a6ad732be45f/att/8509526122540116685,

yeshwanth kumar | 22 Apr 14:18 2014

Unable to get data of znode /hbase/table/mytable.

Hi,

I am running a webapp written on the JAX-RS framework which performs CRUD
operations on HBase.

The app was working fine until last week. Now when I perform a read operation
against HBase I don't see any data. There are no errors or exceptions either,
but I found this line in the log:

"Unable to get data of znode /hbase/table/myTable because node does not
exist (not an error)"

I followed this Cloudera article about znodes
<https://blog.cloudera.com/blog/2013/10/what-are-hbase-znodes/>
and this is what I found:

[zk: localhost:2181(CONNECTED) 14] ls /hbase
[splitlog, online-snapshot, unassigned, table94, root-region-server, rs,
backup-masters, table, draining, master, shutdown, hbaseid]

All the tables are present under /hbase/table94, whereas /hbase/table is
empty.

So I know what the problem is now, but I don't know how to solve it.

Can someone help me with this issue?

Thanks,
Yeshwanth

Asaf Mesika | 22 Apr 09:07 2014

Coprocessor execution result saved in buffer as a whole - why?

Hi,

I've noticed that in 0.94.7, when you execute a coprocessor, the result
object is converted into a byte buffer using the write() method on the
result object. So if my result object is 500 MB in size, another 500 MB is
consumed from the heap, since the result is serialized into a byte buffer
before being sent over the wire.

I was wondering why the result object isn't streamed straight into the
connection. Its write() method takes a DataOutput, which is essentially an
OutputStream, so it could write directly to the connection instead of
serializing to a byte buffer as a whole first and then sending that in chunks.

Any Idea?

The code section is here, in HBaseServer.java line 1442:

            call.setResponse(value,
              errorClass == null ? Status.SUCCESS : Status.ERROR,
                errorClass, error);

And in the HBaseServer$Call.setResponse() method:

      Writable result = null;
      if (value instanceof Writable) {
        result = (Writable) value;
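
To make the buffering concrete, here is a minimal standalone sketch (plain Java,
not HBase code; the Writable result and socket stream are stand-ins) contrasting
the current buffered path with the streamed one being proposed:

  import java.io.ByteArrayOutputStream;
  import java.io.DataOutputStream;
  import java.io.IOException;
  import java.io.OutputStream;
  import org.apache.hadoop.io.Writable;

  class ResponsePaths {
      // Buffered (what 0.94 does): serialize the whole Writable into a byte[]
      // first, which roughly doubles the heap needed for a large result.
      static void buffered(Writable result, OutputStream socketOut) throws IOException {
          ByteArrayOutputStream buf = new ByteArrayOutputStream();
          result.write(new DataOutputStream(buf));
          socketOut.write(buf.toByteArray());
      }

      // Streamed (the proposal): point the same write() call straight at the
      // connection, so the result is never held in memory twice.
      static void streamed(Writable result, OutputStream socketOut) throws IOException {
          result.write(new DataOutputStream(socketOut));
      }
  }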

sunweiwei | 22 Apr 08:58 2014

oldWALs too large

Hi

I'm using HBase 0.96.0, with 1 HMaster and 3 regionservers.

The write load is about 10,000-100,000 requests/s.

Today I found the HBase master hung, the regionservers dead, and the oldWALs
directory very large.

/apps/hbase/data/data is about 800 GB; /apps/hbase/data/oldWALs is about 4.2 TB.

This filled up HDFS.

Any suggestions would be appreciated. Thanks.
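
One thing worth ruling out (an assumption, not a diagnosis): files in oldWALs
are deleted only when every log cleaner agrees, and the replication log cleaner
will pin them indefinitely if a replication peer exists, or once existed, with
an unshipped queue; the time-based cleaner is governed by
hbase.master.logcleaner.ttl. A quick check from the shell (peer id '1' below is
a placeholder):

  # list replication peers whose queues may be pinning old WALs
  hbase(main):001:0> list_peers

  # a long-dead peer can be dropped so the cleaner reclaims its WALs
  # (only remove peers you are certain are no longer needed)
  hbase(main):002:0> remove_peer '1'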

