Li Li | 28 Nov 08:14 2014

copy from one cluster to another of different version

I have an HBase cluster of version 0.98.5 with hadoop-1.2.1 (no MapReduce).
I want to copy all the tables to another cluster running
0.98.1-cdh5.1.0 with Hadoop 2.3.0-cdh5.1.0.
I would also like to specify the HDFS replication factor of the files in
the new cluster. Is that possible?
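Both clusters are on the 0.98 line, so the bundled CopyTable MapReduce job should be wire-compatible between them. Since the source side has no MapReduce framework, the job would presumably have to run on the CDH5 cluster, with its HBase configuration pointing at the source and --peer.adr at the destination quorum. A sketch (the ZooKeeper quorum and table name below are placeholders); the replication factor of newly written files normally comes from dfs.replication in the destination cluster's hdfs-site.xml:

```shell
# Run on the MapReduce-capable (CDH5) cluster.
# Placeholders: destination ZK quorum and the table name.
bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
  --peer.adr=dest-zk1,dest-zk2,dest-zk3:2181:/hbase \
  mytable
```

Export/Import (or a snapshot export) are alternative routes if a client-level copy is too slow for the data volume.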

dhamodharan.ramalingam | 28 Nov 07:03 2014

Unable to run map reduce in HBase

Hi, 

I am importing a CSV file into HBase using the command bin/hbase 
org.apache.hadoop.hbase.mapreduce.ImportTsv 

When I execute this MapReduce job I get the following 
error. I am using Hadoop 2.4.1 and HBase 0.98.8-hadoop2.

I have set export JAVA_OPTS="-Xms1024m -Xmx10240m" in .bashrc; the server 
has 32 GB of RAM.

2014-11-28 18:56:44,029 INFO [IPC Server listener on 56283] org.apache.hadoop.ipc.Server: IPC Server listener on 56283: starting
2014-11-28 18:56:44,031 FATAL [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:693)
        at org.apache.hadoop.ipc.Server.start(Server.java:2392)
        at org.apache.hadoop.mapred.TaskAttemptListenerImpl.startRpcServer(TaskAttemptListenerImpl.java:137)
        at org.apache.hadoop.mapred.TaskAttemptListenerImpl.serviceStart(TaskAttemptListenerImpl.java:107)
        at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
        at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
        at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1071)
        at 
(Continue reading)
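A note on the error above: "unable to create new native thread" is usually not a heap-size problem, so raising -Xmx will not help; a very large heap can even make it worse by leaving less native memory for thread stacks. It more often means the JVM hit an OS limit on threads for that user. A quick check, assuming Linux:

```shell
# Per-user process/thread limit for the current user ("unlimited" or a number).
ulimit -u

# System-wide ceiling on the number of threads.
cat /proc/sys/kernel/threads-max
```

If the user running the YARN daemons is near its nproc limit, raising it in /etc/security/limits.conf, or shrinking the per-thread stack with a smaller -Xss, is the usual fix.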

Aleks Laz | 27 Nov 22:27 2014

Newbie Question about 37TB binary storage on HBase

Dear All.

Hi Wilm ;-)

I have started this question on hadoop-user list.

https://mail-archives.apache.org/mod_mbox/hadoop-user/201411.mbox/%3C0dacebda87d76ce0b72f7c53f02464cb-922TnBLj4uE <at> public.gmane.org%3E

I hope you can help me.

We have been collecting a lot of binary data (JPEGs) since ~2012.
The size per file is currently ~1-5 MB, but this could change.

There are more than 41,055,670 files (the count is still running) in
~680 <ID> directories, with this hierarchy:

<MOUNT_ROOT>/cams/<ID>/<YEAR>/<MONTH>/<DAY>
e.g. <MOUNT_ROOT>/cams/<ID>/2014/11/19/

The binary data live in the directories below <DAY>, ~1000 files per
directory, on an xfs mount.

The pictures are essentially write-once: after they are saved to disk
they are seldom if ever changed.

Now that the platform is growing, we need to create a more
scalable setup.
(Continue reading)

Néstor Boscán | 27 Nov 18:33 2014

Using HBase Thrift API to move a number of rows

Hi

Is there a way to use the HBase Thrift scanner to skip ahead a number of
rows instead of reading them one by one? This would be very useful for paging.

Regards,

Néstor
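The Thrift 1 API has no server-side "skip N rows" call, but two things come close: scannerGetList fetches a whole page in one round trip, and remembering the last row key of a page lets the next scanner start from there. A sketch; host, port, and table name are placeholders, and a running Thrift server is assumed:

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

import org.apache.hadoop.hbase.thrift.generated.Hbase;
import org.apache.hadoop.hbase.thrift.generated.TRowResult;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class ThriftPagingSketch {
    public static void main(String[] args) throws Exception {
        TTransport transport = new TSocket("localhost", 9090); // placeholder host/port
        transport.open();
        Hbase.Client client = new Hbase.Client(new TBinaryProtocol(transport));

        // Open a scanner at the first row of the page; "" means table start.
        int scannerId = client.scannerOpen(
                ByteBuffer.wrap("mytable".getBytes("UTF-8")),   // placeholder table
                ByteBuffer.wrap("".getBytes("UTF-8")),
                new ArrayList<ByteBuffer>(),                    // empty = all columns
                new HashMap<ByteBuffer, ByteBuffer>());

        // One page (up to 100 rows) per round trip, instead of
        // calling scannerGet() row by row.
        List<TRowResult> page = client.scannerGetList(scannerId, 100);
        if (!page.isEmpty()) {
            // Keep the last row key; opening the next scanner just past it
            // is how "jumping" to the next page is usually done.
            ByteBuffer lastRow = page.get(page.size() - 1).row;
        }

        client.scannerClose(scannerId);
        transport.close();
    }
}
```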
dhamodharan.ramalingam | 27 Nov 14:13 2014

Unable to fetch data from HBase.

Hi,

I am using Hadoop 2.5.1 and HBase 0.98.8-hadoop2 in stand-alone mode. When I 
use the following client-side code

public static void main(final String[] args) {
        HTableInterface table = null;
        try {
                final HBaseManager tableManager = HBaseManager.getInstance();
                table = tableManager.getHTable(HBaseConstants.TABLE_EMP_DETAILS);
                final Scan scan = new Scan();
                final ResultScanner resultScanner = table.getScanner(scan);
                for (final Result result : resultScanner) {
                        LOG.debug("The Employee id : "
                                        + HBaseHelper.getValueFromResult(result,
                                                        HBaseConstants.COLUME_FAMILY,
                                                        HBaseConstants.EMP_ID));
(Continue reading)

dhamodharan.ramalingam | 27 Nov 09:29 2014

Re: Zookeeper shutting down.

Hi,

Please find the log from the master node. I am using hbase-0.94.12 and 
zookeeper-3.4.5.

2014-11-27 12:32:21,444 [myid:0] - INFO  [Thread-1:QuorumCnxManager$Listener@486] - My election bind port: 0.0.0.0/0.0.0.0:3888
2014-11-27 12:32:21,459 [myid:0] - INFO  [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:2181:QuorumPeer@670] - LOOKING
2014-11-27 12:32:21,461 [myid:0] - INFO  [QuorumPeer[myid=0]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@740] - New election. My id =  0, proposed zxid=0x40
2014-11-27 12:32:21,464 [myid:0] - INFO  [WorkerReceiver[myid=0]:FastLeaderElection@542] - Notification: 0 (n.leader), 0x40 (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x1 (n.peerEPoch), LOOKING (my state)
2014-11-27 12:32:21,468 [myid:0] - WARN  [WorkerSender[myid=0]:QuorumCnxManager@368] - Cannot open channel to 1 at election address /172.10.195.299:3888
java.net.ConnectException: Connection refused
 at java.net.PlainSocketImpl.socketConnect(Native Method)
 at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
 at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
 at java.net.Socket.connect(Socket.java:579)
(Continue reading)

Adam Wilhelm | 26 Nov 22:43 2014

Region Server Crashing with : IOE in log roller

We are running an 80 node cluster:
Hdfs version: 0.20.2-cdh3u5
Hbase version: 0.90.6-cdh3u5

The issue is that region servers crash infrequently. So far it has happened about
once a week, never on the same day or at the same time.

The error we are getting in RegionServer logs is:

2014-11-26 09:11:04,460 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server serverName=hd073.xxxxxxxx,60020,1407311682582, load=(requests=0, regions=227, usedHeap=9293, maxHeap=12250): IOE in log roller
java.io.IOException: cannot get log writer
        at org.apache.hadoop.hbase.regionserver.wal.HLog.createWriter(HLog.java:677)
        at org.apache.hadoop.hbase.regionserver.wal.HLog.createWriterInstance(HLog.java:624)
        at org.apache.hadoop.hbase.regionserver.wal.HLog.rollWriter(HLog.java:560)
        at org.apache.hadoop.hbase.regionserver.LogRoller.run(LogRoller.java:96)
Caused by: java.io.IOException: java.io.IOException: Call to %NAMENODE%:8020 failed on local
exception: java.io.IOException: Connection reset by peer
        at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.init(SequenceFileLogWriter.java:106)
        at org.apache.hadoop.hbase.regionserver.wal.HLog.createWriter(HLog.java:674)
        ... 3 more
Caused by: java.io.IOException: Call to %NAMENODE%:8020 failed on local exception:
java.io.IOException: Connection reset by peer
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:1187)
        at org.apache.hadoop.ipc.Client.call(Client.java:1155)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
        at $Proxy7.create(Unknown Source)
        at sun.reflect.GeneratedMethodAccessor46.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
(Continue reading)

Néstor Boscán | 26 Nov 12:12 2014

Invalid Comparator or Expected 4 or 6 but got: N using filter with Java Thrift API

Hi

I have tried to apply the following filters using the Java Thrift API, but I get
these errors in the Thrift server log:

SingleColumnValueFilter('familycolumn', 'column', =, 'value') =>
IllegalArgumentException: Invalid comparator
SingleColumnValueFilter('familycolumn', 'column', EQUAL, 'value') =>
IllegalArgumentException: Invalid comparator
SingleColumnValueFilter('familycolumn', 'column', 'EQUAL', 'value') =>
IllegalArgumentException: Invalid comparator
SingleColumnValueFilter('familycolumn', 'column',
CompareFilter::CompareOp.valueOf('EQUAL'), 'value') =>
IllegalArgumentException: Expected 4 or 6 but got: 3
SingleColumnValueFilter(Bytes.toBytes('familycolumn'),
Bytes.toBytes('column'), CompareFilter::CompareOp.valueOf('EQUAL'),
Bytes.toBytes('value')) => IllegalArgumentException: Expected 4 or 6 but
got: 1

I could not find an example of a SingleColumnValueFilter with the
Thrift Java API anywhere on the Internet.

Regards,

Néstor
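A likely cause, assuming the errors come from HBase's filter-language parser: in a filter string the comparator value must carry a type prefix, e.g. 'binary:value' rather than a bare 'value', and the operator is written as a symbol. A sketch using the Thrift 1 TScan filterString field (client setup omitted; family, column, and value are placeholders):

```java
import java.nio.ByteBuffer;
import org.apache.hadoop.hbase.thrift.generated.TScan;

public class FilterStringSketch {
    // Builds a TScan whose filter string follows the filter-language
    // grammar: operator as a symbol (=) and the comparator with a type
    // prefix ('binary:'), which is what the "Invalid comparator" check
    // appears to be complaining about.
    public static TScan employeeFilterScan() throws Exception {
        TScan scan = new TScan();
        scan.setFilterString(ByteBuffer.wrap(
                "SingleColumnValueFilter('familycolumn', 'column', =, 'binary:value')"
                        .getBytes("UTF-8")));
        return scan;
    }
}
```

The scan would then be passed to client.scannerOpenWithScan(tableName, scan, attributes).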
guxiaobo1982 | 26 Nov 02:57 2014

port conflict with hadoop 2.5.2 and hbase 0.99.1

Hi,


I tried to install a single-node hbase 0.99.1 with hadoop 2.5.2, but the region server failed with:


 
2014-11-26 09:05:42,594 INFO  [main] util.ServerCommandLine: vmInputArguments=[-Dproc_regionserver, -XX:OnOutOfMemoryError=kill -9 %p, -XX:+UseConcMarkSweepGC, -Dhbase.log.dir=/opt/hbase-0.99.1/bin/../logs, -Dhbase.log.file=hbase-xiaobogu-regionserver-lix3.bh.com.log, -Dhbase.home.dir=/opt/hbase-0.99.1/bin/.., -Dhbase.id.str=xiaobogu, -Dhbase.root.logger=INFO,RFA, -Dhbase.security.logger=INFO,RFAS]
2014-11-26 09:05:42,956 INFO  [main] regionserver.RSRpcServices: regionserver/lix3.bh.com/192.168.100.5:16020 server-side HConnection retries=350
2014-11-26 09:05:43,240 INFO  [main] ipc.SimpleRpcScheduler: Using deadline as user call queue, count=3
2014-11-26 09:05:43,252 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: Failed construction of Regionserver: class org.apache.hadoop.hbase.regionserver.HRegionServer
	at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2443)
	at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:64)
(Continue reading)
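In the 0.99.x development releases the master itself also runs region-server services, so on a single node it already binds the default region-server ports (16020/16030); a separately started region server on the same host then fails to bind. Assuming the truncated stack trace ends in a BindException, one workaround sketch is to give the extra region server its own ports in its hbase-site.xml (port numbers below are arbitrary), or simply to run plain stand-alone mode and let the master's built-in region server serve the data:

```xml
<!-- hbase-site.xml for the additional region server (sketch; ports arbitrary) -->
<property>
  <name>hbase.regionserver.port</name>
  <value>16120</value>
</property>
<property>
  <name>hbase.regionserver.info.port</name>
  <value>16130</value>
</property>
```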

Sunil B | 26 Nov 01:09 2014

HBase 0.98.1 Put operations never timeout

Hi All,

     I am using version 0.98.1 of the HBase server and client. My application
has strict response-time requirements. As far as HBase is concerned, I
would like to abort an HBase operation if its execution exceeds 1 or 2
seconds. Such a timeout is useful when a region server is unresponsive
or has crashed.

     I tried configuring
        1) HBASE_RPC_TIMEOUT_KEY = "hbase.rpc.timeout";
        2) HBASE_CLIENT_RETRIES_NUMBER = "hbase.client.retries.number";

     However, the Put operations never time out (I am using sync flush). The
operations return only after the Put is successful.

    I looked through the code and found that the function
receiveGlobalFailure in the AsyncProcess class keeps resubmitting the task
without any check on the retries. This is in version 0.98.1.

    I do see that in 0.99.1 there have been some changes to AsyncProcess
class that might do what I want. I have not verified it though.

    My questions are:
        1) Is there any other configuration that I missed that can give me
the desired functionality?
        2) Do I have to use the 0.99.1 client to solve my problem? Does
0.99.1 solve my problem?
        3) If I have to use the 0.99.1 client, do I also have to use a
0.99.1 server, or can I keep my existing 0.98.1 region servers?

(Continue reading)
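For reference, the client-side settings usually combined for fail-fast behavior are sketched below; the values are illustrative, not recommendations. Whether the 0.98.1 client actually honors them for Puts is exactly the question raised above; later 0.98.x client releases reworked AsyncProcess, so trying a newer 0.98 client before jumping to 0.99 may be worth it (an assumption, not verified here).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class FailFastConfigSketch {
    // Returns a client Configuration tuned to give up quickly.
    // Values are illustrative only.
    public static Configuration create() {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.rpc.timeout", 2000);              // per-RPC cap (ms)
        conf.setInt("hbase.client.operation.timeout", 2000); // whole-operation cap (ms)
        conf.setInt("hbase.client.retries.number", 1);       // fail fast
        conf.setInt("hbase.client.pause", 50);               // base retry backoff (ms)
        return conf;
    }
}
```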

