Andrew Purtell | 1 Apr 19:53 2015

Please welcome new HBase committer Jing Chen (Jerry) He

On behalf of the Apache HBase PMC, I am pleased to announce that Jerry He
has accepted the PMC's invitation to become a committer on the project. We
appreciate all of Jerry's hard work and generous contributions thus far,
and look forward to his continued involvement.

Congratulations and welcome, Jerry!

-- 

Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)
Andrew Purtell | 1 Apr 19:53 2015

Please welcome new HBase committer Srikanth Srungarapu

On behalf of the Apache HBase PMC, I am pleased to announce that Srikanth
Srungarapu has accepted the PMC's invitation to become a committer on the
project. We appreciate all of Srikanth's hard work and generous
contributions thus far, and look forward to his continued involvement.

Congratulations and welcome, Srikanth!

-- 

Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)
Enrico Olivelli - Diennea | 1 Apr 15:35 2015

Upgrade client to 1.0.0 - Dependency on legacy Guava Version ?

Hi all,
I'm going to upgrade my Java client from version 0.94 to version 1.0.0.
The new 1.0.0 client needs version 15.0 of Guava (due to a dependency on a deprecated constructor of
Stopwatch,
http://docs.guava-libraries.googlecode.com/git/javadoc/com/google/common/base/Stopwatch.html
), while I'm using the latest version.
This is a real showstopper for me.
Any suggestions?
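A common workaround for this kind of clash (a sketch only, not verified against this exact setup) is to shade and relocate one copy of Guava so the two versions can coexist, e.g. relocating the application's own Guava with the Maven shade plugin. The `myapp.shaded` prefix below is a made-up example:

```xml
<!-- Sketch: relocate the application's Guava classes so they no longer
     conflict with the Guava version pulled in by hbase-client. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.3</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>myapp.shaded.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

The application code then compiles against the newer Guava, which is rewritten to the `myapp.shaded` package at package time, while hbase-client keeps its own unshaded copy on the classpath.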

Enrico Olivelli
Software Development Manager @ Diennea
Tel.: (+39) 0546 066100 - Int. 925
Viale G.Marconi 30/14 - 48018 Faenza (RA)

MagNews - E-mail Marketing Solutions
http://www.magnews.it
Diennea - Digital Marketing Solutions
http://www.diennea.com

James Teng | 31 Mar 09:09 2015

Phoenix client connecting to hbase failure.

Hi all,
I have encountered a problem connecting to HBase via the Phoenix client; below is the debug log I got from the
client side. The most notable error should be:

retrying after sleep of 10013 because: No server address listed in hbase:meta for region
SYSTEM.CATALOG,,1427426385032.7e46009a444329769d8e851cb5900006. containing row

But I have no idea how this happens on my laptop. Any comment on this is appreciated. Thanks,
uknow.

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
 INFO [main] (Configuration.java:840) - dfs.df.interval is deprecated. Instead, use fs.df.interval
 INFO [main] (Configuration.java:840) - hadoop.native.lib is deprecated. Instead, use io.native.lib.available
 INFO [main] (Configuration.java:840) - fs.default.name is deprecated. Instead, use fs.defaultFS
 INFO [main] (Configuration.java:840) - topology.script.number.args is deprecated. Instead, use net.topology.script.number.args
 INFO [main] (Configuration.java:840) - dfs.umaskmode is deprecated. Instead, use fs.permissions.umask-mode
 INFO [main] (Configuration.java:840) - topology.node.switch.mapping.impl is deprecated. Instead, use net.topology.node.switch.mapping.impl
DEBUG [main] (MutableMetricsFactory.java:42) - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, value=[Rate of successful kerberos logins and latency (milliseconds)], valueName=Time, always=false, type=DEFAULT, sampleName=Ops)
DEBUG [main] (MutableMetricsFactory.java:42) - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, value=[Rate of failed kerberos logins and latency (milliseconds)], valueName=Time, always=false, type=DEFAULT, sampleName=Ops)
DEBUG [main] (MetricsSystemImpl.java:220) - UgiMetrics, User and group related metrics
DEBUG [main] (Groups.java:180) - Creating new Groups object
DEBUG [main] (NativeCodeLoader.java:46) - Trying to load the custom-built native-hadoop library...

Arun Mishra | 31 Mar 05:29 2015

Copy from CDH5 to CDH4

Hello hbase users,

I have a requirement to migrate data between CDH4 and CDH5. To migrate data from CDH4 to CDH5, I am using
the HBase export -> distcp -> import tools, and that works very well. But the same doesn't work from CDH5 to CDH4. In
the mapreduce task logs, I see the exception below.

java.lang.NegativeArraySizeException
	at org.apache.hadoop.hbase.client.Result.readFields(Result.java:464)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:73)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:44)
	at org.apache.hadoop.io.SequenceFile$Reader.deserializeValue(SequenceFile.java:2180)
	at org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2153)
	at org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.nextKeyValue(SequenceFileRecordReader.java:74)
	at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:458)
	at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:76)
	at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:85)
	at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:139)
	at org.apache.hadoop.mapred.MapTask.runNewMapper

Has anyone tried copying data from CDH5 (0.98) to CDH4 (0.92)? Any advice is appreciated. Thanks.

- arun
Ted Yu | 28 Mar 16:28 2015

Re: Zookeeper Error

bq. zookeeper.ClientCnxn: Opening socket connection to server hbase.local/
192.168.15.20:2181

Looks like the connection attempt was to 192.168.15.20 instead of 192.168.1.101
(value for hbase.zookeeper.quorum)

Can you double check config on these two machines ?
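For reference, a minimal client-side hbase-site.xml fragment would look like this (values are examples taken from the settings quoted below):

```xml
<!-- Example only: the quorum should name the actual ZooKeeper host(s). -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>192.168.1.101</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
```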

Cheers

On Wed, Mar 25, 2015 at 2:00 AM, Ali Rahmani <ali.ielts99@...> wrote:

> Hi guys,
> I've tried to install HBase 0.98 on a CentOS Linux server. I tried different
> settings but always get one specific ZooKeeper error message. My final
> configuration is as follows:
> /etc/hosts:
> 127.0.0.1 localhost
> 192.168.1.101 node1
> -----------------------------------------
> hbase-site.xml:
> <property>
>     <name>hbase.rootdir</name>
>     <value>file:///hadoop/hbase/hbase-0.98.11-hadoop2/data</value>
>   </property>
>   <property>
>     <name>hbase.cluster.distributed</name>
>     <value>true</value>
>   </property>
>   <property>

Abraham Tom | 28 Mar 03:03 2015

Thrift reverse scan out of order exception

Every so often, using the reverse key scan on the Thrift API seems to throw
an out-of-order exception.
We have even isolated it to a specific record.
What is surprising to us is that when we make our start row the very next
record before it, it scans correctly. Of course that is not feasible by
any means.

Thoughts on this?

-- 
Abraham Tom
Email:   work2much@...
Phone:  415-515-3621
Andrew Mains | 27 Mar 19:53 2015

Multiple scans for mapreduce over snapshots

Hi all,

We're looking into using TableSnapshotInputFormat on a salted table, and 
we need to push down conditions on the rest of the rowkey to each bucket 
(using hive with my patch for 
https://issues.apache.org/jira/browse/HIVE-7805). MultiTableInputFormat 
allows us to do this on HBase proper, but it seems like this isn't yet 
supported on snapshots. There's nothing in either Google or JIRA 
discussing such a feature as far as I can tell, so I thought I'd ask here:

Would it be reasonable for HBase to support an equivalent of 
`MultiTableInputFormat` over snapshots? Is there a better alternative 
that we should be using instead?
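For concreteness, the per-bucket pushdown amounts to prefixing the same key range with each salt byte, yielding one scan range per bucket. A minimal plain-Java sketch (class and method names here are made up; assumes a one-byte salt prefix):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SaltedScanRanges {

    // One (startRow, stopRow) pair per salt bucket: the same key range,
    // prefixed with each possible one-byte salt value.
    static List<byte[][]> saltedRanges(int buckets, byte[] start, byte[] stop) {
        List<byte[][]> ranges = new ArrayList<byte[][]>();
        for (int b = 0; b < buckets; b++) {
            ranges.add(new byte[][] { prefix((byte) b, start), prefix((byte) b, stop) });
        }
        return ranges;
    }

    // Prepend the salt byte to a key.
    static byte[] prefix(byte salt, byte[] key) {
        byte[] out = new byte[key.length + 1];
        out[0] = salt;
        System.arraycopy(key, 0, out, 1, key.length);
        return out;
    }

    public static void main(String[] args) {
        // Condition on the unsalted part of the rowkey: ["row100", "row200")
        for (byte[][] range : saltedRanges(4, "row100".getBytes(), "row200".getBytes())) {
            System.out.println(Arrays.toString(range[0]) + " -> " + Arrays.toString(range[1]));
        }
    }
}
```

On HBase proper, each pair becomes a Scan (setStartRow/setStopRow, plus the Scan.SCAN_ATTRIBUTES_TABLE_NAME attribute) handed to MultiTableInputFormat; the question above is whether an equivalent list of scans could drive a snapshot-backed input format.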

Thanks!

Andrew

James Teng | 27 Mar 08:47 2015

Can't connect to hbase via phoenix client

Hi,
I tried to connect to my locally deployed HBase with the code below, but I can't connect to it successfully. I
attach the log data I can see on the HBase side.
If anyone can give me any tip on this, it will be much appreciated. Thanks in advance,
uknow.

----------- java codes ----------------

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class Test {
    public static void main(String args[]) throws SQLException {
        try {
            Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            return;
        }
        Connection connection = DriverManager
                .getConnection("jdbc:phoenix:localhost:2181");
        System.out.println("step 1 ...");
        Statement statement = connection.createStatement();
        ResultSet set = statement.executeQuery("select * from user");
        while (set.next()) {
            System.out.println(set.getObject(1));
        }
        connection.close();
    }
}

--------------------- hbase logs --------------------

2015-03-27 15:45:27,144 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxnFactory: Accepted socket connection from /0:0:0:0:0:0:0:1:56886
2015-03-27 15:45:27,150 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.ZooKeeperServer: Client attempting to establish new session at /0:0:0:0:0:0:0:1:56886
2015-03-27 15:45:27,157 INFO  [SyncThread:0] server.ZooKeeperServer: Established session 0x14c59bbda3c003d with negotiated timeout 40000 for client /0:0:0:0:0:0:0:1:56886
2015-03-27 15:45:32,933 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxnFactory: Accepted socket connection from /fe80:0:0:0:0:0:0:1%1:56887
2015-03-27 15:45:32,933 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.ZooKeeperServer: Client attempting to establish new session at /fe80:0:0:0:0:0:0:1%1:56887
2015-03-27 15:45:32,942 INFO  [SyncThread:0] server.ZooKeeperServer: Established session 0x14c59bbda3c003e with negotiated timeout 40000 for client /fe80:0:0:0:0:0:0:1%1:56887
James Teng | 27 Mar 04:47 2015

HBase Error -- Can not scan table anymore.

Hi,
I have deployed HBase on my laptop in pseudo-distributed mode, but suddenly encountered an error
and can no longer scan an HBase table. Could anyone offer any comments on this? Thanks in advance,
uknow.

2015-03-27 11:35:00,152 WARN  [CatalogJanitor-10.128.120.35:16020] master.CatalogJanitor: Failed scan of catalog table
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=351, exceptions:
Fri Mar 27 11:35:00 CST 2015, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68068: row '' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=10.128.120.35,16201,1427426329960, seqNum=0
	at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:264)
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:199)
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:56)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
	at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:287)
	at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:267)
	at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:139)
	at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:134)
	at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:823)
	at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:187)
	at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:89)
	at org.apache.hadoop.hbase.master.CatalogJanitor.getMergedRegionsAndSplitParents(CatalogJanitor.java:169)
	at org.apache.hadoop.hbase.master.CatalogJanitor.getMergedRegionsAndSplitParents(CatalogJanitor.java:121)
	at org.apache.hadoop.hbase.master.CatalogJanitor.scan(CatalogJanitor.java:222)
	at org.apache.hadoop.hbase.master.CatalogJanitor.chore(CatalogJanitor.java:103)
	at org.apache.hadoop.hbase.Chore.run(Chore.java:80)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68068: row '' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=10.128.120.35,16201,1427426329960, seqNum=0
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:294)

Ted Tuttle | 26 Mar 20:18 2015

master consumes large amount of CPU for days

Hello-

Our master process started consuming a large amount of CPU (75% of the box) several days back and hasn't
stopped. I have 2 questions:

	1) What is it doing? (stack dump and log below)
	2) Is it safe to restart the master without taking the whole cluster down?

Master stack dump:

                http://pastebin.com/G0iNNEpC

Master log from last 15 mins or so:

                http://pastebin.com/WQNjhFGf

Thanks,
Ted

