Tianying Chang | 27 Jun 22:14 2016

Does 1.2 support p999 metrics?

Hi,

We want to expose p999 metrics for latency, e.g. Get, Put, and RPC latency. It
seems it only supports up to p99. Is p999 only supported from 1.3 on? If so,
is there any patch to port into 1.2 to enable this?

Thanks
Tian-Ying
M. BagherEsmaeily | 26 Jun 11:20 2016

Delete row that has columns with future timestamp

Hello
I use HBase version 0.98.9-hadoop1 with Hadoop version 1.2.1. When I
delete a row that has columns with a future timestamp, the delete has no
effect and the row still survives.

For example, when I put a row with a future timestamp:
Put p = new Put(Bytes.toBytes("key1"));
p.add(Bytes.toBytes("C"), Bytes.toBytes("q1"), 2000000000000L,
Bytes.toBytes("test-val"));
table.put(p);

After the put, when I scan my table, the result is:
ROW     COLUMN+CELL
key1    column=C:q1, timestamp=2000000000000, value=test-val

When I delete this row with the following code:
Delete d = new Delete(Bytes.toBytes("key1"));
table.delete(d);

Or with this code:
Delete d = new Delete(Bytes.toBytes("key1"), Long.MAX_VALUE);
table.delete(d);

After either delete, the result of a scan is:
ROW     COLUMN+CELL
key1    column=C:q1, timestamp=2000000000000, value=test-val

And the raw scan result is:
ROW     COLUMN+CELL
key1    column=C:, timestamp=1466931500501, type=DeleteFamily
(Continue reading)
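A hedged workaround sketch (untested, against the 0.98 client API): place the delete at an explicit timestamp that covers the future cell. Note that Long.MAX_VALUE is HConstants.LATEST_TIMESTAMP, which the server appears to rewrite to the current time (consistent with the DeleteFamily marker's timestamp in the raw scan above), so that marker sorts below the future cell and masks nothing:

```java
// Hedged sketch: put the delete marker at (or above) the future cell's own
// timestamp so it actually masks the cell. In the 0.98 API,
// deleteColumns(family, qualifier, ts) deletes all versions with
// timestamp <= ts.
Delete d = new Delete(Bytes.toBytes("key1"));
d.deleteColumns(Bytes.toBytes("C"), Bytes.toBytes("q1"), 2000000000000L);
table.delete(d);
```

Until a major compaction removes both the marker and the cell, a raw scan should show the marker sitting at the future timestamp.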

fateme Abiri | 24 Jun 06:18 2016

unsubscribe me please from this mailing list


unsubscribe me please from this mailing list
Thanks & Regards

Fateme Abiri
Software Engineer
M.Sc. Degree
Ferdowsi University of Mashhad, Iran

Ted Yu | 23 Jun 06:50 2016

Re: HostAndWeight

YQ:
The HostAndWeight is basically a tuple.
In getTopHosts(), hosts are retrieved.
In getWeight(String host), weight is retrieved.

Why do you think a single Long is enough?

Cheers

On Wed, Jun 22, 2016 at 9:28 PM, ramkrishna vasudevan <
ramkrishna.s.vasudevan@...> wrote:

> Hi WangYQ,
>
> For code-related suggestions, if you feel there is an improvement or bug, it
> is preferable to raise a JIRA and give a patch. Please feel free to raise a
> JIRA with your suggestion and explain why you plan to change it.
>
> Regards
> Ram
>
> On Thu, Jun 23, 2016 at 9:36 AM, WangYQ <wangyongqiang0617@...> wrote:
>
> >
> > there is a class named "HDFSBlockDistribution" that uses a tree map
> > "hostAndWeight" to store data:
> > private Map<String, HostAndWeight> hostAndWeight
> >
> > I think we can use
> > private Map<String, Long> hostAndWeight
(Continue reading)

WangYQ | 23 Jun 06:06 2016

(no subject)


there is a class named "HDFSBlockDistribution" that uses a tree map "hostAndWeight" to store data:
private Map<String, HostAndWeight> hostAndWeight

I think we can use
private Map<String, Long> hostAndWeight
to store data
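A hypothetical, self-contained sketch (plain Java collections, not HBase's actual class) of what the suggested Map<String, Long> shape would look like. Since the host name is already the map key, both getWeight(host) and getTopHosts() from the existing API can be served from it; HBase's HostAndWeight tuple may carry more state than this sketch assumes:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

// Hypothetical sketch, not HBase's implementation.
class BlockDistributionSketch {
    // host name -> accumulated block weight for that host
    private final Map<String, Long> hostAndWeight = new TreeMap<>();

    public void addHostAndBlockWeight(String host, long weight) {
        hostAndWeight.merge(host, weight, Long::sum);
    }

    public long getWeight(String host) {
        return hostAndWeight.getOrDefault(host, 0L);
    }

    // Hosts ordered by descending total weight.
    public List<String> getTopHosts() {
        return hostAndWeight.entrySet().stream()
            .sorted(Map.Entry.<String, Long>comparingByValue(Comparator.reverseOrder()))
            .map(Map.Entry::getKey)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        BlockDistributionSketch d = new BlockDistributionSketch();
        d.addHostAndBlockWeight("host-a", 100L);
        d.addHostAndBlockWeight("host-b", 300L);
        d.addHostAndBlockWeight("host-a", 50L);
        System.out.println(d.getWeight("host-a")); // prints 150
        System.out.println(d.getTopHosts());       // prints [host-b, host-a]
    }
}
```

Whether this is an improvement depends on whether HostAndWeight is ever needed apart from its map entry, which is the question a JIRA discussion would settle.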

thanks
vishnu rao | 22 Jun 07:48 2016

after server restart - getting exception - java.io.IOException: Timed out waiting for lock for row

Need some help; this has happened on 2 of my servers.
-------------

[B.defaultRpcServer.handler=2,queue=2,port=16020] regionserver.HRegion:
Failed getting lock in batch put,
row=a\xF7\x1D\xCBdR\xBC\xEC_\x18D>\xA2\xD0\x95\xFF

java.io.IOException: Timed out waiting for lock for row:
a\xF7\x1D\xCBdR\xBC\xEC_\x18D>\xA2\xD0\x95\xFF
    at org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:5051)
    at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2944)
    at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2801)
    at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2743)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:692)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:654)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2031)
(Continue reading)
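For reference, a hedged hbase-site.xml sketch: the wait behind this exception is bounded by hbase.rowlock.wait.duration (default 30000 ms). Raising it can paper over transient contention right after a restart, but sustained timeouts usually mean many writers hitting the same hot row, which a longer wait will not fix:

```xml
<!-- Hypothetical tuning sketch, not a recommendation: how long a handler
     waits for a row lock before failing with the IOException above.
     Default is 30000 ms. -->
<property>
  <name>hbase.rowlock.wait.duration</name>
  <value>60000</value>
</property>
```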

jinhong lu | 22 Jun 06:39 2016

hbase get big table problem

I have a cluster of 200 regionservers, and one of the tables is about 3 TB and 5 billion rows. Is it
possible to get about 8000 Gets per second (about 100,000 rows)?

I found that YOUNG GC occurs every few seconds, and each GC costs about 1 second. If I set -Xmn bigger, the GC
occurs every few minutes, but each GC costs more time.

Any suggestion? Thanks.
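For what it's worth, a hedged hbase-env.sh sketch of the trade-off described above (all values illustrative only; the right numbers depend on heap size and workload). The usual approach is to pin the young-generation size, enable GC logging, and measure rather than guess:

```shell
# Illustrative only: CMS with an explicit young generation and GC logging,
# typical for read-heavy regionservers on JDK 7/8.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
  -Xms16g -Xmx16g -Xmn1g \
  -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
  -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly \
  -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/hbase/gc.log"
```

A larger -Xmn trades young GC frequency for pause length; also check that block cache and memstore settings leave enough old-generation headroom.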

=====================
Thanks,
lujinhong

ps40 | 19 Jun 03:29 2016

Hbase release notes and class/package changes

Hi Guys,

I'm trying to upgrade the Apache Lily project from HBase 0.94 to 0.98.
Apache Lily uses some HBase classes which have changed or disappeared
between these HBase versions. For example, there used to be an enum
org.apache.hadoop.hbase.regionserver.StoreFile.BloomType, which became a
standalone enum in 0.95: org.apache.hadoop.hbase.regionserver.BloomType

In some cases the classes have disappeared entirely. For example:
org.apache.hadoop.hbase.ipc.CoprocessorProtocol

Is this stuff documented somewhere? Is there an upgrade guide from 0.94 to
0.95?

The closest analogy I have is the Spring 3 to Spring 4 upgrade guide, which
went through such changes in detail.

I have already looked at the CHANGES file, which contains a list of bug fixes
in the release but not these changes related to classes/packages.

Please let me know either way.

Thanks
Prashant

--
View this message in context: http://apache-hbase.679495.n3.nabble.com/Hbase-release-notes-and-class-package-changes-tp4080743.html
Sent from the HBase User mailing list archive at Nabble.com.

(Continue reading)

Andrew Purtell | 20 Jun 07:20 2016

[ANNOUNCE] Apache HBase 0.98.20 is now available for download

Apache HBase 0.98.20 is now available for download. Get it from an Apache
mirror [1] or Maven repository. The list of changes in this release can be
found in the release notes [2] or at the bottom of this announcement.

Thanks to all who contributed to this release.

Best,
The HBase Dev Team

1. http://www.apache.org/dyn/closer.lua/hbase/
2. https://s.apache.org/5f48

HBASE-4368  Expose processlist in shell (per regionserver and perhaps by
cluster)
HBASE-13532 Make UnknownScannerException logging less scary
HBASE-14818 user_permission does not list namespace permissions
HBASE-15292 Refined ZooKeeperWatcher to prevent ZooKeeper's callback while
construction
HBASE-15465 userPermission returned by getUserPermission() for the selected
namespace does not have namespace set
HBASE-15615 Wrong sleep time when RegionServerCallable need retry
HBASE-15617 Canary in regionserver mode might not enumerate all
regionservers
HBASE-15634 TestDateTieredCompactionPolicy#negativeForMajor is flaky
HBASE-15676 FuzzyRowFilter fails and matches all the rows in the table if
the mask consists of all 0s
HBASE-15686 Add override mechanism for the exempt classes when dynamically
loading table coprocessor
HBASE-15693 Reconsider the ImportOrder rule of checkstyle
(Continue reading)

Chathuri Wimalasena | 18 Jun 17:29 2016

Unusual log generation in Hadoop namenode logs

Hi,

I'm using HBase 0.94-23 with Hadoop 2.7.2. In my Hadoop namenode log file,
I'm getting the log message below over and over again. No jobs are running,
yet the same log keeps appearing. When I do run a Hadoop job, some jobs
fail with a timeout. The cluster has 10 data nodes and 200 TB of data.
When I stop HBase, the logs no longer appear.

16/06/18 11:23:51 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
updated: 10.10.2.1:44949 is added to
blk_1077143665_3402990{UCState=UNDER_CONSTRUCTION, truncateBlock=null,
primaryNodeIndex=-1,
replicas=[ReplicaUC[[DISK]DS-ab36f47b-c40a-4561-b4ad-8f7258257691:NORMAL:10.10.2.6:44949|RBW],
ReplicaUC[[DISK]DS-48499fec-79cd-48df-943a-4e5011dc9f9a:NORMAL:10.10.2.1:44949|RBW],
ReplicaUC[[DISK]DS-3f5fb044-538f-4977-a24d-349c8496e77d:NORMAL:10.10.2.3:44949|RBW]]}
size 0
16/06/18 11:23:51 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap
updated: 10.10.2.6:44949 is added to
blk_1077143665_3402990{UCState=UNDER_CONSTRUCTION, truncateBlock=null,
primaryNodeIndex=-1,
replicas=[ReplicaUC[[DISK]DS-ab36f47b-c40a-4561-b4ad-8f7258257691:NORMAL:10.10.2.6:44949|RBW],
ReplicaUC[[DISK]DS-48499fec-79cd-48df-943a-4e5011dc9f9a:NORMAL:10.10.2.1:44949|RBW],
ReplicaUC[[DISK]DS-3f5fb044-538f-4977-a24d-349c8496e77d:NORMAL:10.10.2.3:44949|RBW]]}
size 0
16/06/18 11:23:51 INFO hdfs.StateChange: BLOCK* allocate
blk_1077143674_3402999{UCState=UNDER_CONSTRUCTION, truncateBlock=null,
primaryNodeIndex=-1,
replicas=[ReplicaUC[[DISK]DS-8af58951-0eb7-4244-b76a-9356ad832a93:NORMAL:10.10.2.6:44949|RBW],
ReplicaUC[[DISK]DS-6095b306-487e-48d7-948e-84ac5ef16aa4:NORMAL:10.10.2.9:44949|RBW],
ReplicaUC[[DISK]DS-9b2f64f6-bbac-4ed5-a1f9-9366129d196b:NORMAL:10.10.2.1:44949|RBW]]}
(Continue reading)

Bryan Beaudreault | 18 Jun 01:15 2016

Scan.setMaxResultSize and Result.isPartial

Hello,

We are running 1.2.0-cdh5.7.0 on our server side, and 1.0.0-cdh5.4.5 on the
client side. We're in the process of upgrading the client, but aren't there
yet. I'm trying to figure out the relationship of Result.isPartial and the
user, when setMaxResultSize is used.

I've done a little reading of the code, and it looks like isPartial is
mostly used by the internals of ClientScanner. From what I can tell the
user should never get a Result where isPartial == true, because the
ClientScanner will do multiple requests internally to flesh out incomplete
rows.

However, the code is a bit complex so I'd like to verify. Is this correct
for either version of HBase above? Is it safe to use setMaxResultSize
without any more work, or should we be handling the potential isPartial()
Result ourselves in every scan request we make?

I wonder if this should be added to the docs either way (I didn't see it),
or whether isPartial should be removed from the public API in future versions?
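As a defensive pattern, here is a hedged sketch (isPartial() and the allowPartialResults flag come from the 1.1-era client API, so availability on a 1.0.0-cdh client is not guaranteed) that makes the "never partial" assumption explicit instead of silent:

```java
// Hedged sketch: with allowPartialResults left at its default (false),
// ClientScanner is expected to stitch partial rows together internally,
// so the isPartial() branch below should never fire; checking it surfaces
// any violation of that assumption.
Scan scan = new Scan();
scan.setMaxResultSize(2L * 1024 * 1024); // caps bytes per RPC, not per Result
try (ResultScanner scanner = table.getScanner(scan)) {
    for (Result result : scanner) {
        if (result.isPartial()) {
            throw new IllegalStateException("Unexpected partial Result for row "
                + Bytes.toStringBinary(result.getRow()));
        }
        // process the complete row
    }
}
```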

Thanks!
