Apache Hudson Server | 1 Jul 01:03 2009

Hudson build is back to normal: HBase-Patch #689

See http://hudson.zones.apache.org/hudson/job/HBase-Patch/689/changes

ryan rawson (JIRA | 1 Jul 01:24 2009

[jira] Assigned: (HBASE-1218) Implement in-memory column


     [
https://issues.apache.org/jira/browse/HBASE-1218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ryan rawson reassigned HBASE-1218:
----------------------------------

    Assignee: ryan rawson  (was: Jonathan Gray)

> Implement in-memory column
> --------------------------
>
>                 Key: HBASE-1218
>                 URL: https://issues.apache.org/jira/browse/HBASE-1218
>             Project: Hadoop HBase
>          Issue Type: New Feature
>            Reporter: stack
>            Assignee: ryan rawson
>             Fix For: 0.20.0
>
>
> HCD already talks of in-memory columns; it's just not implemented.  With HFile this should now be possible -- if set, read in the whole storefile and serve from the block buffer.

--

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


stack (JIRA | 1 Jul 01:30 2009

[jira] Assigned: (HBASE-1583) Start/Stop of large cluster untenable


     [
https://issues.apache.org/jira/browse/HBASE-1583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack reassigned HBASE-1583:
----------------------------

    Assignee: stack

> Start/Stop of large cluster untenable
> -------------------------------------
>
>                 Key: HBASE-1583
>                 URL: https://issues.apache.org/jira/browse/HBASE-1583
>             Project: Hadoop HBase
>          Issue Type: Bug
>            Reporter: stack
>            Assignee: stack
>             Fix For: 0.20.0
>
>
> Starting and stopping a loaded large cluster is way too flaky and takes too long.  This is 0.19.x but the same issues apply to TRUNK I'd say.
> At pset with our > 100 nodes carrying 6k regions:
> + shutdown takes way too long... maybe ten minutes or so.  We compact regions inline with shutdown.  We should just go down.  It doesn't seem like all regionservers go down every time either.
> + startup is a mess with us assigning out regions and rebalancing at the same time.  By the time the compactions on open run, it can be near an hour before the whole thing settles down and becomes usable

--


stack (JIRA | 1 Jul 01:30 2009

[jira] Updated: (HBASE-1574) Client and server APIs to do batch deletes.


     [
https://issues.apache.org/jira/browse/HBASE-1574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-1574:
-------------------------

    Fix Version/s:     (was: 0.20.0)
                   0.21.0

Moving to 0.21

> Client and server APIs to do batch deletes.
> -------------------------------------------
>
>                 Key: HBASE-1574
>                 URL: https://issues.apache.org/jira/browse/HBASE-1574
>             Project: Hadoop HBase
>          Issue Type: Bug
>    Affects Versions: 0.20.0
>            Reporter: ryan rawson
>             Fix For: 0.21.0
>
>
> In 880 there is no way to do a batch delete (anymore?).  We should add one back in.

--


Jonathan Gray (JIRA | 1 Jul 02:08 2009

[jira] Commented: (HBASE-1218) Implement in-memory column


    [
https://issues.apache.org/jira/browse/HBASE-1218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12725856#action_12725856
] 

Jonathan Gray commented on HBASE-1218:
--------------------------------------

The LRU in trunk now supports in-memory blocks.

In HFile.Reader.readBlock(int) we need to change cache.cacheBlock(blockName, blockBuf) to cache.cacheBlock(blockName, blockBuf, boolean inMemory).

HFile.Reader gets instantiated from within a Store, so it should be easy enough to pass something in from the family configuration there.

I strongly recommend lazy-loading, no pre-fetching of blocks.  That's how they do it in Bigtable.  If you want to warm the blocks, just scan the table.
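[A minimal sketch of the overload described above: an access-ordered LRU that carries an in-memory flag per block and declines to evict flagged blocks. This is an illustration under assumed names, not the actual trunk LruBlockCache code.]

```java
import java.nio.ByteBuffer;
import java.util.LinkedHashMap;
import java.util.Map;

class LruBlockCacheSketch {
    private final int maxBlocks;
    private final Map<String, CachedBlock> map;

    static class CachedBlock {
        final ByteBuffer buf;
        final boolean inMemory;
        CachedBlock(ByteBuffer buf, boolean inMemory) {
            this.buf = buf;
            this.inMemory = inMemory;
        }
    }

    LruBlockCacheSketch(int maxBlocks) {
        this.maxBlocks = maxBlocks;
        // Access-ordered so iteration starts at the least recently used entry.
        this.map = new LinkedHashMap<String, CachedBlock>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, CachedBlock> eldest) {
                // Over capacity: evict the eldest, unless it is flagged in-memory.
                // (A real cache would scan past pinned blocks; kept simple here.)
                return size() > LruBlockCacheSketch.this.maxBlocks
                        && !eldest.getValue().inMemory;
            }
        };
    }

    // Existing two-arg form: defaults to a regular, evictable block.
    void cacheBlock(String blockName, ByteBuffer blockBuf) {
        cacheBlock(blockName, blockBuf, false);
    }

    // Proposed three-arg form carrying the family's in-memory flag.
    void cacheBlock(String blockName, ByteBuffer blockBuf, boolean inMemory) {
        map.put(blockName, new CachedBlock(blockBuf, inMemory));
    }

    ByteBuffer getBlock(String blockName) {
        CachedBlock b = map.get(blockName);
        return b == null ? null : b.buf;
    }

    int size() { return map.size(); }
}
```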

> Implement in-memory column
> --------------------------
>
>                 Key: HBASE-1218
>                 URL: https://issues.apache.org/jira/browse/HBASE-1218
>             Project: Hadoop HBase
>          Issue Type: New Feature
>            Reporter: stack
>            Assignee: ryan rawson
>             Fix For: 0.20.0
>

Jonathan Gray (JIRA | 1 Jul 02:18 2009

[jira] Commented: (HBASE-1593) TableMap's generic type inconsistency with Hadoop


    [
https://issues.apache.org/jira/browse/HBASE-1593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12725858#action_12725858
] 

Jonathan Gray commented on HBASE-1593:
--------------------------------------

So can we close this issue?

> TableMap's generic type inconsistency with Hadoop
> -------------------------------------------------
>
>                 Key: HBASE-1593
>                 URL: https://issues.apache.org/jira/browse/HBASE-1593
>             Project: Hadoop HBase
>          Issue Type: Bug
>          Components: mapred
>            Reporter: jaehong choi
>
> Hello~
> I've been trying to globally sort rows whose key is LongWritable and whose value is Text.
> Since there are a few result files after finishing a MapReduce job, I need to do a merge sort on them.
> I want to sort rows in decreasing order, but ImmutableBytesWritable only supports increasing sort.
> So, I assign LongWritable.DecreasingComparator in JobConf.setOutputKeyComparatorClass() so that output keys are in decreasing order.
> However, there is a big problem.  As you see, TableMap requires that K implement or extend WritableComparable<K>, but LongWritable only implements WritableComparable, not WritableComparable<LongWritable>.  This causes a compile-time error.
> I think we should come up with an idea to solve this one.
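[The bound mismatch can be reproduced with simplified stand-ins for the Hadoop interfaces; the stubs below are assumptions for illustration, not the real classes.]

```java
// Stand-in for org.apache.hadoop.io.WritableComparable (shape assumed).
interface WritableComparableStub<T> extends Comparable<T> {}

// Like LongWritable circa Hadoop 0.20: implements the *raw* interface only.
class LongWritableStub implements WritableComparableStub {
    final long value;
    LongWritableStub(long v) { value = v; }
    public int compareTo(Object o) {
        return Long.compare(value, ((LongWritableStub) o).value);
    }
}

// The strict bound as in TableMap: K extends WritableComparable<K>.
// LongWritableStub does not satisfy it, so this would not compile:
//   StrictMap<LongWritableStub> m;   // compile-time error
interface StrictMap<K extends WritableComparableStub<K>> {}

// A relaxed (raw) bound, as Hadoop's own Mapper used, does accept it.
@SuppressWarnings("rawtypes")
interface RelaxedMap<K extends WritableComparableStub> {}

class RelaxedMapImpl implements RelaxedMap<LongWritableStub> {}
```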

Jonathan Gray (JIRA | 1 Jul 02:22 2009

[jira] Resolved: (HBASE-1546) atomicIncrement doesn't participate in HLog/WAL


     [
https://issues.apache.org/jira/browse/HBASE-1546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Gray resolved HBASE-1546.
----------------------------------

       Resolution: Duplicate
    Fix Version/s: 0.20.0

Fixed by HBASE-1563

> atomicIncrement doesn't participate in HLog/WAL
> -----------------------------------------------
>
>                 Key: HBASE-1546
>                 URL: https://issues.apache.org/jira/browse/HBASE-1546
>             Project: Hadoop HBase
>          Issue Type: Bug
>            Reporter: ryan rawson
>             Fix For: 0.20.0
>
>
> With the new implementation of atomicIncrement in HBASE-1304, these increments don't participate in the WAL.
> I was also seeing odd flush behaviour.  Flushing a table with only atomic increments didn't seem to make new store files.

--


Jonathan Gray (JIRA | 1 Jul 02:28 2009

[jira] Commented: (HBASE-1498) Remove old API references from hbase shell, to make it match the new one.


    [
https://issues.apache.org/jira/browse/HBASE-1498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12725860#action_12725860
] 

Jonathan Gray commented on HBASE-1498:
--------------------------------------

@Erik what's the deal with this issue?

> Remove old API references from hbase shell, to make it match the new one.
> -------------------------------------------------------------------------
>
>                 Key: HBASE-1498
>                 URL: https://issues.apache.org/jira/browse/HBASE-1498
>             Project: Hadoop HBase
>          Issue Type: Improvement
>          Components: scripts
>    Affects Versions: 0.20.0
>            Reporter: Erik Holstad
>            Assignee: Erik Holstad
>             Fix For: 0.20.1
>
>
> In some of the .rb scripts we still have references to old API.

--


Jonathan Gray (JIRA | 1 Jul 02:34 2009

[jira] Updated: (HBASE-1470) hbase and HADOOP-4379, dhruba's flush/sync


     [
https://issues.apache.org/jira/browse/HBASE-1470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Gray updated HBASE-1470:
---------------------------------

    Fix Version/s: 0.20.0

Need to figure something out for the 0.20.0 release.  Since we rely on a patched hadoop server, we must either ship a separate hbase, or a patch for both hbase and hadoop; or, if we can (via reflection or such) detect whether the hadoop server supports appends, we could commit this and only maintain one thing.
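[A sketch of the reflection idea floated above: probe the running Hadoop at runtime for an append/sync capability instead of compiling against a patched build. The class and method names HBase would actually probe are not specified here; those used below are illustrative assumptions.]

```java
class AppendSupportCheck {
    /**
     * Returns true if the named class is on the classpath and declares a
     * public no-arg method with the given name; false otherwise.
     */
    static boolean hasNoArgMethod(String className, String methodName) {
        try {
            Class<?> clazz = Class.forName(className);
            clazz.getMethod(methodName);  // throws NoSuchMethodException if absent
            return true;
        } catch (ClassNotFoundException | NoSuchMethodException e) {
            return false;
        }
    }
}
```

With something like this, one build of hbase could call e.g. `hasNoArgMethod("org.apache.hadoop.fs.FSDataOutputStream", "sync")` (names hypothetical) and enable the flush/sync path only when the underlying hadoop provides it.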

> hbase and HADOOP-4379, dhruba's flush/sync
> ------------------------------------------
>
>                 Key: HBASE-1470
>                 URL: https://issues.apache.org/jira/browse/HBASE-1470
>             Project: Hadoop HBase
>          Issue Type: Bug
>            Reporter: stack
>             Fix For: 0.20.0
>
>         Attachments: recovery3.patch
>
>
> This covers work with HADOOP-4379

--


Jonathan Gray (JIRA | 1 Jul 02:36 2009

[jira] Commented: (HBASE-1463) -ROOT- reassignment provokes .META. reassignment though .META. is sitting pretty


    [
https://issues.apache.org/jira/browse/HBASE-1463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12725863#action_12725863
] 

Jonathan Gray commented on HBASE-1463:
--------------------------------------

Is this an issue in trunk still or can we close?

> -ROOT- reassignment provokes .META. reassignment though .META. is sitting pretty
> --------------------------------------------------------------------------------
>
>                 Key: HBASE-1463
>                 URL: https://issues.apache.org/jira/browse/HBASE-1463
>             Project: Hadoop HBase
>          Issue Type: Bug
>            Reporter: stack
>
> I've seen post-HBASE-1457 that if the regionserver hosting -ROOT- goes down and then is reassigned, though .META. is happy where it is, and even though the -ROOT- edits get picked up and are applied, the rootscanner complains that the .META. assignment is invalid because the server and startcode are empty.  Something's up.  Take a look.

--


