Sanjay Ghemawat | 2 Jan 21:53 2012

Re: Re: A few questions about the source.

On Wed, Dec 28, 2011 at 12:05 AM, Patrick Twohig
<patrick.twohig <at> gmail.com> wrote:
> Sorry, link pasted funny: http://mysticpaste.com/view/11057
>
>
>
> On Dec 28, 12:04 am, Patrick Twohig <patrick.two... <at> gmail.com> wrote:
>> C++98 certainly allows for it.  It's a little clumsy, but it works so
>> long as you always construct the string using the allocator or use a
>> copy of an existing string.
>> Sample program here: http://mysticpaste.com/view/11057 and you can see
>> the reference page here: http://en.cppreference.com/w/cpp/string/basic_string/basic_string
>> C++1X only added rvalue references for move semantics.
>> Cheers, Patrick.
>>
>> On Dec 27, 10:34 pm, Florian Weimer <f... <at> deneb.enyo.de> wrote:
>>
>> > * Patrick Twohig:
>>
>> > >> leveldb uses new/delete all over the place internally.  Just fixing this one
>> > >> part of the API would not make it suitable for use on a platform that
>> > >> does not support new/delete.
>>
>> > > Along those lines would it be possible to re-factor the Env class to
(Continue reading)
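
[The paste linked above is not reproduced here. The following is a minimal, self-contained sketch of the approach Patrick describes: a std::basic_string instantiated with a C++98-style custom allocator. MallocAllocator is an assumed name, not part of leveldb or of the original paste; it simply routes storage through malloc/free instead of operator new/delete.]

    #include <cstddef>   // std::size_t, std::ptrdiff_t
    #include <cstdlib>   // std::malloc, std::free
    #include <new>       // placement new, std::bad_alloc
    #include <string>

    // Hypothetical C++98-conforming allocator that avoids operator new/delete.
    template <typename T>
    class MallocAllocator {
     public:
      typedef T value_type;
      typedef T* pointer;
      typedef const T* const_pointer;
      typedef T& reference;
      typedef const T& const_reference;
      typedef std::size_t size_type;
      typedef std::ptrdiff_t difference_type;
      template <typename U> struct rebind { typedef MallocAllocator<U> other; };

      MallocAllocator() {}
      MallocAllocator(const MallocAllocator&) {}
      template <typename U> MallocAllocator(const MallocAllocator<U>&) {}

      pointer address(reference x) const { return &x; }
      const_pointer address(const_reference x) const { return &x; }

      pointer allocate(size_type n, const void* /*hint*/ = 0) {
        void* p = std::malloc(n * sizeof(T));
        if (p == 0) throw std::bad_alloc();
        return static_cast<pointer>(p);
      }
      void deallocate(pointer p, size_type /*n*/) { std::free(p); }

      size_type max_size() const { return static_cast<size_type>(-1) / sizeof(T); }
      void construct(pointer p, const T& v) { new (static_cast<void*>(p)) T(v); }
      void destroy(pointer p) { p->~T(); }
    };

    template <typename T, typename U>
    bool operator==(const MallocAllocator<T>&, const MallocAllocator<U>&) { return true; }
    template <typename T, typename U>
    bool operator!=(const MallocAllocator<T>&, const MallocAllocator<U>&) { return false; }

    // A string type whose buffers are managed by MallocAllocator.
    typedef std::basic_string<char, std::char_traits<char>, MallocAllocator<char> >
        mstring;

    int main() {
      mstring a("hello");   // constructed with the (default-constructed) allocator
      mstring b(a);         // a copy keeps using the same allocator type
      b += " world";
      return 0;
    }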

Shakeel Mahate | 5 Jan 19:20 2012

Worst case timing of compaction

I am reading the implementation notes of leveldb (http://leveldb.googlecode.com/svn/trunk/doc/impl.html), specifically the section on Timing.


Quoting from the above page: "Other than the special level-0 compactions, we will pick one 2MB file from level L. In the worst case, this will overlap ~ 12 files from level L+1 (10 because level-(L+1) is ten times the size of level-L, and another two at the boundaries since the file ranges at level-L will usually not be aligned with the file ranges at level-L+1). The compaction will therefore read 26MB and write 26MB. Assuming a disk IO rate of 100MB/s (ballpark range for modern drives), the worst compaction cost will be approximately 0.5 second. "

So the worst case cost of compacting one file at level L (for levels greater than 0) is reading and writing about 26 MB each, since it touches ~12 files from level L+1 plus the one file in level L that triggered the compaction.
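
For reference, the quoted numbers work out as:

    read:   1 file at level L + ~12 overlapping files at level L+1 = ~13 files x 2MB ~= 26MB
    write:  ~26MB of merged output into level L+1
    total:  ~52MB of I/O, which at ~100MB/s is roughly 0.5 second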

From the above scenario, can I assume that a compaction at level L will not trigger a recursive compaction at level L+2 or higher? It looks like you guarantee that the key range for level L will not overlap more than 20 level-(L+2) files, but I fail to understand how you restrict a compaction at level L to only the one level above it.

How do you throttle compactions?

Thanks for creating and sharing leveldb; I am enjoying reading the source code. It gives me great insight into how Google builds products.

Shakeel


leveldb | 10 Jan 10:33 2012

Issue 65 in leveldb: Mutate and append to value support

Status: New
Owner: ----
Labels: Type-Defect Priority-Medium

New issue 65 by morten.h... <at> gmail.com: Mutate and append to value support
http://code.google.com/p/leveldb/issues/detail?id=65

It would be nice if one could:
* Append to an existing value atomically
* Apply a mutator operation to the value, like +/- n

Thanks!
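
[Not supported by leveldb today; the following is a minimal sketch of doing an append as a read-modify-write with the existing API. AppendValue is an assumed helper name, and the operation is not atomic: concurrent writers to the same key must be serialized by the caller, e.g. with its own mutex.]

    #include <string>

    #include "leveldb/db.h"

    // Sketch: append "tail" to the value stored under "key" by reading the old
    // value and writing back the concatenation.  A missing key is treated as an
    // empty existing value.  NOT atomic across concurrent writers.
    leveldb::Status AppendValue(leveldb::DB* db,
                                const leveldb::Slice& key,
                                const leveldb::Slice& tail) {
      std::string value;
      leveldb::Status s = db->Get(leveldb::ReadOptions(), key, &value);
      if (!s.ok() && !s.IsNotFound()) {
        return s;  // a real error; IsNotFound just means "no existing value"
      }
      value.append(tail.data(), tail.size());
      return db->Put(leveldb::WriteOptions(), key, value);
    }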

Rosh | 10 Jan 15:33 2012

Can I use integer keys and values that are arbitrary objects in leveldb?

Can I use integer keys and values that are arbitrary objects in leveldb?

Sanjay Ghemawat | 11 Jan 01:41 2012

Re: Can I use integer keys and values that are arbitrary objects in leveldb?

On Tue, Jan 10, 2012 at 6:33 AM, Rosh <rosh.cherian <at> gmail.com> wrote:
> Can I use integer keys and values that are arbitrary objects in
> leveldb?

Not directly.  But you can write conversion routines that convert from
your types to byte arrays and then use those as leveldb keys and values.
Note that when generating keys like this, you should ensure that the
generated keys sort appropriately, either by using an appropriate
encoding function (like big-endian encoding for integers) or by
supplying a custom Comparator to leveldb.
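
[A minimal sketch of the first option. EncodeKey is an assumed helper name, not part of leveldb, and it handles unsigned keys only; signed integers would need the sign bit flipped before encoding.]

    #include <stdint.h>

    #include <string>

    // Sketch: big-endian (most significant byte first) encoding of a 64-bit
    // unsigned integer, so leveldb's default bytewise comparator orders the
    // encoded keys in numeric order.
    std::string EncodeKey(uint64_t k) {
      std::string s(8, '\0');
      for (int i = 7; i >= 0; --i) {
        s[i] = static_cast<char>(k & 0xff);
        k >>= 8;
      }
      return s;
    }

    // Usage (assuming an open leveldb::DB* db and a serialized value):
    //   db->Put(leveldb::WriteOptions(), EncodeKey(42), serialized_value);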

gaoqiang | 12 Jan 04:18 2012

Is there any problem for leveldb to support tens of billions of items?

Google claims leveldb supports billions of entries. Is there any problem
with tens or hundreds of billions?

leveldb | 12 Jan 10:06 2012

Issue 66 in leveldb: Too Many deleted fd

Status: New
Owner: ----
Labels: Type-Defect Priority-Medium

New issue 66 by yafei.zh... <at> langtaojin.com: Too Many deleted fd
http://code.google.com/p/leveldb/issues/detail?id=66

I use leveldb on Linux, with some reads and writes every second.

After running for three days, there are 1386 deleted fds. I think this is abnormal.

The attachment is the output of "ls -al /proc/$pid/fd".

Attachments:
	fd.log  150 KB

leveldb | 12 Jan 10:15 2012

Re: Issue 62 in leveldb: Trouble building on Linux Red Hat with g++ 3.4.6


Comment #1 on issue 62 by yafei.zh... <at> langtaojin.com: Trouble building on  
Linux Red Hat with g++ 3.4.6
http://code.google.com/p/leveldb/issues/detail?id=62

Hi, your GCC version may be too old to compile leveldb.

X Chen | 12 Jan 11:40 2012

Is there a good C# version for LevelDB

I have found one "leveldb-sharp", https://masterbranch.com/leveldb-sharp-project/1248719

but it seems that it doesn't have a storage function.

Any better one?

Thanks

Itamar Syn-Hershko | 12 Jan 11:56 2012

Re: Is there a good C# version for LevelDB

This doesn't seem like a port, more like the very beginning of one...

I would be interested in hearing about a C# wrapper around the native LevelDB sources, or in collaborating on an effort to create one.

On Thu, Jan 12, 2012 at 12:40 PM, X Chen <iamindcs <at> gmail.com> wrote:
> I have found one "leveldb-sharp", https://masterbranch.com/leveldb-sharp-project/1248719
>
> but it seems that it doesn't have a storage function.
>
> Any better one?
>
> Thanks


