Gregory Maxwell | 28 Oct 21:36 2014

Fwd: death by halving

On Tue, Oct 28, 2014 at 8:17 PM, Ferdinando M. Ametrano
<ferdinando.ametrano <at> gmail.com> wrote:
>
> On Oct 25, 2014 9:19 PM, "Gavin Andresen" <gavinandresen <at> gmail.com> wrote:
> > We had a halving, and it was a non-event.
> > Is there some reason to believe next time will be different?
>
> In november 2008 bitcoin was a much younger ecosystem,

Or very old, indeed, if you are using unsigned arithmetic. [...]

> and the halving happened during a quite stable positive price trend

Hardly,

http://bitcoincharts.com/charts/mtgoxUSD#rg60zczsg2012-10-01zeg2012-12-01ztgSzm1g10zm2g25zv

> Moreover, halving is not strictly necessary to respect the spirit of Nakamoto's monetary rule

It isn't, but many people have performed planning around the current
behaviour. The current behaviour has also not shown itself to be
problematic (and we've actually experienced its largest effect already
without incident), and there are arguable benefits like encouraging
investment in mining infrastructure.

This thread is, in my opinion, a waste of time.  It's yet another
perennial bikeshedding proposal, brought up many times since at least
2011, suggesting random changes for non-existent (or not-yet-existing)
issues.


Tom Harding | 27 Oct 20:58 2014

DS Deprecation Window

Greetings Bitcoin Dev,

This is a proposal to improve the ability of bitcoin users to rely on 
unconfirmed transactions.  It can be adopted incrementally, with no hard 
or soft fork required.

https://github.com/dgenr8/out-there/blob/master/ds-dep-win.md

Your thoughtful feedback would be very much appreciated.

It is not yet implemented anywhere.

Cheers,
Tom Harding
CA, USA

------------------------------------------------------------------------------
Alex Morcos | 27 Oct 20:33 2014

Reworking the policy estimation code (fee estimates)

I've been playing around with the code for estimating fees and found a few issues with it.  I think this rework will address several observations that the estimates returned by the existing code appear to be too high.  For instance, see <at> cozz's comments in Issue 4866.

Here's what I found:

1) We're trying to answer the question of what fee X you need in order to be confirmed within Y blocks.   The existing code tries to do that by calculating the median fee for each possible Y instead of gathering statistics for each possible X.  That approach is statistically incorrect.  In fact, since certain X's appear so frequently (notably a fee rate of about 40k satoshis), they tend to dominate the statistics at all possible Y's.

2) The existing code then sorts all of the data points in all of the buckets together by fee rate and then reassigns buckets before calculating the medians for each confirmation bucket.  The sorting forces a relationship where there might not be one.  Imagine some other variable, such as the first two bytes of the transaction hash: if we sorted by that and then used it to give estimates, we'd see a clear but false relationship in which transactions whose hashes start with low bytes took longer to confirm.

3) Transactions which don't have all their inputs available (because they depend on other transactions in the mempool) aren't excluded from the calculations.  This skews the results, because such transactions can't confirm until their parents do, regardless of their own fee rate.

I rewrote the code to follow a different approach.  I divided all possible fee rates up into fee rate buckets (I spaced these logarithmically).  For each transaction that was confirmed, I updated the appropriate fee rate bucket with how many blocks it took to confirm that transaction.  

The hardest part of doing this fee estimation is to decide what the question really is that we're trying to answer.  I took the approach that if you are asking what fee rate you need to be confirmed within Y blocks, then what you would like to know is the lowest fee rate such that a relatively high percentage of transactions of that fee rate are confirmed within Y blocks. Since even the highest fee transactions are confirmed within the first block only 90-93% of the time, I decided to use 80% as my cutoff.  So now to answer "estimatefee Y", I scan through all of the fee buckets from the most expensive down until I find the last bucket with >80% of the transactions confirmed within Y blocks.
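
In rough pseudocode, the lookup works like this (a simplified sketch, not the actual patch; it assumes the per-bucket counts for the particular target Y have already been gathered):

    def estimate_fee(buckets, success_threshold=0.80):
        """buckets: list of (fee_rate, confirmed_within_target, total) tuples,
        sorted by fee_rate ascending, for one particular target of Y blocks.
        Scan from the most expensive bucket down and return the cheapest fee
        rate whose bucket still confirms more than the threshold within Y."""
        best = None
        for fee_rate, confirmed, total in reversed(buckets):
            if total == 0:
                continue                # ignore empty buckets
            if confirmed / total > success_threshold:
                best = fee_rate         # still passing; try cheaper buckets
            else:
                break                   # first failing bucket ends the scan
        return best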

Unfortunately we still have the problem of not having enough data points for non-typical fee rates, so the estimator needs to gather a lot of data to give reasonable answers. Keeping all of these data points in a circular buffer and then sorting them for every analysis (or after every new block) is expensive.  So instead I adopted the approach of keeping an exponentially decaying moving average for each bucket.  I used a decay of .998, which represents a half-life of 374 blocks, or about 2.5 days.  Also, if a bucket doesn't have very many transactions, I combine it with the next bucket.
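
A toy version of that bookkeeping, just to illustrate the decay (again a sketch rather than the actual patch; the bucket spacing, the 25-block cap and the helper names here are illustrative, and the real code merges sparse buckets rather than simply skipping them):

    import math
    from collections import defaultdict

    DECAY = 0.998            # per-block decay factor for old data points
    BUCKET_SPACING = 1.1     # assumed logarithmic spacing of fee rate buckets
    MAX_CONFIRMS = 25        # track confirmation targets up to this many blocks

    def bucket_index(fee_rate):
        # Map a fee rate (satoshis per KB) to its logarithmically spaced bucket.
        return int(math.log(max(fee_rate, 1.0), BUCKET_SPACING))

    class FeeBucket:
        def __init__(self):
            self.total = 0.0                             # decayed count of txs seen
            self.confirmed_within = [0.0] * (MAX_CONFIRMS + 1)

        def decay(self):
            self.total *= DECAY
            self.confirmed_within = [c * DECAY for c in self.confirmed_within]

        def record(self, blocks_to_confirm):
            self.total += 1.0
            # Confirming in N blocks also counts as confirming within N+1, N+2, ...
            for target in range(blocks_to_confirm, MAX_CONFIRMS + 1):
                self.confirmed_within[target] += 1.0

    buckets = defaultdict(FeeBucket)

    def process_block(confirmed_txs):
        """confirmed_txs: iterable of (fee_rate, blocks_to_confirm) pairs for the
        transactions confirmed by the newly connected block."""
        for bucket in buckets.values():
            bucket.decay()
        for fee_rate, blocks_to_confirm in confirmed_txs:
            if blocks_to_confirm <= MAX_CONFIRMS:
                buckets[bucket_index(fee_rate)].record(blocks_to_confirm)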

Here is a link to the code.  I can create an actual pull request if there is consensus that it makes sense to do so.

I've attached a graph comparing the estimates produced for 1-3 confirmations by the new code and the old code.  I did apply the patch to fix issue 3 above to the old code first.  The new code is in green and the fixed code is in purple.  The Y axis is a log scale of feerate in satoshis per KB and the X axis is chain height.  The new code produces the same estimates for 2 and 3 confirmations (the answers are effectively quantized by bucket).

I've also completely reworked smartfees.py.  It turns out that many more transactions need to be put through in order to get statistically significant results, so the test is quite slow to run (about 3 minutes on my machine).

I've also been running a real world test, sending transactions of various fee rates and seeing how long they took to get confirmed.  After almost 200 tx's at each fee rate, here are the results so far:

Fee rate   Avg blocks to confirm   Confirmed within: 1 block   2 blocks   3 blocks
  1100            2.30                               0.528     0.751      0.870
  2500            2.22                               0.528     0.766      0.880
  5000            1.93                               0.528     0.782      0.891
 10000            1.67                               0.569     0.844      0.943
 20000            1.33                               0.715     0.963      0.989
 30000            1.27                               0.751     0.974      1.0
 40000            1.25                               0.792     0.953      0.994
 60000            1.12                               0.875     1.0        1.0
100000            1.09                               0.901     1.0        1.0
300000            1.12                               0.886     0.989      1.0


Alex
------------------------------------------------------------------------------
Wladimir | 26 Oct 08:57 2014

Bitcoin Core 0.10 release schedule

Now that headers-first is merged it would be good to do a 0.10 release
soon. Not *too* soon as a major code change like that takes some time
to pan out, but I'd like to propose the following:

- November 18: split off 0.10 branch, translation message and feature freeze
- December 1: release 0.10.0rc1, start Release Candidate cycle

That leaves three weeks until the freeze. After the release and branch
split-off, the RC cycle will run until no critical problems are found.
For major releases this is usually more painful than for stable
releases, but if we can keep to these dates I'd expect the final
release no later than January 2015.

Let's aim to have any pending development for 0.10 merged before
November 18. Major work that I'm aware of is:

- BIP62 (#5134, #5065)
- Verification library (#5086, #5118, #5119)
- Gitian descriptors overhaul, so that Gitian depends = Travis depends (#4727)
- Autoprune (#4701)
- Add "warmup mode" for RPC server (#5007)
- Add unauthenticated HTTP REST interface (#2844)

Let me know if there is anything else you think is ready (and not too
risky) to be in 0.10. You can help along the development process by
participating in testing and reviewing of the mentioned pull requests,
or just by testing master and reporting bugs and regressions.

Note: I intended the 0.10 release to be much sooner. The reason that
this didn't pan out is that I insisted on including headers-first, and
this took longer than expected. There seems to be a preference to
switch to a fixed (instead of feature-based) 6-month major release
schedule, i.e.

- July 2015: 0.11.0 (or whatever N+1 release is called)
- January 2016: 0.12.0 (or whatever N+2 release is called)
- July 2016: 0.13.0 (or whatever N+3 release is called)

Wladimir

------------------------------------------------------------------------------
Jeff Garzik | 25 Oct 20:31 2014

Re: death by halving

It is an overly-simplistic miner model to assume altruism is
necessary.  The hashpower market is maturing in the direction of
financial instruments, where the owner of the hashpower is not
necessarily the one receiving income.  These are becoming tradeable
instruments, and derivatives and hedging are built on top of that.
Risk is hedged at each layer.  Market players also forge agreements
with miners, and receive -negative- value if hashpower is simply shut
down.

Simplistic models cannot predict what hashpower does in the face of
business-to-business medium- and long-term contracts.

On Sat, Oct 25, 2014 at 2:22 PM, Alex Mizrahi <alex.mizrahi <at> gmail.com> wrote:
>
>>
>> "Flag day" herd behavior like this is unlikely for well informed and
>> well prepared market participants.
>
>
> It is simply rational to turn your mining device off until difficulty
> adjusts.
> Keeping mining for 2+ weeks when it costs you money is an altruistic
> behavior, we shouldn't rely on this.
>

-- 
Jeff Garzik
Bitcoin core developer and open source evangelist
BitPay, Inc.      https://bitpay.com/

------------------------------------------------------------------------------
Alex Mizrahi | 25 Oct 20:06 2014

death by halving

# Death by halving

## Summary

If miners' income margins are less than 50% (which is a healthy situation when mining hardware is readily available), we might experience a catastrophic loss of hashpower (and, more importantly, a catastrophic loss of security) after a reward halving.

## A simple model

Let's define a miner's income margin as `MIM = (R - C_e)/R`, where R is the total revenue the miner receives over a period of time, and C_e is the cost of electricity spent on mining over the same period. (Note that for the sake of simplicity we do not take into account equipment costs, amortization and other costs mining might incur.)

We will also assume that the transaction fees collected by the miner are negligible compared to the subsidy.

Theorem 1. If a certain miner's MIM is less than 0.5 before the subsidy halving, and bitcoin and electricity prices stay the same, then mining is no longer profitable for that miner after the halving.

Indeed, suppose the revenue after the halving is R' = R/2. From the assumption,

   MIM = (R - C_e)/R < 0.5
   R - C_e < R/2
   C_e > R/2

and therefore

   R' = R/2 < C_e.

If the revenue after the halving, R', doesn't cover the electricity cost, a rational miner should stop mining, as it is cheaper to acquire bitcoins on the market.
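
For a concrete (purely illustrative) example: take `R = 100` and `C_e = 60`, so `MIM = (100 - 60)/100 = 0.4 < 0.5`. After the halving, `R' = 50 < C_e = 60`, and this miner now loses money on every block it mines.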

~~~

Under these assumptions, if the majority of miners have MIM less than 0.5, Bitcoin is going to experience a significant loss of hashing power. 
But are these assumptions reasonable? We need to study a more complex model which takes into account changes in the bitcoin price and in difficulty over time.
But first, let's analyze the significance of a 'loss of hashpower'.

## Catastrophic loss of hashpower

The Bitcoin security model relies on the assumption that a malicious actor cannot acquire more than 50% of the network's current hashpower.
E.g. there is a table in Rosenfeld's _Analysis of Hashrate-Based Double Spending_ paper which shows that as long as the malicious actor controls only a small fraction of the total hashpower, attacks have well-defined costs. But if the attacker-controlled hashrate is higher than 50%, attacks become virtually costless: the attacker receives double-spending revenue on top of his mining revenue, and his risk is close to zero.

Note that the simple model described in the aforementioned paper doesn't take into account the attack's effect on the bitcoin price or on the price of Bitcoin mining equipment. I hope that one day we'll see more elaborate attack models, but in the meantime, we'll have to resort to hand-waving.

Consider a situation where almost all available hashpower is for lease to the highest bidder on the open market. In that case, someone with sufficient capital could easily pull off an attack.

But why is hashpower not available on the market? Quite likely equipment owners are aware that such an attack would make Bitcoin useless, and thus worthless, which would also make their equipment worthless. Thus they prefer to mine with known mining pools that have good track records.
(Hashpower marketplaces such as https://nicehash.com/ do exist, but they aren't particularly popular.)

Now let's consider a situation where mining bitcoins is no longer profitable and the majority of hashpower has become dormant, i.e. miners have turned off their equipment or gone to mine something else. In this case the equipment is already nearly worthless, so people might as well lease it to the highest bidder, thus enabling the aforementioned attacks.

Alternatively, the attacker might buy obsolete mining equipment from people who are no longer interested in mining.

## Taking into account the Bitcoin price

This is largely trivial, and thus is left as an exercise for the reader. Let's just note that the Bitcoin subsidy halving is an event which is known to market participants in advance, and thus it shouldn't result in significant changes in the Bitcoin price.

## Changes in difficulty

Different mining devices have different efficiencies. After the reward halving, mining on some of these devices becomes unprofitable, so they will drop out, which will result in a drop in mining difficulty.

We can greatly simplify calculations if we sum costs and rewards across all miners, thus calculating average MIM before the halving: `MIM = 1 - C_e/R`.

Let's consider an equilibrium break-even situation where unprofitable mining devices have been turned off, resulting in a change in electricity expenditures `C_e' = r * C_e` and an average MIM after the halving of `MIM' = 0`. In this case:

    r * C_e = R/2
    C_e / R = 1/(2r)
    1 - MIM = 1/(2r)
    r = 1/(2*(1 - MIM))

Let's evaluate this formula for different before-halving MIMs:

1. If `MIM = 0.5`, then `r = 1/(2*0.5) = 1`, that is, all miners can keep mining.
2. If `MIM = 0.25`, then `r = 1/(2*0.75) = 0.66`: the least efficient miners, accounting for 33% of total electricity costs, will drop out.
3. If `MIM = 0.1`, then `r = 1/(2*0.9) = 0.55`: total electricity costs drop by 45%.

Note that for any before-halving MIM > 0, r is higher than 1/2, thus less than half of the total hashpower will drop out.

The worst-case situation is when the before-halving MIM is close to zero and mining devices, as well as electricity costs in different places, are nearly identical; in that case approximately half of all hashpower will drop out.

## MIM estimation

OK, what MIM do we expect in the long run? Is it going to be less than 50% anyway?

We can expect that people will keep buying mining devices as long as it is profitable.

Break-even condition: `R - C_e - P = 0`, where P is the price of a mining device, R is the revenue it generates over its lifetime, and C_e is the total cost of required electricity over its lifetime. In this case, `R = C_e + P`, and thus:

    MIM = 1 - C_e / (C_e + P)

Let `f = C_e / P` be the ratio of the cost of electricity to the cost of hardware, so `C_e = f * P`, and thus

    MIM = 1 - f * P / (f * P + P) = 1 - f / (f + 1) = 1 / (1 + f)

MIM is less than 0.5 when f > 1.

Computing f is somewhat challenging even for a concrete device, as its useful lifetime is unknown.

Let's do some guesstimation:

Spondoolies-Tech's SP35 Yukon unit consumes 3.5 kW and costs $4000. If its useful lifetime is more than 2 years and the cost of electricity is $0.1 per kWh, the total expenditure on electricity will be at least $6135, thus for this device we have `f > 6135/4000 > 1.5`.
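
As a quick sanity check in Python, plugging the same assumed figures into the `MIM = 1/(1+f)` formula above:

    power_kw = 3.5                   # SP35 Yukon power draw
    device_cost = 4000.0             # purchase price, USD
    kwh_price = 0.10                 # assumed electricity price, USD per kWh
    lifetime_hours = 2 * 365 * 24    # assumed two-year useful lifetime

    electricity_cost = power_kw * lifetime_hours * kwh_price  # roughly $6.1k
    f = electricity_cost / device_cost                        # roughly 1.5
    mim = 1.0 / (1.0 + f)                                     # roughly 0.4, i.e. below 0.5

    print(electricity_cost, f, mim)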

If other devices sold on the market have similar specs, MIM will be lower than 0.5. (Well, no shit.)

## Conclusions

Reward halving is a deficiency in Bitcoin's design, but there is some hope it won't be critical: in the equilibrium break-even situation the hashpower drop is less than 50%.
However, hashrate might drop by more than 50% immediately after the halving (and before difficulty adjusts), so the combination of the halving and the slow difficulty update poses a real threat.
------------------------------------------------------------------------------
Adam Back | 22 Oct 23:54 2014

Re: side-chains & 2-way pegging (Re: is there a way to do bitcoin-staging?)

For those following this thread, we have now written a paper
describing the side-chains, 2-way pegs and compact SPV proofs.
(With additional authors Andrew Poelstra & Andrew Miller).

http://blockstream.com/sidechains.pdf

Adam

On 16 March 2014 15:58, Adam Back <adam <at> cypherspace.org> wrote:
> So an update on 1-way pegging (aka bitcoin staging, explained in quoted text
> at bottom): it turns out secure 2-way pegging is also possible (with some
> bitcoin change to help support it).  The interesting thing is this allows
> interoperability in terms of being able to move bitcoin into and out of a
> side chain.  The side chains may have some different parameters, or
> experimental things people might want to come up with (subject to some
> minimum compatibility at the level of being able to produce an SPV proof of
> a given form).
>
> At the time of the 1-way peg discussion I considered 2-way peg as desirable
> and it seemed plausible with bitcoin changes, but the motivation for 1-way
> peg was to make it less risky to make changes on bitcoin, so that seemed
> like a catch-22 loop.  Also in the 2-way peg thought experiment I had not
> realized how simple it was to still impose a security firewall in the 2-way
> peg also.
>
>
> So Greg Maxwell proposed in Dec last year a practically compact way to do
> 2-way pegging using SPV proofs.  And also provided a simple argument of how
> this can provide a security firewall.  (Security firewall means the impact
> of security bugs on the side-chain is limited to the people with coins in
> it; bitcoin holders who did not use it are unaffected). [1]
>
> How it works:
>
> 1. to maintain the 21m coins promise, you start a side-chain with no
> in-chain mining subsidy, all bitcoin creation happens on bitcoin chain (as
> with 1-way peg).  Reach a reasonable hash rate.  (Other semantics than 1:1
> peg should be possible, but this is the base case).
>
> 2. you move coins to the side-chain by spending them to a fancy script,
> which suspends them, and allows them to be reanimated by the production of
> an SPV proof of burn on the side-chain.
>
> 3. the side-chain has no mining reward, but it allows you to mint coins at
> no mining cost by providing an SPV proof that the coin has been suspended as
> in 2 on bitcoin.  The SPV proof must be buried significantly before being
> used to reduce risk of reorganization.  The side-chain is an SPV client to
> the bitcoin network, and so maintains a view of the bitcoin hash chain (but
> not the block data).
>
> 4. the bitcoin chain is firewalled from security bugs on the side chain,
> because bitcoin imposes the rule that no more coins can be reanimated than
> are currently suspended (with respect to a given chain).
>
> 5. to simplify what the hypothetical bitcoin change would need to consider
> and understand, after a coin is reanimated there is a maturity period
> imposed (say same as fresh mined coins).  During the maturity period the
> reanimation script allows a fraud proof to spend the coins back.  A fraud
> bounty fee (equal to the reanimate fee) can be offered by the mover to
> incentivize side-chain full nodes to watch reanimations and search for fraud
> proofs.
>
> 6. a fraud proof is an SPV proof with a longer chain showing that the proof
> of burn was orphaned.
>
> There are a few options to compress the SPV proof, via Fiat-Shamir transform
> to provide a compact proof of the amount of work contained in a merkle tree of
> proofs of work (as proposed by Fabien Coelho link on
> http://hashcash.org/papers/) with params like 90% of work is proven.  But
> better is something Greg proposed based on skip-lists organized in a tree,
> where 'lucky' proofs of work are used to skip back further.  (Recalling that
> if you search for a 64-bit leading-0 proof-of-work, half the time you get a
> 65-bit, quarter 66-bit etc.)  With this mechanism you can accurately
> prove the amount of proof of work in a compressed tree (rather than ~90%).
>
>
> Apart from pegging from bitcoin to a side-chain, if a private chain is made
> with same rules to the side-chain it becomes possible with some
> modifications to the above algorithm to peg the side-chain to a private
> chain.  Private chain meaning a chain with the same format but signature of
> single server in place of hashing, and timestamping of the block signatures
> in the mined side chain.  And then reactive security on top of that by full
> nodes/auditors trying to find fraud proofs (rewrites of history relative to
> side-chain mined time-stamp or approved double-spends).  The reaction is to
> publish a fraud proof and move coins back to the side chain, and then
> regroup on a new server.  (Open transactions has this audit + reactive model
> but as far as I know does it via escrow, eg the voting pools for k of n
> escrow of the assets on the private server.) I also proposed the same
> reactive audit model but for auditable namespaces [4].
>
> Private chains add some possibility for higher scaling, while retaining
> bitcoin security properties.  (You need to add the ability for a user to
> unilaterally move his coins to the side-chain they came from in the event
> the chain server refuses to process transactions involving them.  This appears
> to be possible if you have compatible formats on the private chain and
> side-chain).
>
>
> This pegging discussion involved a number of #bitcoin-wizards, Greg Maxwell,
> Matt Corallo, Pieter Wuille, Jorge Timon, Mark Freidenbach, Luke Dashjr. The
> 2-way peg seems to have first been described by Greg.  Greg thought of
> 2-way pegging in the context of ZK-SNARKS and the coinwitness thread [2].
> (As a ZK-SNARK could compactly prove full validation of a side chain rules).
>
> There was also something seemingly similar sounding but not described in
> detail by Alex Mizrahi in the context of color coins in this post [3].
>
> Adam
>
> [1] http://download.wpsoftware.net/bitcoin/wizards/2013-12-18.txt
> [2] https://bitcointalk.org/index.php?topic=277389.40
> [3] https://bitcointalk.org/index.php?topic=277389.msg4167554#msg4167554
> [4] http://www.cypherspace.org/p2p/auditable-namespace.html
>
> On Mon, Oct 14, 2013 at 08:08:07PM +0200, Adam Back wrote:
>>
>> Coming back to the staging idea, maybe this is a realistic model that
>> could
>> work.  The objective being to provide a way for bitcoin to move to a live
>> beta and stable being worked on in parallel like fedora vs RHEL or
>> odd/even
>> linux kernel versions.
>>
>> Development runs in parallel on bitcoin 1.x beta (betacoin) and bitcoin
>> 0.x
>> stable and leap-frogs as beta becomes stable after testing.
>>
>> It's a live beta, meaning real value, real contracts.  But we don't want it
>> to
>> be an alt-coin with a floating value exactly, we want it to be bitcoin,
>> but
>> the bleeding edge bitcoin so we want to respect the 21 million coin limit,
>> and allow coins to move between bitcoin and betacoin with some necessary
>> security related restrictions.
>>
>> There is no mining reward on the betacoin network (can be merge mined for
>> security), and the way you opt to move a bitcoin into the betacoin network
>> is to mark it as transferred in some UTXO-recognized way.  It can't be
>> reanimated, it's dead.  (e.g. spend to a specific recognized invalid address
>> on
>> the bitcoin network).  In this way its not really a destruction, but a
>> move,
>> moving the coin from bitcoin to betacoin network.
>>
>> This respects the 21 million coin cap, and avoids betacoin bugs flowing
>> back
>> and affecting bitcoin security or value-store properties.  Users may buy
>> or
>> swap betacoin for bitcoin to facilitate moving money back from betacoin to
>> bitcoin.  However that is market priced so the bitcoin network is security
>> insulated from beta.  A significant security bug in beta would cause a
>> market freeze, until it is rectified.
>>
>> The cost of a betacoin is capped at one BTC because no one will pay more
>> than one bitcoin for a betacoin because they could alternatively move
>> their
>> own coin.  The reverse is market priced.
>>
>> Once bitcoin beta stabilizes, e.g. in a year-or-two time-frame, a
>> decision is reached to promote 1.0 beta to 2.0 stable, the remaining
>> bitcoins can be moved, and the old network switched off, with mining past
>> a
>> flag day moving to the betacoin.
>>
>> During the beta period betacoin is NOT an alpha, people can rely on it and
>> use it in anger for real value transactions.  eg if it enables more script
>> features, or coin coloring, scalability tweaks etc people can use it.
>> Probably for large value store they are always going to prefer
>> bitcoin-stable, but applications that need the coloring features, or
>> advanced scripting etc can go ahead and beta.
>>
>> Bitcoin-stable may pull validated changes and merge them, as a way to pull
>> in any features needed in the shorter term and benefit from the betacoin
>> validation.  (Testing isn't as much validation as real-money at stake
>> survivability).
>>
>> The arguments are I think that:
>>
>> - it allows faster development allowing bitcoin to progress features
>> faster,
>>
>> - it avoids mindshare dilution if alternatively an alt-coin with a hit
>>  missing feature takes off;
>>
>> - it concentrates such useful-feature alt activities into one OPEN source
>>  and OPEN control foundation mediated area (rather than suspected land
>>  grabs on colored fees or such like bitcoin respun as a business model
>>  things),
>>
>> - maybe gets the developers that would've been working on their pet
>>  alt-coin, or their startup alt-coin to work together putting more
>>  developers, testers and resources onto something with open control (open
>>  source does not necessarily mean that much) and bitcoin mindshare
>>  branding, it's STILL bitcoin, it's just the beta network.
>>
>> - it respects the 21 million limit, starting new mining races probably
>>  dilutes the artificial scarcity semantic
>>
>> - while insulating bitcoin from betacoin security defects (I don't mean
>>  betacoin as a testnet, it should have prudent rigorous testing like
>>  bitcoin, just the very act of adding a feature creates risk that bitcoin
>>  stable can be hesitant to take).
>>
>> Probably the main issue as always is more (trustable) very high caliber
>> testers and developers.  Maybe if the alt-coin minded startups and
>> developers donate their time to bitcoin-beta (or bitcoin-stable) for the
>> bits they are missing, we'll get more hands to work on something of
>> reusable
>> value to humanity, in parallel with their startup's objectives and as a
>> way
>> for them to get their needed features, while giving back to the bitcoin
>> community, and helping bitcoin progress faster.
>>
>> Maybe bitcoin foundation could ask for BTC donations to hire more
>> developers
>> and testers full time.  $1.5b of stored value should be interested to
>> safeguard their value store, and develop the transaction features.
>>
>> Adam
>>
>> On Mon, May 20, 2013 at 02:34:06AM -0400, Alan Reiner wrote:
>>>
>>>  This is exactly what I was planning to do with the
>>>  inappropriately-named "Ultimate Blockchain Compression".  [...]
>>>
>>>  For it to really work, it's gotta be part of the mainnet validation
>>>  rules, but no way it can be evaluated realistically without some kind of
>>>  "staging".
>>
>>
>>>  On 5/19/2013 11:08 AM, Peter Vessenes wrote:
>>>
>>>  I think this is a very interesting idea. As Bitcoiners, we often stuff
>>>  things into the 'alt chain' bucket in our heads; I wonder if this idea
>>>  works better as a curing period, essentially an extended version of the
>>>  current 100 block wait for mined coins.

------------------------------------------------------------------------------
Pavol Rusnak | 22 Oct 10:52 2014

Re: cryptographic review requested

On 10/22/2014 10:46 AM, Chris D'Costa wrote:
> Looks great, but how would you resolve the problem of knowing for certain
> that the public key you have received to encrypt the message is not from a
> MITM?

Isn't this the same problem as with PGP?

-- 
Best Regards / S pozdravom,

Pavol Rusnak <stick <at> gk2.sk>

------------------------------------------------------------------------------
Wladimir | 18 Oct 12:13 2014

Re: About watch-only addresses

On Fri, Oct 17, 2014 at 10:36 PM, Flavien Charlon
<flavien.charlon <at> coinprism.com> wrote:
> Hi,
>
> What is the status of watch-only addresses in Bitcoin Core? Is it merged in
> master and usable? Is there documentation on how to add a watch-only address
> through RPC.

It has been merged. There is the "importaddress" RPC call, which works
the same as "importprivkey" except that you a pass it an address.

> Also, I believe that is going towards the 0.10 release, is there a rough ETA
> for a release candidate?

Yes - aim is in a few months, probably by the end of the year.

AFAIK there are no nightly builds at this moment. Warren Togami was
building them for a while (at http://nightly.bitcoin.it/) but he
stopped some time around June.

It's not recommended to use master without at least a little bit of
development/debugging experience of your own (to track down problems
when they appear), so it's best to build it yourself if you're going
to test day-to-day development versions.

Wladimir

------------------------------------------------------------------------------
Flavien Charlon | 18 Oct 11:44 2014

Re: About watch-only addresses

Also, I was wondering if there were nightly builds I could try this from?

On Fri, Oct 17, 2014 at 9:36 PM, Flavien Charlon <flavien.charlon <at> coinprism.com> wrote:
Hi,

What is the status of watch-only addresses in Bitcoin Core? Is it merged in master and usable? Is there documentation on how to add a watch-only address through RPC?

Also, I believe this is going into the 0.10 release; is there a rough ETA for a release candidate?

Thanks
Flavien

------------------------------------------------------------------------------
Flavien Charlon | 17 Oct 22:36 2014

About watch-only addresses

Hi,

What is the status of watch-only addresses in Bitcoin Core? Is it merged in master and usable? Is there documentation on how to add a watch-only address through RPC?

Also, I believe this is going into the 0.10 release; is there a rough ETA for a release candidate?

Thanks
Flavien
------------------------------------------------------------------------------
