Dan Bryant | 2 Jul 06:44 2015

REQ BIP # / Discuss - Sweep incoming unconfirmed transactions with a bounty.

This is a process BIP request to add functionality to the Bitcoin-Core
reference implementation.  If accepted, this could also add
flexibility into any future fee schedules.

https://github.com/d4n13/bips/blob/master/bip-00nn.mediawiki

Note: I left the formatting in, since mediawiki is fairly light markup.
==================================
<pre>
  BIP: nn
  Title: Sweep unconfirmed transactions by including their outputs in
high fee transactions
  Author: Dan Bryant <dkbryant <at> gmail.com>
  Status: Draft
  Type: Process
  Created: 2015-07-01
</pre>

==Abstract==

This BIP describes an enhancement to the reference client that
addresses the need to incentivize inclusion of unconfirmed transactions.
This method will create new high fee (or bounty) transactions that
spend the desired unconfirmed transactions.  To claim the high fee
(bounty) transactions, miners will need to include the desired
unconfirmed transactions.
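As a rough illustration of the incentive (all numbers below are made
up, not part of this proposal): a miner weighing whether to include
the stuck parent transaction effectively sees the combined fee rate of
the parent plus the sweep (bounty) transaction, since the sweep cannot
be confirmed without the parent.

    // Hypothetical sketch of the bounty incentive described above: a
    // sweep transaction spending an unconfirmed parent is only claimable
    // if the parent is also included, so a rational miner looks at the
    // combined (package) fee rate.  All values are illustrative.
    #include <cstdio>

    int main() {
        // Stuck parent: paid too little to be attractive on its own.
        const long long parent_fee_sat = 1000;   // satoshis
        const long long parent_size_b  = 500;    // bytes

        // Sweep transaction spending one of the parent's outputs,
        // carrying the high fee (the bounty).
        const long long sweep_fee_sat  = 50000;
        const long long sweep_size_b   = 200;

        double parent_rate  = (double)parent_fee_sat / parent_size_b;
        double package_rate = (double)(parent_fee_sat + sweep_fee_sat)
                            / (parent_size_b + sweep_size_b);

        printf("parent alone : %.1f sat/byte\n", parent_rate);
        printf("parent+sweep : %.1f sat/byte\n", package_rate);
        return 0;
    }

This is essentially a child-pays-for-parent incentive.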

==Motivation==

There are times when an individual receives a payment from someone
(Continue reading)

Jean-Paul Kogelman | 2 Jul 06:04 2015

Defining a min spec

Hi folks,

I’m a game developer. I write time critical code for a living and have to deal with memory, CPU, GPU and I/O
budgets on a daily basis. These budgets are based on what we call a minimum specification (of hardware);
min spec for short. In most cases the min spec is based on entry model machines that are available during
launch, and will give the user an enjoyable experience when playing our games. Obviously, we can turn on a
number of bells and whistles for people with faster machines, but that’s not the point of this mail.

The point is, can we define a min spec for Bitcoin Core? The number one reason for this is: if you know how your
changes affect your available budgets, then the risk of breaking something due to capacity problems is
reduced to practically zero.

One way of doing so is to work backwards from what we have right now: Block size (network / disk I/O),
SigOps/block (CPU), UTXO size (memory), etc. Then there’s Pieter’s analysis of network
bottlenecks and how they affect orphan rates, which could be used to set some form of cap on what transfer time +
verification time should be to keep the orphan rate at an acceptable level.

So taking all of the above (and more) into account, what configuration would be the bare minimum to
comfortably run Bitcoin Core at maximum load and can it be reasonably expected to still be out there in the
field running Bitcoin Core? Also, can the parameters that were used to determine this min spec be codified
in some way so that they can later be used if Bitcoin Core is optimized (or extended with new functionality)
and see how it affects the min spec? Basically, with any reasonably big change, one of the first questions
could be: “How does this change affect min spec?”
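One hedged sketch of what codifying those parameters could look like
(the field names and budget numbers below are placeholders, not
measurements or proposals): express each budget as a number, and make
“how does this change affect min spec?” a mechanical comparison.

    // Illustrative sketch only: a codified "min spec" as a set of
    // resource budgets, so a proposed change can be checked against it
    // mechanically.  All budget values are placeholders.
    #include <cstdio>

    struct MinSpec {
        double max_block_mb;        // network / disk I/O budget
        double max_verify_seconds;  // CPU budget per block on min-spec hardware
        double max_utxo_gb;         // memory budget for the UTXO set
    };

    // Estimated load on a min-spec machine under some proposed change.
    struct EstimatedLoad {
        double block_mb;
        double verify_seconds;
        double utxo_gb;
    };

    bool FitsMinSpec(const MinSpec& spec, const EstimatedLoad& load) {
        return load.block_mb       <= spec.max_block_mb &&
               load.verify_seconds <= spec.max_verify_seconds &&
               load.utxo_gb        <= spec.max_utxo_gb;
    }

    int main() {
        MinSpec spec{1.0, 10.0, 1.0};             // placeholder budgets
        EstimatedLoad with_change{1.0, 4.0, 1.2}; // e.g. faster verification,
                                                  // but a larger UTXO set
        printf("change fits min spec: %s\n",
               FitsMinSpec(spec, with_change) ? "yes" : "no");
        return 0;
    }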

For example, currently OpenSSL is used to verify the signatures in the transactions. The new secp256k1
implementation is several times faster than (depending on CPU architecture, I’m sure) OpenSSL’s
implementation. So it would result in faster verification time. This can then result in the following
things: either network I/O and CPU requirements are adjusted downward in the min spec (you can run the new
Bitcoin Core on a cheaper configuration), or other parameters can be adjusted upwards (number of SigOps /
transaction, block size?), through proper rollout obviously. Since we know how min spec is affected by
(Continue reading)

Rusty Russell | 2 Jul 04:38 2015

BIP 68 Questions

Hi Mark,

        It looks like the code in BIP 68 compares the input's nSequence
against the transaction's nLockTime:

        if ((int64_t)tx.nLockTime < LOCKTIME_THRESHOLD)
            nMinHeight = std::max(nMinHeight, (int)tx.nLockTime);
        else
            nMinTime = std::max(nMinTime, (int64_t)tx.nLockTime);

        if (nMinHeight >= nBlockHeight)
            return nMinHeight;
        if (nMinTime >= nBlockTime)
            return nMinTime;

So if transaction B spends the output of transaction A:

1.  If A is in the blockchain already, you don't need a relative
    locktime since you know A's time.
2.  If it isn't, you can't create B since you don't know what
    value to set nLockTime to.

How was this supposed to work?

Thanks,
Rusty.
odinn | 2 Jul 00:49 2015

Re: Draft BIP : fixed-schedule block size increase


(My replies below)

On 06/26/2015 06:47 AM, Tier Nolan wrote:
> On Thu, Jun 25, 2015 at 3:07 PM, Adam Back <adam <at> cypherspace.org 
> <mailto:adam <at> cypherspace.org>> wrote:
> 
> The hard-cap serves the purpose of a safety limit in case our 
> understanding about the economics, incentives or game-theory is
> wrong worst case.
> 
> 
> True.

Yep.

> 
> BIP 100 and 101 could be combined.  Would that increase consensus?

Possibly ~ In my past message(s), I've suggested that Jeff's BIP 100
is a better alternative to Gavin's proposal(s), but I didn't think
this should be taken to mean that I am saying one thing is "superior"
to Gavin's work; rather, I emphasized that Gavin should work with
Jeff and Adam.

At least, at this stage, things are in the BIP process.

If the BIP 100 and BIP 101 would be combined, what would that look
like on paper?

(Continue reading)

odinn | 2 Jul 00:34 2015

Re: BIP Process and Votes


Possibly relevant to this discussion (though old)

https://gist.github.com/gavinandresen/2355445 (last changed in 2012 I
think?)

and

https://bitcoin.stackexchange.com/questions/30817/what-is-a-soft-fork
(which cites gavin's gist shown above)

On 06/25/2015 05:42 PM, Milly Bitcoin wrote:
> That description makes sense.  It also makes sense to separate out
> the hard fork from the soft fork process.   Right now some people
> want to use the soft fork procedure for a hard fork simply because
> there is no other way to do it.
> 
> I am under the impression that most users expect
> changes/improvements that would require a hard fork so I think some
> kind of process needs to be developed.  Taking the responsibility
> off the shoulder of the core maintainer also makes sense.  The hard
> fork issue is too much of a distraction for people trying to
> maintain the nuts and bolts of the underlying system.
> 
> I saw a suggestion that regularly scheduled hard forks should be 
> planned.  That seems to make sense so you would have some sort of 
> schedule where you would have cut off dates for hard-fork BIP 
> submissions.  That way you avoid the debates, from whether there
> should be hard forks to what should be contained within the hard
> fork (if needed).  It makes sense to follow the BIP process as
(Continue reading)

Wladimir J. van der Laan | 1 Jul 13:42 2015

Bitcoin core 0.11.0 release candidate 3 available


Hello,

I've just uploaded Bitcoin Core 0.11.0rc3 executables to:

https://bitcoin.org/bin/bitcoin-core-0.11.0/test/

The source code can be found in the source tarball or in git under the tag 'v0.11.0rc3'

Preliminary release notes can be found here:

https://github.com/bitcoin/bitcoin/blob/0.11/doc/release-notes.md

Changes since rc2:
- #6319 `3f8fcc9` doc: update mailing list address
- #6303 `b711599` gitian: add a gitian-win-signer descriptor
- #6246 `8ea6d37` Fix build on FreeBSD
- #6282 `daf956b` fix crash on shutdown when e.g. changing -txindex and abort action
- #6233 `a587606` Advance pindexLastCommonBlock for blocks in chainActive
- #6333 `41bbc85` Hardcoded seeds update June 2015
- #6354 `bdf0d94` Gitian windows signing normalization

Thanks to everyone who participated in development, translation or the gitian build process,

Wladimir
NxtChg | 1 Jul 10:45 2015

Bitcoin governance

(sorry for the long post, I tried)

I've been thinking about how we could build an effective Bitcoin governance, but couldn't come up with
anything remotely plausible.

It seems we might go a different way, though, with Core and XT continuing to co-exist in parallel, mostly in a
compatible state, out of the need that "there can be only one".

Both having the same technical protocol, but different people, structure, processes and political
standing; serving as a kind of two-party system and keeping each other in check.

Their respective power will be determined by the number of Core vs XT nodes running and people/businesses
on board. They will have to negotiate any significant change at the risk of yet another full fork.

And occasionally the full forks will still happen and the minority will have to concede and change their
protocol to match the winning side.

Can there be any other way? Can you really control a decentralized system with a centralized governance,
like Core Devs or TBF?

----

In this view, what's happening is a step _towards_ decentralization, not away from it. It proves that
Bitcoin is indeed a decentralized system and that a minority cannot impose its will.

For the sides to agree now would actually be a bad thing, because that would mean kicking the governance
problem down the road.

And we _need_ to go through this painful split at least once. The block size issue is perfect: controversial
enough to push the split, but not so controversial that one side couldn't win.
(Continue reading)

Michael Naber | 1 Jul 09:15 2015

Reaching consensus on policy to continually increase block size limit as hardware improves, and a few other critical issues

This is great: Adam agrees that we should scale the block size limit discretionarily upward within the limits of technology, and continually so as hardware improves. Peter and others: What stands in the way of broader consensus on this?


We also agree on a lot of other important things:
-- block size is not a free variable
-- there are trade-offs between node requirements and block size
-- those trade-offs have impacts on decentralization
-- it is important to keep decentralization strong
-- computing technology is currently not easily capable of running a global transaction network where every transaction is broadcast to every node
-- we may need some solution (perhaps lightning / hub and spoke / other things) that can help with this

We likely also agree that:
-- whatever that solution may be, we want bitcoin to be the "hub" / core of it
-- this hub needs to exhibit the characteristic of globally aware global consensus, where every node knows about (awareness) and agrees on (consensus) every transaction
-- Critically, the Bitcoin Core Goal: the goal of Bitcoin Core is to build the "best" globally aware global consensus network, recognizing there are complex tradeoffs in doing this.


There are a few important things we still don't agree on though. Our disagreement on these things is causing us to have trouble making progress meeting the goal of Bitcoin Core. It is critical we address the following points of disagreement. Please help get agreement on these issues below by sharing your thoughts:

1) Some believe that fees, and therefore hash-rate, will be kept high by limiting capacity, and that we need to limit capacity to have a "healthy fee market".

Think of the airplane analogy: If some day technology exists to ship a hundred million people (transactions) on a plane (block) then do you really want to fight to outlaw those planes? Airlines are regulated so they have to pay to screen each passenger to a minimum standard, so even if the plane has unlimited capacity, they still have to pay to meet minimum security for each passenger. 

Just like we can set the block limit, so can we "regulate the airline security requirements" and set a minimum fee size for the sake of security. If technology allows running 100,000 transactions per second in 25 years, and we set the minimum fee size to one penny, then each block is worth a minimum of $600,000. Miners should be ok with that and so should everyone else.
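Spelling out the arithmetic behind that figure, using the post's own assumptions (100,000 transactions per second, ten-minute blocks, a one-penny minimum fee):

    // Worked version of the figure above, using the assumptions stated
    // in the text; none of these numbers are predictions.
    #include <cstdio>

    int main() {
        const double tx_per_second  = 100000.0; // hypothetical future throughput
        const double secs_per_block = 600.0;    // ten-minute blocks
        const double min_fee_usd    = 0.01;     // "one penny" minimum fee

        double tx_per_block   = tx_per_second * secs_per_block;  // 60,000,000
        double fees_per_block = tx_per_block * min_fee_usd;      // $600,000

        printf("transactions per block : %.0f\n", tx_per_block);
        printf("minimum fees per block : $%.0f\n", fees_per_block);
        return 0;
    }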

2) Some believe that it is better for (a) network reliability and (b) validation of transaction integrity, to have every user run a "full node" in order to use Bitcoin Core.

I don't agree with this. I'll break it into two pieces: network reliability and transaction integrity.

Network Reliability

Imagine you're setting up an email server for a big company. You decide to set up a main server, and two fail-over servers. Somebody says that they're really concerned about reliability and asks you to add another couple fail-over servers. So you agree. But at some point there's limited benefit to adding more servers: and there's real cost -- all those servers need to keep in sync with one another, and they need to be maintained, etc. And there's limited return: how likely is it really that all those servers are going to go down?

Bitcoin is obviously different from corporate email servers. In one sense, you've got miners and volunteer nodes rather than centrally managed ones, so nodes are much more likely to go down. But at the end of the day, is our up-time really going to be that much better when you have a million nodes versus a few thousand? 

Cloud storage copies your data a half dozen times to a few different data centers. But they don't copy it a half a million times. At some point the added redundancy doesn't matter for reliability. We just don't need millions of nodes to participate in a broadcast network to ensure network reliability.

Transaction Integrity

Think of open source software: you trust it because you know it can be audited easily, but you probably don't take the time to audit yourself every piece of open source software you use. And so it is with Bitcoin: People need to be able to easily validate the blockchain, but they don't need to be able to validate it every time they use it, and they certainly don't need to validate it when using Bitcoin on their Apple watches.

If I can lease a server in a data center for a few hours at fifty cents an hour to validate the block chain, then the total cost for me to independently validate the blockchain is just a couple dollars. Compare that to my cost to independently validate other parts of the system -- like the source code! Where's the real cost here?

If the goal of decentralization is to ensure transaction integrity and network reliability, then we just don't need lots of nodes or every user running a node to meet that goal. If the goal of decentralization is something else: what is it?

3) Some believe that we should make Bitcoin Core run as high-memory, server-grade software rather than software for people's desktops.

I think this is a great idea. 

The meaningful impact on the goals of decentralization from limiting which hardware nodes can run on will be minimal compared with the huge gains in capacity. Why does increasing the capacity of Bitcoin Core matter when we can "increase capacity" by moving to hub and spoke / lightning? Maybe we should ask why growing more apples matters if we can grow more oranges instead.

Hub and spoke and lightning are useful means of making lower cost transactions, but they're not the same as Bitcoin Core. Stick to the goal: the goal of Bitcoin Core is to build the "best" globally aware global consensus network, recognizing there are complex tradeoffs in doing this.

Hub and spoke and lightning could be great when you want lower-cost fees and don't really care about global awareness. Poker chips are great when you're in a casino. We don't talk about lightning networks to the guy who designs poker chips, and we shouldn't be talking about them to the guy who builds globally aware consensus networks either. 

Do people even want increased capacity when they can use hub and spoke / lightning? If you think they might be willing to pay $600,000 every ten minutes for it (see above) then yes. Increase capacity, and let the market decide if that capacity gets used.


On Tue, Jun 30, 2015 at 3:54 PM, Adam Back <adam <at> cypherspace.org> wrote:
Not that I'm arguing against scaling within tech limits - I agree we
can and should - but note block-size is not a free variable.  The
system is a balance of factors, interests and incentives.

As Greg said here
https://www.reddit.com/r/Bitcoin/comments/3b0593/to_fork_or_not_to_fork/cshphic?context=3
there are multiple things we should usefully do with increased
bandwidth:

a) improve decentralisation and hence security/policy
neutrality/fungibility (which is quite weak right now by a number of
measures)
b) improve privacy (privacy features tend to consume bandwidth, eg see
the Confidential Transactions feature) or more incremental features.
c) increase throughput

I think some of the within tech limits bandwidth should be
pre-allocated to decentralisation improvements given a) above.

And I think that we should also see work to improve decentralisation
with better pooling protocols that people are working on, to remove
some of the artificial centralisation in the system.

Secondly on the interests and incentives - miners also play an
important part of the ecosystem and have gone through some lean times,
they may not be overjoyed to hear a plan to just whack the block-size
up to 8MB.  While it's true (within some limits) that miners could
collectively keep blocks smaller, there is the ongoing reality that
someone else can break ranks and take any fee, however de minimis,
if there is a huge excess of space relative to current demand and
drive fees to zero for a few years.  A major thing even preserving
fees is wallet defaults, which could be overridden (plus protocol
velocity/fee limits).

I think solutions that see growth scale more smoothly - like Jeff
Garzik's and Greg Maxwell's and Gavin Andresen's (though Gavin's
starts with a step) are far less likely to create perverse unforeseen
side-effects.  Well we can foresee this particular effect, but the
market and game theory can surprise you so I think you generally want
the game-theory & market effects to operate within some more smoothly
changing caps, with some user or miner mutual control of the cap.

So to be concrete here's some hypotheticals (unvalidated numbers):

a) X MB cap with miner policy limits (simple, lasts a while)
b) starting at 1MB and growing to 2*X MB cap with 10%/year growth
limiter + policy limits
c) starting at 1MB and growing to 3*X MB cap with 15%/year growth
limiter + Jeff Garzik's miner vote.
d) starting at 1MB and growing to 4*X MB cap with 20%/year growth
limiter + Greg Maxwell's flexcap

I think it would be good to see some tests of achievable network
bandwidth on a range of networks, but as an illustration say X is 2MB.
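As an illustration of how such growth limiters behave (a sketch only,
reusing the hypothetical numbers above with X = 2MB; nothing here is a
proposal), the effective limit in a given year is the compounded
growth from 1MB, clamped at the hard cap:

    // Sketch of the "start at 1MB, grow by r%/year, clamp at a hard cap"
    // schedules listed above, with X taken as 2MB purely for illustration.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    double CapAfterYears(double start_mb, double rate, double hard_cap_mb,
                         int years) {
        return std::min(start_mb * std::pow(1.0 + rate, years), hard_cap_mb);
    }

    int main() {
        const double X = 2.0;  // MB, illustrative
        const struct { double rate; double cap; } schedules[] = {
            {0.10, 2 * X},  // option (b): 10%/year up to 2*X
            {0.15, 3 * X},  // option (c): 15%/year up to 3*X
            {0.20, 4 * X},  // option (d): 20%/year up to 4*X
        };
        for (const auto& s : schedules) {
            printf("rate %.0f%%, cap %.0fMB:", s.rate * 100, s.cap);
            for (int y = 0; y <= 20; y += 5)
                printf("  y%-2d %.2fMB", y, CapAfterYears(1.0, s.rate, s.cap, y));
            printf("\n");
        }
        return 0;
    }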

Rationale being the weaker the signalling mechanism between users and
user demanded size (in most models communicated via miners), the more
risk something will go in an unforeseen direction and hence the lower
the cap and more conservative the growth curve.

15% growth limiter is not Nielsen's law by intent.  Akamai have data
on what they serve, and it's more like 15% per annum, but very
variable by country
http://www.akamai.com/stateoftheinternet/soti-visualizations.html#stoi-graph
CISCO expect home DSL to double in 5 years
(http://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/VNI_Hyperconnectivity_WP.html
), which is about the same number.

(Thanks to Rusty for data sources for 15% number).

This also supports the claim I have made a few times here, that it is
not realistic to support massive growth without algorithmic
improvement from Lightning like or extension-block like opt-in
systems.  People who are proposing that we ramp blocksizes to create
big headroom are I think from what has been said over time, often
without advertising it clearly, actually assuming and being ok with
the idea that full nodes move into data-centers period and small
business/power user validation becomes a thing of the distant past.
Further the aggressive auto-growth risks seeing that trend continuing
into higher tier data-centers with negative implications for
decentralisation.  The odd proponent seems OK with even that too.

Decentralisation is key to Bitcoin's security model, and its
differentiating properties.  I think those aggressive growth numbers
stray into the zone of losing efficiency.  By which I mean in
scalability or privacy systems if you make a trade-off too far, it
becomes time to re-assess what you're doing.  For example at that level
of centralisation, alternative designs are more network efficient,
while achieving the same effective (weak) decentralisation.  In
Bitcoin I see this as a strong argument not to push things to that
extreme: the core functionality must remain for Lightning and other
scaling approaches to remain secure by using Bitcoin as a secure
anchor.  If we heavily centralise and weaken the security of the main
Bitcoin chain, there remains nothing secure to build on.

Therefore I think it's more appropriate for high scale to rely on
lightning, or semi-centralised trade-offs being in the side-chain
model or similar, where the higher risk of centralisation is opt-in
and not exposed back (due to the security firewall) to the Bitcoin
network itself.

People who would like to try the higher-tier data-center,
high-bandwidth throughput route should in my opinion run that
experiment as a layer 2 side-chain or analogous.  There are a few ways
to do that.  And it would be appropriate to my mind that we discuss
them here also.

An experiment like that could run in parallel with lightning, maybe it
could be done faster, or offer different trade-offs, so could be an
interesting and useful thing to see work on.

> On Tue, Jun 30, 2015 at 12:25 PM, Peter Todd <pete <at> petertodd.org> wrote:
>> Which of course raises another issue: if that was the plan, then all you
>> can do is double capacity, with no clear way to scaling beyond that.
>> Why bother?

A secondary function can be market signalling - market evidence that
throughput can increase, and that there is a technical process that is
effectively working on it.  While people may not all understand the
trade-offs and decentralisation work that should happen in parallel,
nor the Lightning protocol's expected properties - they can appreciate
perceived progress and an evidently functioning process.  Kind of a
weak rationale, from a purely technical perspective, but it may have some
value, and is certainly less risky than a unilateral fork.

As I recall Gavin has said things about this area before also
(demonstrate throughput progress to the market).

Another factor that people have said, which I think I agree with
fairly much is that if we can choose something conservative that there
is wide-spread support for, it can be safer to do it with moderate
lead time.  Then if there is an implied 3-6mo lead time we are maybe
projecting ahead a bit further on block-size utilisation.  Of course
the risk is we overshoot demand but there probably should be some
balance between that risk and the risk of doing a more rushed change
that requires system wide upgrade of all non-SPV software, where
stragglers risk losing money.

As well as scaling block-size within tech limits, we should include a
commitment to improve decentralisation, and I think any proposal
should be reasonably well analysed in terms of bandwidth assumptions
and game-theory.  eg In IETF documents they have a security
considerations section, and sometimes a privacy section.  In BIPs
maybe we need a security, privacy and decentralisation/fungibility
section.

Adam

NB: some new list participants may not be aware that miners are
imposing local policy limits (eg at 750kB), that a 250kB policy
existed in the past, and that those limits saw utilisation and were
unilaterally and unevenly increased.  I'm not sure if anyone has a clear
picture of what limits are imposed by hash-rate even today.  That's
why Pieter posed the question - are we already at the policy limit -
maybe the blocks we're seeing are closely tracking policy limits, if
someone mapped that and asked miners by hash-rate etc.

On 30 June 2015 at 18:35, Michael Naber <mickeybob <at> gmail.com> wrote:
> Re: Why bother doubling capacity? So that we could have 2x more network
> participants of course.
>
> Re: No clear way to scaling beyond that: Computers are getting more capable
> aren't they? We'll increase capacity along with hardware.
>
> It's a good thing to scale the network if technology permits it. How can you
> argue with that?

Peter Grigor | 1 Jul 01:41 2015

A possible solution for the block size limit: Detection and rejection of bloated blocks by full nodes.

The block size debate centers around one concern it seems. To wit: if block size is increased malicious miners may publish unreasonably large "bloated" blocks. The way a miner would do this is to generate a plethora of private, non-propagated transactions and include these in the block they solve.

It seems to me that these bloated blocks could easily be detected by other miners and full nodes: they will contain a very high percentage of transactions that aren't found in the nodes' own memory pools. This signature can be exploited to allow nodes to reject these bloated blocks. The key here is that any block a malicious miner publishes bloated with his own transactions would contain a ridiculous number of transactions that *absolutely no other full node has in its mempool*.

Simply put, a threshold would be set by nodes on the number of non-mempool transactions allowed in a solved block (say, maybe, 50% -- I really don't know what it should be). If a block is published which contains more than this threshold of non-mempool transactions then it is rejected.
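A hedged sketch of that rejection rule (the 50% figure, the container types and the function name are placeholders for discussion; a real version would live in block validation and would need care around recently evicted or conflicting mempool entries):

    // Illustrative sketch of the rule described above: count how many of
    // a block's transactions are unknown to the local mempool and reject
    // the block if that fraction exceeds a threshold.  Not a tested
    // policy; types and threshold are placeholders.
    #include <cstdio>
    #include <set>
    #include <string>
    #include <vector>

    bool AcceptBlockByMempoolOverlap(const std::vector<std::string>& block_txids,
                                     const std::set<std::string>& mempool_txids,
                                     double max_unknown_fraction = 0.5) {
        if (block_txids.empty()) return true;
        size_t unknown = 0;
        for (const auto& txid : block_txids)
            if (mempool_txids.count(txid) == 0)
                ++unknown;
        return (double)unknown / block_txids.size() <= max_unknown_fraction;
    }

    int main() {
        std::set<std::string> mempool = {"a", "b", "c", "d"};
        std::vector<std::string> typical = {"coinbase", "a", "b", "c"};
        std::vector<std::string> bloated = {"coinbase", "x1", "x2", "x3", "a"};

        printf("typical block accepted: %s\n",
               AcceptBlockByMempoolOverlap(typical, mempool) ? "yes" : "no");
        printf("bloated block accepted: %s\n",
               AcceptBlockByMempoolOverlap(bloated, mempool) ? "yes" : "no");
        return 0;
    }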

If this idea works the block size limitation could be completely removed.
Adam Back | 30 Jun 21:54 2015

block-size tradeoffs & hypothetical alternatives (Re: Block size increase oppositionists: please clearly define what you need done to increase block size to a static 8MB, and help do it)

Not that I'm arguing against scaling within tech limits - I agree we
can and should - but note block-size is not a free variable.  The
system is a balance of factors, interests and incentives.

As Greg said here
https://www.reddit.com/r/Bitcoin/comments/3b0593/to_fork_or_not_to_fork/cshphic?context=3
there are multiple things we should usefully do with increased
bandwidth:

a) improve decentralisation and hence security/policy
neutrality/fungibility (which is quite weak right now by a number of
measures)
b) improve privacy (privacy features tend to consume bandwidth, eg see
the Confidential Transactions feature) or more incremental features.
c) increase throughput

I think some of the within tech limits bandwidth should be
pre-allocated to decentralisation improvements given a) above.

And I think that we should also see work to improve decentralisation
with better pooling protocols that people are working on, to remove
some of the artificial centralisation in the system.

Secondly on the interests and incentives - miners also play an
important part of the ecosystem and have gone through some lean times,
they may not be overjoyed to hear a plan to just whack the block-size
up to 8MB.  While it's true (within some limits) that miners could
collectively keep blocks smaller, there is the ongoing reality that
someone else can break ranks and take any fee, however de minimis,
if there is a huge excess of space relative to current demand and
drive fees to zero for a few years.  A major thing even preserving
fees is wallet defaults, which could be overridden (plus protocol
velocity/fee limits).

I think solutions that see growth scale more smoothly - like Jeff
Garzik's and Greg Maxwell's and Gavin Andresen's (though Gavin's
starts with a step) are far less likely to create perverse unforeseen
side-effects.  Well we can foresee this particular effect, but the
market and game theory can surprise you so I think you generally want
the game-theory & market effects to operate within some more smoothly
changing caps, with some user or miner mutual control of the cap.

So to be concrete here's some hypotheticals (unvalidated numbers):

a) X MB cap with miner policy limits (simple, lasts a while)
b) starting at 1MB and growing to 2*X MB cap with 10%/year growth
limiter + policy limits
c) starting at 1MB and growing to 3*X MB cap with 15%/year growth
limiter + Jeff Garzik's miner vote.
d) starting at 1MB and growing to 4*X MB cap with 20%/year growth
limiter + Greg Maxwell's flexcap

I think it would be good to see some tests of achievable network
bandwidth on a range of networks, but as an illustration say X is 2MB.

Rationale being the weaker the signalling mechanism between users and
user demanded size (in most models communicated via miners), the more
risk something will go in an unforeseen direction and hence the lower
the cap and more conservative the growth curve.

15% growth limiter is not Nielsen's law by intent.  Akamai have data
on what they serve, and it's more like 15% per annum, but very
variable by country
http://www.akamai.com/stateoftheinternet/soti-visualizations.html#stoi-graph
CISCO expect home DSL to double in 5 years
(http://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/VNI_Hyperconnectivity_WP.html
), which is about the same number.

(Thanks to Rusty for data sources for 15% number).
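(Quick check that those two data points are indeed "about the same
number": 15% compounded over five years is roughly a doubling.)

    // 15%/year compounded for five years vs "double in 5 years".
    #include <cmath>
    #include <cstdio>

    int main() {
        printf("1.15^5 = %.2f (doubling would be 2.00)\n", std::pow(1.15, 5));
        return 0;
    }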

This also supports the claim I have made a few times here, that it is
not realistic to support massive growth without algorithmic
improvement from Lightning like or extension-block like opt-in
systems.  People who are proposing that we ramp blocksizes to create
big headroom are I think from what has been said over time, often
without advertising it clearly, actually assuming and being ok with
the idea that full nodes move into data-centers period and small
business/power user validation becomes a thing of the distant past.
Further the aggressive auto-growth risks seeing that trend continuing
into higher tier data-centers with negative implications for
decentralisation.  The odd proponent seems OK with even that too.

Decentralisation is key to Bitcoin's security model, and its
differentiating properties.  I think those aggressive growth numbers
stray into the zone of losing efficiency.  By which I mean in
scalability or privacy systems if you make a trade-off too far, it
becomes time to re-assess what you're doing.  For example at that level
of centralisation, alternative designs are more network efficient,
while achieving the same effective (weak) decentralisation.  In
Bitcoin I see this as a strong argument not to push things to that
extreme: the core functionality must remain for Lightning and other
scaling approaches to remain secure by using Bitcoin as a secure
anchor.  If we heavily centralise and weaken the security of the main
Bitcoin chain, there remains nothing secure to build on.

Therefore I think it's more appropriate for high scale to rely on
lightning, or semi-centralised trade-offs being in the side-chain
model or similar, where the higher risk of centralisation is opt-in
and not exposed back (due to the security firewall) to the Bitcoin
network itself.

People who would like to try the higher-tier data-center,
high-bandwidth throughput route should in my opinion run that
experiment as a layer 2 side-chain or analogous.  There are a few ways
to do that.  And it would be appropriate to my mind that we discuss
them here also.

An experiment like that could run in parallel with lightning, maybe it
could be done faster, or offer different trade-offs, so could be an
interesting and useful thing to see work on.

> On Tue, Jun 30, 2015 at 12:25 PM, Peter Todd <pete <at> petertodd.org> wrote:
>> Which of course raises another issue: if that was the plan, then all you
>> can do is double capacity, with no clear way to scaling beyond that.
>> Why bother?

A secondary function can be market signalling - market evidence that
throughput can increase, and that there is a technical process that is
effectively working on it.  While people may not all understand the
trade-offs and decentralisation work that should happen in parallel,
nor the Lightning protocol's expected properties - they can appreciate
perceived progress and an evidently functioning process.  Kind of a
weak rationale, from a purely technical perspective, but it may have some
value, and is certainly less risky than a unilateral fork.

As I recall Gavin has said things about this area before also
(demonstrate throughput progress to the market).

Another factor that people have said, which I think I agree with
fairly much is that if we can choose something conservative that there
is wide-spread support for, it can be safer to do it with moderate
lead time.  Then if there is an implied 3-6mo lead time we are maybe
projecting ahead a bit further on block-size utilisation.  Of course
the risk is we overshoot demand but there probably should be some
balance between that risk and the risk of doing a more rushed change
that requires system wide upgrade of all non-SPV software, where
stragglers risk losing money.

As well as scaling block-size within tech limits, we should include a
commitment to improve decentralisation, and I think any proposal
should be reasonably well analysed in terms of bandwidth assumptions
and game-theory.  eg In IETF documents they have a security
considerations section, and sometimes a privacy section.  In BIPs
maybe we need a security, privacy and decentralisation/fungibility
section.

Adam

NB: some new list participants may not be aware that miners are
imposing local policy limits (eg at 750kB), that a 250kB policy
existed in the past, and that those limits saw utilisation and were
unilaterally and unevenly increased.  I'm not sure if anyone has a clear
picture of what limits are imposed by hash-rate even today.  That's
why Pieter posed the question - are we already at the policy limit -
maybe the blocks we're seeing are closely tracking policy limits, if
someone mapped that and asked miners by hash-rate etc.

On 30 June 2015 at 18:35, Michael Naber <mickeybob <at> gmail.com> wrote:
> Re: Why bother doubling capacity? So that we could have 2x more network
> participants of course.
>
> Re: No clear way to scaling beyond that: Computers are getting more capable
> aren't they? We'll increase capacity along with hardware.
>
> It's a good thing to scale the network if technology permits it. How can you
> argue with that?
Justus Ranvier | 30 Jun 19:53 2015

RFC: HD Bitmessage address derivation based on BIP-43

Monetas has developed a Bitmessage address derivation method from an
HD seed based on BIP-43.

https://github.com/monetas/bips/blob/bitmessage/bip-bm01.mediawiki

We're proposing this as a BIP per the BIP-43 recommendation in order
to reserve a purpose code.
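(For readers unfamiliar with BIP-43: the purpose code is the first
hardened index in an HD derivation path, i.e. m / purpose' / ..., so
reserving one gives these keys their own subtree under an HD seed.
The sketch below only shows how such a path is composed; the purpose
index used is a placeholder, not the value proposed in the draft.)

    // Illustration of the BIP-43 path layout only; the purpose index is
    // a placeholder, not the value reserved by the linked draft.
    #include <cstdio>
    #include <string>

    std::string Bip43Path(unsigned int purpose, unsigned int account) {
        // BIP-43: m / purpose' / ...  (' marks hardened derivation)
        char buf[64];
        snprintf(buf, sizeof(buf), "m/%u'/%u'", purpose, account);
        return std::string(buf);
    }

    int main() {
        const unsigned int kPlaceholderPurpose = 99999;  // placeholder only
        printf("%s\n", Bip43Path(kPlaceholderPurpose, 0).c_str());
        return 0;
    }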
