[BIP Draft] Datastream compression of Blocks and Transactions

<at> gmaxwell, BIP Editor, and the Bitcoin Dev Community,

After several weeks of experimenting and testing with various
compression libraries, I think there is enough evidence to show that
compressing blocks and transactions is not only beneficial in reducing
network bandwidth but also provides a small performance boost when
there is latency on the network.

The following is a BIP Draft document for your review.
(The alignment of the columns in the tables doesn't come out looking
right in this email, but if you cut and paste them into a text document
they are just fine.)

  BIP: ?
  Title: Datastream compression of Blocks and Tx's
  Author: Peter Tschipper <peter.tschipper <at> gmail.com>
  Status: Draft
  Type: Standards Track
  Created: 2015-11-30


To compress blocks and transactions, and to concatenate them together
when possible, before sending.



Use CPFP as consensus critical for Full-RBF

(I haven't been following this development recently so apologies in advance if I've made assumptions about RBF)

If you made CPFP consensus critical for all Full-RBF transactions, RBF should be safer to use. I see RBF as a necessity for users to fix mistakes (and not for transaction prioritisation), but we can't know for sure whether miners are playing with this policy fairly or not. It is hard to tell a legitimate RBF from a malicious one, but if the recipient signs off on the one they know about using CPFP, there should be no problems. This might depend on the CPFP implementation, because you'll need a way for the transaction to mark which output is a change address and which is a payment, to prevent the sender from signing off his own txns. (This might be bad for privacy, but IMO a lot safer than allowing RBF double spending sprees... If you value privacy then don't use RBF?) Or maybe let them sign it off but make all outputs sign off somehow.

Copy/Paste from my reddit post:


Going to chime in with my opinion: opt-in RBF eliminates the trust required with miners. You don't know if they're secretly running RBF right now anyway. Whether Peter Todd invented this is irrelevant; it was going to happen either way, with good intentions or with malice, so better to develop it with good intentions.

Perhaps the solution to this problem is simple. Allow Full-RBF up to the point where a recipient creates a CPFP transaction. Any transaction with full RBF that hasn't been signed off with a CPFP cannot go into a block, and this can become a consensus rule rather than local policy thanks to the opt-in flag that's inside transactions.

> P.S. (When I wrote this, I'm actually not sure what the flag looks like and am just guessing it can be used this way. I'm not familiar with the implementation.)

CPFP is needed so that merchants can bear the burden of fees (double bandwidth costs aside, and frankly if RBF is allowed bandwidth is going to increase regardless anyway). That's always the way I've seen its purpose. And this makes RBF much safer to use by combining the two.


Test Results for: Datastream Compression of Blocks and Tx's

Hi All,

Here are some final results of testing with the reference implementation for compressing blocks and transactions. This implementation also concatenates blocks and transactions when possible, so you'll see data sizes in the 1-2MB range.

Results below show the time it takes to sync the first part of the blockchain, comparing Zlib to the LZOx library.  (LZOf was also tried but wasn't found to be as good as LZOx.)  The following shows tests run with and without latency.  With latency on the network, every compression setting performed much better than running uncompressed.

I don't think it's entirely obvious which is better, Zlib or LZO.  Although I prefer the higher compression of Zlib, overall I would have to give the edge to LZO.  With LZO we have the fastest, most scalable option at the lowest compression setting, which will be a boost in performance for users that want performance over compression, and at the high end LZO provides decent compression that approaches Zlib (although at a higher cost), good for those that want to save more bandwidth.

Uncompressed (60ms)  Zlib-1 (60ms)  Zlib-6 (60ms)  LZOx-1 (60ms)  LZOx-999 (60ms)
219                  299            296            294            291
432                  568            565            558            548
652                  835            836            819            811
866                  1106           1107           1081           1071
1082                 1372           1381           1341           1333
1309                 1644           1654           1605           1600
1535                 1917           1936           1873           1875
1762                 2191           2210           2141           2141
1992                 2463           2486           2411           2411
2257                 2748           2780           2694           2697
2627                 3034           3076           2970           2983
3226                 3416           3397           3266           3302
4010                 3983           3773           3625           3703
4914                 4503           4292           4127           4287
5806                 4928           4719           4529           4821
6674                 5249           5164           4840           5314
7563                 5603           5669           5289           6002
8477                 6054           6268           5858           6638
9843                 7085           7278           6868           7679
11338                8215           8433           8044           8795

These results are from testing on a high-speed wireless LAN (very small latency).

Results in seconds

Num blocks sync'd  Uncompressed  Zlib-1  Zlib-6  LZOx-1  LZOx-999
10000              255           232     233     231     257
20000              464           414     420     407     453
30000              677           594     611     585     650
40000              887           782     795     760     849
50000              1099          961     977     933     1048
60000              1310          1145    1167    1110    1259
70000              1512          1330    1362    1291    1470
80000              1714          1519    1552    1469    1679
90000              1917          1707    1747    1650    1882
100000             2122          1905    1950    1843    2111
110000             2333          2107    2151    2038    2329
120000             2560          2333    2376    2256    2580
130000             2835          2656    2679    2558    2921
140000             3274          3259    3161    3051    3466
150000             3662          3793    3547    3440    3919
160000             4040          4172    3937    3767    4416
170000             4425          4625    4379    4215    4958
180000             4860          5149    4895    4781    5560
190000             5855          6160    5898    5805    6557
200000             7004          7234    7051    6983    7770

The following shows the compression ratio achieved for various sizes of data.  Zlib is the clear
winner for compressibility, with LZOx-999 coming close, but at a cost.

Range        Zlib-1 cmp%  Zlib-6 cmp%  LZOx-1 cmp%  LZOx-999 cmp%
0-250b       12.44        12.86        10.79        14.34
250-500b     19.33        12.97        10.34        11.11
600-700b     16.72        n/a          12.91        17.25
700-800b     6.37         7.65         4.83         8.07
900-1KB      6.54         6.95         5.64         7.9
1KB-10KB     25.08        25.65        21.21        22.65
10KB-100KB   19.77        21.57        14.37        19.02
100KB-200KB  21.49        23.56        15.37        21.55
200KB-300KB  23.66        24.18        16.91        22.76
300KB-400KB  23.4         23.7         16.5         21.38
400KB-500KB  24.6         24.85        17.56        22.43
500KB-600KB  25.51        26.55        18.51        23.4
600KB-700KB  27.25        28.41        19.91        25.46
700KB-800KB  27.58        29.18        20.26        27.17
800KB-900KB  27           29.11        20           27.4
900KB-1MB    28.19        29.38        21.15        26.43
1MB-2MB      27.41        29.46        21.33        27.73
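
For anyone wanting to reproduce the compression-ratio side of these measurements, below is a minimal sketch using Python's standard zlib module.  The LZO columns would need third-party bindings (e.g. python-lzo), so only Zlib is shown, and the sample messages are placeholders for serialized blocks and transactions captured off the wire.

    import zlib

    def compression_pct(data: bytes, level: int) -> float:
        """Percent reduction in size achieved by zlib at the given level."""
        compressed = zlib.compress(data, level)
        return (1 - len(compressed) / len(data)) * 100

    # Placeholder stand-ins for serialized blocks/transactions captured off the wire.
    messages = [b"\x01\x00\x00\x00" * 64, b"\x02\x00\x00\x00" * 4096]

    for msg in messages:
        print(len(msg), "bytes:",
              "Zlib-1 %.2f%%" % compression_pct(msg, 1),
              "Zlib-6 %.2f%%" % compression_pct(msg, 6))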

The following shows the time in seconds to compress data of various sizes.  LZOx-1 is the
fastest, and as data sizes increase its compression time hardly increases at all.  It's interesting
to note that as compression ratios increase, LZOx-999 performs much worse than Zlib.  So LZO is faster
on the low end and slower (5 to 6 times slower) on the high end.

Range        Zlib-1 (s)  Zlib-6 (s)  LZOx-1 (s)  LZOx-999 (s)
0-250b       0.001       0           0           0
250-500b     0           0           0           0.001
500-1KB      0           0           0           0.001
1KB-10KB     0.001       0.001       0           0.002
10KB-100KB   0.004       0.006       0.001       0.017
100KB-200KB  0.012       0.017       0.002       0.054
200KB-300KB  0.018       0.024       0.003       0.087
300KB-400KB  0.022       0.03        0.003       0.121
400KB-500KB  0.027       0.037       0.004       0.151
500KB-600KB  0.031       0.044       0.004       0.184
600KB-700KB  0.035       0.051       0.006       0.211
700KB-800KB  0.039       0.057       0.006       0.243
800KB-900KB  0.045       0.064       0.006       0.27
900KB-1MB    0.049       0.072       0.006       0.307
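
The timing side can be measured along the same lines.  Again this is a zlib-only sketch with placeholder data; random data compresses far worse than real blocks, so only the relative timings between levels are meaningful.

    import os
    import time
    import zlib

    def time_compress(data: bytes, level: int, repeats: int = 10) -> float:
        """Average wall-clock seconds for one zlib.compress() call at the given level."""
        start = time.perf_counter()
        for _ in range(repeats):
            zlib.compress(data, level)
        return (time.perf_counter() - start) / repeats

    payload = os.urandom(512 * 1024)   # placeholder ~500KB message
    for level in (1, 6):
        print("Zlib-%d: %.3f s" % (level, time_compress(payload, level)))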



Prior discussion:

Greatly improve security for payment networks like the 'Lightning
Network' (LN) [1]


To improve privacy while using a payment network, it is possible to
use onion-routing to make a payment to someone. In this context,
onion-routing means encrypting the data about subsequent hops in a way
that each node only knows where it received a payment from and the
direct next node it should send the payment to. This way we can route
a payment over N nodes, and none of these will know

(1) at which position it is within the route (first, middle, last?)

(2) which node initially issued the payment (payer)

(3) which node consumes the payment (payee).

However, given the way payments in LN work, each payment is uniquely
identifiable by a preimage-hash pair R-H. H is included in the output
script of the commit transaction, such that the payment is enforceable
if you ever get to know the preimage R.

In a payment network each node makes a promise to pay the next node,
if they can produce R. They can pass on the payment, as they know that
they can enforce the payment from a previous node using the same
preimage R. This severely damages privacy, as it lowers the number of
nodes an attacker has to control to gain information about payer and
payee.

The problem is inherent in using RIPEMD-160 for the preimage-hash
construction. For any cryptographic hash function it is fundamentally
infeasible to correlate preimage and hash in such a way that

F1(R1) = R2 and
F2(H1) = H2, while
SHA(R1) = H1 and SHA(R2) = H2.

In other words, I cannot give a node H1 and H2 and ask it to receive
my payment using H1, but pass it on using H2, as the node has no way
of verifying it can produce R1 out of the R2 it will receive. If it
cannot produce R1, it is unable to enforce my part of the contract.


While the above functions are practically impossible to construct for a
cryptographic hash function, they are trivial when R and H are an EC
private/public key pair. The original sender can make a payment using
H1 and pass on a random number M1, such that the node can calculate a
new public key

H2 = H1 + M1.

When he later receives the private key R2, he can construct

R1 = R2 - M1

to be able to enforce the other payment. M1 can be passed on in the
onion object, such that each node can only see the M for that hop.
Furthermore, it is infeasible to brute-force, given a sufficiently
large number M.
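
For concreteness, here is a minimal sketch of that blinding step, assuming the third-party `ecdsa` Python package for secp256k1 point arithmetic.  Variable names follow the text; the shorthand H2 = H1 + M1 is written out explicitly as H2 = H1 + M1*G.

    # A minimal sketch of the blinding step above, assuming the third-party
    # `ecdsa` package (pip install ecdsa) for secp256k1 point arithmetic.
    import secrets
    from ecdsa import SECP256k1

    G = SECP256k1.generator
    n = SECP256k1.order

    R1 = secrets.randbelow(n - 1) + 1     # private key enforcing the first payment
    H1 = G * R1                           # its public key

    M1 = secrets.randbelow(n - 1) + 1     # blinding number passed on in the onion
    H2 = H1 + G * M1                      # next hop's public key, derived from public data only

    R2 = (R1 + M1) % n                    # private key matching H2, revealed on settlement
    P2 = G * R2
    assert P2.x() == H2.x() and P2.y() == H2.y()

    R1_recovered = (R2 - M1) % n          # previous hop recovers its own preimage
    assert R1_recovered == R1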



Suppose E wants to receive a payment from A, payable to H. (If A
can produce R, it can be used as proof that he made the payment and E
received it.)

A decides to route the payment over the nodes B, C and D. A uses four
numbers M1...M4 to calculate H1...H4. The following payments then take
place:
A->B using H4
B->C using H3
C->D using H2
D->E using H1.

When E receives H1, he can use the attached M1 to calculate R1 from it.
The chain will resolve itself, and A is able to calculate R using
M1...M4. It also means that all privacy is at the sole discretion of
the sender, and that not even the original pair R/H is known to any of
the nodes.

To improve privacy, E could also be a rendezvous point chosen by the
real receiver of the payment; similar constructions are possible in
that direction as well.



Currently it is difficult to enforce a payment to a private-public key
pair on the blockchain. While there exists OP_HASH160 OP_EQUAL to
enforce a payment to a hash, the same does not hold true for EC keys.
To make the above possible we would therefore need some easy way to
verify, in script, that a private key matches a given public key. This
could be done by using one of the unused OP_NOP codes, which will verify

<private key> <public key> OP_CHECKPRIVPUBPAIR

and fail if these are not correlated, or act as a NOP otherwise. It would
need OP_2DROP afterwards. This would allow deployment using a softfork.
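
In sketch form, the check the opcode would perform amounts to the following (again assuming the `ecdsa` package; the function name and semantics are this proposal's, nothing here is deployed):

    # Hypothetical semantics sketch of OP_CHECKPRIVPUBPAIR: succeed only when
    # the supplied private key is the discrete log of the supplied public key.
    from ecdsa import SECP256k1

    G = SECP256k1.generator

    def op_checkprivpubpair(priv: int, pub_x: int, pub_y: int) -> bool:
        point = G * priv
        return point.x() == pub_x and point.y() == pub_y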

As there are requests for all sorts of general crypto operations in
script, we can also introduce a new general OP_CRYPTO and prepend one
byte for the operation, so

0x02-0xff OP_CRYPTO = OP_NOP

to allow for extension at some later point.



In the attached discussion there are some constructions that would
allow breaking the signature scheme, but they are either very large in
script language or expensive to calculate. Given that block size is
a difficult topic already, it would not be beneficial to add 400B+
for each open payment in case one party breaches the contract (or
just disappears for a couple of days).

It is also possible to use a NIZKP - more specifically a SNARK - to
prove to one node that it is able to recover a preimage R1 = R2 XOR
M1, given only H1, H2 and M1. However, these are expensive to
calculate and experimental in their current state.


Gregory Maxwell, for pointing out that addition of M1 for EC points is much
less expensive
Pieter Wuille, for helping with general understanding of EC math.
Anthony Towns, for bringing up the issue and explaining SNARKs

Peter Todd via bitcoin-dev | 25 Nov 22:37 2015

Why sharding the blockchain is difficult


The following was originally posted to reddit; I was asked to repost it here:

In a system where everyone mostly trusts each other, sharding works great! You
just split up the blockchain the same way you'd shard a database, assigning
miners/validators a subset of the txid space. Transaction validation would
assume that if you don't have the history for an input yourself, that
history is valid. In a banking-like environment where there's a way to
conduct audits and punish those who lie, this could certainly be made to work.
(I myself have worked on and off on a scheme to do exactly that for a few
different clients: [Proofchains](https://github.com/proofchains))

But in a decentralized environment sharding is far, far, harder to
accomplish... There's an old idea we've been calling "fraud proofs", where you
design a system where for every way validation can fail, you can create a short
proof that part of the blockchain was invalid. Upon receiving that proof your
node would reject the invalid part of the chain and roll back the chain. In
fact, the original Satoshi whitepaper refers to fraud proofs, using the term
"alerts", and assumed SPV nodes would use them to get better guarantees they're
using a valid chain. (SPV as implemented by bitcoinj is sometimes referred to
as "non-validating SPV") The problem is, how do you guarantee that the fraud
will get detected? And How do you guarantee that fraud that is detected
actually gets propagated around the network? And if all that fails... then

The nightmare scenario in that kind of system is some miner successfully gets
away with fraud for awhile, possibly creating hundreds of millions of dollars
worth of bitcoins out of thin air. Those fake coins could easily "taint" a
significant fraction of the economy, making rollback impossible and shaking
faith in the value of the currency. Right now in Bitcoin this is pretty much
impossible because everyone can run a full node to validate the chain for
themselves, but in a sharded system that's far harder to guarantee.

Now, suppose we *can* guarantee validity. zk-SNARKS are basically a way of
mathematically proving that you ran a certain computer program on some data,
and that program returned true. *Recursive* zk-SNARKS are simply zk-SNARKS
where the program can also recursively evaluate that another zk-SNARK is true.
With this technology a miner could *prove* that the shard they're working on is
valid, solving the problem of fake coins. Unfortunately, zk-SNARKS are bleeding
edge crypto (if zerocoin had been deployed, the entire system would have been
destroyed by a recently found bug that allowed fake proofs to be created), and
recursive zk-SNARKS don't exist yet.

The closest thing I know of to recursive zk-SNARKS that actually does work
without "moon-math" is an idea I came up with for treechains called coin
history linearization. Basically, if you allow transactions to have multiple
inputs and outputs, proving that a given coin is valid requires the entire coin
history, which has quasi-exponential scaling - in the Bitcoin economy coins are
very quickly mixed such that all coins have pretty much all other coins in
their history.

Now suppose that rather than proving that all inputs are valid for a
transaction, what if you only had to prove that *one* was valid? This would
linearize the coin history as you only have to prove a single branch of the
transaction DAG, resulting in O(n) scaling. (with n <= total length of the
blockchain)

Let's assume Alice is trying to pay Bob with a transaction with two inputs each
of equal value. For each input she irrevocably records it as spent, permanently
committing that input's funds to Bob. (e.g. in an irrevocable ledger!) Next she
makes use of a random beacon - a source of publicly known random numbers that
no-one can influence - to choose which of the two inputs' coin histories she'll
give to Bob as proof that the transaction is real. (both the irrevocable ledger
and random beacon can be implemented with treechains, for example)

If Alice is being honest and both inputs are real, there's a 100% chance that
she'll be able to successfully convince Bob that the funds are real. Similarly,
if Alice is dishonest and neither input is real, it'll be impossible for her
to prove to Bob that the funds are real.

But what if one of the two inputs is real and the other is actually fake? Half
the time the transaction will succeed - the random beacon will select the real
input and Bob won't know that the other input is fake. However, half the time
the *fake* input will be selected, and Alice won't be able to prove anything.
Yet, the real input has irrevocably been spent anyway, destroying the funds! If
the process by which funds are spent really is irrevocable, and Alice has
absolutely no way to influence the random beacon, the two cases cancel out.
While she can get away with fraud, there's no economic benefit for her to do
so. On a macro level, this means that fraud won't result in inflation of the
currency. (in fact, we want a system that institutionalizes this so-called
"fraud" - creating false proofs is a great way to make your coins more private)
(FWIW the way zk-SNARKS actually work is similar to this simple linearization
scheme, but with a lot of very clever error correction math, and the hash of
the data itself as the random beacon)
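
A toy simulation of that expected-value argument, with the 50/50 beacon and equal-value inputs from the example above (everything else is illustrative):

    import random

    def expected_gain(trials: int = 100_000, value: float = 1.0) -> float:
        """Alice spends one real and one fake input of equal value to Bob;
        the random beacon picks which input's history she must prove."""
        gain = 0.0
        for _ in range(trials):
            gain -= value                # the real input is irrevocably spent
            if random.random() < 0.5:    # beacon picks the real input: fraud succeeds
                gain += 2 * value        # Bob credits Alice with both inputs' value
            # otherwise the fake input is picked, the proof fails, Alice gets nothing
        return gain / trials

    print(expected_gain())  # hovers around 0: the fraud creates no net gain or inflation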

An actual implementation would be extended to handle multiple transaction
inputs of different sizes by weighting the probability that an input will be
selected by its value - merkle-sum-trees work well for this. We still have the
problem that O(n) scaling kinda sucks; can we do better?

Yes! Remember that a genesis transaction output has no history - the coins are
created out of thin air and its validity is proven by the proof of work itself.
So every time you make a transaction that spends a genesis output you have a
chance of reducing the length of the coin validity proof back to zero. Better
yet, we can design a system where every transaction is associated with a bit of
proof-of-work, and thus every transaction has a chance of resetting the length
of the validity proof back to zero. In such a system you might do the PoW on a
per-transaction basis; you could outsource the task to miners with a special
output that only the miner can spend. Now we have O(1) scaling, with a k that
depends on the inflation rate. I'd have to dig up the calculations again, but
IIRC I sketched out a design for the above that resulted in something like 10MB
or 100MB coin validity proofs, assuming 1% inflation a year. (equally you can
describe that 1% inflation as a coin security tax) Certainly not small, but
compared to running a full node right now that's still a *huge* reduction in
storage space. (recursive zk-SNARKS might reduce that proof to something like
1kB of data)
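
The O(1) intuition is that the reset behaves like a geometric process.  A toy sketch, with a per-transaction reset probability standing in for the inflation/PoW rate (the 1% figure is purely illustrative):

    import random

    def average_proof_length(reset_prob: float = 0.01, txs: int = 200_000) -> float:
        """Each transaction resets the coin-history proof back to zero with
        probability reset_prob (e.g. it spends a freshly mined output);
        the average proof length settles around 1/reset_prob."""
        length, total = 0, 0
        for _ in range(txs):
            length = 0 if random.random() < reset_prob else length + 1
            total += length
        return total / txs

    print(average_proof_length())  # on the order of 1/0.01 = 100 transactions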

Regardless of whether you have lightweight zk-SNARKS, heavyweight linearized
coin history proofs, or something else entirely, the key advantage is that
validation can become entirely client side. Miners don't even need to care
whether or not their *own* blocks are "valid", let alone other miners' blocks.
Invalid transactions in the chain are just garbage data, which gets rejected by
wallet software as invalid. So long as the protocol itself  works and is
implemented correctly it's impossible for fraud to go undetected and destroy
the economy the way it can in a sharded system.

However we still have a problem: censorship. This one is pretty subtle, and
gets to the heart of how these systems actually work. How do you prove that a
coin has validly been spent? First, prove that it hasn't already been spent!
How do you do that if you don't have the blockchain data? You can't, and no
amount of fancy math can change that.

In Bitcoin if everyone runs full nodes censorship can't happen: you either have
the full blockchain and thus can spend your money and help mine new blocks, or
that alternate fork might as well not exist. SPV breaks this as it allows funds
to be spent without also having the ability to mine - with SPV a cartel of
miners can prevent anyone else from getting access to the blockchain data
required to mine, while still allowing commerce to happen. In reality, this
type of cartel would be more subtle, and can even happen by accident; just
delaying other miners getting blockchain data by a few seconds harms those
non-cartel miners' profitability, without being obvious censorship. Equally, so
long as the cartel has [>30% of hashing power it's profitable in the long run
for the cartel if this happens](http://www.mail-archive.com/bitcoin-development <at> lists.sourceforge.net/msg03200.html).

In sharded systems the "full node defense" doesn't work, at least directly. The
whole point is that not everyone has all the data, so you have to decide what
happens when it's not available.

Altcoins provide one model, albeit a pretty terrible one: taken as a whole you
can imagine the entire space of altcoins as a series of cryptocurrency shards
for moving funds around. The problem is each individual shard - each altcoin -
is weak and can be 51% attacked. Since they can be attacked so easily, if you
designed a system where funds could be moved from one shard to another through
coin history proofs every time a chain was 51% attacked and reorged you'd be
creating coins out of thin air, destroying digital scarcity and risking the
whole economy with uncontrolled inflation. You can instead design a system
where coins can't move between shards - basically what the altcoin space looks
like now - but that means actually paying someone on another "shard" requires
you to sell your coins and buy their coins - an inefficient and expensive
logistical headache. (there's a reason the Eurozone was created!)

If you want to transfer value between shards with coin history proofs, without
risking inflation, you need all the shards to share some type of global
consensus. This is the idea behind treechains: every part of the tree is linked
to a top-level timestamp chain, which means we have global consensus on the
contents of all chains, and thus spending a coin really is an immutable
one-time act.

Let's go into a bit more detail. So what is a coin in a treechains system?
First and foremost it's a *starting point* in some part of the tree, a specific
subchain. When Alice wants to prove to Bob that she spent a coin, giving it to
Bob, she inserts into that subchain the data that proves that someone *could
have* spent that coin - a valid signature and the hash of the transaction
output it was spending. But the actual proof that she gives to Bob isn't just
that spend data, but rather proof that all the blocks in that chain between the
starting point and the spend did *not* have a valid spend in them. (easiest way
to do that? give Bob those blocks) That proof must link back to the top-level
chain; if it doesn't the proof is simply not valid.

Now suppose Alice can't get that part of the subchain, perhaps because a cartel
of miners is mining it and won't give anyone else the data, or perhaps because
everyone with the data suffered a simultaneous harddrive crash. We'll also say
that higher up in the tree the data is available, at minimum the top-level
chain. As with Bitcoin, as long as that cartel has 51% of the hashing power,
Alice is screwed and can't spend her money.

What's interesting is what happens after that cartel disbands: how does mining
restart? It's easy to design a system where the creation of a block doesn't
require the knowledge of previous blocks, so new blocks can be added to extend
the subchain. But Alice is still screwed: she can't prove to Bob that the
missing blocks in the subchain didn't contain a valid spend of her coin. This
is pretty bad; on the other hand, the damage is limited to just that one
subchain, and the system as a whole is unaffected.

There's a tricky incentives problem here though: if a miner can extend a
subchain without actually having previous blocks in that chain, where's the
incentive for that miner to give anyone else the blocks they create? Remember
that exclusive knowledge of a block is potentially valuable if you can extort
coin owners for it. (Bitcoin suffers from this problem right now with
validationless "SPV" mining, though the fact that a block can be invalid in
Bitcoin helps limit its effects)

Part of the solution could be mining reward; in Bitcoin, coinbase outputs can't
be spent for 100 blocks. A similar scheme could require that a spend of a
coinbase output in a subchain include proof that the next X blocks in that
subchain were in fact linked together. Secondly make block creation dependent
on actually having that data to ensure the linkage actually means something,
e.g. by introducing some validity rules so blocks can be invalid, and/or using
a PoW function that requires hashers to have a copy of that data.

Ultimately though this isn't magic: like it or not lower subchains in such a
system are inherently weaker and more dangerous than higher ones, and this is
equally true of any sharded system. However a hierarchically sharded system
like treechains can give users options: higher subchains are safer, but
transactions will be expensive. The hierarchy does combine the PoW security of all
subchains together for the thing you can easily combine: timestamping security.

There's a big problem though: holy ! <at> #$ is the above complex compared to
Bitcoin! Even the "kiddy" version of sharding - my linearization scheme rather
than zk-SNARKS - is probably one or two orders of magnitude more complex than
using the Bitcoin protocol is right now, yet right now a huge % of the
companies in this space seem to have thrown their hands up and used centralized
API providers instead. Actually implementing the above and getting it into the
hands of end-users won't be easy.

On the other hand, decentralization isn't cheap: using PayPal is one or two
orders of magnitude simpler than the Bitcoin protocol.


'peter'[:-1] <at> petertodd.org

OP_CHECKWILDCARDSIGVERIFY or "Wildcard Inputs" or "Coalescing Transactions"

Here is the problem I'm trying to solve with this idea:

Let's say you create an address, publish the address on your blog, and
tell all your readers to donate $0.05 to that address if they like
your blog. Let's assume you receive 10,000 donations this way. This all
adds up to $500. The problem is that because of the way the bitcoin
payment protocol works, a large chunk of that money will go to fees.
If one person sent you a single donation of $500, you would be able to
spend most of the $500, but since you got this money as many smaller
UTXOs, your wallet has to use a much higher tx fee when spending it.

The technical reason for this is that you have to explicitly list each
UTXO individually when making bitcoin transactions. There is no way to
say "all the utxos". This post describes a way to achieve this. I'm
not yet a bitcoin master, so there are parts of this proposal that I
have not yet figured out entirely, but I'm sure other people who know
more could help out.
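
To put rough numbers on that, here is a back-of-the-envelope sketch; the ~148-byte P2PKH input size is a common approximation and the fee rate is purely illustrative:

    # Back-of-the-envelope cost of sweeping many tiny UTXOs; all figures approximate.
    INPUT_BYTES = 148      # rough size of one P2PKH input (outpoint + signature + pubkey)
    OUTPUT_BYTES = 34      # rough size of one P2PKH output
    OVERHEAD_BYTES = 10    # version, locktime, counts
    FEE_RATE = 50          # illustrative fee rate, satoshis per byte

    def sweep_fee(num_inputs: int) -> int:
        """Approximate fee in satoshis to spend num_inputs UTXOs to a single output."""
        size = OVERHEAD_BYTES + num_inputs * INPUT_BYTES + OUTPUT_BYTES
        return size * FEE_RATE

    print(sweep_fee(1))        # one large donation: roughly ten thousand satoshis in fees
    print(sweep_fee(10_000))   # 10,000 tiny donations: tens of millions of satoshis in fees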


First, I propose a new opcode. This opcode works exactly the same as
OP_CHECKSIGVERIFY, except it only evaluates true if the signature is a
"wildcard signature". What is a wildcard signature, you ask? This is
the part that I have not 100% figured out yet. It is basically a
signature that was created in such a way that expresses the private
key owner's intent to make this input a *wildcard input*.

** wildcard inputs **

A wildcard input is defined as an input to a transaction that has been
signed with OP_CHECKWILDCARDSIGVERIFY. The difference between a
wildcard input and a regular input is that the regular input respects
the "value" or "amount" field, while the wildcard input ignores that
value, and instead uses the value of *all inputs* with a matching
locking script.
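
A sketch of the wildcard-input valuation rule described above, with simplified placeholder data structures:

    from typing import List, NamedTuple

    class UTXO(NamedTuple):
        txid: str
        vout: int
        value: int           # satoshis; ignored by a wildcard input
        script_pubkey: str   # locking script, hex

    def wildcard_input_value(utxo_set: List[UTXO], locking_script: str) -> int:
        """A wildcard input spends the combined value of every unspent output
        whose locking script matches, rather than its own 'value' field."""
        return sum(u.value for u in utxo_set if u.script_pubkey == locking_script)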

** coalescing transaction"

A bitcoin transaction that
Btc Drak via bitcoin-dev | 24 Nov 11:30 2015

Alternative name for CHECKSEQUENCEVERIFY (BIP112)

BIP68 introduces relative lock-time semantics to part of the nSequence field, leaving the majority of bits undefined for other future applications.

BIP112 introduces the opcode CHECKSEQUENCEVERIFY (OP_CSV) that is specifically limited to verifying transaction inputs according to BIP68's relative lock-time[1], yet the _name_ OP_CSV is much broader than that. We spent months limiting the number of bits used in BIP68 so they would be available for future use cases; thus we have acknowledged there will be completely different use cases that take advantage of unused nSequence bits.

For this reason I believe BIP112 should be renamed specifically for its use case, which is verifying the time/maturity of transaction inputs relative to their inclusion in a block.



We could of course softfork additional meaning into OP_CSV each time we add new sequence number use cases, but that would become obscure and confusing. We have already shown there is no shortage of opcodes, so it makes no sense to cram everything into one generic opcode.

TL;DR: let's give the BIP112 opcode a name that reflects its actual use case rather than focusing on the bitcoin internals.

Peter Todd via bitcoin-dev | 24 Nov 05:36 2015

BIP68: Second-level granularity doesn't make sense

BIP68 currently represents by-height locks as a simple 16-bit integer of
the number of blocks - effectively giving a granularity of 600 seconds
on average - but for by-time locks the representation is a 25-bit
integer with granularity of 1 second. However this granularity doesn't
make sense with BIP113, median time-past as endpoint for lock-time
calculations, and poses potential problems for future upgrades.

There are two cases to consider here:

1) No competing transactions

By this we mean that the nSequence field is being used simply to delay
when an output can be spent; there aren't competing transactions trying
to spend that output and thus we're not concerned about one transaction
getting mined before another "out of order". For instance, a 2-factor
escrow service like GreenAddress could use nSequence with
CHECKSEQUENCEVERIFY (CSV) to guarantee that users will eventually get
their funds back after some timeout.

In this use-case exact miner behavior is irrelevant. Equally, given the
large tolerances allowed on block times, as well as the Poisson
distribution of block generation, granularity below an hour or two
doesn't have much practical significance.

2) Competing transactions

Here we are relying on miners preferring lower sequence numbers. For
instance, a bidirectional payment channel can decrement nSequence for
each change of direction; BIP68 suggests such a decrement might happen
in increments of one day.

BIP113 makes lock-time calculations use the median time-past as the
threshold for by-time locks. The median time-past is calculated by
taking the median time of the 11 previous blocks, which means when a miner
creates a block they have absolutely no control over what the median
time-past is; it's purely a function of the block tip they're building on.

This means that granularity below a block interval will, on average,
have absolutely no effect at all on which transaction the miner includes,
even in the hypothetical case. In practice, of course, users will want to
use granularity significantly larger than one block interval in protocols.
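
For reference, the median time-past a miner cannot control is just the median of the previous 11 block timestamps; a minimal sketch of the calculation:

    from statistics import median
    from typing import List

    def median_time_past(prev_block_times: List[int]) -> int:
        """Median of the timestamps of the 11 blocks preceding the one being
        built on; the block's creator has no control over it."""
        return int(median(prev_block_times[-11:]))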

The downside of BIP68 as written is that users of by-height locktimes have 14
bits unused in nSequence, but by-time locktimes have just 5 bits unused.
This presents an awkward situation if we add new meanings to nSequence
and ever need more than 5 bits. Yet as shown above, the extra
granularity doesn't have a practical benefit.

Recommendation: Change BIP68 to make by-time locks have the same number
of bits as by-height locks, and multiply the by-time lock field by the
block interval.
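
A quick sanity check of the arithmetic behind that recommendation, using the figures from this mail (16-bit and 25-bit fields, ~600-second average block interval): a 16-bit field counted in block intervals actually covers a slightly longer range than the current 25-bit one-second field.

    # Arithmetic behind the recommendation: a 16-bit by-time field counted in
    # block intervals covers roughly the same range as the current 25-bit,
    # 1-second field, while leaving 9 more bits free.
    BLOCK_INTERVAL = 600   # average seconds per block
    DAY = 86_400

    by_height_max_days = (2**16) * BLOCK_INTERVAL / DAY         # ~455 days
    by_time_current_max_days = (2**25) / DAY                    # ~388 days
    by_time_proposed_max_days = (2**16) * BLOCK_INTERVAL / DAY  # ~455 days

    print(by_height_max_days, by_time_current_max_days, by_time_proposed_max_days)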


'peter'[:-1] <at> petertodd.org
Btc Drak via bitcoin-dev | 21 Nov 15:13 2015

BIP68: Relative lock-time through consensus-enforced sequence numbers (update)

As I am sure you are aware, for the last 5 months work has been ongoing to create a relative lock-time proposal using sequence numbers. The implementation can be found at https://github.com/bitcoin/bitcoin/pull/6312. The current implementation is "mempool-only" and the soft-fork would be deployed at a later stage.

Over these months there has been various discussion back and forth to refine the details.

I have updated the BIP text now according to the details that were discussed in mid-October[1][2] and have extensively clarified the text.

To recap, the overall picture for relative lock-time is that BIP68 introduces consensus rules using some of the nSequence field, while BIP112 creates a new opcode OP_CHECKSEQUENCEVERIFY (PR #6564) so relative lock-time can be verified from the Bitcoin scripting language. Ideally we would soft-fork BIP68, BIP112 (CSV) and 113 (MTP) together. BIP113 has been deployed in 0.11.2 as mempool policy so miners should be applying this policy as they deploy version 4 blocks for the ongoing CLTV soft-fork (currently at 42% at the time of writing).

I am writing this mail to draw your attention to the BIP68 pull-requests and to request final review at:

BIP68 text - https://github.com/bitcoin/bips/pull/245

Discussion references:

Hierarchical Deterministic Script Templates

A while back, I started working with William Swanson on a script template format to allow for interoperability in accounts between different wallets. We made some progress, but both of us got pretty busy with other projects and general interest was still quite low.
It seems interest has picked up again, especially in light of recent developments (e.g. CLTV, relative CLTV, bidirectional payment channels, lightning), where non-generalized script formats will not readily support the rapidly advancing state of the art in script design.
I have started working on a draft for such a standard: https://github.com/bitcoin/bips/pull/246
Comments, suggestions, and collaboration are welcome.
- Eric

Re: BIP - Block size doubles at each reward halving with max block size of 32M

On Wed, Nov 18, 2015 at 10:13 AM, Shuning Hong <hongshuning <at> gmail.com> wrote:
> 2015-11-15 20:16 GMT+08:00 Jorge Timón <bitcoin-dev <at> lists.linuxfoundation.org>:
>> The time threshold must be set enough in the future to give users time to upgrade. But we can perceive
>> miners' adoption, so if the system knows they haven't upgraded, it should wait for them to upgrade (it
>> would be nice to have an equivalent mechanism to wait for the rest of the users, but unfortunately there's none).
> If the majority of the miners never upgrade, how could we treat that
> BIP? Wait forever?

Assuming it was deployed as an uncontroversial hardfork as recommended
in BIP99, the deployment would use versionbits (BIP9) and the hardfork
would time out.
But this timeout would clearly signal that either the minimum
activation threshold wasn't giving enough time for all users to
upgrade (apparently miners didn't have time) or the hardfork is not
really an uncontroversial hardfork but rather a schism one. Then,
assuming some people still want to deploy it as a schism hardfork,
BIP99 recommends using only a mediantime threshold without versionbits
or miner upgrade confirmation.