BIP Number Request: Open Asset

Open Asset is a simple and well-known colored coin protocol created by Flavien Charlon, which has been around for more than two years.
Open Asset uses OP_RETURN to store the coin's color. Since its introduction, the only modification to the protocol has been to allow the OA data to be placed in any push within the OP_RETURN output.


I asked Flavien Charlon if he was OK with my submitting the protocol to the mailing list before posting.

An additional BIP number might be required to cover, for example, the "colored address" format: https://github.com/OpenAssets/open-assets-protocol/blob/master/address-format.mediawiki
But I will do that in a separate request.

Here is the core of the Open Asset specification:
<pre>
  Title: Open Assets Protocol (OAP/1.0)
  Author: Flavien Charlon <flavien <at> charlon.net>
  Created: 2013-12-12
</pre>

==Abstract==

This document describes a protocol used for storing and transferring custom, non-native assets on the Blockchain. Assets are represented by tokens called colored coins.

An issuer would first issue colored coins and associate them with a formal or informal promise that he will redeem the coins according to terms he has defined. Colored coins can then be transferred using transactions that preserve the quantity of every asset.

==Motivation==

In the current Bitcoin implementation, outputs represent a quantity of Bitcoin, secured by an output script. With the Open Assets Protocol, outputs can encapsulate a quantity of a user-defined asset on top of that Bitcoin amount.

There are many applications:

* A company could issue colored coins representing shares. The shares could then be traded frictionlessly through the Bitcoin infrastructure.
* A bank could issue colored coins backed by a cash reserve. People could withdraw and deposit money in colored coins, and trade those, or use them to pay for goods and services. The Blockchain becomes a system allowing to transact not only in Bitcoin, but in any currency.
* Locks on cars or houses could be associated with a particular type of colored coins. The door would only open when presented with a wallet containing that specific coin.

==Protocol Overview==

Outputs using the Open Assets Protocol to store an asset have two new characteristics:

* The '''asset ID''' is a 160 bits hash, used to uniquely identify the asset stored on the output.
* The '''asset quantity''' is an unsigned integer representing how many units of that asset are stored on the output.

This document describes how the asset ID and asset quantity of an output are calculated.

Each output in the Blockchain can be either colored or uncolored:

* Uncolored outputs have no asset ID and no asset quantity (they are both undefined).
* Colored outputs have a strictly positive asset quantity, and a non-null asset ID.

The ID of an asset is the RIPEMD-160 hash of the SHA-256 hash of the output script referenced by the first input of the transaction that initially issued that asset (<code>script_hash = RIPEMD160(SHA256(script))</code>). An issuer can reissue more of an already existing asset as long as they retain the private key for that asset ID. Assets on two different outputs can only be mixed together if they have the same asset ID.

Like addresses, asset IDs can be represented in base 58. They must use version byte 23 (115 in TestNet3) when represented in base 58. The base 58 representation of an asset ID therefore starts with the character 'A' in MainNet.

The process to generate an asset ID and the matching private key is described in the following example:

# The issuer first generates a private key: <code>18E14A7B6A307F426A94F8114701E7C8E774E7F9A47E2C2035DB29A206321725</code>.
# He calculates the corresponding address: <code>16UwLL9Risc3QfPqBUvKofHmBQ7wMtjvM</code>.
# Next, he builds the Pay-to-PubKey-Hash script associated to that address: <code>OP_DUP OP_HASH160 010966776006953D5567439E5E39F86A0D273BEE OP_EQUALVERIFY OP_CHECKSIG</code>.
# The script is hashed: <code>36e0ea8e93eaa0285d641305f4c81e563aa570a2</code>.
# Finally, the hash is converted to a base 58 string with checksum using version byte 23: <code>ALn3aK1fSuG27N96UGYB1kUYUpGKRhBuBC</code>.
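For concreteness, a quick non-normative sketch of that derivation in Python (illustrative only; it assumes hashlib exposes OpenSSL's 'ripemd160', which is not guaranteed on every build):

    import hashlib

    B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    def base58check(version: int, payload: bytes) -> str:
        # Base58Check: version byte + payload + first 4 bytes of double-SHA256.
        data = bytes([version]) + payload
        data += hashlib.sha256(hashlib.sha256(data).digest()).digest()[:4]
        n = int.from_bytes(data, "big")
        out = ""
        while n > 0:
            n, rem = divmod(n, 58)
            out = B58_ALPHABET[rem] + out
        # Leading zero bytes are encoded as '1' (not relevant for version 23).
        for byte in data:
            if byte != 0:
                break
            out = "1" + out
        return out

    def asset_id(issuance_script: bytes, testnet: bool = False) -> str:
        # asset ID = RIPEMD160(SHA256(script referenced by the first input)),
        # base58-encoded with version byte 23 (115 on TestNet3).
        script_hash = hashlib.new("ripemd160",
                                  hashlib.sha256(issuance_script).digest()).digest()
        return base58check(115 if testnet else 23, script_hash)

    # P2PKH script from the example above (OP_DUP OP_HASH160 <hash> OP_EQUALVERIFY OP_CHECKSIG):
    script = bytes.fromhex("76a914010966776006953d5567439e5e39f86a0d273bee88ac")
    print(asset_id(script))  # the spec's example expects ALn3aK1fSuG27N96UGYB1kUYUpGKRhBuBC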
The private key from the first step is required to issue assets identified by the asset ID <code>ALn3aK1fSuG27N96UGYB1kUYUpGKRhBuBC</code>. This acts as a digital signature, and gives the guarantee that nobody else but the original issuer is able to issue assets identified by this specific asset ID.

==Open Assets Transactions==

Transactions relevant to the Open Assets Protocol must have a special output called the marker output. This allows clients to recognize such transactions. Open Assets transactions can be used to issue new assets, or transfer ownership of assets.

Transactions that are not recognized as an Open Assets transaction are considered as having all their outputs uncolored.

===Marker output===

The marker output can have a zero or non-zero value. The marker output starts with the OP_RETURN opcode, and can be followed by any sequence of opcodes, but it must contain a PUSHDATA opcode containing a parsable Open Assets marker payload. If multiple parsable PUSHDATA opcodes exist in the same output, the first one is used, and the other ones are ignored.

If multiple valid marker outputs exist in the same transaction, the first one is used and the other ones are considered as regular outputs. If no valid marker output exists in the transaction, all outputs are considered uncolored.

The payload as defined by the Open Assets protocol has the following format:

{|
! Field !! Description !! Size
|-
! OAP Marker || A tag indicating that this transaction is an Open Assets transaction. It is always 0x4f41. || 2 bytes
|-
! Version number || The major revision number of the Open Assets Protocol. For this version, it is 1 (0x0100). || 2 bytes
|-
! Asset quantity count || A [https://en.bitcoin.it/wiki/Protocol_specification#Variable_length_integer var-integer] representing the number of items in the <code>asset quantity list</code> field. || 1-9 bytes
|-
! Asset quantity list || A list of zero or more [http://en.wikipedia.org/wiki/LEB128 LEB128-encoded] unsigned integers representing the asset quantity of every output in order (excluding the marker output). || Variable
|-
! Metadata length || The [https://en.bitcoin.it/wiki/Protocol_specification#Variable_length_integer var-integer] encoded length of the <code>metadata</code> field. || 1-9 bytes
|-
! Metadata || Arbitrary metadata to be associated with this transaction. This can be empty. || Variable
|}

Possible formats for the <code>metadata</code> field are outside of scope of this protocol, and may be described in separate protocol specifications building on top of this one.

The <code>asset quantity list</code> field is used to determine the asset quantity of each output. Each integer is encoded using variable length [http://en.wikipedia.org/wiki/LEB128 LEB128] encoding (also used in [https://developers.google.com/protocol-buffers/docs/encoding#varints Google Protocol Buffers]). If the LEB128-encoded asset quantity of any output exceeds 9 bytes, the marker output is deemed invalid. The maximum valid asset quantity for an output is 2<sup>63</sup> - 1 units.
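As an illustration only (nothing here is normative), the payload layout above can be assembled from LEB128 quantities and Bitcoin-style var-integers roughly like this:

    def leb128(n: int) -> bytes:
        # Unsigned LEB128: 7 bits per byte, high bit set on all but the last byte.
        out = bytearray()
        while True:
            byte = n & 0x7F
            n >>= 7
            out.append(byte | 0x80 if n else byte)
            if not n:
                return bytes(out)

    def varint(n: int) -> bytes:
        # Bitcoin var-integer encoding.
        if n < 0xFD:
            return bytes([n])
        if n <= 0xFFFF:
            return b"\xfd" + n.to_bytes(2, "little")
        if n <= 0xFFFFFFFF:
            return b"\xfe" + n.to_bytes(4, "little")
        return b"\xff" + n.to_bytes(8, "little")

    def marker_payload(quantities, metadata=b"") -> bytes:
        return (b"\x4f\x41"                                  # OAP marker
                + b"\x01\x00"                                # version 1
                + varint(len(quantities))                    # asset quantity count
                + b"".join(leb128(q) for q in quantities)    # asset quantity list
                + varint(len(metadata)) + metadata)          # metadata length + metadata

    # Quantities and metadata from the decoding example later in this section:
    print(marker_payload([300, 0, 624485], bytes.fromhex("12345678")).hex())
    # -> 4f41 0100 03 ac02 00 e58e26 04 12345678 (spaces added for readability)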
If the marker output is malformed, it is considered non-parsable. Coinbase transactions and transactions with zero inputs cannot have a valid marker output, even if it would be otherwise considered valid.

If there are less items in the <code>asset quantity list</code> than the number of colorable outputs (all the outputs except the marker output), the outputs in excess receive an asset quantity of zero. If there are more items in the <code>asset quantity list</code> than the number of colorable outputs, the marker output is deemed invalid.

The marker output is always uncolored.

After the <code>asset quantity list</code> has been used to assign an asset quantity to every output, asset IDs are assigned to outputs. Outputs before the marker output are used for asset issuance, and outputs after the marker output are used for asset transfer.

====Example====

This example illustrates how a marker output is decoded. Assuming the marker output is output 1:

 Data in the marker output      Description
 -----------------------------  -------------------------------------------------------------------
 0x6a                           The OP_RETURN opcode.
 0x10                           The PUSHDATA opcode for a 16 bytes payload.
 0x4f 0x41                      The Open Assets Protocol tag.
 0x01 0x00                      Version 1 of the protocol.
 0x03                           There are 3 items in the asset quantity list.
 0xac 0x02 0x00                 The asset quantity list:
 0xe5 0x8e 0x26                 - '0xac 0x02' means output 0 has an asset quantity of 300.
                                - Output 1 is skipped and has an asset quantity of 0 because it is the marker output.
                                - '0x00' means output 2 has an asset quantity of 0.
                                - '0xe5 0x8e 0x26' means output 3 has an asset quantity of 624,485.
                                - Outputs after output 3 (if any) have an asset quantity of 0.
 0x04                           The metadata is 4 bytes long.
 0x12 0x34 0x56 0x78            Some arbitrary metadata.

===Asset issuance outputs===

All the outputs before the marker output are used for asset issuance.

All outputs preceding the marker output and with a non-zero asset quantity get assigned the asset ID defined as the RIPEMD-160 hash of the SHA-256 hash of the output script referenced by the first input of the transaction. Outputs that have an asset quantity of zero are uncolored.

===Asset transfer outputs===

All the outputs after the marker output are used for asset transfer. The asset IDs of those outputs are determined using a method called order-based coloring.

Inputs are seen as a sequence of asset units, each having an asset ID. Similarly, outputs are seen as a sequence of asset units to be assigned an asset ID. These two sequences are built by taking each input or output in order, each of them adding a number of asset units equal to their asset quantity. The process starts with the first input of the transaction and the first output after the marker output.

After the sequences have been built, the asset ID of every asset unit in the input sequence is assigned to the asset unit at the same position in the output sequence until all the asset units in the output sequence have received an asset ID. If there are less asset units in the input sequence than in the output sequence, the marker output is considered invalid.

Finally, for each transfer output, if the asset units forming that output all have the same asset ID, the output gets assigned that asset ID. If any output is mixing units with more than one distinct asset ID, the marker output is considered invalid. Outputs with an asset quantity of zero are always considered uncolored.

===Example===

This is an example of an Open Assets transaction. The coloring process starts by retrieving the asset quantities and asset IDs of the outputs referenced by each input of the transaction. Then, the marker output is identified. In this example, it is output 2, and the <code>asset quantity list</code> field contains the following values:

 0, 10, 6, 0, 7, 3

This list is used to assign asset quantities to outputs.
 Inputs                         Outputs - Initial state        Outputs - Final result
 =============================  =============================  =============================
 Input 0                        Output 0 (Issuance)            Output 0 (Issuance)
   Asset quantity: 3              Asset quantity: 0              Asset quantity: <NULL>
   Asset ID:       A1             Asset ID:                      Asset ID:       <NULL>
 -----------------------------  -----------------------------  -----------------------------
 Input 1                        Output 1 (Issuance)            Output 1 (Issuance)
   Asset quantity: 2              Asset quantity: 10             Asset quantity: 10
   Asset ID:       A1             Asset ID:                      Asset ID:       H
 -----------------------------  -----------------------------  -----------------------------
 Input 2                        Output 2 (Marker)              Output 2 (Marker)
   Asset quantity: <NULL>         Asset quantity: <NULL>         Asset quantity: <NULL>
   Asset ID:       <NULL>         Asset ID:       <NULL>         Asset ID:       <NULL>
 -----------------------------  -----------------------------  -----------------------------
 Input 3                        Output 3 (Transfer)            Output 3 (Transfer)
   Asset quantity: 5              Asset quantity: 6              Asset quantity: 6
   Asset ID:       A1             Asset ID:                      Asset ID:       A1
 -----------------------------  -----------------------------  -----------------------------
 Input 4                        Output 4 (Transfer)            Output 4 (Transfer)
   Asset quantity: 3              Asset quantity: 0              Asset quantity: <NULL>
   Asset ID:       A1             Asset ID:                      Asset ID:       <NULL>
 -----------------------------  -----------------------------  -----------------------------
 Input 5                        Output 5 (Transfer)            Output 5 (Transfer)
   Asset quantity: 9              Asset quantity: 7              Asset quantity: 7
   Asset ID:       A2             Asset ID:                      Asset ID:       A1
 =============================  -----------------------------  -----------------------------
                                Output 6 (Transfer)            Output 6 (Transfer)
                                  Asset quantity: 3              Asset quantity: 3
                                  Asset ID:                      Asset ID:       A2
                                =============================  =============================

Outputs are colored from the first to the last. Outputs before the marker output are issuance outputs:

* Output 0 has an asset quantity of zero, so it is considered uncolored.
* Output 1 gets assigned the asset ID defined by <code>H = RIPEMD160(SHA256(S))</code> where <code>S</code> is the output script referenced by the first input of the transaction (input 0).

Output 2 is the marker output, separating issuance outputs from transfer outputs. The marker output is always uncolored.

Transfer outputs are then colored:

* Output 3 receives 3 units from input 0, 2 units from input 1, 0 unit from input 2 and 1 unit from input 3. All the 6 units have the same asset ID <code>A1</code>, so the asset ID <code>A1</code> is assigned to output 3.
* Output 4 has an asset quantity of zero, so it is considered uncolored.
* Output 5 receives the remaining 4 units of input 3, and 3 units from input 4. All the 7 units have the same asset ID <code>A1</code>, so the asset ID <code>A1</code> is assigned to output 5.
* Output 6 receives the first 3 units of input 5. Input 5 has the asset ID <code>A2</code> so the asset ID <code>A2</code> is assigned to output 6.

==Rationale==

This approach offers a number of desirable characteristics:

# Economical: The cost of issuing or transferring an asset is completely independent from the quantity issued or transferred.
# Clients have a way to identify colored outputs simply by traversing the Blockchain, without needing to be fed external data. Transactions relevant to the Open Assets Protocol are identified by the special marker output.
# It is possible to determine the asset ID and asset quantity of an output by traversing only a limited number of transactions.
# Assets are pseudonymous. They are represented by an asset ID, which is enough to identify each asset uniquely, while still providing an adequate level of anonymity for both the issuer and users of the asset.
# This approach uses the recommended way to embed data in the Blockchain (OP_RETURN), and therefore does not pollute the UTXO.
# The whole cryptographic infrastructure that Bitcoin provides for securing the spending of outputs is reused for securing the ability to issue assets. There is a symmetry between ''an address + private key'' as a way to spend Bitcoins, and ''an address + private key'' as a way to issue assets.
# Generating a new type of asset is as simple as generating an address, can be done offline, and for free.
# Reissuing more of an existing asset is easy and can be done quickly and at no cost (except for the transaction fee) as long as the issuer retains the private key for the asset ID.
# Single-issuance assets can be achieved by destroying the private key used to issue the asset immediately after issuing it.
# Since issuance is based on standard Bitcoin output scripts, it is possible to create an asset that requires multiple signatures for issuance.

==Compatibility==

For backward compatibility reasons, we consider that an older client is allowed to see a colored output as uncolored.

===Backward compatibility with existing Bitcoin protocol===

The Open Assets Protocol sits on top of the Bitcoin protocol. It does not require any change to the existing Bitcoin protocol. Existing clients that don't support the Open Assets Protocol will see all outputs as uncolored, and will not be able to perform transfer transactions.

===Compatibility between different versions of OAP===

New versions with the same major version number (e.g. 1.1) should be backwards compatible. New versions with a different major version number (e.g. 2.0) can introduce breaking changes, but transactions created by newer clients will be identifiable by a different version number in the output 0 of genesis and transfer transactions.

==Copyright==

This document has been placed in the public domain.

Nicolas Dorier,
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev <at> lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev

BIP: OP_PRANDOM

== Background

OP_PRANDOM is a new opcode for Bitcoin that pushes a pseudo-random number onto the top of the stack based on the next N block hashes. The source of the pseudo-random number is defined as the XOR of the next N block hashes after confirmation of a transaction containing the OP_PRANDOM-encumbered output. When a transaction containing the opcode is redeemed, the transaction receives a pseudo-random number based on the next N block hashes after confirmation of the redeeming input. This means that transactions are also effectively locked until at least N new blocks have been found.
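A minimal sketch of the randomness derivation described above (illustrative only; on-chain this would of course be computed by the script interpreter, not by wallet code):

    # XOR the next N block hashes found after confirmation of the redeeming input.
    def op_prandom(next_block_hashes_hex, n):
        assert len(next_block_hashes_hex) >= n, "need at least N new blocks"
        acc = 0
        for block_hash in next_block_hashes_hex[:n]:
            acc ^= int(block_hash, 16)
        return acc  # a 256-bit pseudo-random integer

    # Placeholder hashes, just to show the mechanics:
    print(hex(op_prandom(["11" * 32, "a5" * 32, "3c" * 32], n=3)))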


== Rationale

Making deterministic, verifiable, and trustless pseudo-random numbers available for use in the Script language makes it possible to support a number of new smart contracts. OP_PRANDOM would allow for the simple creation of purely decentralized lotteries without the need for complicated multi-party computation protocols. Gambling is another possibility, as contracts can be written based on hashed commitments, with the winner chosen if a given commitment is closest to the pseudo-random number. OP_PRANDOM could also be used for cryptographically secure virtual asset management, such as rewards in video games and in other applications.


== Security

Pay-to-script-hash can be used to protect the details of contracts that use OP_PRANDOM from the prying eyes of miners. However, since there is also a non-zero risk that a participant in a contract may attempt to bribe a miner, the inclusion of multiple block hashes as a source of randomness is a must. Every miner would effectively need to be bribed to ensure control over the results of the random numbers, which is already very unlikely. The risk approaches zero as N goes up.

There is, however, another issue: since the random numbers are based on a changing blockchain, it's problematic to use the next immediate block hashes before the state is “final.” A safe default for accepting the blockchain state as final would need to be agreed upon beforehand; otherwise you could have multiple random outputs becoming valid simultaneously on different forks.

A simple solution is not to reveal any commitments before the chain height surpasses a certain point, but this might not be an issue since only one version will eventually make it into the final chain anyway -- though it is something to think about.


== Outro

I'm not sure how secure this is or whether it's a good idea, so I'm posting it here for feedback.

Thoughts?

_______________________________________________
bitcoin-dev mailing list
bitcoin-dev <at> lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev

Re: Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments


On May 19, 2016 01:53, "Peter Todd" <pete <at> petertodd.org> wrote:
> > > [...] proofs consisting of a merkle path to the tip of the tree.
> >
> > How expensive is it to update a leaf in this tree from unspent to spent?
>
> log2(n) operations.

Updating a leaf is just as expensive as adding a new one?
That's not what I expected.
Or is adding a new one O(1)?

Anyway, thanks, I'll read this in more detail.

> > Wouldn't it be better to have both an append-only TXO and an append-only
> > STXO (with all spent outputs, not only the latest ones like in your "STXO")?
>
> Nope. The reason why this doesn't work is apparent when you ask how will the
> STXO be indexed?

Just the same way the TXO is (you just stop updating the TXO leaves from unspent to spent).

> If it's indexed by outpoint - that is H(txid:n) - to update the STXO you need
> the entire thing, as the position of any new STXO that you need to add to the
> STXO tree is random.
>
> OTOH, if you index the STXO by txout creation order, with the first txout ever
> created having position #0, the second #1, etc. the data you may need to update
> the STXO later has predictable locality... but now you have something that's
> basically identical to my proposed insertion-ordered TXO commitment anyway.

Yeah, that's what I want: like your append-only TXO but for the STXO (that way we avoid ever updating leaves in the TXO, and I suspect there are other advantages for fraud proofs).

> Incidentally, it's interesting how if a merbinner tree is insertion-order
> indexed you end up with a datastructure that's almost identical to a MMR.

No complaint with MMRs. My point is having 2 of them, separated: one for the TXO (entries immutable) and one for the STXO (again, entries immutable).

Maybe it doesn't make sense, but I would like to understand why.

_______________________________________________
bitcoin-dev mailing list
bitcoin-dev <at> lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev

Re: Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments


On May 17, 2016 15:23, "Peter Todd via bitcoin-dev" <bitcoin-dev <at> lists.linuxfoundation.org> wrote:
> # TXO Commitments
>

> Specifically TXO commitments proposes a Merkle Mountain Range¹ (MMR), a
> type of deterministic, indexable, insertion ordered merkle tree, which allows
> new items to be cheaply appended to the tree with minimal storage requirements,
> just log2(n) "mountain tips". Once an output is added to the TXO MMR it is
> never removed; if an output is spent its status is updated in place. Both the
> state of a specific item in the MMR, as well as the validity of changes to items
> in the MMR, can be proven with log2(n) sized proofs consisting of a merkle path
> to the tip of the tree.

How expensive is it to update a leaf in this tree from unspent to spent?

Wouldn't it be better to have both an append-only TXO and an append-only STXO (with all spent outputs, not only the latest ones like in your "STXO")?

_______________________________________________
bitcoin-dev mailing list
bitcoin-dev <at> lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Peter Todd via bitcoin-dev | 17 May 15:23 2016

Making UTXO Set Growth Irrelevant With Low-Latency Delayed TXO Commitments

# Motivation

UTXO growth is a serious concern for Bitcoin's long-term decentralization. To
run a competitive mining operation potentially the entire UTXO set must be in
RAM to achieve competitive latency; your larger, more centralized, competitors
will have the UTXO set in RAM. Mining is a zero-sum game, so the extra latency
of not doing so if they do directly impacts your profit margin. Secondly,
having possession of the UTXO set is one of the minimum requirements to run a
full node; the larger the set the harder it is to run a full node.

Currently the maximum size of the UTXO set is unbounded as there is no
consensus rule that limits growth, other than the block-size limit itself; as
of writing the UTXO set is 1.3GB in the on-disk, compressed serialization,
which expands to significantly more in memory. UTXO growth is driven by a
number of factors, including the fact that there is little incentive to merge
inputs, lost coins, dust outputs that can't be economically spent, and
non-btc-value-transfer "blockchain" use-cases such as anti-replay oracles and
timestamping.

We don't have good tools to combat UTXO growth. Segregated Witness proposes to
give witness space a 75% discount, in part to make reducing the UTXO set size
by spending txouts cheaper. While this may change wallets to more often spend
dust, it's hard to imagine an incentive sufficiently strong to discourage most,
let alone all, UTXO growing behavior.

For example, timestamping applications often create unspendable outputs due to
ease of implementation, and because doing so is an easy way to make sure that
the data required to reconstruct the timestamp proof won't get lost - all
Bitcoin full nodes are forced to keep a copy of it. Similarly anti-replay
use-cases like using the UTXO set for key rotation piggyback on the uniquely
strong security and decentralization guarantee that Bitcoin provides; it's very
difficult - perhaps impossible - to provide these applications with
alternatives that are equally secure. These non-btc-value-transfer use-cases
can often afford to pay far higher fees per UTXO created than competing
btc-value-transfer use-cases; many users could afford to spend $50 to register
a new PGP key, yet would rather not spend $50 in fees to create a standard two
output transaction. Effective techniques to resist miner censorship exist, so
without resorting to whitelists blocking non-btc-value-transfer use-cases as
"spam" is not a long-term, incentive compatible, solution.

A hard upper limit on UTXO set size could create a more level playing field in
the form of fixed minimum requirements to run a performant Bitcoin node, and
make the issue of UTXO "spam" less important. However, making any coins
unspendable, regardless of age or value, is a politically untenable economic
change.

# TXO Commitments

With a merkle tree committing to the state of all transaction outputs, both spent
and unspent, we can provide a method of compactly proving the current state of
an output. This lets us "archive" less frequently accessed parts of the UTXO
set, allowing full nodes to discard the associated data, while still providing a
mechanism to spend those archived outputs by proving to those nodes that the
outputs are in fact unspent.

Specifically TXO commitments proposes a Merkle Mountain Range¹ (MMR), a
type of deterministic, indexable, insertion ordered merkle tree, which allows
new items to be cheaply appended to the tree with minimal storage requirements,
just log2(n) "mountain tips". Once an output is added to the TXO MMR it is
never removed; if an output is spent its status is updated in place. Both the
state of a specific item in the MMR, as well as the validity of changes to items
in the MMR, can be proven with log2(n) sized proofs consisting of a merkle path
to the tip of the tree.
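For illustration, a toy sketch of the "mountain tips" bookkeeping (illustrative only, not actual implementation code): appending a leaf merges peaks of equal height, exactly like incrementing a binary counter, so only log2(n) tips need to be kept.

    import hashlib

    def H(*parts: bytes) -> bytes:
        return hashlib.sha256(b"".join(parts)).digest()

    class MMRTips:
        def __init__(self):
            self.peaks = []  # (height, digest) pairs, highest mountain first

        def append(self, leaf: bytes):
            node = (0, H(leaf))
            # Merge while the rightmost peak has the same height as the new node.
            while self.peaks and self.peaks[-1][0] == node[0]:
                height, digest = self.peaks.pop()
                node = (height + 1, H(digest, node[1]))
            self.peaks.append(node)

    mmr = MMRTips()
    for txout in [b"a", b"b", b"c", b"d", b"e"]:
        mmr.append(txout)
    print([(h, d.hex()[:8]) for h, d in mmr.peaks])  # 5 leaves -> peaks of height 2 and 0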

At an extreme, with TXO commitments we could even have no UTXO set at all,
entirely eliminating the UTXO growth problem. Transactions would simply be
accompanied by TXO commitment proofs showing that the outputs they wanted to
spend were still unspent; nodes could update the state of the TXO MMR purely
from TXO commitment proofs. However, the log2(n) bandwidth overhead per txin is
substantial, so a more realistic implementation is to have a UTXO cache for
recent transactions, with TXO commitments acting as an alternative for the (rare)
event that an old txout needs to be spent.

Proofs can be generated and added to transactions without the involvement of
the signers, even after the fact; there's no need for the proof itself to be
signed and the proof is not part of the transaction hash. Anyone with access to
TXO MMR data can (re)generate missing proofs, so minimal, if any, changes are
required to wallet software to make use of TXO commitments.

## Delayed Commitments

TXO commitments aren't a new idea - the author proposed them years ago in
response to UTXO commitments. However it's critical for small miners' orphan
rates that block validation be fast, and so far it has proven difficult to
create (U)TXO implementations with acceptable performance; updating and
recalculating cryptographically hashed merkelized datasets is inherently more
work than not doing so. Fortunately if we maintain a UTXO set for recent
outputs, TXO commitments are only needed when spending old, archived, outputs.
We can take advantage of this by delaying the commitment, allowing it to be
calculated well in advance of it actually being used, thus changing a
latency-critical task into a much easier average throughput problem.

Concretely each block B_i commits to the TXO set state as of block B_{i-n}, in
other words what the TXO commitment would have been n blocks ago, if not for
the n block delay. Since that commitment only depends on the contents of the
blockchain up until block B_{i-n}, the contents of any block after are
irrelevant to the calculation.

## Implementation

Our proposed high-performance/low-latency delayed commitment full-node
implementation needs to store the following data:

1) UTXO set

    Low-latency K:V map of txouts definitely known to be unspent. Similar to
    existing UTXO implementation, but with the key difference that old,
    unspent, outputs may be pruned from the UTXO set.

2) STXO set

    Low-latency set of transaction outputs known to have been spent by
    transactions after the most recent TXO commitment, but created prior to the
    TXO commitment.

3) TXO journal

    FIFO of outputs that need to be marked as spent in the TXO MMR. Appends
    must be low-latency; removals can be high-latency.

4) TXO MMR list

    Prunable, ordered list of TXO MMR's, mainly the highest pending commitment,
    backed by a reference counted, cryptographically hashed object store
    indexed by digest (similar to how git repos work). High-latency ok. We'll
    cover this in more detail later.
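For illustration only, one possible in-memory shape for these four stores (names and types are just placeholders):

    from collections import deque

    utxo_set = {}          # outpoint (txid, n) -> txout; old unspent entries may be pruned
    stxo_set = set()       # outpoints spent after, but created before, the latest TXO commitment
    txo_journal = deque()  # FIFO of (outpoint, optional TXO commitment proof) awaiting MMR update
    txo_mmr_store = {}     # digest -> MMR node; reference counted, content addressed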

### Fast-Path: Verifying a Txout Spend In a Block

When a transaction output is spent by a transaction in a block we have two
cases:

1) Recently created output

    Output created after the most recent TXO commitment, so it should be in the
    UTXO set; the transaction spending it does not need a TXO commitment proof.
    Remove the output from the UTXO set and append it to the TXO journal.

2) Archived output

    Output created prior to the most recent TXO commitment, so there's no
    guarantee it's in the UTXO set; transaction will have a TXO commitment
    proof for the most recent TXO commitment showing that it was unspent.
    Check that the output isn't already in the STXO set (double-spent), and if
    not add it. Append the output and TXO commitment proof to the TXO journal.

In both cases recording an output as spent requires no more than two key:value
updates, and one journal append. The existing UTXO set requires one key:value
update per spend, so we can expect new block validation latency to be within 2x
of the status quo even in the worst case of 100% archived output spends.
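A rough sketch of the two cases (illustrative only; proof verification is stubbed out):

    def verify_txo_proof(outpoint, proof) -> bool:
        # Placeholder: a real node checks the merkle path against the most
        # recent TXO commitment digest.
        return True

    def record_spend(outpoint, utxo_set, stxo_set, txo_journal, txo_proof=None):
        if outpoint in utxo_set:
            # 1) Recently created output: no proof needed.
            del utxo_set[outpoint]
            txo_journal.append((outpoint, None))
        else:
            # 2) Archived output: requires a TXO commitment proof.
            if txo_proof is None or not verify_txo_proof(outpoint, txo_proof):
                raise ValueError("missing or invalid TXO commitment proof")
            if outpoint in stxo_set:
                raise ValueError("archived output already spent")
            stxo_set.add(outpoint)
            txo_journal.append((outpoint, txo_proof))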

### Slow-Path: Calculating Pending TXO Commitments

In a low-priority background task we flush the TXO journal, recording the
outputs spent by each block in the TXO MMR, and hashing MMR data to obtain the
TXO commitment digest. Additionally this background task removes STXO's that
have been recorded in TXO commitments, and prunes TXO commitment data no longer
needed.

Throughput for the TXO commitment calculation will be worse than the existing
UTXO only scheme. This impacts bulk verification, e.g. initial block download.
That said, TXO commitments provide other possible tradeoffs that can mitigate
the impact of slower validation throughput, such as skipping validation of old
history, as well as fraud proof approaches.

### TXO MMR Implementation Details

Each TXO MMR state is a modification of the previous one with most information
shared, so we can space-efficiently store a large number of TXO commitment
states, where each state is a small delta of the previous state, by sharing
unchanged data between each state; cycles are impossible in merkelized data
structures, so simple reference counting is sufficient for garbage collection.
Data no longer needed can be pruned by dropping it from the database, and
unpruned by adding it again. Since everything is committed to via cryptographic
hash, we're guaranteed that regardless of where we get the data, after
unpruning we'll have the right data.
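A sketch of that kind of reference-counted, hash-indexed store (illustrative only):

    import hashlib

    class NodeStore:
        def __init__(self):
            self.refcount = {}  # digest -> reference count
            self.blobs = {}     # digest -> serialized node; absent when pruned

        def put(self, blob: bytes) -> bytes:
            digest = hashlib.sha256(blob).digest()
            self.refcount[digest] = self.refcount.get(digest, 0) + 1
            self.blobs[digest] = blob
            return digest

        def release(self, digest: bytes):
            # No cycles in merkelized structures, so plain refcounting suffices.
            self.refcount[digest] -= 1
            if self.refcount[digest] == 0:
                del self.refcount[digest]
                self.blobs.pop(digest, None)

        def prune(self, digest: bytes):
            self.blobs.pop(digest, None)  # keep the reference, drop the bytes

        def unprune(self, digest: bytes, blob: bytes):
            # Safe regardless of where the bytes came from: the digest is checked.
            if hashlib.sha256(blob).digest() != digest:
                raise ValueError("data does not match committed digest")
            self.blobs[digest] = blob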

Let's look at how the TXO MMR works in detail. Consider the following TXO MMR
with two txouts, which we'll call state #0:

      0
     / \
    a   b

If we add another entry we get state #1:

        1
       / \
      0   \
     / \   \
    a   b   c

Note how 100% of the state #0 data was reused in commitment #1. Let's
add two more entries to get state #2:

            2
           / \
          2   \
         / \   \
        /   \   \
       /     \   \
      0       2   \
     / \     / \   \
    a   b   c   d   e

This time part of state #1 wasn't reused - it wasn't a perfect binary
tree - but we've still got a lot of re-use.

Now suppose state #2 is committed into the blockchain by the most recent block.
Future transactions attempting to spend outputs created as of state #2 are
obliged to prove that they are unspent; essentially they're forced to provide
part of the state #2 MMR data. This lets us prune that data, discarding it,
leaving us with only the bare minimum data we need to append new txouts to the
TXO MMR, the tips of the perfect binary trees ("mountains") within the MMR:

            2
           / \
          2   \
               \
                \
                 \
                  \
                   \
                    e

Note that we're glossing over some nuance here about exactly what data needs to
be kept; depending on the details of the implementation the only data we need
for nodes "2" and "e" may be their hash digest.

Adding three more txouts results in state #3:

                  3
                 / \
                /   \
               /     \
              /       \
             /         \
            /           \
           /             \
          2               3
                         / \
                        /   \
                       /     \
                      3       3
                     / \     / \
                    e   f   g   h

Suppose recently created txout f is spent. We have all the data required to
update the MMR, giving us state #4. It modifies two inner nodes and one leaf
node:

                  4
                 / \
                /   \
               /     \
              /       \
             /         \
            /           \
           /             \
          2               4
                         / \
                        /   \
                       /     \
                      4       3
                     / \     / \
                    e  (f)  g   h

Spending an archived txout requires the transaction to provide the merkle
path to the most recently committed TXO, in our case state #2. If txout b is
spent that means the transaction must provide the following data from state #2:

            2
           /
          2
         /
        /
       /
      0
       \
        b

We can add that data to our local knowledge of the TXO MMR, unpruning part of
it:

                  4
                 / \
                /   \
               /     \
              /       \
             /         \
            /           \
           /             \
          2               4
         /               / \
        /               /   \
       /               /     \
      0               4       3
       \             / \     / \
        b           e  (f)  g   h

Remember, we haven't _modified_ state #4 yet; we just have more data about it.
When we mark txout b as spent we get state #5:

                  5
                 / \
                /   \
               /     \
              /       \
             /         \
            /           \
           /             \
          5               4
         /               / \
        /               /   \
       /               /     \
      5               4       3
       \             / \     / \
       (b)          e  (f)  g   h

Secondly by now state #3 has been committed into the chain, and transactions
that want to spend txouts created as of state #3 must provide a TXO proof
consisting of state #3 data. The leaf nodes for outputs g and h, and the inner
node above them, are part of state #3, so we prune them:

                  5
                 / \
                /   \
               /     \
              /       \
             /         \
            /           \
           /             \
          5               4
         /               /
        /               /
       /               /
      5               4
       \             / \
       (b)          e  (f)

Finally, let's put this all together by spending txouts a, c, and g, and
creating three new txouts i, j, and k. State #3 was the most recently committed
state, so the transactions spending a and g are providing merkle paths up to
it. This includes part of the state #2 data:

                  3
                 / \
                /   \
               /     \
              /       \
             /         \
            /           \
           /             \
          2               3
         / \               \
        /   \               \
       /     \               \
      0       2               3
     /       /               /
    a       c               g

After unpruning we have the following data for state #5:

                  5
                 / \
                /   \
               /     \
              /       \
             /         \
            /           \
           /             \
          5               4
         / \             / \
        /   \           /   \
       /     \         /     \
      5       2       4       3
     / \     /       / \     /
    a  (b)  c       e  (f)  g

That's sufficient to mark the three outputs as spent and add the three new
txouts, resulting in state #6:

                        6
                       / \
                      /   \
                     /     \
                    /       \
                   /         \
                  6           \
                 / \           \
                /   \           \
               /     \           \
              /       \           \
             /         \           \
            /           \           \
           /             \           \
          6               6           \
         / \             / \           \
        /   \           /   \           6
       /     \         /     \         / \
      6       6       4       6       6   \
     / \     /       / \     /       / \   \
   (a) (b) (c)      e  (f) (g)      i   j   k

Again, state #4 related data can be pruned. In addition, depending on how the
STXO set is implemented, we may also be able to prune data related to spent txouts
after that state, including inner nodes where all txouts under them have been
spent (more on pruning spent inner nodes later).

### Consensus and Pruning

It's important to note that pruning behavior is consensus critical: a full node
that is missing data due to pruning it too soon will fall out of consensus, and
a miner that fails to include a merkle proof that is required by the consensus
is creating an invalid block. At the same time many full nodes will have
significantly more data on hand than the bare minimum so they can help wallets
make transactions spending old coins; implementations should strongly consider
separating the data that is, and isn't, strictly required for consensus.

A reasonable approach for the low-level cryptography may be to actually treat
the two cases differently, with the TXO commitments committing to what data
does and does not need to be kept on hand by the UTXO expiration rules. On the
other hand, leaving that uncommitted allows for certain types of soft-forks
where the protocol is changed to require more data than it previously did.

### Consensus Critical Storage Overheads

Only the UTXO and STXO sets need to be kept on fast random access storage.
Since STXO set entries can only be created by spending a UTXO - and are smaller
than a UTXO entry - we can guarantee that the peak size of the UTXO and STXO
sets combined will always be less than the peak size of the UTXO set alone in
the existing UTXO-only scheme (though the combined size can be temporarily
higher than what the UTXO set size alone would be when large numbers of
archived txouts are spent).

TXO journal entries and unpruned entries in the TXO MMR have log2(n) maximum
overhead per entry: a unique merkle path to a TXO commitment (by "unique" we
mean that no other entry shares data with it). On a reasonably fast system the
TXO journal will be flushed quickly, converting it into TXO MMR data; the TXO
journal will never be more than a few blocks in size.

Transactions spending non-archived txouts are not required to provide any TXO
commitment data; we must have that data on hand in the form of one TXO MMR
entry per UTXO. Once spent however the TXO MMR leaf node associated with that
non-archived txout can be immediately pruned - it's no longer in the UTXO set
so any attempt to spend it will fail; the data is now immutable and we'll never
need it again. Inner nodes in the TXO MMR can also be pruned if all leaves under
them are fully spent; detecting this is easy: the TXO MMR is a merkle-sum tree,
with each inner node committing to the sum of the unspent txouts under it.
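A minimal sketch of the merkle-sum idea (the hashing scheme here is illustrative, not a specification):

    import hashlib
    from dataclasses import dataclass

    @dataclass
    class SumNode:
        digest: bytes
        unspent: int  # number of unspent txouts under this node

    def leaf(txout_id: bytes, is_unspent: bool) -> SumNode:
        return SumNode(hashlib.sha256(txout_id).digest(), 1 if is_unspent else 0)

    def inner(left: SumNode, right: SumNode) -> SumNode:
        total = left.unspent + right.unspent
        digest = hashlib.sha256(left.digest + right.digest
                                + total.to_bytes(8, "little")).digest()
        return SumNode(digest, total)

    node = inner(leaf(b"txout-a", False), leaf(b"txout-b", False))
    print(node.unspent == 0)  # True: everything below is spent, so the node is prunable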

When an archived txout is spent the transaction is required to provide a merkle
path to the most recent TXO commitment. As shown above that path is sufficient
information to unprune the necessary nodes in the TXO MMR and apply the spend
immediately, reducing this case to the TXO journal size question (non-consensus
critical overhead is a different question, which we'll address in the next
section).

Taking all this into account the only significant storage overhead of our TXO
commitments scheme when compared to the status quo is the log2(n) merkle path
overhead; as long as less than 1/log2(n) of the UTXO set is active,
non-archived, UTXO's we've come out ahead, even in the unrealistic case where
all storage available is equally fast. In the real world that isn't yet the
case - even SSDs are significantly slower than RAM.

### Non-Consensus Critical Storage Overheads

Transactions spending archived txouts pose two challenges:

1) Obtaining up-to-date TXO commitment proofs

2) Updating those proofs as blocks are mined

The first challenge can be handled by specialized archival nodes, not unlike
how some nodes make transaction data available to wallets via bloom filters or
the Electrum protocol. There's a whole variety of options available, and the
data can be easily sharded to scale horizontally; the data is
self-validating allowing horizontal scaling without trust.

While miners and relay nodes don't need to be concerned about the initial
commitment proof, updating that proof is another matter. If a node aggressively
prunes old versions of the TXO MMR as it calculates pending TXO commitments, it
won't have the data available to update the TXO commitment proof to be against
the next block, when that block is found; the child nodes of the TXO MMR tip
are guaranteed to have changed, yet aggressive pruning would have discarded that
data.

Relay nodes could ignore this problem if they simply accept the fact that
they'll only be able to fully relay the transaction once, when it is initially
broadcast, and won't be able to provide mempool functionality after the initial
relay. Modulo high-latency mixnets, this is probably acceptable; the author has
previously argued that relay nodes don't need a mempool² at all.

For a miner though not having the data necessary to update the proofs as blocks
are found means potentially losing out on transaction fees. So how much extra
data is necessary to make this a non-issue?

Since the TXO MMR is insertion ordered, spending a non-archived txout can only
invalidate the upper nodes of the archived txout's TXO MMR proof (if this
isn't clear, imagine a two-level scheme, with per-block TXO MMRs, committed
by a master MMR for all blocks). The maximum number of relevant inner nodes
changed is log2(n) per block, so if there are n non-archival blocks between the
most recent TXO commitment and the pending TXO MMR tip, we have to store
log2(n)*n inner nodes - on the order of a few dozen MB even when n is a
(seemingly ridiculously high) year worth of blocks.
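A back-of-the-envelope check of that figure, assuming 32-byte digests and ~144 blocks/day (numbers are illustrative only):

    import math

    blocks_per_year = 144 * 365                                  # ~52,560 blocks
    inner_nodes = blocks_per_year * math.log2(blocks_per_year)   # log2(n) nodes per block
    print(round(inner_nodes * 32 / 1e6), "MB")                   # ~26 MB of digests, before per-node overhead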

Archived txout spends on the other hand can invalidate TXO MMR proofs at any
level - consider the case of two adjacent txouts being spent. To guarantee
success requires storing full proofs. However, they're limited by the blocksize
limit, and additionally are expected to be relatively uncommon. For example, if
1% of 1MB blocks was archival spends, our hypothetical year long TXO commitment
delay is only a few hundred MB of data with low-IO-performance requirements.

## Security Model

Of course, a TXO commitment delay of a year sounds ridiculous. Even the slowest
imaginable computer isn't going to need more than a few blocks of TXO
commitment delay to keep up ~100% of the time, and there's no reason why we
can't have the UTXO archive delay be significantly longer than the TXO
commitment delay.

However, as with UTXO commitments, TXO commitments raise issues with Bitcoin's
security model by allowing miners to profitably mine transactions
without bothering to validate prior history. At the extreme, if there was no
commitment delay at all, then at the cost of a bit of extra network bandwidth
"full" nodes could operate and even mine blocks completely statelessly by
expecting all transactions to include "proof" that their inputs are unspent; a
TXO commitment proof for a commitment you haven't verified isn't a proof that a
transaction output is unspent, it's a proof that some miners claimed the txout
was unspent.

At one extreme, we could simply implement TXO commitments in a "virtual"
fashion, without miners actually including the TXO commitment digest in their
blocks at all. Full nodes would be forced to compute the commitment from
scratch, in the same way they are forced to compute the UTXO state, or total
work. Of course a full node operator who doesn't want to verify old history can
get a copy of the TXO state from a trusted source - no different from how you
could get a copy of the UTXO set from a trusted source.

A more pragmatic approach is to accept that people will do that anyway, and
instead assume that sufficiently old blocks are valid. But how old is
"sufficiently old"? First of all, if your full node implementation comes "from
the factory" with a reasonably up-to-date minimum accepted total-work
thresholdⁱ - in other words it won't accept a chain with less than that amount
of total work - it may be reasonable to assume any Sybil attacker with
sufficient hashing power to make a forked chain meeting that threshold with,
say, six months worth of blocks has enough hashing power to threaten the main
chain as well.

That leaves public attempts to falsify TXO commitments, done out in the open by
the majority of hashing power. In this circumstance the "assumed valid"
threshold determines how long the attack would have to go on before full nodes
start accepting the invalid chain, or at least, newly installed/recently reset
full nodes. The minimum age that we can "assume valid" is a tradeoff between
political/social/technical concerns; we probably want at least a few weeks to
guarantee the defenders a chance to organise themselves.

With this in mind, a longer-than-technically-necessary TXO commitment delayʲ
may help ensure that full node software actually validates some minimum number
of blocks out-of-the-box, without taking shortcuts. However this can be
achieved in a wide variety of ways, such as the author's prev-block-proof
proposal³, fraud proofs, or even a PoW with an inner loop dependent on
blockchain data. Like UTXO commitments, TXO commitments are also potentially
very useful in reducing the need for SPV wallet software to trust third parties
providing them with transaction data.

i) Checkpoints that reject any chain without a specific block are a more
   common, if uglier, way of achieving this protection.

j) A good homework problem is to figure out how the TXO commitment could be
   designed such that the delay could be reduced in a soft-fork.

## Further Work

While we've shown that TXO commitments certainly could be implemented without
increasing peak IO bandwidth/block validation latency significantly with the
delayed commitment approach, we're far from being certain that they should be
implemented this way (or at all).

1) Can a TXO commitment scheme be optimized sufficiently to be used directly
without a commitment delay? Obviously it'd be preferable to avoid all the above
complexity entirely.

2) Is it possible to use a metric other than age, e.g. priority? While this
complicates the pruning logic, it could use the UTXO set space more
efficiently, especially if your goal is to prioritise bitcoin value-transfer
over other uses (though if "normal" wallets nearly never need to use TXO
commitments proofs to spend outputs, the infrastructure to actually do this may
rot).

3) Should UTXO archiving be based on a fixed size UTXO set, rather than an
age/priority/etc. threshold?

4) By fixing the problem (or possibly just "fixing" the problem) are we
encouraging/legitimising blockchain use-cases other than BTC value transfer?
Should we?

5) Instead of TXO commitment proofs counting towards the blocksize limit, can
we use a different miner fairness/decentralization metric/incentive? For
instance it might be reasonable for the TXO commitment proof size to be
discounted, or ignored entirely, if a proof-of-propagation scheme (e.g.
thinblocks) is used to ensure all miners have received the proof in advance.

6) How does this interact with fraud proofs? Obviously furthering dependency on
non-cryptographically-committed STXO/UTXO databases is incompatible with the
modularized validation approach to implementing fraud proofs.

# References

1) "Merkle Mountain Ranges",
   Peter Todd, OpenTimestamps, Mar 18 2013,
   https://github.com/opentimestamps/opentimestamps-server/blob/master/doc/merkle-mountain-range.md

2) "Do we really need a mempool? (for relay nodes)",
   Peter Todd, bitcoin-dev mailing list, Jul 18th 2015,
   https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009479.html

3) "Segregated witnesses and validationless mining",
   Peter Todd, bitcoin-dev mailing list, Dec 23rd 2015,
   https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012103.html

--

-- 
https://petertodd.org 'peter'[:-1] <at> petertodd.org
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev <at> lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev

Bip44 extension for P2SH/P2WSH/...

Hello List,

With SegWit approaching it would make sense to define a common derivation scheme for how BIP44-compatible
wallets will handle P2(W)SH (and later on P2WPKH) receiving addresses.
I was thinking about starting a BIP for it, but I wanted to get some feedback from other wallet devs first.

In my opinion there are two(?) different options: 

1) Stay with the current BIP44 account, give the user for each public key the option to show it as a
P2PKH address or a P2SH address, and also scan the blockchain for both representations of each public key.
	+) This has the advantage that the user does not need to decide, or to understand that he needs to
migrate to a new account type
	-) The downside is that the wallet has to scan/look for twice as many addresses forever. In the future when
we have P2WPKH, it will be three times as many.
	-) If you have the same xPub/xPriv key in different wallets, you need to be sure both take care of the
different address types

2) Define a new derivation path, parallel to BIP44, but with a different 'purpose' (eg.
<BipNumber-of-this-BIP>' instead of 44'). Let the user choose which account type he wants to add ("Normal
account", "Witness account"); see the small sketch after this list.

	m / purpose' / coin_type' / account' / change / address_index

	+) The wallet only needs to take care of 1 address per public key
	+) If you use more than one wallet on the same xPub/xPriv it will either work or fail completely. You will
notice immediately that there is something wrong
	-) The user has to understand that (s)he needs to migrate to a new account to get the benefits of SegWit
	+) Thus, it's easier to make a staged roll-out; only users actively deciding to use SegWit will get it and we
can catch bugs earlier.
	
3) other ideas?
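For illustration, option 2 amounts to nothing more than a different purpose level; a tiny sketch (the purpose number below is a placeholder to be assigned by the future BIP):

    def derivation_path(purpose: int, coin_type: int, account: int,
                        change: int, index: int) -> str:
        # m / purpose' / coin_type' / account' / change / address_index
        return "m/{}'/{}'/{}'/{}/{}".format(purpose, coin_type, account, change, index)

    # Hypothetical "witness account" 0, first external address, mainnet:
    print(derivation_path(purpose=999, coin_type=0, account=0, change=0, index=0))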

My personal favourite is option 2.

Has any Bip44 compliant wallet already done any integration at this point?

Thx,
Daniel/Mycelium
Peter Todd via bitcoin-dev | 10 May 20:57 2016

Making AsicBoost irrelevant

As part of the hard-fork proposed in the HK agreement(1) we'd like to make the
patented AsicBoost optimisation useless, and hopefully make further similar
optimizations useless as well.

What's the best way to do this? Ideally this would be SPV compatible, but if it
requires changes from SPV clients that's ok too. Also, the fix should be
compatible with existing mining hardware.

1) https://medium.com/ <at> bitcoinroundtable/bitcoin-roundtable-consensus-266d475a61ff

2) http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-April/012596.html

--

-- 
https://petertodd.org 'peter'[:-1] <at> petertodd.org
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev <at> lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev

Fwd: Compact Block Relay BIP

On Mon, May 9, 2016 at 11:32 AM, Tom <tomz <at> freedommail.ch> wrote:
> On Monday 09 May 2016 10:43:02 Gregory Maxwell wrote:
>> On Mon, May 9, 2016 at 9:35 AM, Tom Zander via bitcoin-dev
>> <bitcoin-dev <at> lists.linuxfoundation.org> wrote:
>> > You misunderstand the networking effects.
>> > The fact that your node is required to choose which one to set the
>> > announce
>> > bit on implies that it needs to predict which node will have the best data
>> > in the future.
>>
>> Not required. It may.
>
> It is required, in the reference of wanting to actually use compact block
> relay.

I cannot parse this sentence.

A node implementing this does not have to ask peers to send blocks
without further solicitation.

If they don't, their minimum transfer time increases to the current
1.5 RTT (but sending massively less data).

> Apologies, I thought that the term was wider known.  "Laboratory situations"
> is used where I am from as the opposite of real-world messy and unpredictable
> situations.
>
> So, your measurements may be true, but are not useful to decide how well it
> behaves under less optimal situations. aka "the real world".

My measurements were made in the real world, on a collection of nodes
around the network which were not setup for this purpose and are
running standard configurations, over many weeks of logs.

This doesn't guarantee that they're representative of everything-- but
they don't need to be.

>> This also _increases_ robustness. Right now a single peer failing at
>> the wrong time will delay blocks with a long time out.
>
> If your peers that were supposed to send you a compact block fail, then you'll
> end up in exactly that same situation again.  Only with various timeouts in
> between before you get your block making it a magnitude slower.

That is incorrect.

If a header shows up and a compact block has not shown up, a compact
block will be requested.

If a compact block shows up, reconstruction will be attempted.

If any of the requested compact blocks show up (the three in advance,
if high bandwidth mode is used, or a requested one, if there was one)
then reconstruction proceeds without delay.

The addition of the unsolicited input causes no additional timeouts or
delays (ignoring bandwidth usage). It does use some more bandwidth
than not having it, but still massively less than the status quo.

>> > Another problem with your solution is that nodes send a much larger amount
>> > of unsolicited data to peers in the form of the thin-block compared to
>> > the normal inv or header-first data.
>>
>> "High bandwidth" mode
>
> Another place where I may have explained better.
> This is not about the difference about the two modes of your design.
> This is about the design as a whole. As compared to current.

It is massively more efficient than the current protocol, even under
fairly poor conditions. In the absolute worst possible case (a miner sends a
block of completely unexpected transactions, and three peers send compact
blocks), it adds about 6% overhead.

> Service bits are exactly the right solution to indicate additional p2p
> feature-support.

With this kind of unsubstantiated axiomatic assertion, I don't think
further discussion with you is likely to be productive-- at least I
gave a reason.

> That's all fine and well, it doesn't at any point take away from my point that
> any specification should NOT invent something new that has for decades had a
> great specification already.

UTF-8 would be a poor fit here for the reasons I explained and other
less significant ones (including the additional error cases that must
be handled as a result of the inefficient encoding; poor handling of
invalid UTF-8 has even resulted in security issues in some
applications).

I am a bit baffled that you'd suggest using UTF-8 as a general compact
integer encoding in a binary protocol in the first place.
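
For context, a sketch of Bitcoin's existing CompactSize varint, the
kind of compact integer encoding already used in the binary protocol
(illustration only):

import struct

def write_compact_size(n: int) -> bytes:
    """Bitcoin's CompactSize varint (illustration only)."""
    if n < 0xfd:
        return struct.pack("<B", n)
    elif n <= 0xffff:
        return b"\xfd" + struct.pack("<H", n)
    elif n <= 0xffffffff:
        return b"\xfe" + struct.pack("<I", n)
    else:
        return b"\xff" + struct.pack("<Q", n)

# 300 encodes to three bytes, with no overlong or invalid forms to reject.
assert write_compact_size(300) == b"\xfd\x2c\x01"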

>> > Just the first (highest) 8 bytes of a sha256 hash.
>> >
>> > The amount of collisions will not be less if you start xoring the rest.
>> > The whole reason for doing this extra work is also irrelevant as a spam
>> > protection.
>>
>> Then you expose it to a trivial collision attack:  To find two 64 bit
>> hashes that collide I need perform only roughly 2^32 computation. Then
>> I can send them to the network.
>
> No, you still need to have done a POW.
>
> Next to that, your scheme is 2^32 computations *and* some XORs. The XORs are
> percentage wise a rounding error on the total time. So your argument also
> destroys your own addition.
>
>> This issue is eliminated by salting the hash.
>
> The issue is better eliminated by not allowing nodes to send uninvited large
> messages.

What are you talking about? You seem profoundly confused here. There
is no proof of work involved anywhere.

I obtain some txouts. I write a transaction spending them in malleable
form (e.g. sighash single and an op_return output), then grind the
extra output to produce different hashes.  After doing this 2^32 times
I am likely to find two which share the same initial 8 bytes of txid.

I send one to half the nodes, the other to half the nodes.  When a
block shows up carrying one or the other of my transactions
reconstruction will fail on half the nodes in the network in a
protocol with a simple truncated hash.

Of course, doing this is easy, so I can keep it going persistently. If
I am a miner, I can be sure to filter these transactions from my own
blocks-- causing all my competition to suffer higher orphaning.

The salted short-ids do not have this easily exploited, gratuitous
vulnerability. This was obvious enough that this feature was in the
very earliest descriptions of these techniques in 2013/2014. The
salted short-ids cannot be collided in pre-computation, and cannot be
collided with respect to multiple nodes at once.
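
To illustrate the difference (illustration only; the actual scheme
uses a SipHash-style keyed hash derived from the block announcement
rather than HMAC-SHA256):

import hashlib, hmac, os

def truncated_short_id(txid: bytes) -> bytes:
    # Unsalted: ~2^32 offline grinding finds two txids sharing these bytes,
    # which can then be split across the network ahead of time.
    return txid[:8]

def salted_short_id(txid: bytes, salt: bytes) -> bytes:
    # Salted: the key material is only fixed when the block is announced,
    # so collisions cannot be precomputed, and a collision under one salt
    # is not a collision under any other.
    return hmac.new(salt, txid, hashlib.sha256).digest()[:8]

txid = hashlib.sha256(b"example transaction").digest()
print(truncated_short_id(txid).hex())
print(salted_short_id(txid, os.urandom(8)).hex())
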
bfd--- via bitcoin-dev | 9 May 10:26 2016

Committed bloom filters for improved wallet performance and SPV security

We introduce several concepts that rework the lightweight Bitcoin
client model in a manner which is secure, efficient and privacy
compatible.

The properties of BIP37 SPV [0] are unfortunately not as strong as
originally thought:

     * The expected privacy from the probabilistic nature of bloom
       filters does not exist [1][2]; any user with a BIP37 SPV wallet
       should operate under no expectation of privacy.
       Implementation flaws make this significantly worse: no matter
       how high the false positive rate (up to simply downloading the
       whole blocks verbatim), the intent of the client connection is
       recoverable.

     * Significant processing load is placed on nodes in the Bitcoin
       network by lightweight clients: a single syncing wallet causes
       (at the time of writing) 80GB of disk reads and a large amount
       of CPU time to be consumed processing this data. This carries
       significant denial of service risk [3]; indistinguishable
       clients can repeatedly request taxing blocks, causing
       reprocessing on every request. Processed data is unique to every
       client, and cannot be cached or made more efficient while
       staying within specification.

     * Wallet clients cannot have strong consistency or security
       expectations: BIP37 merkle paths allow a wallet to validate
       that an output was spendable at some point in time, but do not
       prove that this output is not spent today.

     * Nodes in the network can denial-of-service attack all BIP37 SPV
       wallet clients by simply returning null filter results for
       requests; the wallet has no way of discerning whether it has
       been lied to and may simply be unaware that any payment has
       been made to it. Many nodes can be queried in a probabilistic
       manner, but this increases the already heavy network load with
       little benefit.

We propose a new concept which can work towards addressing these
shortcomings.

A Bloom Filter Digest (BFD) is deterministically created for every
block, encompassing the inputs and outputs of the contained
transactions, with the filter parameters tuned such that the filter is
a small fraction of the size of the total block data. To determine
whether a block has contents which may be interesting, a second bloom
filter of all relevant key material is created. A binary comparison
between the two filters returns true if there are probably matching
transactions, and false if there are certainly no matching
transactions. Any matched blocks can be downloaded in full and
processed for transactions which may be relevant.

The BFD can be used verbatim as a replacement for BIP37, where the
filter can be cached between clients without needing to be recomputed.
It can also be used by normal pruned nodes to re-scan their wallet
locally without needing to have the block data available to scan, or
without reading the entire block chain from disk.
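
A minimal sketch of the idea (not a proposed wire format; it tests
wallet elements individually rather than doing the raw filter-to-filter
comparison described above):

import hashlib

class BloomFilter:
    def __init__(self, size_bits=8192, n_hashes=4):
        self.size, self.n_hashes = size_bits, n_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.n_hashes):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

# Digest built once per block from every input outpoint and output script:
bfd = BloomFilter()
for element in (b"spent outpoint 1", b"output script 1"):
    bfd.add(element)

# A wallet tests its own key material: False means the block certainly
# holds nothing relevant, True means download the full block and scan it.
print(any(bfd.might_contain(s) for s in (b"output script 1",)))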

-

For improved probabilistic security the bloom filters can be presented
to lightweight clients by semi-trusted oracles. A client wallet
assumes that it trusts a set, or subset, of remote parties (wallet
vendors, services) which all sign the BFD for each block. The BFD can
be downloaded from a single remote source, and the hash of the filters
compared against the others in the trust set. Agreement is a weak
suggestion that the filter has not been tampered with, assuming that
these parties are not conspiring to defraud the client.

The oracles do not learn any additional information about the client
wallet; the client can download the block data from nodes on the
network, HTTP services, NNTP, or any other out-of-band communication
method that provides the privacy desired by the client.

-

The security model of the oracle bloom filter can be vastly improved
by instead committing a hash of the BFD inside every block as a soft-
fork consensus rule change. After this, every node in the network
would build the filter and validate that the hash in the block is
correct, then make a conscious choice to discard it for space savings
or cache the data to disk.

With a commitment to the filter it becomes impossible to lie to
lightweight clients by omission. Lightweight clients are provided with
a block header, merkle path, and the BFD. Altering the BFD invalidates
the merkle proof, so its validity is a strong indicator that the
client has an unadulterated picture of the UTXO condition without
needing to build one itself. A strong assurance of the BFD's hash
means that the filters can be downloaded out of band along with the
block data at the leisure of the client, allowing for significantly
greater privacy and taking load away from the P2P Bitcoin network.
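
A sketch of the light-client check this would enable (the commitment
location is left unspecified here; only the merkle-path arithmetic is
shown):

import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_bfd_commitment(bfd_bytes, merkle_branch, index, merkle_root):
    # Hash the filter digest, then fold in the branch exactly as for a
    # transaction merkle proof; a match against the committed root means
    # the filter was not altered or withheld by the serving peer.
    h = dsha256(bfd_bytes)
    for sibling in merkle_branch:
        h = dsha256(sibling + h) if index & 1 else dsha256(h + sibling)
        index >>= 1
    return h == merkle_root

# Trivial self-check: with an empty branch the digest hash is the root.
bfd = b"\x00" * 32
assert verify_bfd_commitment(bfd, [], 0, dsha256(bfd))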

Committing the BFD is not a hard forking change, and does not require
alterations to mining software so long as the coinbase transaction
scriptSig is not included in the bloom filter.

[0] https://github.com/bitcoin/bips/blob/master/bip-0037.mediawiki
[1] https://eprint.iacr.org/2014/763.pdf
[2] https://jonasnick.github.io/blog/2015/02/12/privacy-in-bitcoinj/
[3] https://github.com/petertodd/bloom-io-attack

Fwd: Proposal to update BIP-32

I received this:

---------- Forwarded message ----------
From: Pieter Wuille <pieter.wuille <at> gmail.com>
Date: Fri, Apr 22, 2016 at 6:44 PM
Subject: Re: [bitcoin-dev] Proposal to update BIP-32
To: Marek Palatinus <marek <at> palatinus.cz>
Cc: Bitcoin Dev <bitcoin-dev <at> lists.linuxfoundation.org>


On Thu, Apr 21, 2016 at 2:08 PM, Marek Palatinus <marek <at> palatinus.cz> wrote:
On Wed, Apr 20, 2016 at 6:32 PM, Jochen Hoenicke via bitcoin-dev <bitcoin-dev <at> lists.linuxfoundation.org> wrote:
Hello Bitcoin Developers,

I would like to make a proposal to update BIP-32 in a small way.

I think the backward compatibility issues are minimal.  The chance
that this affects anyone is less than 10^-30.  Even if it happens, it
would only create some additional addresses (that are not seen if the
user downgrades).  The main reason for suggesting a change is that we
want a similar method for different curves where a collision is much
more likely.

I think a change like this makes a lot of sense technically, and I wish I had known how BIP-32 would end up being used inside higher level mechanisms that automatically increment the position beyond the control of the application generating them. The inclusion of the requirement was there because ECDSA is notorious for security problems under biased secret keys, though it's really only a certificational issue for secp256k1 (due to its group order being so close to 2^256).

#QUESTIONS:

What is the procedure to update the BIP?  Is it still possible to
change the existing BIP-32 even though it is marked as final?  Or
should I make a new BIP for this that obsoletes BIP-32?

BIPs are not supposed to be updated with new ideas, only remarks/links/typos/clarifications/..., so that their numbers can unambiguously be used to refer to an idea. My suggestion would be to write a new BIP that overrides parts of BIP32, and then put a note in BIP32 that a better mechanism is available that is unlikely to change things in reality for the secp256k1 curve.

I guess
 
What algorithm is preferred? (bike-shedding)  My suggestion:

---

Change the last step of the private -> private derivation functions to:

 . In case parse(I_L) >= n or k_i = 0, the procedure is repeated
   at step 2 with
    I = HMAC-SHA512(Key = c_par, Data = 0x01 || I_R || ser32(i)) 

---

I think this suggestion is simple to implement (a bit harder to unit
test) and the string to hash with HMAC-SHA512 always has the same
length.  I use I_R, since I_L is obviously not very random if I_L >= n.
There is a minimal chance that it will lead to an infinite loop if I_R
is the same in two consecutive iterations, but that has only a chance
of 1 in 2^512 (if the algorithm is used for different curves that make
I_L >= n more likely, the chance is still less than 1 in 2^256).  In
theory, this loop can be avoided by incrementing i in every iteration,
but this would make an implementation error in the "hard to test" path
of the program more likely.

The chance for failure is a bit higher than that, as it only requires a failed key (one in 2^128) in the first step, followed by an iteration that results in the same I_R to cause a cycle. If you take multiple failures before the cycle starts into account, the combined chance for failure is p/(1-p)^2 / 2^256 (with p the chance for a random inadmissible key), which is not much better than 1 in 2^128 for high values of p.

An alternative that always converges is to retry with an appended iteration count:
{
  I = HMAC-SHA512(Key = c_par, Data = 0x01 || I_R || ser32(i)) for the first iteration
  I = HMAC-SHA512(Key = c_par, Data = 0x01 || I_R || ser32(i) || ser32(j)) for iteration number j, with j > 0
}
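
A minimal sketch of this always-converging variant, hardened case
only, assuming the elided field in the Data strings above is I_R:

import hmac, hashlib

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 order

def ser32(i): return i.to_bytes(4, "big")
def ser256(k): return k.to_bytes(32, "big")
def hmac512(key, data): return hmac.new(key, data, hashlib.sha512).digest()

def ckd_priv_hardened(k_par: int, c_par: bytes, i: int):
    # Hardened child derivation; the non-hardened case differs only in the
    # initial Data string (serP(point(k_par)) instead of 0x00 || ser256(k_par)).
    I = hmac512(c_par, b"\x00" + ser256(k_par) + ser32(i))
    j = 0
    while True:
        I_L, I_R = I[:32], I[32:]
        k_i = (int.from_bytes(I_L, "big") + k_par) % N
        if int.from_bytes(I_L, "big") < N and k_i != 0:
            return k_i, I_R   # child key, child chain code
        # Inadmissible key: retry, feeding I_R back in and appending an
        # iteration counter from the second retry on, so the loop cannot cycle.
        data = b"\x01" + I_R + ser32(i)
        if j > 0:
            data += ser32(j)
        I = hmac512(c_par, data)
        j += 1

# Example with dummy inputs:
k, c = ckd_priv_hardened(k_par=1, c_par=b"\x01" * 32, i=2**31)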

Cheers,

--
Pieter



Re: Proposal to update BIP-32

On Sun, May 8, 2016 at 10:07 AM, Pavol Rusnak via bitcoin-dev
<bitcoin-dev <at> lists.linuxfoundation.org> wrote:
> On 21/04/16 14:08, Marek Palatinus via bitcoin-dev wrote:
>> Sipa, you are probably the most competent to answer this.
>> Could you please tell us your opinion? For me, this is
>> straightforward, backward compatible fix and I like it a lot.
>> Not sure about the process of changing "Final" BIP though.
>
> Sipa: Marek told me you posted your answer and he received it, but it
> never reached the list. Could you please resend after figuring out what
> went wrong?

AFAIK Sipa has not been on this list for some time.
