Flavien Charlon | 16 Nov 17:21 2014

Increasing the OP_RETURN maximum payload size

Hi,

The data that can be embedded as part of an OP_RETURN output is currently limited to 40 bytes. It was initially supposed to be 80 bytes, but got reduced to 40 before the 0.9 release to err on the side of caution.

After 9 months, it seems OP_RETURN did not lead to a blockchain catastrophe, so I think it might be time to discuss increasing the limit.

There are a number of proposals:
  1. Allow two OP_RETURN outputs per transaction (PR)
  2. Increase the default maximum payload size from 40 bytes to 80 bytes (PR)
    Note that the maximum can be configured already through the 'datacarriersize' option - this is just changing the default.
  3. Make the maximum OP_RETURN payload size proportional to the number of outputs of the transaction
  4. A combination of the above
Option 3 sounds the most interesting to me, and option 2 would be the second best.

Option 1 is also good to have, as long as the "space budget" is shared between the two outputs.
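
For concreteness, here is a minimal sketch of what option 3 (the proportional cap) could look like as a relay-policy check. All names and numbers are illustrative assumptions, not the actual Bitcoin Core datacarrier code:

    #include <algorithm>
    #include <cstddef>

    // Illustrative sketch only -- not Bitcoin Core's IsStandard()/datacarrier code.
    // Option 3: let the allowed OP_RETURN payload grow with the number of
    // non-data outputs, up to a hard ceiling.
    bool AcceptableDataCarrierSize(size_t nPayloadBytes, size_t nNonDataOutputs)
    {
        const size_t BYTES_PER_OUTPUT = 40;  // hypothetical per-output budget
        const size_t HARD_CAP = 80;          // hypothetical absolute ceiling
        size_t nAllowed = std::min(HARD_CAP,
                                   BYTES_PER_OUTPUT * std::max<size_t>(1, nNonDataOutputs));
        return nPayloadBytes <= nAllowed;
    }

A shared budget for option 1 would simply sum the payload sizes of all OP_RETURN outputs in the transaction before applying the same check.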

Can we discuss this and agree on a plan?

Thanks,
Flavien
Wladimir | 13 Nov 18:25 2014

[ann] Live Bitcoin Core commits on #bitcoin-commits

All,

As of now you can join #bitcoin-commits on freenode to be notified of
commits to the bitcoin/bitcoin repository. Thanks to Luke-Jr for
telling me how to set this up.

Regards,
Wladimir

Tier Nolan | 9 Nov 00:45 2014

BIP draft - Auxiliary Header Format

I created a draft BIP detailing a way to add auxiliary headers to Bitcoin in a bandwidth-efficient way. The overhead is only around 104 bytes per auxiliary header. This is much smaller than would be required by embedding the hash of the header in the coinbase of the block.

It is a soft fork and it uses the last transaction in the block to store the hash of the auxiliary header.

It makes use of the fact that the last transaction in the block has a much less complex Merkle branch than the other transactions.
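
As a rough illustration (this is not code from the BIP draft): whenever a level of Bitcoin's Merkle tree has an odd number of nodes, the last node is paired with a copy of itself, so that branch element needs no bytes on the wire. In the sketch below, `combine` stands in for Bitcoin's double-SHA256 of the two concatenated hashes, and all names are assumptions of mine:

    #include <array>
    #include <cstddef>
    #include <functional>
    #include <vector>

    using Hash256 = std::array<unsigned char, 32>;
    // Stand-in for double-SHA256(left || right), supplied by the caller.
    using Combine = std::function<Hash256(const Hash256&, const Hash256&)>;

    // Recompute the Merkle root starting from the last transaction's hash.
    // 'hasSibling' says, level by level from the bottom up, whether the node
    // has a real left sibling; 'leftSiblings' holds only those real siblings.
    Hash256 RootFromLastTx(Hash256 node,
                           const std::vector<Hash256>& leftSiblings,
                           const std::vector<bool>& hasSibling,
                           const Combine& combine)
    {
        std::size_t next = 0;
        for (bool sib : hasSibling) {
            if (sib)
                node = combine(leftSiblings[next++], node); // last tx is the right child
            else
                node = combine(node, node);                 // duplicated node: free
        }
        return node;
    }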
Mike Hearn | 8 Nov 17:04 2014

Update on mobile 2-factor wallets

Here is a summary of current developments in the space of decentralised 2-factor Bitcoin wallets. I figured some people here might find it interesting.

There has been very nice progress in the last month or two. Decentralised 2FA wallets run on a desktop/laptop and have a (currently always Android) smartphone app to go with them. Compromise of the wallet requires compromise of both devices.

Alon Muroch and Chris Pacia have made huge progress on "Bitcoin Authenticator", their (HD) wallet app. The desktop side runs on Win/Mac/Linux and the mobile side runs on Android. Sending money from the desktop triggers a push notification to the mobile side, which presents the transaction for confirmation. Additionally the desktop wallet has a variety of other features like OneName integration. It's currently in alpha, but I suspect it will be quite popular once released due to its focus on UI and the simple mobile security model. I've tried it out and it worked fine.


Bitcoin Authenticator uses P2SH/CHECKMULTISIG to provide the 2-factor functionality. However, this has well-known downsides: the address type is less widely supported, and the transactions are larger, which wastes block chain space and results in higher fees.

To solve this problem Christopher Mann and Daniel Loebenberger from Uni Bonn have ported the efficient DSA 2-of-2 signing protocol by MacKenzie and Reiter to ECDSA, and implemented their own desktop/Android wallet app pair showing that it works and has good enough performance. This means that P2SH / CHECKMULTISIG is no longer required for the two factor auth case, and thus it's as cheap as using regular addresses.


Their protocol uses an interesting combination of ECDSA, Paillier homomorphic encryption and some zero knowledge proofs to build a working solution for the 2-of-2 case only. Their app bootstraps from a QR code that includes a TLS public key and IP address of the desktop: the mobile app then connects to it directly, renders the transaction and performs the protocol when the user confirms. The protocol is online, so both devices must be physically present.

Their code is liberally licensed and looks easy to integrate with Alon and Chris's more user-focused work, as both projects are built with Android and the latest bitcoinj. If someone is interested, merging Christopher and Daniel's code into the bitcoinj multisig framework would be a useful project, and would make it easier for wallet devs to benefit from this work. I can write a design doc to follow if needed.

Currently, neither of these projects implements support for BIP70, so the screen you see when signing the transaction is hardly user-friendly or secure: you just have to trust that the destination address you're paying to isn't tampered with. Support for sending a full payment request between devices is the clear next step once these wallets have obtained a reasonable user base and are stable.


Peter Todd | 6 Nov 22:32 2014

The difficulty of writing consensus critical code: the SIGHASH_SINGLE bug

I recently wrote the following for a friend and thought others might learn
from it.

> Nope, never heard that term.  By "bug-for-bug" compatibility, do you mean
> that, for each version which has a bug, each bug must behave in the *same*
> buggy way?

Exactly. tl;dr: if you accept a block as valid due to a bug that others reject,
you're forked and the world ends.

Long answer... well you reminded me I've never actually written up a good
example for others, and a few people have asked me for one. A great example of
this is the SIGHASH_SINGLE bug in the SignatureHash() function:

    uint256 SignatureHash(CScript scriptCode, const CTransaction& txTo, unsigned int nIn, int nHashType)
    {

<snip>

        else if ((nHashType & 0x1f) == SIGHASH_SINGLE)
        {
            // Only lock-in the txout payee at same index as txin
            unsigned int nOut = nIn;
            if (nOut >= txTmp.vout.size())
            {
                printf("ERROR: SignatureHash() : nOut=%d out of range\n", nOut);
                return 1;
            }
<snip>

        }

<snip>

        // Serialize and hash
        CHashWriter ss(SER_GETHASH, 0);
        ss << txTmp << nHashType;
        return ss.GetHash();
    }

So that error condition results in SignatureHash() returning 1 rather than the
actual hash. But the consensus-critical code that implements the CHECKSIG
operators doesn't check for that condition! Thus as long as you use the
SIGHASH_SINGLE hashtype and the txin index is >= the number of txouts, any valid
signature for the hash of the number 1 is considered valid!
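
To make that bug-for-bug requirement concrete, here is a minimal sketch, with simplified types of my own rather than Bitcoin Core's actual API, of the behaviour a re-implementation has to reproduce in this edge case:

    #include <array>
    #include <cstddef>
    #include <cstdint>

    using Hash256 = std::array<uint8_t, 32>;
    const int SIGHASH_SINGLE_FLAG = 3; // value of SIGHASH_SINGLE

    Hash256 SignatureHashSketch(unsigned int nIn, std::size_t nOutputs, int nHashType)
    {
        if ((nHashType & 0x1f) == SIGHASH_SINGLE_FLAG && nIn >= nOutputs) {
            // Consensus-critical quirk: the "hash" is literally the number 1
            // (little-endian), not an error. Throwing here forks you off.
            Hash256 one{};
            one[0] = 1;
            return one;
        }
        // Otherwise: serialize the modified transaction plus nHashType and
        // double-SHA256 it (omitted in this sketch).
        return Hash256{};
    }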

When I found this bug¹ I used it to fork bitcoin-ruby, among others.
(I'm not the first; I found it independently after Matt Corallo.) Those
alt-implementations handled this edge case as an exception, which in
turn caused the script to fail. Thus they'd reject blocks containing
transactions using such scripts, and be forked off the network.

You can also use this bug for something even more subtle. So the
CHECKSIG* opcode evaluation does this:

    // Drop the signature, since there's no way for a signature to sign itself
    scriptCode.FindAndDelete(CScript(vchSig));

and CHECKMULTISIG* opcode:

    // Drop the signatures, since there's no way for a signature to sign itself
    for (int k = 0; k < nSigsCount; k++)
    {
        valtype& vchSig = stacktop(-isig-k);
        scriptCode.FindAndDelete(CScript(vchSig));
    }

We used to think that code could never be triggered by a scriptPubKey or
redeemScript, basically because there was no way to get a signature into a
transaction in the right place without the signature depending on the txid of
the transaction it was to be included in. (long story) But SIGHASH_SINGLE makes
that a non-issue, as you can now calculate the signature that signs '1' ahead
of time! In a CHECKMULTISIG that signature is valid, so is included in the list
of signatures being dropped, and thus the other signatures must take that
removal into account or they're invalid. Again, you've got a fork.

However this isn't the end of it! So the way FindAndDelete() works is as
follows:

    int CScript::FindAndDelete(const CScript& b)
    {
        int nFound = 0;
        if (b.empty())
            return nFound;
        iterator pc = begin();
        opcodetype opcode;
        do
        {
            while (end() - pc >= (long)b.size() && memcmp(&pc[0], &b[0], b.size()) == 0)
            {
                pc = erase(pc, pc + b.size());
                ++nFound;
            }
        }
        while (GetOp(pc, opcode));
        return nFound;
    }

So that's pretty ugly, but basically what's happening is that the loop iterates
through all the opcodes in the script. Every opcode is compared at the *byte*
level to the bytes in the argument. If they match those bytes are removed from
the script and iteration continues. The resulting script, with chunks sliced
out of it at the byte level, is what gets hashed as part of the signature
checking algorithm.

As FindAndDelete() is always called with CScript(vchSig), the signature
being found and deleted is reserialized. Serialization of bytes isn't
unique; there are multiple valid encodings for PUSHDATA operations. The
way CScript() is called, the most compact encoding is used, so if the
script being hashed used a different encoding those bytes are *not*
removed and thus the resulting signature hash is different.
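
Here is a tiny standalone illustration of that non-uniqueness; the bytes are made up and none of this is Bitcoin Core code. The same 71-byte signature can be pushed with a direct length byte or with OP_PUSHDATA1, and the two encodings don't compare equal at the byte level, so only the compact form would ever be found and deleted:

    #include <cassert>
    #include <cstdint>
    #include <vector>

    int main()
    {
        std::vector<uint8_t> sig(71, 0xAA);  // stand-in signature bytes

        // Compact push: <0x47> <71 bytes> -- what CScript(vchSig) builds.
        std::vector<uint8_t> compact;
        compact.push_back(static_cast<uint8_t>(sig.size()));
        compact.insert(compact.end(), sig.begin(), sig.end());

        // OP_PUSHDATA1 push: <0x4c> <0x47> <71 bytes> -- same meaning, different bytes.
        std::vector<uint8_t> pushdata1;
        pushdata1.push_back(0x4c);           // OP_PUSHDATA1
        pushdata1.push_back(static_cast<uint8_t>(sig.size()));
        pushdata1.insert(pushdata1.end(), sig.begin(), sig.end());

        // The byte-level comparison FindAndDelete() relies on fails here,
        // so the OP_PUSHDATA1 form would survive deletion.
        assert(compact != pushdata1);
        return 0;
    }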

Again, if you don't get every last one of those details exactly right, you'll
get forked.

...and I'm still not done! So when you call CScript(vchSig) the relevant code
is the following:

    class CScript : public std::vector<unsigned char>
    {

<snip>

        explicit CScript(const std::vector<unsigned char>& b) { operator<<(b); }

<snip>

        CScript& operator<<(const std::vector<unsigned char>& b)
        {
            if (b.size() < OP_PUSHDATA1)
            {
                insert(end(), (unsigned char)b.size());
            }
            else if (b.size() <= 0xff)
            {
                insert(end(), OP_PUSHDATA1);
                insert(end(), (unsigned char)b.size());
            }
            else if (b.size() <= 0xffff)
            {
                insert(end(), OP_PUSHDATA2);
                unsigned short nSize = b.size();
                insert(end(), (unsigned char*)&nSize, (unsigned char*)&nSize + sizeof(nSize));
            }
            else
            {
                insert(end(), OP_PUSHDATA4);
                unsigned int nSize = b.size();
                insert(end(), (unsigned char*)&nSize, (unsigned char*)&nSize + sizeof(nSize));
            }
            insert(end(), b.begin(), b.end());
            return *this;
        }

<snip, rest of class definition>

    }

Recently as part of BIP62 we added the concept of a 'minimal' PUSHDATA
operation. Using the minimum-sized PUSHDATA opcode is obvious; not so obvious
is that there are a few "push number to stack" opcodes that push the numbers 0
through 16 and -1 to the stack, bignum encoded. If you are pushing data that
happens to match one of those values, you're supposed to use the OP_1...OP_16
and OP_1NEGATE opcodes rather than a PUSHDATA.

This means that calling CScript(b'\x81') will result in a non-standard
script. I know an unmerged pull-req² related to sipa's BIP62 work has
code in the CScript() class to automatically do that conversion; had
that code shipped, we'd have had a potential forking bug between new and
old versions of Bitcoin, as the exact encoding of CScript() is
consensus-critical by virtue of being called by the FindAndDelete() code!
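
For illustration only (raw opcode bytes, nothing from the pull request itself): pushing the single byte 0x81 can be encoded as a plain one-byte push or, under the minimal-push rule, as OP_1NEGATE, and the two are different byte sequences:

    #include <cassert>
    #include <cstdint>
    #include <vector>

    int main()
    {
        // What the quoted operator<< emits today: "push one byte: 0x81".
        std::vector<uint8_t> raw_push = {0x01, 0x81};

        // What BIP62's minimal-push rule prefers: the dedicated opcode for -1.
        std::vector<uint8_t> minimal = {0x4f};  // OP_1NEGATE

        // Same stack effect, different bytes -- exactly the kind of difference
        // that becomes consensus-critical once FindAndDelete() is in the picture.
        assert(raw_push != minimal);
        return 0;
    }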

Even had we made that mistake, I'm not sure how to actually exploit it...
FindAndDelete() is only ever called in a consensus-critical way with valid
signatures; the byte arrays 01, 02, ..., 81 are all totally invalid signatures.

The best I could think of would be to exploit the script verification
flag SCRIPT_VERIFY_STRICTENC by using the little-known hybrid-pubkey
encoding³, which I spent the past two hours looking at. However it isn't
even soft-fork safe in the current implementation!  All I could find was
a new DoS attack⁴, and it's not exploitable in an actual release due to
the pre-v0.10 IsStandard() rules. :(

[¹]: https://bitcointalk.org/index.php?topic=260595.0
[²]: https://github.com/bitcoin/bitcoin/pull/5091
[³]: https://github.com/bitcoin/bitcoin/blob/cd9114e5136ecc1f60baa43fffeeb632782f2353/src/test/data/script_valid.json#L739
[⁴]: http://www.mail-archive.com/bitcoin-development@lists.sourceforge.net/msg06458.html

-- 
'peter'[:-1]@petertodd.org
000000000000000019121d8632bcba14de98125e8a9affc7d07c760706ba3879
Michael McLees | 6 Nov 14:29 2014

Nakapay - Proposal for payment method using client generated paycodes and federated paycode servers

I sent this yesterday but it is not showing in the archives, so I'm not sure if I did it correctly. I sent it prior to subscribing, so perhaps that mucked it up.

nakapay.com

I have developed a system whereby a person requesting Bitcoin can make a specific request (amount, address, timeframe, etc.) by communicating only a 6-character paycode to a payer. The system does not require that users sign up for the service; it is open to all. Users may submit information by POST via my API, for which I have documentation on the website above. It is my intention to convince wallet developers, merchants, exchanges, and payment processors to integrate my system into their products.

Common objections are a lack of use cases and a lack of security. I'd like to explore possible use cases and discuss security with this mailing list.

When talking to wallet developers, I've gotten the impression that there is a chicken and egg problem with my product. If no one uses it, they won't develop for it, and if they don't develop for it ... on and on.

There are possible monetary incentives for development as there is a possible revenue stream for paycode server operators.

I've not used a mailing list like this before, so I'm not sure if this submission is getting where it needs to go.

Thank you all for your time and continued efforts to improve Bitcoin.
Peter Todd | 6 Nov 11:38 2014

SCRIPT_VERIFY_STRICTENC and CHECKSIG NOT

So right now git head will accept the following invalid transaction into
the mempool:

0100000001140de229e08fda25cbc16ded2618cdacce49fcb18c0b6ccdace00040909adae4000000009000493046022100f7828d81c849c5448ba5ba4ef55df6b4d0ba3ae3f1a59cff3291880c2c8e524f022100d2f5bc9dc2f0674eded31023cb47e61a596e10f8f1ddd44cf92d290c9db577c70144410778d430274f8c5ec1321338151e9f27f4c676a008bdf8638d07c0b6be9ab35c71a1518063243acd4dfe96b66e3f2ec8013c8e072cd09b3834a19f81f659cc3455ac91ffffffff01102700000000000017a914e661a2229cc824329c9409f49d99cb5ac350c9288700000000

which spends the redeemScript:

0778d430274f8c5ec1321338151e9f27f4c676a008bdf8638d07c0b6be9ab35c71a1518063243acd4dfe96b66e3f2ec8013c8e072cd09b3834a19f81f659cc3455
CHECKSIG NOT

That pubkey is valid and accepted by OpenSSL in its obscure "hybrid"
format. The transaction is invalid because the signature is correct,
causing CHECKSIG to return 1, which is inverted to 0 by the NOT.

However, the implementation of the STRICTENC flag simply makes pubkey
formats it doesn't recognize act as though the signature were invalid,
rather than failing the transaction. Similar to the
invalid-due-to-too-many-sigops DoS attack I found before, this lets you
fill up the mempool with garbage transactions that will never be mined.
OTOH I don't see any way to exploit this in a v0.9.x IsStandard()
transaction, so we haven't shipped code that actually has this
vulnerability. (dunno about alt-implementations)

I suggest we either change STRICTENC to simply fail unrecognized pubkeys
immediately - similar to how non-standard signatures are treated - or
fail the script if the pubkey is non-standard and signature verification
succeeds.
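
A minimal sketch of the two options, using made-up names rather than the actual script interpreter code (here fSigValid stands for the result of full signature verification against the pubkey):

    // Illustrative only -- not the actual EvalScript()/CHECKSIG code.
    enum class StrictEncPolicy { FailUnrecognizedPubKey, FailOnValidSigOnly };

    // Returns what CHECKSIG would push; sets fScriptInvalid when the whole
    // script (and thus the transaction) should be rejected outright.
    bool CheckSigSketch(bool fStrictEnc, bool fPubKeyRecognized, bool fSigValid,
                        StrictEncPolicy policy, bool& fScriptInvalid)
    {
        fScriptInvalid = false;
        if (fStrictEnc && !fPubKeyRecognized) {
            if (policy == StrictEncPolicy::FailUnrecognizedPubKey)
                fScriptInvalid = true;   // option 1: reject the script immediately
            else if (fSigValid)
                fScriptInvalid = true;   // option 2: reject only when the signature
                                         // actually verifies under the odd encoding
            return false;                // either way CHECKSIG itself yields false
        }
        return fSigValid;
    }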

Thoughts?

-- 
'peter'[:-1]@petertodd.org
0000000000000000152dc55f27338b58325f0432d2dc6edb90c8d449d9959583
Francis GASCHET | 6 Nov 10:51 2014

Running a full node

Dear all,

I'm currently discovering the Bitcoin universe.
I installed bitcoind on my PC and I'm currently testing different things on testnet.
I just read an article saying that the risk for Bitcoin in the future is the decreasing number of full nodes with appropriate resources. There are only a few of them in France!

My company operates a dual-homed Internet connection and has some capacity to host an HA server in a secured environment, so I'm thinking about setting up a full node.
But I'd like to know what storage, RAM and bandwidth resources are needed. I guess that the problem is not the CPU.

Thanks in advance for details.
Krzysztof Okupski | 4 Nov 16:02 2014

Bitcoin API Wrapper

Dear everyone,

I've developed a C++ wrapper for JSON-RPC communication with
an existing Bitcoin installation. For anyone who is a developer and
interested in building extensions or the like, this might prove useful.

The code can be found on GitHub:
-> https://github.com/minium/bitcoin-api-cpp

Warm greetings,
Krzysztof

Pieter Wuille | 4 Nov 14:29 2014

BIP62 and future script upgrades

Hi all,

One of the rules in BIP62 is the "clean stack" requirement, which makes
it illegal to pass more inputs to a script than necessary.

Unfortunately, this rule needs an exception for P2SH scripts: the test
can only be done after (and not before) the second stage evaluation.
Otherwise it would reject all spends from P2SH (which rely on
"superfluous" inputs to pass data to the second stage).

I submitted a Pull Request to clarify this in BIP62:
https://github.com/bitcoin/bips/pull/115

However, this also leads to the interesting observation that the
clean-stack rule is incompatible with future P2SH-like constructs -
which would be very useful if we'd ever want to deploy a "Script 2.0".
Any such upgrade would suffer from the same problem as P2SH, and
require an exception in the clean-stack rule, which - once deployed -
is no longer a softfork.

Luke suggested on the pull request to not apply this rule to every
transaction with nVersion >= 3, which indeed solves the problem. I
believe this can easily be generalized: make the (non-mandatory) BIP62
rules apply only to transactions with nVersion exactly 3, and not to
higher ones. The higher ones are non-standard anyway, and shouldn't be
used before there is a rule that applies to them - which could still
include some or all of BIP62 if wanted at that point.
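
In other words (this is just a sketch of the gating, not text from BIP62 or the pull request):

    #include <cstdint>

    // Illustrative sketch: apply the non-mandatory BIP62 rules (including
    // clean-stack) only to transactions with nVersion exactly 3, so a future
    // "Script 2.0" can pick a higher version and define its own rules.
    bool ApplyNonMandatoryBip62Rules(int32_t nTxVersion)
    {
        return nTxVersion == 3;  // deliberately not ">= 3"
    }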

Opinions?

Francis GASCHET | 3 Nov 10:16 2014

Bug in genbuild.sh ?

Hello,

I just compiled Bitcoin Core on Debian 7.
I got an error message "too many arguments" when executing genbuild.sh.
I fixed it as shown hereafter and it worked fine.

pcfg:~/bitcoinCore> diff -u shareOLD/genbuild.sh share/genbuild.sh
--- shareOLD/genbuild.sh        2014-11-03 08:32:08.950708258 +0100
+++ share/genbuild.sh   2014-11-03 08:38:44.626698114 +0100
@@ -16,7 +16,7 @@
 DESC=""
 SUFFIX=""
 LAST_COMMIT_DATE=""
-if [ -e "$(which git 2>/dev/null)" -a $(git rev-parse --is-inside-work-tree 2>/dev/null) = "true" ]; then
+if [ -e "$(which git 2>/dev/null)" -a "$(git rev-parse --is-inside-work-tree 2>/dev/null)" = "true" ]; then
     # clean 'dirty' status of touched files that haven't been modified
     git diff >/dev/null 2>/dev/null

Best regards,
Francis