Peter Todd | 21 Dec 08:01 2014

Re: The relationship between Proof-of-Publication and Anti-Replay Oracles

On Sun, Dec 21, 2014 at 02:18:18PM +0800, Mark Friedenbach wrote:
> Care to expand?
> 
> Freimarkets does not require proof of publication of bids or asks, which
> are distributed out of band from the block chain until a match is made. It
> does not guarantee ordering of market transactions. Indeed, front-running
> is embraced as the mechanism for generating miner fees to pay for the
> service.

Right, so Freimarkets is deliberately insecure.

Best of luck on that.

> Sybil attacks? I'm not sure what you could be referring to. In Freimarkets
> a bid or ask is valid when received; a double-spend is required to cancel
> it. You could only flood the network with actual executable orders, and the
> counter-party to the order doesn't care if they all came from the same
> person or not.
> 
> Can you explain what it is you are objecting to?

Read my paper¹ - proof-of-publication is what allows you to detect
front-running robustly within certain parameters. Protecting against
that is widely considered to be a very important goal by people actually
in finance, to the point where I've had discussions with people where
anti-front-running protection might be the *only* thing they use a
decentralized system for.

1) Decentralized digital asset exchange with honest pricing and market depth,
   Peter Todd, Feb 9th 2014,

Peter Todd | 21 Dec 06:52 2014

Re: The relationship between Proof-of-Publication and Anti-Replay Oracles

On Sun, Dec 21, 2014 at 11:57:51AM +0800, Mark Friedenbach wrote:
> On Sat, Dec 20, 2014 at 10:48 PM, Peter Todd <pete <at> petertodd.org> wrote:
> 
> > However the converse is not possible: anti-replay cannot be used to
> > implement proof-of-publication. Knowing that no conflicting message
> > exists says nothing about who is in possession of that message, or
> > indeed, any message at all. Thus anti-replay is not sufficient to
> > implement other uses of proof-of-publication such as decentralized
> > exchange³.
> >
> 
> I think you are trying to say something more specific / limited than that,
> and I suggest you adjust your wording accordingly. Decentralized exchange
> would be possible today with vanilla bitcoin using SIGHASH_SINGLE if only
> the protocol supported multiple validated assets (which it could, but
> doesn't). Rather straightforward further extensions to the protocol would
> enable market participants to use a wider class of orders, as well as
> enable the buyer rather than the seller to dictate order sizes via partial
> redemption, as we demonstrate in our Freimarkets paper.

Do you realise that all those Freimarkets uses are either based on
proof-of-publication or insecure due to Sybil attacks?

--

-- 
'peter'[:-1] <at> petertodd.org
000000000000000017d70ee98f4cee509d95c4f31d5b998bae6deb09df1088fc
------------------------------------------------------------------------------
Download BIRT iHub F-Type - The Free Enterprise-Grade BIRT Server

Will Bickford | 20 Dec 08:42 2014

Area of Focus

Hi all, I'm looking to help with Bitcoin core development in my spare time (a few hours per week).

A little bit about me:
* I use C++ and Qt daily
* I love to automate and enhance software systems
* I enjoy root causing and fixing issues

I saw Gavin say we needed help with testing in a Reddit AMA a while ago. I'm curious where I can make the best impact. Any feedback would be appreciated. Thanks!

Will Bickford
"In Google We Trust"
_______________________________________________
Bitcoin-development mailing list
Bitcoin-development <at> lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development
paul snow | 17 Dec 23:20 2014

Setting the record straight on Proof-of-Publication

[[Since I sent this while the List Server was down, it didn't actually go to everyone.  Forgive me if you ended up with two copies.]]

Peter provides an excellent summary of Proof of Publication, which starts with defining it as being composed of a solution to the double spend problem.  He requires Proof-of-receipt (proof every member p of audience P has received a message m), Proof-of-non-publication (proof a message m has not been published to an audience P), and Proof-of-membership (proof some q is a member of P).

He goes on to state (curiously) that Factom cannot provide Proof of Publication.

Proof of Membership
================

Let's first satisfy the easier proofs. A Factom user can know they are a member of the Factom audience if they have access to the Bitcoin Blockchain, knowledge of Factom's first anchor (Merkle root stored in the blockchain) and the Factom network for distributing Factom's structures.  They can pretty much know that they are in the Audience.

Proof of Receipt
============

Proof of receipt is also pretty easy for the Factom user.  Users submit entries, and Factom publishes a Merkle root to the Bitcoin Blockchain.  The Merkle proof to the entry proves receipt.  To get the Merkle proof requires access to Factom structures, which all in the audience have access to by definition.  But the proof itself only requires the blockchain.

At this point the user can have a Merkle proof of their entry rooted in the blockchain.
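The Merkle-proof-of-receipt step above can be sketched as follows. This is a minimal Python illustration, not Factom's actual code; the real system uses its own entry and block formats, and these function names are made up:

```python
import hashlib

def sha256d(b: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves):
    """Compute a Merkle root, duplicating the last node on odd levels."""
    level = [sha256d(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256d(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_branch(leaves, index):
    """Collect the sibling hashes proving leaves[index] is under the root."""
    level = [sha256d(l) for l in leaves]
    branch = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        branch.append((level[index ^ 1], index % 2 == 0))
        level = [sha256d(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return branch

def verify_branch(leaf, branch, root):
    """Recompute the root from a leaf and its sibling path: proof of receipt."""
    h = sha256d(leaf)
    for sibling, leaf_is_left in branch:
        h = sha256d(h + sibling) if leaf_is_left else sha256d(sibling + h)
    return h == root
```

Once the root is anchored in the blockchain, verifying such a branch needs nothing but the blockchain and the proof itself, which is the point being made above.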

Proof of non-publication
==================

Last, can the Factom user have a  Proof-of-non-publication?  Well, absolutely.  The Factom state limits the public keys that can be used to write the anchors in the blockchain.  Transactions in Bitcoin that are not signed with those public keys are discounted out of hand.  Just like publishing in Mad Magazine does not qualify if publishing a notice in the New York Times is the standard.

The complaint Peter has that the user cannot see all the "child chains" (what we call Factom Chains) is invalid.  The user can absolutely see all the Directory Blocks (which document all Factom Chains) if they have access to Factom. But the user doesn't need to prove publication in all chains.  Some of those chains are like Car Magazines, Math Textbooks, Toaster manuals, etc. Without restricting the domain of publication there is no proof of the negative. The negative must be proved in the standard of publication, i.e. the user's chain.  And the user can in fact know their chain, and can enumerate their chain, without regard to most of the other data in Factom.
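The argument above can be made concrete with a small sketch (hypothetical data layout and key names, not Factom's actual structures): anchors from unauthorized keys are ignored out of hand, and non-publication is checked only within the user's own chain:

```python
# Made-up identifiers standing in for the authorized Factom anchor keys.
AUTHORIZED_KEYS = {"factom-key-1", "factom-key-2"}

def chain_entries(anchors, chain_id):
    """Yield the entries of one chain, ignoring anchors from other keys."""
    for anchor in anchors:
        if anchor["key"] not in AUTHORIZED_KEYS:
            continue  # Mad Magazine, not the New York Times
        yield from anchor["chains"].get(chain_id, [])

def proves_non_publication(anchors, chain_id, message) -> bool:
    """True if `message` never appeared in the user's own chain."""
    return message not in set(chain_entries(anchors, chain_id))
```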

Peter seems to be operating under the assumption that the audience for a Factom user must necessarily be limited to information found in the blockchain.  Yet the user certainly should have access to Factom if they are a Factom user.  Factom then is no different from the New York Times, and the trust required in Factom is less. As Peter says himself, he has to trust the New York Times doesn't publish multiple versions of the same issue. The user of the New York Times would have no way to know if there were other versions of an issue outside of looking at all New York Times issues ever published.  

Factom on the other hand documents their "issues" on the blockchain.  Any fork in publication is obvious as it would require different Bitcoin addresses to be used, and the blocks would have to have validating signatures of majorities of all the Factom servers. As long as a fork in Factom can be clearly identified, and no fork exists, proof of the negative is assured.  And upon a fork, one must assume the users will specify which fork should be used.  

Proof of publication does not require a system that cannot fork, since no such non-trivial system exists.  What is required is that forks can be detected, and that a path can be chosen to move forward.
Jeff Garzik | 16 Dec 18:59 2014

Open development processes and reddit charms


It can be useful to review open source development processes from time to time.  This reddit thread[1] serves both as a case study and as a moment of OSS process introduction for newbies.
[1] http://www.reddit.com/r/Bitcoin/comments/2pd0zy/peter_todd_is_saying_shoddy_development_on_v010/


Dirty Laundry

When building businesses or commercial software projects, outsiders typically hear little about the internals of project development.  The public only hears what the companies release, which is prepped and polished. Internal disagreements, schedule slips, engineer fistfights are all unseen.

Open source development is the opposite.  The goal is radical transparency.  Inevitably there is private chatter (0day bugs etc.), but the default is openness.  This means that it is normal practice to "air dirty laundry in public."  Engineers will disagree, sometimes quietly, sometimes loudly, sometimes rudely and with ad hominem attacks.  On the Internet, there is a pile-on effect, where informed and uninformed supporters add their 0.02 BTC.

Competing interests cloud the issues further.  As a technology matures, engineers are typically employed by an organization.  Those organizations have different strategies and motivations.  These organizations will sponsor work they find beneficial.  Sometimes those orgs are non-profit foundations, sometimes for-profit corporations.  Sometimes that work is maintenance ("keep it running"), sometimes that work is developing new, competitive features the company feels will give it a better market position.  In a transparent development environment, all parties are hyperaware of these competing interests.  Internet natterers painstakingly document and repeat every conspiracy theory about Bitcoin Foundation, Blockstream, BitPay, various altcoin developers, and more as a result of these competing interests.

Bitcoin and altcoin development adds an interesting new dimension.  Sometimes engineers have a more direct conflict of interest, in that the technology they are developing is also potentially their road to instant $millions.  Investors, amateur and professional, have direct stakes in a certain coin or coin technology.  Engineers also have an emotional stake in technology they design and nurture.  This results in incentives where supporters of a non-bitcoin technology work very hard to thump bitcoin.  And vice versa.  Even inside bitcoin, you see "tree chains vs. side chains" threads of a similar stripe.  This can lead to a very skewed debate.

That should not distract from the engineering discussion.  Starting from first principles, Assume Good Faith[2].  Most engineers in open source tend to mean what they say.  Typically they speak for themselves first, and their employers value that engineer's freedom of opinion.  Pay attention to the engineers actually working on the technology, and less attention to the noise bubbling around the Internet like the kindergarten game of grapevine.
[2] http://en.wikipedia.org/wiki/Wikipedia:Assume_good_faith

Being open and transparent means engineering disagreements happen in public.  This is normal.  Open source engineers live an aquarium life[3].
[3] https://www.youtube.com/watch?v=QKe-aO44R7k


What the fork?

In this case, a tweet suggests consensus bug risks, which reddit account "treeorsidechains" hyperbolizes into a dramatic headline[1].  However, the headline would seem to be the opposite of the truth.  Several changes were merged during 0.10 development which move snippets of source code into new files and new sub-directories.  The general direction of this work is creating a "libconsensus" library that carefully encapsulates consensus code in a manner usable by external projects.  This is a good thing.

The development was performed quite responsibly:  Multiple developers would verify each cosmetic change, ensuring no behavior changes had been accidentally (or maliciously!) introduced.  Each pull request receives a full multi-platform build + automated testing, over and above individual dev testing.  Comparisons at the assembly language level were sometimes made in critical areas, to ensure zero before-and-after change.  Each transformation gets the Bitcoin Core codebase to a more sustainable, more reusable state.

Certainly zero-change is the most conservative approach. Strictly speaking, that has the lowest consensus risk.  But that is a short term mentality.  Both Bitcoin Core and the larger ecosystem will benefit when the "hairball" pile of source code is cleaned up.  Progress has been made on that front in the past 2 years, and continues.   Long term, combined with the "libconsensus" work, that leads to less community-wide risk.

The key is balance.  Continue software engineering practices -- like those just mentioned above -- that enable change with least consensus risk.  Part of those practices is review at each step of the development process:  social media thought bubble, mailing list post, pull request, git merge, pre-release & release.  It probably seems chaotic at times.  In effect, git[hub] and the Internet enable a dynamic system of review and feedback, where each stage provides a check-and-balance for bad ideas and bad software changes.  It's a human process, designed to acknowledge and handle that human engineers are fallible and might make mistakes (or be coerced/under duress!).  History and field experience will be the ultimate judge, but I think Bitcoin Core is doing good on this score, all things considered.

At the end of the day, while no change is without risk, version 0.10 work was done with attention to consensus risk at multiple levels (not just short term).


Technical and social debt

Working on the Linux kernel was an interesting experience that combined git-driven parallel development and a similar source code hairball.  One of the things that quickly became apparent is that cosmetic patches, especially code movement, was hugely disruptive.  Some even termed it anti-social.  To understand why, it is important to consider how modern software changes are developed:

Developers work in parallel on their personal computers to develop XYZ change, then submit their change "upstream" as a github pull request.  Then time passes.  If code movement and refactoring changes are accepted upstream before XYZ, then the developer is forced to update XYZ (typically trivial fixes), re-review XYZ, and re-test XYZ to ensure it remains in a known-working state.

Seemingly cosmetic changes such as code movement have a ripple effect on participating developers, and the wider developer community.  Every developer who is not immediately merged upstream must bear the costs of updating their unmerged work.

Normally, this is expected.  Encouraging developers to build on top of "upstream" produces virtuous cycles.

However, a constant stream of code movement and cosmetic changes may produce a constant stream of disruption to developers working on non-trivial features that take a bit longer to develop before going upstream.  Trivial changes are encouraged, and non-trivial changes face a binary choice of (a) being merged immediately or (b) bearing added rebase, re-review, and re-test costs.

Taken over a timescale of months, I argue that a steady stream of cosmetic code movement changes serves as a disincentive to developers working with upstream.  Each upstream breakage has a ripple effect on all developers downstream, and imposes some added chance of newly introduced bugs on downstream developers.  I'll call this "social debt", a sort of technical debt[4] for developers.
[4] http://en.wikipedia.org/wiki/Technical_debt

As mentioned above, the libconsensus and code movement work is a net gain.  The codebase needs cleaning up.  Each change however incurs a little bit of social debt.  Life is a little bit harder on people trying to get work into the tree.  Developers are a little bit more discouraged at the busy-work they must perform.  Non-trivial pull requests take a little bit longer to approve, because they take a little bit more work to rebase (again).

A steady flow of code movement and cosmetic breakage into the tree may be a net gain, but it also incurs a lot of social debt.  In such situations, developers find that tested, working out-of-tree code repeatedly stops working during the process of trying to get that work in-tree.  Taken over time, it discourages working on the tree.  It is rational to sit back, not work on the tree, let the breakage stop, and then pick up the pieces.


Paradox Unwound

Bitcoin Core, then, is pulled in opposite directions by a familiar problem.  It is generally agreed that the codebase needs further refactoring.  That's not just isolated engineer nit-picking.  However, for non-trivial projects, refactoring is always anti-social in the short term.  It impacts projects other than your own, projects you don't even know about. One change causes work for N developers.  Given these twin opposing goals, the key, as ever, is finding the right balance.

Much like "feature freeze" in other software projects, developing a policy that opens and closes windows for code movement and major disruptive changes seems prudent.  One week of code movement & cosmetics followed by 3 weeks without, for example.  Part of open source parallel development is social signalling:  Signal to developers when certain changes are favored or not, then trust they can handle the rest from there.

While recent code movement commits themselves are individually ACK-worthy, professionally executed and moving towards a positive goal, I think the project could strike a better balance when it comes to disruptive cosmetic changes, a balance that better encourages developers to work on more involved Bitcoin Core projects.


--
Jeff Garzik
Bitcoin core developer and open source evangelist
BitPay, Inc.      https://bitpay.com/
Peter Todd | 15 Dec 13:47 2014

Recent EvalScript() changes mean CHECKLOCKTIMEVERIFY can't be merged

BtcDrak was working on rebasing my CHECKLOCKTIMEVERIFY¹ patch to master a few
days ago and found a fairly large design change that makes merging it currently
impossible. Pull-req #4890², specifically commit c7829ea7, changed the
EvalScript() function to take an abstract SignatureChecker object, removing the
txTo and nIn arguments that used to contain the transaction the script was in
and the txin # respectively. CHECKLOCKTIMEVERIFY needs txTo to obtain the
nLockTime field of the transaction, and it needs nIn to obtain the nSequence of
the txin.

We need to fix this if CHECKLOCKTIMEVERIFY is to be merged.
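To illustrate the problem, here is a rough Python stand-in for the interface change (the real code is C++ in Bitcoin Core; the names, fields, and checks below are simplified assumptions for illustration, not the actual implementation):

```python
from collections import namedtuple

TxIn = namedtuple("TxIn", "nSequence")
Tx = namedtuple("Tx", "nLockTime vin")

# Old-style interpreter shape: the transaction and input index are in
# scope, so a CHECKLOCKTIMEVERIFY-like opcode can consult them directly.
def eval_cltv_old(required_locktime: int, tx_to: Tx, n_in: int) -> bool:
    if tx_to.vin[n_in].nSequence == 0xFFFFFFFF:  # final input: locktime ignored
        return False
    return tx_to.nLockTime >= required_locktime

# New-style shape after pull-req #4890: only an abstract checker is passed,
# so the opcode has nothing from which to read nLockTime or nSequence.
class SignatureChecker:
    def check_sig(self, sig, pubkey, script) -> bool:
        return True  # signature checks only; no transaction context

def eval_cltv_new(required_locktime: int, checker: SignatureChecker):
    raise NotImplementedError("nLockTime/nSequence unreachable via checker")
```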

Secondly, that this change was made, and the manner in which it was made, is I
think indicative of a development process that has been taking significant
risks with regard to refactoring the consensus critical codebase. I know I
personally have had a hard time keeping up with the very large volume of code
being moved and changed for the v0.10 release, and I know BtcDrak - who is
keeping Viacoin up to date with v0.10 - has also had a hard time giving the
changes reasonable review. The #4890 pull-req in question had no ACKs at all,
and only two untested utACKs, which I find worrying for something that made
significant consensus critical code changes.

While it would be nice to have a library encapsulating the consensus code, this
shouldn't come at the cost of safety, especially when the actual users of that
library or their needs are still uncertain. This is after all a multi-billion-dollar
project where a simple fork will cost miners alone tens of thousands of dollars
an hour; easily much more if it results in users being defrauded. That's also
not taking into account the significant negative PR impact and loss of trust. I
personally would recommend *not* upgrading to v0.10 due to these issues.

A much safer approach would be to keep the code changes required for a
consensus library to only simple movements of code for this release, accept
that the interface to that library won't be ideal, and wait until we have
feedback from multiple opensource projects with publicly evaluatable code on
where to go next with the API.

1) https://github.com/bitcoin/bips/blob/master/bip-0065.mediawiki
2) https://github.com/bitcoin/bitcoin/pull/4890

--

-- 
'peter'[:-1] <at> petertodd.org
00000000000000001b18a596ecadd07c0e49620fb71b16f9e41131df9fc52fa6
Peter Todd | 13 Dec 03:34 2014

Near-zero fee transactions with hub-and-spoke micropayments

From the So-Obvious-No-one-Has-Bothered-to-Write-It-Down-Department:

tl;dr: Micropayment channels can be extended to arbitrary numbers of
parties using a nearly completely untrusted hub, greatly decreasing
transaction fees and greatly increasing the maximum number of financial
transactions per second that Bitcoin can support.

So a micropayment channel enables a payor to incrementally pay a payee
by first locking a deposit of Bitcoins in a scriptPubKey of the
following form:

    IF
        <timeout> CHECKLOCKTIMEVERIFY OP_DROP
    ELSE
        <payee> CHECKSIGVERIFY
    ENDIF
    <payor> CHECKSIGVERIFY

(obviously many other forms are possible, e.g. multisig)

Once the funds are confirmed, creating txout1, the payor creates
transactions spending txout1 sending some fraction of the txout value to
the payee and gives that half-signed transaction to the payee. Each time
the payor wants to send more money to the payee they sign a new
half-signed transaction double-spending the previous one.

When the payee is satisfied they can close the channel by signing the
most recent, highest value, tx with their key, thus making it valid. If
the payee vanishes the payor can get all the funds back once the timeout
is reached using just their key.

Since confirmation is controlled by the payee, once the initial deposit
confirms, subsequent increases in funds sent happen instantly, in that the
payor cannot double-spend the input until the timeout is reached.
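The channel state machine described above can be sketched as a toy model (illustrative Python with made-up names; real channels exchange half-signed Bitcoin transactions rather than plain numbers):

```python
from dataclasses import dataclass

@dataclass
class ChannelUpdate:
    to_payee: int          # satoshis promised to the payee so far
    signed_by_payor: bool  # the payor's half of the signature

class MicropaymentChannel:
    """Toy model of a one-way payor->payee channel."""
    def __init__(self, deposit: int):
        self.deposit = deposit
        self.best = ChannelUpdate(0, True)  # starting state: everything refundable

    def pay(self, increment: int) -> ChannelUpdate:
        """Payor authorizes a new state double-spending the previous one."""
        new_total = self.best.to_payee + increment
        if new_total > self.deposit:
            raise ValueError("cannot promise more than the deposit")
        self.best = ChannelUpdate(new_total, True)
        return self.best

    def close(self) -> int:
        """Payee countersigns and broadcasts the highest-value state."""
        return self.best.to_payee
```

For example, a channel funded with 10,000 satoshis can be incremented by 100 and then 250, and the payee closes with 350; any attempt to promise more than the deposit is rejected.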

(there's another formulation from Jeremy Spilman that can be almost
implemented right now using a signed refund transaction, however it is
vulnerable to transaction mutability)

Hub-and-Spoke Payments
======================

Using a nearly completely untrusted hub we can allow any number of
parties to mutually send and receive Bitcoins instantly with near-zero
transaction fees. Each participant creates one or two micropayment
channels with the hub; for Alice to send Bob some funds Alice first
sends the funds to the hub in some small increment, the hub sends the
funds to Bob, and finally the hub gives proof of that send to Alice. The
incremental amount of Bitcoins sent can be set arbitrarily low, limited
only by bandwidth and CPU time, and Bob does not necessarily need to
actually be online. The worst that the hub can do is leave users' funds
locked until the timeout expires.
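The hub-mediated flow above amounts to bookkeeping across two channels; a toy sketch with balances only, no signatures, and hypothetical names:

```python
def hub_transfer(channels: dict, sender: str, receiver: str, amount: int) -> None:
    """Move `amount` across the sender->hub and hub->receiver channels."""
    sent = channels[(sender, "hub")]
    recv = channels[("hub", receiver)]
    if sent["paid"] + amount > sent["deposit"]:
        raise ValueError("sender channel exhausted")
    if recv["paid"] + amount > recv["deposit"]:
        raise ValueError("hub channel to receiver exhausted")
    sent["paid"] += amount  # Alice pays the hub an increment
    recv["paid"] += amount  # the hub forwards the same increment to Bob
```

The hub's exposure is bounded by the channel deposits, matching the claim that the worst it can do is leave funds locked until timeout.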

Multiple Hubs
=============

Of course, hubs can in turn send to each other, again in a trustless
manner; multiple hops could act as an onion-style privacy scheme. The
micropayments could also use an additional chaum token layer for
privacy, although note that the k-anonymity set involves a trade-off
between privacy and total # of Bitcoins that could be stolen by the hub.

Of course, in general the micropayment hub breaks the linkage between
payor and payee, with respect to the data available from the blockchain.

Capital Requirements
====================

A business disadvantage with a hub-and-spoke system is that it ties up
capital, creating a tradeoff between fees saved and Bitcoins tied up.
How exactly to handle this is a business decision - for instance opening
the micropayment channel could involve a small initial payment to
account for the time-value-of-money.

Embedded consensus/Colored coins
================================

Note how many embedded consensus schemes like colored coins are
compatible with micropayment channels. (though have fun figuring out who
deserves the dividends!)

--

-- 
'peter'[:-1] <at> petertodd.org
000000000000000012367d385ad11358a4a1eee86cf8ebe06a76add36dfb4622
Alex Mizrahi | 12 Dec 18:50 2014

Re: Setting the record straight on Proof-of-Publication


I think what Gareth was getting at was that with client-side validation there can be no concept of a soft-fork. And how certain are you that the consensus rules will never change?

Yes, it is true that you can't do a soft-fork, but you can do a hard-fork.
Using scheduled updates: the client simply stops working at a certain block, and the user is required to download an update.
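A scheduled-update check of that kind might look like this minimal sketch (hypothetical constant and message; not from any real client):

```python
# Hypothetical: the block height up to which this client's consensus
# rules are defined. A real client would ship this with each release.
EXPIRY_HEIGHT = 400_000

def check_not_expired(tip_height: int) -> None:
    """Refuse to validate past the height this client's rules cover."""
    if tip_height >= EXPIRY_HEIGHT:
        raise SystemExit(
            "This client's consensus rules expire at block %d; "
            "please upgrade." % EXPIRY_HEIGHT
        )
```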

In Bitcoin we can operate with some assurance that hard-forks will almost never happen, exactly because extensions are more likely to occur via soft-fork mechanisms. In such a case, old non-updated clients will still generate a correct view of the ledger state. But this is not so with client-side validation!

You assume that an ability to operate with zero maintenance is very important, but is this the case?

There have been plenty of critical bugs in bitcoind, and in many cases people were strongly encouraged to upgrade to a new version.
So you urge people to keep their clients up to date, but at the same time claim that keeping very old versions working is critically important.
How does this make sense? Is this an exercise in doublethink?

An alternative to this is to make updates mandatory. You will no longer need to maintain compatibility with version 0.1 (which is impossible) and you can also evolve consensus rules over time.

It looks like people make a cargo cult out of Bitcoin's emergent properties. 
Peter Todd | 12 Dec 10:05 2014

Setting the record straight on Proof-of-Publication

Introduction
============

While not a new concept, proof-of-publication is receiving a significant
amount of attention right now, both as an idea, with regard to the
embedded consensus systems that make use of it, and with regard to the
sidechains model proposed by Blockstream, which rejects it. Here we give
a clear definition of proof-of-publication and its weaker predecessor,
timestamping, describe some use-cases for it, and finally dispel some of
the common myths about it.

What is timestamping?
=====================

A cryptographic timestamp proves that message m existed prior to some
time t.

This is the cryptographic equivalent of mailing yourself a patentable
idea in a sealed envelope to establish the date at which the idea
existed on paper.

Traditionally this has been done with one or more trusted third parties
who attest to the fact that they saw m prior to the time t. More
recently blockchains have been used for this purpose, particularly the
Bitcoin blockchain, as block headers include a block time which is
verified by the consensus algorithm.
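The first step is the same in either approach: reduce m to a fixed-size
digest so the timestamper never sees m itself. A minimal sketch - the
request format here is illustrative, not any real protocol:

```python
import hashlib
import time

def make_timestamp_request(message: bytes) -> dict:
    # Only the hash of m leaves our machine; m itself stays private
    # until (and unless) we choose to reveal it alongside the proof.
    return {
        "digest": hashlib.sha256(message).hexdigest(),
        "requested_at": int(time.time()),
    }

req = make_timestamp_request(b"my patentable idea")
```

The third party (or the blockchain) then attests only to the digest and
the time, which is what makes hash-based timestamping privacy-preserving.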

What is proof-of-publication?
=============================

Proof-of-publication is what solves the double-spend problem.

Cryptographic proof-of-publication actually refers to a few closely
related proofs, and practical uses of it will generally make use of more
than one proof.

Proof-of-receipt
----------------

Prove that every member p of the audience P has received message m. A
real-world analogy is a legal notice being published in a major
newspaper - we can assume any subscriber received the message and had a
chance to read it.

Proof-of-non-publication
------------------------

Prove that message m has *not* been published. Extending the above
real-world analogy, a court can easily determine that a legal notice was
not published when it should have been by examining newspaper archives.
(Or equally, that *because* the notice had not been published, some
action a litigant had taken was permissible.)

Proof-of-membership
-------------------

A proof-of-non-publication isn't very useful if you can't prove that
some member q is in the audience P. In particular, if you are the one
evaluating a proof-of-membership, q is yourself, and you want assurance
that you are in that audience. In the case of our newspaper analogy,
because we know what today's date is, and we trust the newspaper never
to publish two different editions with the same date, we can be certain
we have searched all possible issues in which the legal notice may have
been published.

Real-world proof-of-publication: The Torrens Title System
---------------------------------------------------------

Land titles are a real-world example, dating back centuries, with
remarkable similarities to the Bitcoin blockchain. Prior to the Torrens
system, land was transferred between owners through a chain of valid
title deeds going back to some "genesis" event establishing rightful
ownership independently of prior history. As with the blockchain, the
title deed system has two main problems: establishing that each title
deed in the chain is valid in isolation, and establishing that no other
valid title deeds exist. While the analogy isn't exact - establishing
the validity of title deeds isn't as crisp a process as simply checking
a cryptographic signature - these two basic problems are closely related
to checking a transaction's signatures in isolation and ensuring it
hasn't been double-spent.

To solve these problems the Torrens title system was developed, first
in Australia and later in Canada, establishing a singular central
registry of deeds, or property transfers. Simplifying a bit, we can say
inclusion - publication - in the official registry is a necessary
pre-condition for a given property transfer to be valid. Multiple
competing transfers are made obvious, and the true valid transfer can be
determined by whichever transfer was published first.

Similarly in places where the Torrens title system has not been adopted,
almost always a small number of title insurance providers have taken on
the same role. The title insurance provider maintains a database of all
known title deeds, and in practice if a given title deed isn't published
in the database it's not considered valid.

Common myths
============

Proof-of-publication is the same as timestamping
------------------------------------------------

No. Timestamping is a significantly weaker primitive than
proof-of-publication. This myth seems to persist because unfortunately
many members of the Bitcoin development and theory community - and even
members of the Blockstream project - have frequently used the term
"timestamping" for applications that need proof-of-publication.

Publication means publishing meaningful data to the whole world
---------------------------------------------------------------

No. The data to be published can often be an otherwise meaningless
nonce, indistinguishable from any other random value. (e.g. an ECC
pubkey)

For example colored coins can be implemented by committing the hash of
the map of colored inputs to outputs inside a transaction. These maps
can be passed from payee to payor to prove that a given output is
colored with a set of recursive proofs, as is done in the author's
Smartcolors library. The commitment itself can be a simple hash, or even
a pay-to-contract style derived pubkey.
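The simple-hash variant of such a commitment might look like the
following - a sketch only; the actual Smartcolors serialization and the
pay-to-contract construction are different:

```python
import hashlib
import json

def color_map_commitment(color_map: dict) -> bytes:
    """Hash committing to a colored input->output map. To anyone who
    hasn't been handed the map, this is an opaque 32-byte nonce."""
    # Canonical serialization so the same map always hashes identically.
    canonical = json.dumps(color_map, sort_keys=True).encode()
    return hashlib.sha256(canonical).digest()

commitment = color_map_commitment({"input:0": "output:1",
                                   "input:1": "output:0"})
```

The payee verifies the commitment by recomputing it from the map handed
over out-of-band, along with the recursive proofs for prior transactions.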

A second example is Zerocash, which depends on global consensus on a set
of revealed serial numbers. As this set can include "false positives" -
revealed serial numbers that do not actually correspond to a real
Zerocash transaction - the blockchain itself can serve as that set. The
Zerocash transactions themselves - and their associated proofs - can
then be passed around via a p2p network separate from the blockchain
itself. Each Zerocash Pour proof then simply needs to specify what set
of previously evaluated proofs makes up its particular commitment merkle
tree, and the proofs are then evaluated against that proof-specific tree
(in practice likely some kind of DAG-like structure). Note that there is
a sybil attack risk here: a sybil attack reduces your k-anonymity set by
the number of transactions you were prevented from seeing; a weaker
proof-of-publication mechanism may be appropriate to prevent that sybil
attack.
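A sketch of why false positives in the revealed-serial set are harmless
here (names illustrative): over-approximating the set can only *reject*
spends, never enable a double-spend, and with 256-bit serials an
accidental collision with a legitimate serial is negligible.

```python
def spend_is_valid(serial: bytes, revealed_serials: set) -> bool:
    # A spend is valid only if its serial number has never been
    # revealed; extra junk entries in revealed_serials cannot make a
    # previously spent serial look unspent.
    return serial not in revealed_serials

revealed = {b"\x01" * 32, b"\xff" * 32}            # includes a false positive
assert spend_is_valid(b"\x02" * 32, revealed)      # fresh serial: spendable
assert not spend_is_valid(b"\x01" * 32, revealed)  # already revealed: rejected
```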

The published data may also not be meaningful because it is encrypted.
Only a small community may need to come to consensus about it; everyone
else can ignore it. For instance, proof-of-publication for decentralized
asset exchange is an application where you need publication to be
timely, yet the audience may still be small. That audience can share an
encryption key.

Proof-of-publication is always easy to censor
---------------------------------------------

No, with some precautions. This myth is closely related to the above
idea that the data must be globally meaningful to be useful. The colored
coin and Zerocash examples above are cases where censoring the
publication is obviously impossible, as the publication can be made
before anyone at all has sufficient information to determine whether it
has been made; the data itself is just nonces.

In the case of encrypted data, the encryption key can often be revealed
well after the publication has been made. For instance, in a Certificate
Transparency scheme the certificate authority (CA) may use
proof-of-publication to prove that a certificate was in a set of
certificates. If that set of certificates is hashed into a merkelized
binary prefix tree indexed by domain name, the correct certificate for a
given domain name - or the lack thereof - is easily proven. Changes to
that set can be published on the blockchain by publishing successive
prefix tree commitments.
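A toy merkelized binary prefix tree over 8-bit keys - a sketch only, not
any deployed Certificate Transparency structure - showing how both
presence and absence hash into a single root:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def tree_root(entries: dict, prefix: str = "") -> bytes:
    """Root of a binary prefix tree. `entries` maps 8-character bit
    strings (standing in for hashed domain names) to certificate
    hashes. An empty subtree hashes to a distinct, prefix-dependent
    value, so the *absence* of a certificate is provable too."""
    if not entries:
        return _h(b"empty:" + prefix.encode())
    if len(prefix) == 8:
        return _h(b"leaf:" + entries[prefix])
    left  = {k: v for k, v in entries.items() if k[len(prefix)] == "0"}
    right = {k: v for k, v in entries.items() if k[len(prefix)] == "1"}
    return _h(tree_root(left, prefix + "0") + tree_root(right, prefix + "1"))
```

A lookup proof is then the sibling hashes along the key's path; if the
path ends at an "empty" node, that same proof shows non-existence.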

If these commitments are encrypted, each commitment C_i can also commit
to the encryption key to be used for C_{i+1}. That key need not be
revealed until the commitment is published; validity is assured as
every client knows that only one C_{i+1} is possible, so any malfeasance
is guaranteed to be revealed when C_{i+2} is published.
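The chaining can be sketched as follows (illustrative only; here C_i is
just a hash committing to the current state and the next key):

```python
import hashlib

def make_commitment(tree_root: bytes, next_key: bytes) -> bytes:
    """C_i commits to the current prefix-tree root *and* to the key
    that will decrypt C_{i+1}. Once C_{i+1} is published and its key
    revealed, everyone can check it matches the committed key, so
    equivocation is caught one step later."""
    return hashlib.sha256(tree_root + next_key).digest()
```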

Secondly, the published data can be timelock-encrypted with timelocks
that take more than the average block interval to decrypt. This puts
would-be censoring miners in the position of either delaying all
transactions or accepting that they will end up mining publication
proofs. The only way to circumvent this is highly restrictive
whitelisting.

Proof-of-publication is easier to censor than (merge)-mined sidechains
----------------------------------------------------------------------

False under all circumstances. Even if the publications use no
anti-censorship techniques, to successfully censor a
proof-of-publication system at least 51% of the total hashing power must
decide to censor it, and they must do so by attacking the other 49% of
the hashing power - specifically, by rejecting their blocks. This is
true no matter how "niche" the proof-of-publication system is - whether
it is used by two people or two million people, it has the same
security.

On the other hand, a (merge)-mined sidechain with x% of the total
hashing power supporting it can be attacked by anyone with >x% of the
hashing power. In the case of a merge-mined sidechain this cost will
often be near zero - only by providing miners with a significant and
ongoing reward can the marginal cost be made high. This is particularly
true of sidechains with niche audiences - sidechain advocates have often
suggested that sidechains be initially protected by centralized
checkpoints until they become popular enough to begin to be secure.

Secondly, sidechains can't make use of anti-censorship techniques the
way proof-of-publication systems can: they inherently must be public for
miners to be able to mine them in a decentralized fashion. Of course,
their users may use anti-censorship techniques, but that leads to a
simple security-vs-cost tradeoff between using the Bitcoin blockchain
and a sidechain. (Note the similarity to the author's treechains
proposal!)

Proof-of-publication can be made expensive
------------------------------------------

True, in some cases! By tightly constraining the Bitcoin scripting
system, the bytes available for steganographic embedding can be reduced.
For instance P2SH^2 requires a brute-force, exponentially increasing
amount of hashing per byte of pushdata. However this is still
ineffective against publishing hashes, and fully implementing it -
scriptSigs included - would require highly invasive changes to the
entire scripting system that would greatly limit its value.

Proof-of-publication can be outsourced to untrusted third-parties
-----------------------------------------------------------------

Timestamping yes, but proof-of-publication no.

We're talking about systems that attempt to publish multiple pieces of
data from multiple parties with a single hash in the Bitcoin blockchain,
such as Factom.  Essentially this works by having a "child" blockchain,
and the hash of that child blockchain is published in the master Bitcoin
blockchain. To prove publication you prove that your message is in that
child chain, and that the child chain is itself published in the Bitcoin
blockchain. You can prove membership to yourself by determining whether
you have the contents corresponding to the most recent child-chain
hash.

The problem is proving non-publication. The set of all *potential*
child-chain hashes must be possible to obtain by scanning the Bitcoin
blockchain. As a hash is meaningless by itself, these hashes must be
signed. That introduces a trusted third-party who can also sign an
invalid hash that does not correspond to a block and publish it in the
blockchain. This in turn makes it impossible for anyone using the child
blockchain to prove non-publication - they can't prove they did not
publish a message because the content of *all* child blockchains is now
unknown.

In short, Factom and systems like it rely on trusted third parties who
can put you in a position where you can't prove you did not commit
fraud.
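The failure mode can be sketched as a simple check that the trusted
signer can always break (names illustrative):

```python
def can_prove_non_publication(signed_child_hashes, known_child_blocks) -> bool:
    """Non-publication is only provable if we know the contents behind
    *every* signed child-chain hash found in the Bitcoin blockchain. A
    trusted signer who publishes a hash with no corresponding block
    makes this return False permanently, for everyone."""
    return all(h in known_child_blocks for h in signed_child_hashes)

known  = {b"block1", b"block2"}
honest = [b"block1", b"block2"]
rogue  = [b"block1", b"block2", b"bogus"]   # signed, but contents withheld
assert can_prove_non_publication(honest, known)
assert not can_prove_non_publication(rogue, known)
```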

Proof-of-publication "bloats" the blockchain
--------------------------------------------

Depends on your perspective.

Systems that do not make use of the UTXO set are no different
technically than any other transactions: they pay fees to publish
messages to the Bitcoin blockchain with no amortized increase in the
UTXO set. Some systems do grow the UTXO set - a potential scaling
problem, as currently all full nodes must have the entire UTXO set -
although there are a number of existing mechanisms and proposals to
mitigate this issue, such as the (crippled) OP_RETURN scriptPubKey
format, the dust rule, the author's TXO commitments, UTXO expiry, etc.

From an economic point of view proof-of-publication systems compete with
other uses of the blockchain as they pay fees; supply of blockchain
space is fixed so the increased demand must result in a higher
per-transaction price in fees. On the other hand this is true of *all*
uses of the blockchain, which collectively share the limited transaction
capacity. For instance Satoshidice and similar sites have been widely
condemned for doing conventional transactions on Bitcoin when they could
have potentially used off-chain transactions.

It's unknown what the effect on the Bitcoin price will actually be.
Some proof-of-publication uses have nothing to do with money at all -
e.g. certificate transparency. Others are only indirectly related, such
as securing financial audit logs - e.g. merkle-sum-trees of the total
Bitcoins held by exchanges. Others in effect add new features to
Bitcoin, such as how colored coins allow the trade of assets on the
blockchain, or how Zerocash makes Bitcoin transactions anonymous. The
sum total of all these effects on the Bitcoin price is difficult to
predict.
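The merkle-sum-tree mentioned above can be sketched as follows - an
illustrative toy; real proof-of-liabilities schemes add blinding and
per-user inclusion proofs:

```python
import hashlib

def merkle_sum_root(balances):
    """Root committing to each (user, balance) pair and to the total.
    balances: non-empty list of (user_id_bytes, satoshi_amount) pairs."""
    nodes = [(hashlib.sha256(uid + bal.to_bytes(8, "big")).digest(), bal)
             for uid, bal in balances]
    while len(nodes) > 1:
        paired = []
        for i in range(0, len(nodes) - 1, 2):
            (lh, lv), (rh, rv) = nodes[i], nodes[i + 1]
            total = lv + rv  # each parent commits to the sum of its children
            paired.append(
                (hashlib.sha256(lh + rh + total.to_bytes(8, "big")).digest(),
                 total))
        if len(nodes) % 2:       # odd node carried up unchanged
            paired.append(nodes[-1])
        nodes = paired
    return nodes[0]              # (root_hash, total_liabilities)
```

Publishing successive roots via proof-of-publication is what lets
customers detect an exchange that shows different trees to different
auditors.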

The author's belief is that even if proof-of-publication is a net
negative to Bitcoin, because it is significantly more secure than the
alternatives and can't be effectively censored, people will use it
regardless of efforts to discourage them through social pressure. Thus
Bitcoin must make technical improvements to scalability that negate
these potentially harmful effects.

Proof-of-publication systems are inefficient
--------------------------------------------

If you're talking about inefficiency from the perspective of a full
node that does full validation, they are no different than (merge)-mined
sidechain and altcoin alternatives. If you're talking about efficiency
from the perspective of an SPV client, then yes, proof-of-publication
systems will often require more resources than mining-based
alternatives.

However it must be remembered that the cost of mining is the
introduction of a trusted third party - the miners. Of course, mined
proof-of-publication has miners already, but trusting those miners to
determine the meaning of the data places significantly more trust in
them than merely trusting them to create consensus on the order in which
data is published.

Many usecases involve trusted third-parties anyway - the role of
proof-of-publication is to hold those third-parties to account and keep
them honest. For these use-cases - certificate transparency, audit logs,
financial assets - mined alternatives simply add new trusted third
parties and points of failure rather than remove them.

Of course, global consensus is inefficient - Bitcoin itself is
inefficient. But this is a fundamental problem of Bitcoin's architecture
that applies to all uses of it, a problem that should be solved in
general.

Proof-of-publication needs "scamcoins" like Mastercoin and Counterparty
-----------------------------------------------------------------------

First of all, whether or not a limited-supply token is a "scam" is not
a technical question. However some types of embedded consensus systems,
a specific use-case for proof-of-publication, do require limited-supply
tokens within the system for technical reasons - for instance to support
bid-order functionality in decentralized marketplaces.

Secondly, using a limited-supply token in a proof-of-publication system
is what lets you have secure client-side validation, rather than the
alternative of 2-way pegging, which requires users to trust miners not
to steal the pegged funds. Tokens also do not need to be, economically
speaking, assets that can appreciate in value relative to Bitcoin:
one-way pegs - where Bitcoins can always be turned into the token - in
conjunction with decentralized exchange to buy and sell tokens for
Bitcoins ensure the token's value will always closely approximate the
Bitcoin value, as long as the protocol itself is considered valuable.

Finally, only a subset of proof-of-publication use-cases involve tokens
at all - many, like colored coins, transact directly to and from
Bitcoin, while other use-cases don't even involve finance.

-- 
'peter'[:-1] <at> petertodd.org
00000000000000000681f4e5c84bc0bf7e6c5db8673eef225da652fbb785a0de
Tiago Docilio Caldeira | 10 Dec 18:07 2014

BitCoin TPB, P2P Currency/Torrent

Dear All,

I've been trying to better understand Bitcoin over the last few months, from both the mathematical and the programming point of view. I went through part of the documentation, and I got curious about ways to use its data format to store some useful messages.

As most of you know, one of the free parts of the internet was (temporarily) knocked out yesterday: The Pirate Bay, a "service" built on open source and the freedom to share. So I started trying to design an idea that would allow us to create the future of p2p transfer, using Bitcoin as both a currency and a data provider.

If you insert a message similar to what a torrent contains inside a Bitcoin transaction (for example, attached to a satoshi), you could not only "tip" the person sharing the content (eventually, even its producer), but also make a public (yet anonymous) request for certain content.

By making this micro-transfer, you would receive an entry pointing to the content's torrent information.

What is your opinion about this? Would someone (with more practice with Bitcoin's internals) be interested in developing a front end with me?

Hope you like the idea, and share the vision of an open-content world, with fair returns for the people who share the data.

Best Regards,

Tiago Caldeira
bitcoin: 1BTdPcpLfpLVouDh32532SqCXibAjXoRqp

