trans issue tracker | 31 Jul 16:47 2015

[Trans] [trans] #102 (rfc6962-bis): "root" should be "trust anchor"

#102: "root" should be "trust anchor"

 Throughout the I-D we refer to root certificates, whereas the intent (and
 the practice) is that the "roots" actually include intermediates, too.

 This seems important, since browsers also include intermediates in their
 trust anchors.

 So, we need to correct the use of the term "root" throughout the document.


 Reporter:  benl <at>     |      Owner:  draft-ietf-trans-rfc6962-bis <at>
     Type:  defect        |     Status:  new
 Priority:  major         |  Milestone:
Component:  rfc6962-bis   |    Version:
 Severity:  -             |   Keywords:

Ticket URL: <>
trans <>
trans issue tracker | 28 Jul 14:47 2015

[Trans] [trans] #101 (gossip): Handle Shutdown Log Case

#101: Handle Shutdown Log Case

 Draft needs to handle what should be done when a log closes.  Shouldn't be
 too difficult, we can have clients pollinate the final STH - but will the
 client _know_ when a log has been shut down?

 It seems like there will be two cases:
  - The client knows the log is shut down or will be shut down at a
 specific datetime, and it can use this information to correctly pollinate
 the final STH.
  - The client doesn't know, the log shuts down, and it starts getting old
 STHs that it doesn't pollinate (believing them to be a malicious
 timestamp-fingerprint attack)


 Reporter:  tom <at>      |      Owner:  draft-ietf-trans-threat-analysis <at>
     Type:  defect        |     Status:  new
 Priority:  major         |  Milestone:
Component:  gossip        |    Version:
 Severity:  -             |   Keywords:

trans issue tracker | 28 Jul 14:45 2015

[Trans] [trans] #100 (gossip): Review in light of blocking

#100: Review in light of blocking

 We should make sure the draft adequately handles the cases of a network
 attacker temporarily blocking certain upstream requests.


 Reporter:  tom <at>      |      Owner:  draft-ietf-trans-threat-analysis <at>
     Type:  defect        |     Status:  new
 Priority:  major         |  Milestone:
Component:  gossip        |    Version:
 Severity:  -             |   Keywords:

Robin Wilton | 28 Jul 12:59 2015

Non-digital maps, and 'technical trust'...

How did we ever manage? I suspect that using printed maps is one of those skills (like changing gear manually, or shaving with a cut-throat razor) that men cling to out of some atavistic belief that, if it really came down to it, we’d still be able to club a mammoth to death, skin and gut it, and roast it over an open fire. ;^)

But to your point about combining different types/styles of PKI:

1 - this is another example of an interesting architectural shift from predominantly hierarchical models to mesh-based ones; it’s starting to be visible in identity interfederation, too.
(Not sure what conclusions to draw from that observation yet, but I offer it as a data point).

2 - Combined PKIs obviously raise questions of technical interoperability (not least because, as last week’s IETF sessions illustrated, you’ll have stakeholders like the automotive industry insisting on their own variants of basic elements like certificate format, PKIX, etc. etc.).

3 - More to the point, composite PKIs raise questions of non-technical interoperability between different trust models. Your faith in a WoT-supported assertion of identity, for instance, will be based on different trust factors from your faith in an eIDAS-compliant certificate-based one.

This interoperability problem between different trust models already exists in the CA PKI world (where, for instance, the service user and the service provider have certificates with different trust anchors), and hasn’t been elegantly fixed there. 

If you want to fix the problem centrally, you can fix it contractually, but that’s slow, expensive and inflexible. Fixing it technically would require something like Levels of Assurance, but for trust models, rather than identity assertions… that would be hard. (NB - I think we’re going to have to do something like that anyway, in due course, in order to make sense of the presence of PKI/crypto at multiple layers of the network infrastructure. For instance, what happens to the “trust score” of an inbound connection if it has come supported by a combination of DNSSEC and RPKI? But that’s some way off).

If you want to fix it at the client side instead, I think your client is going to end up being a lot thicker than you might currently be hoping.


On 27 Jul 2015, at 21:00, therightkey-request <at> wrote:

  1. Using composite PKIs (Phillip Hallam-Baker)

From: Phillip Hallam-Baker <phill <at>>
Subject: [therightkey] Using composite PKIs
Date: 27 July 2015 17:02:56 CEST

[Followups on therightkey please]

Thinking in my car about the discussion of the OpenPGP/DANE draft in OpenPGP, I came up with a metaphor for how to approach joining different PKIs, prompted in particular by Werner's comment that Web of Trust doesn't scale. The CA model does scale, but it isn't actually much better when trying to identify private individuals rather than employees of a company, since the only thing I can validate economically is an email address, and that isn't a person.

We can approach the problem mathematically by considering the work factor (in US$) for causing a breach.

Combining Web of Trust with CA approaches and interning the assertions in an immutable blockchain like log does provide an approach that scales. The blockchain makes the workfactor time dependent. If the workfactor is $100 before an assertion is enrolled, it will be $trillions after. 
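
To make the time dependence concrete, here is a minimal sketch (mine, not from any draft) of the append-only property being relied on, using a plain hash chain as a stand-in for a real blockchain-like log: once an assertion is interned, rewriting it changes every subsequent head, so the attack stops being "fool one party" and becomes "replace the head held by every verifier".

```python
import hashlib

def chain_head(assertions):
    # Fold assertions into a hash chain; the head commits to every entry.
    head = b"\x00" * 32
    for a in assertions:
        head = hashlib.sha256(head + hashlib.sha256(a).digest()).digest()
    return head

log = [b"alice:key1", b"bob:key2", b"carol:key3"]
honest_head = chain_head(log)

# Rewriting an already-interned assertion yields a different head, so a
# cheap pre-enrollment forgery becomes wildly expensive after enrollment.
tampered_head = chain_head([b"alice:attacker-key", b"bob:key2", b"carol:key3"])
assert honest_head != tampered_head
```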

Combining Web of Trust and CA trust is like building a Dalek out of fiberglass: individually, the glass fibre and the epoxy are weak, but using the two in combination locks the strands of glass fibre together, creating a lightweight shell that can support the weight of a small truck. This part is already written up:

The question we are facing now is how we make sense of that type of data. Which is where the car trip comes in.

I am using GPS to navigate a part of the city I don't know very well. There are multiple resources at my disposal:

1) My own knowledge of the area
2) Signposts on the road
3) The GPS maps in the car
4) Offline maps via my cell phone

Any one of these guides can be fallible. The GPS maps are pre-Big Dig (no CANBUS modem car for me), so they are out of date. Offline maps are more likely to be up to date, but a malicious provider can direct specific individuals to the wrong place.

The fact that there is a human in the loop actually keeps the mapping service providers accountable, even if 99% of drivers don't know where they are going. The fact that there are roadsigns, and that a few drivers do know where they are going, means that if the service defects, it is likely to be caught.

Using a pure online mapping service like Google Maps and a thin client means that I am always up to date, but it exposes all my movements to them. It also breaks if I am in Prague on a crappy AT&T data plan costing $20/MB for international roaming. [Do the AT&T execs consider the semiotics of sending their customers a text message saying 'we are going to try to steal from you now' every time they cross an international border?]

Using a pure offline map like the DVDRom based system in the Honda means that nobody can track me by my use of a mapping service. It also means that the maps are ten years out of date as I don't plan on replacing the van till the child whose car seat it was built to fit has learned to drive and won't trash the transmission.

The best map I have is actually an application on the phone that has downloadable maps for the whole of Europe and North America. The mapping service obviously preprocesses the maps so that the phone has as little work to do as possible. So it is a 'thick client' but not as thick as it might be.

I think the key to making a composite PKI work is to approach the problem in a similar fashion. In the short term we want to be using the 'thin client' as this allows us to change how we analyze data and add new formats. When trying to develop a new protocol, agility is key. But the medium term goal is to have a thick-ish client.

therightkey mailing list
therightkey <at>

Bryan Ford | 24 Jul 00:27 2015

Re: [Trans] Call for adoption, draft-linus-trans-gossip-ct

From: Melinda Shore <melinda.shore <at>>
Subject: [Trans] Call for adoption, draft-linus-trans-gossip-ct
Date: July 23, 2015 at 2:03:24 PM GMT+2

Hi, all:

This is a call for adoption of
as a working group deliverable.  The call closes on August 6.

While the draft represents a fine start at defining a gossip mechanism, I would like to express serious reservations about the draft’s basic premise that a gossip mechanism is the best approach - or even an adequate approach - “to detect misbehavior by CT logs”, to quote the draft’s self-stated purpose.  As such, I have serious reservations about the WG adopting the draft, not because there’s anything wrong with the content of the draft per se but because gossip is simply not the right approach.

First, the gossip approach creates severe privacy problems, which are already well-discussed so I won’t reiterate them.

Second, the gossip approach can’t ensure that privacy-sensitive Web clients (who can’t or don’t want to reveal their whole browsing history to a Trusted Auditor) will ever actually be able “to detect misbehavior by CT logs”, if for example the misbehaving log’s private key is controlled by a MITM attacker between the Web client and the website the client thinks he is visiting.

Suppose, for example, it ever so happens that a single company ends up running both a log server and a [sub-]CA, and keeps both in the same poorly-protected safe.  If an attacker can steal those two private keys, then the attacker can silently MITM any CT-aware client all it wants, by producing all the [non-EV] certificates it needs with a valid SCT, valid STH inclusion proofs in a fake log, etc.  The web client isn’t going to learn about the problem via gossip with the website because that’s the MITM attacker, and the client can’t gossip with anyone else without the serious privacy risk of trusting a single audit server with his entire browsing history.  If the “trusted audit server” is actually co-located with the client, then the audit server in turn can’t gossip with the rest of the world without creating the same privacy risk.  Even if the client does voluntarily gossip with a remote trusted audit server, the MITM attacker - e.g., an ISP-level or state-level attacker - might simply block connections to all well-known audit servers.

Is it “too unlikely” that an attacker can compromise both a log server key and a CA key?  Even if some CT policy is established that a single company is never allowed to both run a log server and hold a [sub-]CA key, which might not be a bad idea, it doesn’t seem beyond reason that some state-level attackers are capable of quietly exfiltrating (or just quietly subpoenaing) both a log server key and a CA key, and then they get the same silent MITM power as they have now.

Thus, while CT does potentially “raise the bar” a bit by requiring a MITM attacker to steal both [any one] CA key and [any one] log server key, the set of all well-known log servers still just forms another new weakest-link security chain, just like the set of all CAs and sub-CAs form an old weakest-link security chain.  While “best of two weakest-link chains” is admittedly better than one weakest-link chain, it does not seem like the best we can do or the target we should be aiming for.

I posted an E-mail last week suggesting an alternative approach, in which log servers would not individually sign their STHs but instead collectively sign them with the active participation of (a suitable quorum of) the auditors and/or monitors watching their logs.  But I got no response to that, so I’d like to try again.

In short: the log server would propose log entries, but the auditors and/or monitors contribute to the STH signature, only after performing at least the “auditing” function of verifying that the log server is behaving like a correct log server.  In particular, all participants could proactively verify that the log server never forks or reverts the log, never signs something syntactically invalid or with a wildly-incorrect timestamp (say +/- 24 hours), etc.  Then, even if the log server’s private key was successfully stolen by a state-level attacker (and the attacker perhaps even compromises a minority of the monitor/auditor keys), the attacker will be unable to forge an STH signature that a Web client or anyone else will believe.
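
As a sketch of the acceptance rule this implies (my illustration; the names and quorum size are assumptions, and HMACs stand in for real public-key signatures): a verifier accepts an STH only when the log has signed it *and* at least a quorum of the watching auditors/monitors have co-signed it.

```python
import hashlib
import hmac

# HMAC keys stand in for signing keys; a real design would use public-key
# signatures. All names and the quorum size here are illustrative.
LOG_KEY = b"log-signing-key"
WITNESS_KEYS = [b"w1", b"w2", b"w3", b"w4", b"w5"]  # auditors/monitors
QUORUM = 3

def sign(key, sth):
    return hmac.new(key, sth, hashlib.sha256).digest()

def accept_sth(sth, log_sig, witness_sigs):
    # Accept only if the log signed AND a quorum of watchers co-signed.
    if not hmac.compare_digest(log_sig, sign(LOG_KEY, sth)):
        return False
    valid = sum(
        1 for key in WITNESS_KEYS
        if key in witness_sigs
        and hmac.compare_digest(witness_sigs[key], sign(key, sth))
    )
    return valid >= QUORUM
```

Under this rule, an attacker who has stolen only the log key presents no believable STH, and compromising a minority of witness keys still isn't enough.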

This would, in effect, convert each log server from yet another juicy central attack target into a strongest-link set (I like to use the term “cothority” or collective authority) with all the monitor and auditor servers watching it and contributing to its signatures.  Of course, the attacker still wins if he successfully compromises any single CA (or sub-CA) *and* any single “logging cothority” (i.e., any log server *and* a sufficient quorum of its followers).  But doing the latter becomes way, way harder if there’s enough size and diversity in each logging cothority.

How does this improve things for the Web client in Repressistan who's getting MITM attacked?  If the client is only looking at SCTs that are individually signed by a log server (and are in any case only a “promise to log soon” and not actual evidence that the cert has been logged), then unfortunately not much: the MITM attacker can still forge at least one SCT for each of its forged certs, and no one else learns because the MITM can keep the client’s view isolated from the rest of the world.

But say we enhance CT so clients can demand STHs *with* cert inclusion proofs from the web servers they talk to - as I suggested in the meeting today, and which I believe Ben Laurie mentioned is already in the works.  Then any Web client that does this will be proactively protected from MITM attacks involving faked, forked, or otherwise bad log entries.  And the client doesn’t have to compromise its privacy by gossiping with anyone.
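
For concreteness, this is roughly the check such a client would run locally on a served inclusion proof, following the Merkle audit path verification algorithm from 6962-bis (a sketch; the function names are mine):

```python
import hashlib

def H(data):
    return hashlib.sha256(data).digest()

def leaf_hash(entry):
    return H(b"\x00" + entry)  # 0x00 prefix marks a leaf (RFC 6962)

def verify_inclusion(entry, index, tree_size, path, root):
    # Walk the audit path from the leaf up to the root (6962-bis algorithm).
    if index >= tree_size:
        return False
    fn, sn = index, tree_size - 1
    r = leaf_hash(entry)
    for p in path:
        if sn == 0:
            return False
        if fn & 1 or fn == sn:
            r = H(b"\x01" + p + r)  # p is a left sibling
            if not fn & 1:
                while fn and not fn & 1:
                    fn >>= 1
                    sn >>= 1
        else:
            r = H(b"\x01" + r + p)  # p is a right sibling
        fn >>= 1
        sn >>= 1
    return sn == 0 and r == root
```

A client that gets an STH plus such a proof from the web server it is already talking to needs no gossip to know the cert is really in the claimed tree.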


Trans mailing list
Trans <at>
Salz, Rich | 23 Jul 21:58 2015

[Trans] TRANS Draft minutes for IETF-93

Please send/post corrections.

Paul, WG Status update
Charter unchanged; need to reset milestone.

Eran, RFC6962-bis status
Still needs some tweaks. Suggests waiting for Google to finish their implementation to clean out all nits
before WGLC.
A log cannot run a single combined v1/v2 log; it must run both in parallel.
Recently closed tickets: 4, 64, 68, 69, 72, 81, 73, 65, 91, 80, 86, 90, 82, 83, 84, 92, 89, 58, 63, 74, 76, 77, 70;
see tracker for details.
Open tickets: 78 (alg agility needs more description), 83 (should require deterministic ECDSA), 96 (dynamic
metadata; does only the CA root list really change?), 95 (include get-entries response size in the log
metadata, for cursoring through a log).
Steve raised the issue of exposing which certs a client is interested in if the size of get-entries can
shrink to one, for example.
More on open tickets: 87 (ref to attack model doc), 64 (remove spec of sig and hash algs), 93 (monitor
description inconsistencies), 94 (when/why clients should fetch inclusion proofs).
Stephen raised the issue that if the threat analysis is normative, the schedule gets pushed out further; it
should be informative.

Steve Kent, attack model
Name changed on doc, even if the filename can't easily be changed. It is not a threat model: we don't know
what the attackers are thinking, but we do know their possible actions, so it's an attack model.
Includes an intro to CT, he prefers it move into an arch document but if not it will stay.
"CT is a set of mechanisms, designed to detect, deter, and facilitate remediation of certificate mis-issuance"
Semantic mis-issuance: name in the cert refers to an entity incorrectly.
Syntactic mis-issuance: violation of certificate profile(s) that apply.
Reviewed a taxonomy of attacks.  Read the doc.  Discussion of additions and bigger picture needs.
Incorporated all (but one) comments.
Wants WG agreement via list on goals, definitions, attacks.
A half-dozen people committed to read and review the document.
Ben agrees about having an arch doc; Steve and Ben will collaborate on one.

Dkg, Gossip
Gossip is important to keep logs accountable, by making sure everyone sees the same append-only data and
that logs keep their MMD/SCT promises.
Works by browsers sharing and comparing SCTs and STHs.
Three channels:
	SCT Feedback: browser sends cert/SCT to website; website sends to auditing function/third-party auditor
	STH Pollination: auditor/website send STHs to each other.  STHs are not privacy-sensitive
	Optional Trusted Auditor: browser passes SCT/cert to an auditor (e.g., the DNS resolver, since it already
knows what you might be looking at)
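
(For the record, a pollination message is just a JSON bag of STHs; the sketch below is illustrative, with field names borrowed from the RFC 6962 get-sth response plus a log_id, and dummy values throughout, not the draft's normative encoding.)

```python
import json

# Illustrative STH Pollination POST body: an array of STH objects. Field
# names follow the RFC 6962 get-sth response plus a log_id; values are dummies.
pollination_body = {
    "sths": [
        {
            "sth_version": 0,
            "tree_size": 4096,
            "timestamp": 1438000000000,  # milliseconds since the epoch
            "sha256_root_hash": "ZHVtbXkgcm9vdCBoYXNo",
            "tree_head_signature": "ZHVtbXkgc2lnbmF0dXJl",
            "log_id": "ZHVtbXkgbG9nIGlk",
        }
    ]
}

payload = json.dumps(pollination_body)
```

Since STHs are not privacy-sensitive, such a body can be relayed between any of the parties above.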
Call for adoption is on the mailing list.

Dkg, CT for binary 
Goal is to know that you are running the same software as "everyone else," not guaranteeing that the
software isn't compromised.
Add a binary LogEntryType; add binary and binary_digest to Signed_Type.
Many details of what is signed and how are still open; need feedback from s/w distributors.
PHB suggests not using ASN.1.
Discussion and agreement that changing the s/w distribution format is a non-starter.

Rich Salz, selective logs
Some logs will not log every single cert from the CAs in their root list.
What do we do?
Discussion, no conclusion.

Senior Architect, Akamai Technologies
IM: richsalz <at> Twitter: RichSalz
trans issue tracker | 23 Jul 17:08 2015

[Trans] [trans] #99 (rfc6962-bis): Clearer definition of when a certificate is CT-compliant needed

#99: Clearer definition of when a certificate is CT-compliant needed

 The current text in the "Including the Signed Certificate Timestamp in the
 TLS Handshake" has a few problems, particularly:
 "The SCT data corresponding to at least one certificate in the chain
 from at least one log must be included in the TLS handshake..."

 * The text should make a clear assertion that this is for a certificate to
 be considered CT-compliant.
 * The 'must' should be a MUST.
 * The text currently requires 'at least one certificate in the chain'. It
 does not require the SCTs to be for the leaf cert (although currently
 there's no way to indicate any of the non-embedded SCTs are for a
 certificate that's not the leaf certificate).

 The text could be pivoted to indicate that any certificate in the chain
 accompanied by SCTs is considered CT-compliant.


 Reporter:  eranm <at>  |      Owner:  eranm <at>
     Type:  defect            |     Status:  new
 Priority:  major             |  Milestone:
Component:  rfc6962-bis       |    Version:
 Severity:  -                 |   Keywords:

trans issue tracker | 23 Jul 16:33 2015

[Trans] [trans] #98 (gossip): Update gossip-ct for 6962bis

#98: Update gossip-ct for 6962bis

 The gossip-ct draft (draft-linus-trans-gossip-ct) is written with RFC6962
 in mind. It should be written for 6962bis.


 Reporter:  linus <at>    |      Owner:  draft-ietf-trans-threat-analysis <at>
     Type:  defect        |     Status:  new
 Priority:  major         |  Milestone:
Component:  gossip        |    Version:
 Severity:  -             |   Keywords:

Melinda Shore | 23 Jul 14:03 2015

[Trans] Call for adoption, draft-linus-trans-gossip-ct

Hi, all:

This is a call for adoption of
as a working group deliverable.  The call closes on August 6.


Melinda & Paul
trans issue tracker | 23 Jul 11:49 2015

[Trans] [trans] #97 (rfc6962-bis): Allocate an OID for CMS precertificates

#97: Allocate an OID for CMS precertificates

 Section 3.1 currently says:
   'A precertificate is a CMS [RFC5652] "signed-data" object that contains
    a TBSCertificate [RFC5280] in its
    "SignedData.encapContentInfo.eContent" field, identified by the OID
    <TBD> in the "SignedData.encapContentInfo.eContentType" field.'

 Ben, please allocate an OID under the Google arc (

 .2.4.8 seems to me to be the obvious candidate.



 Reporter:  rob.stradling <at>  |      Owner:  benl <at>
     Type:  defect                    |     Status:  new
 Priority:  minor                     |  Milestone:
Component:  rfc6962-bis               |    Version:
 Severity:  -                         |   Keywords:

Linus Nordberg | 23 Jul 11:16 2015

Re: [Trans] Review of draft-linus-trans-gossip-ct-02

Ben Laurie <benl <at>> wrote
Mon, 20 Jul 2015 13:51:33 +0100:

| Comments in {} when in quoted sections.
| "2.  Overview
|    Public append-only untrusted {verifiable is perhaps a better word} logs "

I kind of like "untrusted", and it seems like 6962 and 6962bis do
too. But if they're changing, I guess we should consider changing here.
| Diagram in section 3 appears to omit SCTs delivered over OCSP.

Fixed in 03-dev (branch master in git repo at [0]).


| Also, the line from CA to Website is labelled "Cert and SCT" but the SCT
| may be omitted.

OK. Adding brackets around "& SCT". (We need a better notation for

| Section 5
| "Trusted Auditor Stream, HTTPS clients communicating directly with
|       trusted CT auditors/monitors sharing SCTs, certificate chains and
|       STHs. {Slightly confused: section 4 mentions auditors and monitors
| using this channel between themselves?}"

Addressed in 03-dev:

# Who gossips {#who}

- HTTPS clients and servers (SCT Feedback and STH Pollination)
- HTTPS servers and CT auditors (SCT Feedback)
- CT auditors and monitors (Trusted Auditor Relationship)

Additionally, some HTTPS clients may engage with an auditor who they
trust with their privacy:

- HTTPS clients and CT auditors (Trusted Auditor Relationship)

| 5.1
| " Note that clients
|    send the same SCTs and chains to servers multiple times with the
|    assumption that a potential man-in-the-middle attack eventually will
|    cease so that an honest server will receive collected malicious SCTs
|    and certificate chains. {note that if an SCT is sent over a channel that
| used an SCT from the same log, then that new SCT can replace the existing
| one for reporting - bad behaviour is still detected - i.e. an SCT can be
| considered "sent" if it is sent over a channel validated by the same log}"
| 5.1.1
| " The client MUST
|    send to the server the ones in the store that are for that server and
|    were not received from that server. {comment about "sent SCTs" applies}"

03-dev has the following.

   The client MUST NOT send the same set of SCTs to the same server more
   often than TBD.  [benl: "sent to the server" only really counts if
   the server presented a valid SCT in the handshake and the certificate
   is known to be unrevoked (which will be hard for a MitM to sustain)]

Does this capture it? (For purposes of tracking your requested
addition, not resolving it.)
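
(As an aside, the rate limit amounts to remembering, per server, when each set of SCTs was last sent; a minimal sketch with the TBD interval as a parameter, all names mine:)

```python
import time

class FeedbackLimiter:
    # Track when each set of SCTs was last fed back to each server.
    def __init__(self, min_interval, clock=time.monotonic):
        self.min_interval = min_interval  # the draft's TBD, in seconds
        self.clock = clock
        self.last_sent = {}  # (server, frozenset of SCTs) -> timestamp

    def should_send(self, server, scts):
        key = (server, frozenset(scts))
        now = self.clock()
        last = self.last_sent.get(key)
        if last is not None and now - last < self.min_interval:
            return False  # same SCT set, same server, too soon
        self.last_sent[key] = now
        return True
```

Per benl's note, a client would additionally only record a send as having happened when the server presented a valid SCT in the handshake.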

| "An SCT MUST NOT be sent to any other HTTPS server than one serving
|    the domain that the certificate signed by the SCT refers to.{Not all TLS
| clients care about privacy: e.g. crawlers, so this MUST NOT is too strong}"

I see. Wonder how we could express that. Should we try to differentiate
between browsers with real people using them and other clients? Or bring
in "user consent"?

| "First, the server
|    receiving the SCT would learn about other sites visited by the HTTPS
|    client{if SCTs are gossipped then this is clearly untrue: however, I
| agree that it is hard to avoid some kind of privacy leak}. "

It is true for the first N clients gossiping, was my first reaction. Then
there was an interesting discussion off list about whether either of those
claims could be proven correct through simulation. Without more data, my
position is that we don't know and should err on the side of safety and
not gossip SCTs. (Not counting SCT Feedback here; that we should do.)

| "Secondly, auditors or monitors receiving SCTs from the HTTPS
|    server would learn information about the other HTTPS servers visited
|    by its clients.{ditto}"

See above.

| "If the HTTPS client has configuration options for not sending cookies
|    to third parties, SCTs MUST be treated as cookies with respect to
|    this setting.{don't understand - you already banned sending to third
| parties}"

This is for local attacks, i.e. someone digging through the local
store. Should probably expand. Added a TODO for now.

| "The data sent in the POST is defined in Section 5.1.3.{what if the POST
| fails, as it obviously will for most servers initially?}"

Then no SCT Feedback is happening. Do we need to add text?

| "   3.  if the leaf cert is not for a domain that the server is
|        authoritative for, the SCT MUST be discarded {only true if you ban
| SCT gossip}"


| " It's important to note
|    that the check must be on pairs of SCT and chain in order to catch
|    different chains accompanied by the same SCT.  [XXX why is this
|    important?]{given that logs do _not_ have to log multiple chains, I
| suspect this is actually not important}"

This could be of interest to the operator of the web site. Creating more
incentive for them to play along is probably valuable. It can be debated
whether it's worth it or not.

| " Note that an HTTPS server MAY perform a certificate chain validation
|    on a submitted certificate chain, and if it matches a trust root
|    configured on the server (but is otherwise unknown to the server),
|    the HTTPS server MAY store the certificate chain and MAY choose to
|    store any submitted SCTs even if they are unable to be verified.  The
|    risk of spamming and denial of service can be mitigated by
|    configuring the server with all known acceptable certificates (or
|    certificate hashes). {Confused by this, surely the point is to find
| certs/SCTs the server does not expect?}"

I think we thought this would help catch an SCT even if it doesn't
verify correctly, by imposing some limits on the cert chain that came
with it.

| 5.1.3
| "      *  sct_data: An array of objects consisting of the base64
|          representation of the binary SCT data as defined in [RFC6962]
|          Section 3.2. {SCTs may be in the certs, so should they be
| replicated here or not?}"
| "   The 'x509_chain' element MUST contain at{typo} the leaf certificate and
| the
|    full chain to a known root."


| 5.2
| "   o  Logs cannot issue STHs too frequently.  This is restricted to 1
|       per hour. {probably should refer to 6962-bis for this restriction}"
| Also in 5.2, note that servers could function as their own auditors.

Would be good to mention that somewhere, yes.

| 5.2.2
| "After retrieving the consistency proof to the most recent STH, they
|    SHOULD pollinate this new STH among participating HTTPS Servers {and may
| safely discard the older STH}. "

Unless it should be gossiped some more? We only know that it's correct
with regard to the particular view of the log we're being served. I
don't think we've decided when to stop gossiping about a particular STH

| 5.2.3
| "  o  sths - an array of 0 or more fresh STH objects [XXX recently
|       collected] from the log associated with log_id.  Each of these
|       objects consists of {note that we've moved to binary format for SCTs
| in 6962-bis}"

Hmm. We've been looking at RFC6962 so far, but I guess that's not how to
do it. I'll read up on 6962bis and change this draft accordingly.

| 6.1.1
| " Therefore, a
|    client with an SCT for a given server should transmit that
|    information in only two channels: to a server associated with the SCT
|    itself; and to a trusted CT auditor, if one exists.{only if the client
| cares about privacy}"

See comment above ("I see. Wonder how we could express that.[...]")

| 6.1.4
| "This is similar to
|    the fingerprinting attack described in Section 6.1.2, but it is
|    mitigated by the following factors: {mention rate limit?}"

The limitation on how often a log can issue an STH? Yes, we should look
over this section in the light of that change.

| 6.1.6
| "The log's inability to
|       provide either proof will not be externally cryptographically-
|       verifiable, as it may be indistinguishable from a network error.
| {note that anyone who sees the whole log can independently prove
| inconsistency/non-inclusion}"

That's a monitor, isn't it?
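
Right; a monitor with the full entry list doesn't need the log's proofs at all, since it can recompute the tree head itself (a sketch of the MTH definition from RFC 6962, Section 2.1; names mine):

```python
import hashlib

def H(data):
    return hashlib.sha256(data).digest()

def mth(entries):
    # Merkle Tree Hash over the full entry list (RFC 6962, Section 2.1).
    n = len(entries)
    if n == 0:
        return H(b"")
    if n == 1:
        return H(b"\x00" + entries[0])  # 0x00 prefix marks a leaf
    k = 1
    while k * 2 < n:  # largest power of two strictly less than n
        k *= 2
    return H(b"\x01" + mth(entries[:k]) + mth(entries[k:]))

# If mth(entries) differs from the root in a signed STH, the monitor has
# caught the log, and anyone holding the same entries can reproduce the check.
```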