John Moser | 15 Apr 19:14 2014

Web app security proxies, specifications

Relevant to:  HTTP Web application security

I'm imagining (probably to no fruitful endeavor) a Web application security proxy.  I was thinking about that old mod_security thing in Apache, where you could essentially tell it what requests look like and it would reject bad requests, and it got me thinking:  a secure proxy for this would be interesting.

I could make a product... or I could make a specification and a reference implementation.  That brought up some interesting thoughts.

Suppose I define, among other things, the ability to recognize valid/invalid headers and requests.  Say I expect POST requests of the form (normalized to equivalent GET requests, with regexes):

/login.php?u=[\w\d]{3,16}&p=[\w\d]{6,45}

So we have a rule:

login.php {
  "POST" {
    u ~ /[\w\d]{3,16}/;
    p ~ /[\w\d]{6,45}/;
    !*;
  }
  !*;
}

Only POST requests are accepted, with u and p fields (each constrained by a regex) and no other fields.
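As a sketch of how a proxy might evaluate such a rule — the PXSD semantics and the compiled form below are hypothetical, invented for illustration:

```python
import re

# Hypothetical compiled form of the PXSD rule above: each field maps to a
# regex that must match the whole value; "!*" means reject everything else.
RULES = {
    ("login.php", "POST"): {
        "u": re.compile(r"[\w\d]{3,16}"),
        "p": re.compile(r"[\w\d]{6,45}"),
    },
}

def check_request(path, method, params):
    """Return True iff the request matches the policy exactly."""
    rule = RULES.get((path, method))
    if rule is None:
        return False          # no rule for this path/method -> reject (!*)
    if set(params) != set(rule):
        return False          # extra or missing fields -> reject (!*)
    return all(rule[k].fullmatch(v) for k, v in params.items())
```

A request with an unexpected method, an extra field, or an out-of-range value is rejected before it ever reaches the application.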

Now let's say we install an application at http://www.example.com/myapp/ which has http://www.example.com/myapp/login.php and we want to define security rules for it.  I guess we could load these into a proxy, do all kinds of configuration?

Or...

We could set up a http://www.example.com/security.txt file:

/security.pxsd
/myapp/security.pxsd root=/myapp/

And in /myapp/security.pxsd we have the above rule, among others.

The Web server proper SHOULD deny access to security.txt and all pxsd files by application-specific configuration, except to the trusted security reverse proxy.

The trusted security reverse proxy SHOULD deny proxy access for these files as well.

The reverse proxy will:

 - Check security.txt for the list of required PXSD (ProXy Security Definition) files
 - Check each request against the full policy
 - Reject the request if it violates the policy

In implementation, there would be a caching period (seconds, minutes, etc.), an If-Modified-Since and/or hash check, and so on, so that excess work fetching, parsing, and integrating policy isn't done.
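That caching behaviour could be sketched as follows; the TTL value and the fetch callable are assumptions, and a real proxy would use conditional HTTP requests rather than unconditional re-fetches:

```python
import hashlib
import time

class PolicyCache:
    """Re-fetch a policy file only after `ttl` seconds have elapsed, and
    re-parse only if its content hash changed. `fetch` is any callable
    returning the file body as bytes (assumed interface)."""
    def __init__(self, fetch, ttl=60):
        self.fetch, self.ttl = fetch, ttl
        self.cache = {}    # url -> (expires, sha256_hex, parsed_body)

    def get(self, url, parse=lambda body: body):
        now = time.monotonic()
        entry = self.cache.get(url)
        if entry and now < entry[0]:
            return entry[2]                   # still fresh: no fetch at all
        body = self.fetch(url)
        digest = hashlib.sha256(body).hexdigest()
        if entry and digest == entry[1]:
            parsed = entry[2]                 # unchanged: skip re-parsing
        else:
            parsed = parse(body)
        self.cache[url] = (now + self.ttl, digest, parsed)
        return parsed
```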

PXSD is root-relative; root is specified in security.pxsd.  Thus /myapp/security.pxsd cannot specify rules for /otherapp/login.py or whatnot.

Thus a Web application may ship with security definitions dictating valid data.  A Web server or a reverse proxy may read these definitions from a security file and apply standard validation.  The Web server itself may read a security file (/security.txt or some out-of-web-space file) and PXSD files, applying policy internally; or a reverse proxy (Squid, Varnish, nginx, etc.) may fetch and cache these policy files and prevent requests from passing.

The advantage of a proxy doing such is that it is a bastion host:  broken requests which pass as seemingly-valid HTTP but which are unorthodox and cause buffer overruns and other nastiness will stop at the bastion host.  Broken requests which are wholly invalid and crash the software will either stop at the bastion host or give an exploit onto the bastion host, which itself may not carry anything critical and can be rebooted or replaced with a functioning server in the event of an exploit.



In any case, the above is illustrative, wordy, and highly hypothetical.  The point is:  I believe there would be value in defining a DSL and standard for Web application input validation, such that either a Web server itself or a reverse proxy may read a set of standard-format files (starting with the expected file /security.txt) and obtain a definition of requests which wholly encompasses all valid requests (but may or may not also encompass some invalid requests).

I'm only interested in HTTP query validation in this scope.  I have no interest in access controls (i.e. only these IP addresses may do these things; only these file types are valid; you may not pull .htaccess; etc.).

Thoughts?  Would this be something worth researching, designing, and RFCing?
Tom Ritter | 7 Apr 15:29 2014

First Connection Active Attack in HPKP

Sorry to be second guessing you Chris, but wanted to ask about this change: https://code.google.com/p/key-pinning-draft/source/detail?r=5810662a42e56938272d9db4b2e5373914b266f4

Let's say an active attacker does MITM on a client.  (That's the scenario you reference, right?)  The client can have pins for the server or not.

If the client does have pins, the active attack fails and generates a report event.  The attacker knows the attack fails because a) perhaps they observe metadata about the report event being sent right away but more concretely b) because their attack fails.

If the client doesn't have pins, the active attack succeeds.  The attacker knows the attack succeeded and that the pinning wasn't effective because... it succeeded.  Whether or not a report event is generated the attacker knows it succeeded.  Furthermore, if an attacker is attacking the client and wants to learn if it supports pinning or not, it can look at the User Agent header, TLS handshake fingerprinting, or active javascript injection.

If the attack succeeds, and the attacker already knows it succeeded, I don't understand the harm in generating a report event. Maybe the attacker will block it (and know it was generated), but maybe they won't and it will get out.  Maybe the report will be held for some time and sent later.

-tom
Hill, Brad | 21 Mar 20:45 2014

Last Call Announcement: UI Security at W3C WebAppSec WG

WebSec WG members,

  The WebAppSec WG at the W3C has recently announced a Last Call Working Draft of "User Interface Directives
for Content Security Policy".

http://www.w3.org/TR/UISecurity/

  This specification describes a set of policy statements and screen-shot comparison heuristics that web
resource authors and user agents may use to protect embedded or framed resources from "clickjacking"
attacks.  The "frame-options" directive, an evolution of the "X-Frame-Options" header, was briefly
part of this spec, although now it has been moved to the mainstream CSP 1.1 specification as "frame-ancestors".

 The WG would appreciate review and comments.  The last call period ends 18-June-2014, and comments can be
submitted to:

   public-webappsec <at> w3.org

Thank you,

Brad Hill
Co-chair, WebAppSec WG
Hill, Brad | 21 Mar 20:49 2014

First Public Working Draft announcement: Subresource Integrity

WebSec WG members,

  The WebAppSec WG at the W3C has recently announced a First Public Working Draft for "Subresource Integrity".

http://www.w3.org/TR/SRI/

  This specification describes a method to add metadata about the hash identity of resources (like script
files and images) to HTML and specify policy about how to verify and manage such resources before they are
added to a web resource's Document Object Model.

 The WG would appreciate review and comments.  Comments can be submitted to:

   public-webappsec <at> w3.org

Thank you,

Brad Hill
Co-chair, WebAppSec WG
Yoav Nir | 26 Feb 17:30 2014

Public-Key-Pins-Report-Only - attempt at summary

Hi

I think all the issues raised during this WGLC have now been addressed (correct me on another
thread if I’ve missed something), with the exception of some issues with the Report-Only header.

First issue was the interaction between PKP and PKPRO if both are present. Current text ([1]) says “If a
Host sets both the Public-Key-Pins header and the Public-Key-Pins-Report-Only header, the UA MUST NOT
enforce Pin Validation”.  This was objected to by some ([2]), as it doesn’t follow the CSP model. Chris
suggested alternative text that allows them both ([3]), where PKP is enforced, and PKPRO is only noted and
reported. There were no objections to this, except that Tom corrected a typo. Can we consider this resolved?

Then Trevor brought up another issue ([4]). He asked whether the UA actually notes PKPRO pins or just
reports on them. Nobody has responded yet, but I think that’s a good point. Is there any value to noting
PKPRO for, say, a month, and then reporting after two weeks that the current certificates do not match? 
When I imagine how someone would use PKPRO, I guess they generate a pins string, issue them as PKPRO, and if
no reports arrive for, say, 7 days, they are moved into “production”, which is the regular PKP.
Suppose the pins in PKPRO do generate reports, I guess the administrator checks the reports, fixes
whatever is wrong, and posts the good pins as PKPRO again. Does it make sense to keep receiving reports for
the old pins?  OTOH if we accept the non-noting idea, then the max-age directive makes no sense and should be
omitted.  As there has been no discussion yet, we need to consider this a bit. 

Please do.

Yoav & Tobias

[1] http://tools.ietf.org/html/draft-ietf-websec-key-pinning-11#section-2.1.3
[2] http://www.ietf.org/mail-archive/web/websec/current/msg02001.html
[3] http://www.ietf.org/mail-archive/web/websec/current/msg02026.html
[4] http://www.ietf.org/mail-archive/web/websec/current/msg02030.html
Trevor Perrin | 21 Feb 09:24 2014

Public-Key-Pins-Report-Only

Hi websec,

How should HPKP's Public-Key-Pins-Report-Only header work?

Does it only apply a check to the current TLS connection, or is the UA
expected to remember the pins and apply them to future connections?

If the UA is expected to remember them, how do "Report-Only" pins
interact with regular pins?  Do they override each other or are
Report-Only pins tracked separately, so that a browser might have a
Report-Only pin and a "regular" pin for the same site?

Trevor

Yoav Nir | 13 Feb 12:56 2014

Pre-loaded pins vs dynamic pins

[ I'm forking the thread so as to avoid confusing it with the other issue ]

Section 2.7 ( http://tools.ietf.org/html/draft-ietf-websec-key-pinning-11#section-2.7 )
describes the interaction between pre-loaded pin lists and the dynamic pins that the UA observes.

The draft says that any later observation (new pin, unpin, missing pin) always trumps an earlier
observation. Seems right, except this creates an interesting issue with updates to the pre-loaded pin
list, such as when there are browser updates, or just a dynamic update from the list:
 - what happens if a pin in the fresh list differs from a dynamic pin?
 - what happens if a pin that is in the list is one that had been unpinned before?

One way is to require the static list to have "observation dates" for each entry, and also require the UA to
keep track of all noted pins for the lifetime of that pin. By keeping track I mean recording all replacement
and unpinning events, including dates, so as to be able to construct a timeline with the static list, and be
able to tell which event happened last. For example, if the UA notes a pin on 1-Feb, then notes an unpinning
on 9-Feb, then on 15-Feb it downloads a new list with the old pin observed at 7-Feb, the pin should remain deleted.
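The first option amounts to a last-event-wins merge over dated events; the event and list formats below are invented purely for illustration:

```python
from datetime import date

def effective_pins(dynamic_events, static_list):
    """Merge dynamic pin events (when, host, action, pin) with a static
    list of (host, pin, observed) entries; the latest event per host wins."""
    events = list(dynamic_events)
    for host, pin, observed in static_list:
        events.append((observed, host, "pin", pin))
    pins = {}
    for _when, host, action, pin in sorted(events, key=lambda e: e[0]):
        if action == "pin":
            pins[host] = pin
        else:                      # "unpin"
            pins.pop(host, None)
    return pins
```

With the example above — pin noted 1-Feb, unpinned 9-Feb, static entry observed 7-Feb — the 9-Feb unpin sorts last, so the pin stays deleted.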

Another way is to treat all updates from the static pins as if they had just been observed at the moment of
download. That is obviously much easier to implement.

A third way (Trevor's) is to treat the static and dynamic pins as two separate pin databases, both of which
are enforced and no interaction between them. But the UA vendor can also push updates for deleting dynamic
pins, to save web site operators who have shot themselves in the foot.

Yet a fourth way is to observe that none of this affects interoperability in any way and leave it up to the UA
vendor to choose their way. The web site operator should only include valid pins in their HPKP headers, and
they should also enter only valid pins in pre-loaded lists. They should also deal with the fact that they don't
control usage patterns, so anytime they pin something for X seconds, their website MUST conform to that
pin for at least those X seconds or bad things will happen.

Trevor: if I didn't capture your opinion, please correct me.

People other than Trevor: I don't really believe that only Trevor cares about this issue. Please speak up
about the alternatives above.

Thanks

Yoav

internet-drafts | 6 Feb 20:01 2014

I-D Action: draft-ietf-websec-key-pinning-10.txt


A New Internet-Draft is available from the on-line Internet-Drafts directories.
 This draft is a work item of the Web Security Working Group of the IETF.

        Title           : Public Key Pinning Extension for HTTP
        Authors         : Chris Evans
                          Chris Palmer
                          Ryan Sleevi
	Filename        : draft-ietf-websec-key-pinning-10.txt
	Pages           : 23
	Date            : 2014-02-06

Abstract:
   This memo describes an extension to the HTTP protocol allowing web
   host operators to instruct user agents (UAs) to remember ("pin") the
   hosts' cryptographic identities for a given period of time.  During
   that time, UAs will require that the host present a certificate chain
   including at least one Subject Public Key Info structure whose
   fingerprint matches one of the pinned fingerprints for that host.  By
   effectively reducing the number of authorities who can authenticate
   the domain during the lifetime of the pin, pinning may reduce the
   incidence of man-in-the-middle attacks due to compromised
   Certification Authorities.

The IETF datatracker status page for this draft is:
https://datatracker.ietf.org/doc/draft-ietf-websec-key-pinning/

There's also a htmlized version available at:
http://tools.ietf.org/html/draft-ietf-websec-key-pinning-10

A diff from the previous version is available at:
http://www.ietf.org/rfcdiff?url2=draft-ietf-websec-key-pinning-10

Please note that it may take a couple of minutes from the time of submission
until the htmlized version and diff are available at tools.ietf.org.

Internet-Drafts are also available by anonymous FTP at:
ftp://ftp.ietf.org/internet-drafts/

Daniel Kahn Gillmor | 20 Jan 16:44 2014

dual meaning of "pinning" [was: Re: [Uta] Proposed list of deliverables]

On 01/20/2014 06:51 AM, Alexey Melnikov wrote:
> http://datatracker.ietf.org/doc/draft-melnikov-email-tls-certs/
> http://datatracker.ietf.org/doc/draft-moore-email-tls/

Both of these drafts use the term "pinning" in line with the way it is
used in RFC 6125, which is in contradiction to the way the term
"pinning" is used in websec's Key Pinning draft:

 https://datatracker.ietf.org/doc/draft-ietf-websec-key-pinning/

In 6125 and the two e-mail drafts above, "pinning" is used to refer to
the activity that firefox describes as "adding a security exception" and
chrome calls "proceed anyway" -- where the user (or someone) overrides
the TLS verification stack to indicate that a particular certificate is
acceptable to them for a particular site, regardless of the name or
other entity identifiers found in the certificate.  I'd call this
"permissive" pinning, since it increases the set of acceptable
certificates in general.

In the draft-websec-key-pinning, the term "pinning" is used to indicate
a set of keys (not certificates) that the server communicates to the
client which *must* be used for future connections to that server.  I'd
call this "restrictive" pinning, because it reduces the set of
acceptable certificates in general (certificates that are signed by
other trusted root authorities over end entity keys that aren't in the
set of pins will be considered unacceptable).
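In code, the restrictive check reduces to hashing each SubjectPublicKeyInfo in the presented chain and looking for at least one pinned value; extracting the SPKI DER bytes from the certificates is assumed to have happened already:

```python
import hashlib
from base64 import b64encode

def chain_matches_pins(spki_ders, pins):
    """Restrictive pinning: the chain is acceptable iff at least one
    certificate's SubjectPublicKeyInfo (DER bytes) hashes to a pinned
    base64-encoded SHA-256 fingerprint."""
    for spki in spki_ders:
        fingerprint = b64encode(hashlib.sha256(spki).digest()).decode()
        if fingerprint in pins:
            return True
    return False
```

A chain signed by a perfectly trusted root still fails this check if none of its keys appears in the pin set — which is exactly the property dkg describes.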

Even worse, these two "pinning" ideas could overlap for a particular
browser/website: a site could use the restrictive key pinning to
indicate which public keys are acceptable, and then offer a certificate
over one of those keys which isn't signed by a root authority accepted
by the client; so the client would need to "set a security exception" or
"proceed anyway", which is forbidden by draft-ietf-websec-key-pinning
(section 2.6).  But it's also possible that the user already added a
"permissive" pin (a "security exception") for that web site before
receiving draft-ietf-websec-key-pinning pinning instructions from the
web server :/

I think we're heading for trouble with this terminology overlap, in a
space that is already difficult to understand for implementers, server
administrators, and browser users.  but i don't know how to best avoid
further confusion.

In the server administration circles i travel in, people who are
security conscious tend to associate the term "pinning" with the ideas
behind draft-ietf-websec-key-pinning.  If you tell them that "setting a
security exception" in firefox is also "pinning", it will cause no small
amount of concern (since their goal in deploying public-key-pinning is to
avoid having their users tricked by an MITM offering bogus
certificates).  OTOH, RFC 6125 itself is published, so its meaning of
"pinning" won't be going away any time soon :/

At the very least, it seems like new drafts should make it clear that
their meaning of "pinning" does not mean the other well-known "pinning".

Any better suggestions for avoiding the ambiguity?

	--dkg


Tobias Gondrom | 10 Dec 20:19 2013

Re: HPKP: The strict directive and TLS proxies

Hello,
re-sending, as I just received an error message from the websec mailing-list server saying this email, sent on Dec-3, was not delivered.
Best regards, Tobias


-------- Original Message --------
Subject: Re: [websec] HPKP: The strict directive and TLS proxies
Date: Tue, 03 Dec 2013 20:43:49 +0000
From: Tobias Gondrom <tobias.gondrom <at> gondrom.org>
To: palmer <at> google.com, synp71 <at> live.com
CC: websec <at> ietf.org

Hi Chris,

<hat=WG chair>
Yes, please roll the updates into a new version and post it as soon as possible. Please remember version-numbers are cheap, so rather update often. Plus, I would really like to give the doc in its final state another good read before we go to IESG.
</hat>

regarding SHA-1/SHA-256: please consider that we should have hash agility whenever possible. There will be SHA-3 and future ones....

Best regards, Tobias

On 03/12/13 00:24, Chris Palmer wrote:
> Hi all,
>
> Thanks for the discussion. We are going to roll another version of the
> draft to clarify the confusing things. Also, my semi-off-the-cuff
> thoughts on some of the issues:
>
> Strict: I support what Yoav calls option (B): Drop "strict" - not
> interested in local policy. (It has only been a source of confusion.
> Let's keep things simple.)
>
> SHA-1: Let's just get rid of it. SHA-256 only; MUST implement; no
> truncation. (Maximum simplicity.)
>
> Non-overridable failure mode on pin validation failure: To make HPKP
> less of a footgun, I support saying that UAs SHOULD disallow user
> override, or SHOULD provide some way of telling the user that pin
> validation failure happened. But no longer MUST.
> _______________________________________________
> websec mailing list
> websec <at> ietf.org
> https://www.ietf.org/mailman/listinfo/websec

Ralf Skyper Kaiser | 7 Dec 15:31 2013

On CIPHER-SUITE pinning

Hi,

To let old browsers connect, most hosts will support
weak or broken ciphers for the foreseeable future.

A feature to pin the CIPHER SUITE would be desirable.

It would allow a client to learn a set of 'strong' ciphers available
on client and host side. Any downgrade attack to a weaker cipher
would fail.

This feature could be optional or mandatory to be configured on the host.
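A minimal sketch of what the client side of such a feature might look like, assuming a learn-then-enforce model (the class and method names are invented; suite names are IANA-style strings):

```python
class CipherPinStore:
    """Remember a host's pinned 'strong' cipher suites; once a host is
    pinned, refuse any handshake that negotiates a suite outside the set."""
    def __init__(self):
        self.pins = {}                    # host -> set of suite names

    def note(self, host, suites):
        self.pins[host] = set(suites)     # learned from a pin header (assumed)

    def allow(self, host, negotiated):
        pinned = self.pins.get(host)
        return pinned is None or negotiated in pinned
```

A later downgrade attack that forces negotiation of a weaker suite would then fail the `allow` check rather than silently succeed.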

Please discuss. Opinions welcome.

regards,

ralf
