ianG | 22 Oct 10:57 2014

CFP by 24 Nov - Usable Security - San Diego 8th Feb

The Workshop on Usable Security (USEC) will be held in conjunction with
NDSS on February 8, 2015. The deadline for USEC Workshop submissions is
November 24, 2014. In previous years, USEC has also been co-located
with FC, for example in Okinawa, Bonaire, and Trinidad and Tobago.

Additional information and paper submission instructions:

http://www.internetsociety.org/events/ndss-symposium-2015/usec-workshop-call-papers

******************

The Workshop on Usable Security invites submissions on all aspects of
human factors and usability in the context of security and privacy. USEC
2015 aims to bring together researchers already engaged in this
interdisciplinary effort with other computer science researchers in
areas such as visualization, artificial intelligence and theoretical
computer science as well as researchers from other domains such as
economics or psychology. We particularly encourage collaborative
research from authors in multiple fields.

Topics include, but are not limited to:

* Evaluation of usability issues of existing security and privacy models
or technology

* Design and evaluation of new security and privacy models or technology

* Impact of organizational policy or procurement decisions

* Lessons learned from designing, deploying, managing or evaluating

Jason Iannone | 22 Oct 04:22 2014

Define Privacy

On a fundamental level I wonder why privacy is important and why we
should care about it.  Privacy advocates commonly cite pervasive
surveillance by businesses and governments as a reason to change an
individual's behavior.  Discussions are stifled and joking references
to The List are made.  The most relevant and convincing issues are
documented cases of chilled expression from authors, artists,
activists, and average Andrews.  Other concerns deal with abuse, à la
LOVEINT.  Additional arguments tend to be obfuscated by nuance
and lack any striking insight.

The usual explanations, while appropriately concerning, don't do it
for me.  After scanning so many articles, journal papers, and NSA
surveillance documents, fundamental questions remain: What is privacy?
How is it useful?  How am I harmed by pervasive surveillance?  Why do
I want privacy (to the extent that I'm willing to take operational
measures to secure it)?

I read a paper by Julie Cohen for the Harvard Law Review called What
Privacy is For[1] that introduced concepts I hadn't previously seen on
paper.  She describes privacy as a nebulous space for growth.  Cohen
suggests that in private, we can make mistakes with impunity.  We are
self-determining and define our own identities free of external
subjective forces.  For an example of what happens without the
impunity and self-determination privacy provides, see what happens
when popular politicians change their opinions in public.  I think
Cohen's is a novel approach and her description begins to soothe some
of my agonizing over the topic.  I'm still searching.

[1]http://www.juliecohen.com/attachments/File/CohenWhatPrivacyIsFor.pdf

ianG | 15 Oct 02:03 2014

SSL bug: This POODLE Bites: Exploiting The SSL 3.0 Fallback

https://www.openssl.org/~bodo/ssl-poodle.pdf

SSL 3.0 [RFC6101] is an obsolete and insecure protocol. While for most practical purposes it has been replaced by its successors TLS 1.0 [RFC2246], TLS 1.1 [RFC4346], and TLS 1.2 [RFC5246], many TLS implementations remain backwards-compatible with SSL 3.0 to interoperate with legacy systems in the interest of a smooth user experience. The protocol handshake provides for authenticated version negotiation, so normally the latest protocol version common to the client and the server will be used.

However, even if a client and server both support a version of TLS, the security level offered by SSL 3.0 is still relevant since many clients implement a protocol downgrade dance to work around server-side interoperability bugs. In this Security Advisory, we discuss how attackers can exploit the downgrade dance and break the cryptographic security of SSL 3.0. Our POODLE attack (Padding Oracle On Downgraded Legacy Encryption) will allow them, for example, to steal “secure” HTTP cookies (or other bearer tokens such as HTTP Authorization header contents).

We then give recommendations for both clients and servers on how to counter the attack: if disabling SSL 3.0 entirely is not acceptable out of interoperability concerns, TLS implementations should make use of TLS_FALLBACK_SCSV.
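On the client side, the advisory's advice reduces to refusing SSL 3.0 outright where possible. A minimal sketch with Python's standard ssl module (a modern build is assumed; TLS_FALLBACK_SCSV itself has to be supported by the underlying TLS stack, not by application code):

```python
import ssl

# Build a client context that will never negotiate SSL 3.0, so the
# "downgrade dance" described above cannot land a connection there.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.options |= ssl.OP_NO_SSLv3   # refuse SSL 3.0 outright

print(bool(ctx.options & ssl.OP_NO_SSLv3))  # True
```

Recent Python builds already exclude SSL 3.0 by default; setting the flag explicitly documents the intent.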

CVE-2014-3566 has been allocated for this protocol vulnerability.


http://googleonlinesecurity.blogspot.co.uk/2014/10/this-poodle-bites-exploiting-ssl-30.html


_______________________________________________
cryptography mailing list
cryptography@...
http://lists.randombit.net/mailman/listinfo/cryptography
Krisztián Pintér | 13 Oct 18:39 2014

Re: What's the point of using non-NIST ECC Curves?


Derek Miller (at Monday, October 13, 2014, 6:19:07 PM):
> However, both scenarios (NSA engineered them to be bad, NSA
> engineered them to be good) mean that the NSA knows a great deal
> more about weaknesses in Elliptic Curve Cryptography than we do.
> Doesn't that give you great pause in using the algorithm at all?

actually, you have a point. if there is any doubt, there is no doubt.
without even doing anything, just by being secretive, NSA can weaken
crypto. good job guys!
Tony Arcieri | 13 Oct 18:28 2014

Re: What's the point of using non-NIST ECC Curves?

On Mon, Oct 13, 2014 at 9:19 AM, Derek Miller <dreemkiller-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
However, both scenarios (NSA engineered them to be bad, NSA engineered them to be good) mean that the NSA knows a great deal more about weaknesses in Elliptic Curve Cryptography than we do. Doesn't that give you great pause in using the algorithm at all?

Sure, that's why djb and friends are also working on implementing McEliece and Merkle signatures.

--
Tony Arcieri
Derek Miller | 13 Oct 18:19 2014

Re: What's the point of using non-NIST ECC Curves?

Krisztian,
Thanks for the additional scenario (I had not even considered trusting the NSA, so had not considered that scenario).
However, both scenarios (NSA engineered them to be bad, NSA engineered them to be good) mean that the NSA knows a great deal more about weaknesses in Elliptic Curve Cryptography than we do. Doesn't that give you great pause in using the algorithm at all?

On Mon, Oct 13, 2014 at 10:53 AM, Derek Miller <dreemkiller-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
For curve P-192, SEED = 3045ae6f c8422f64 ed579528 d38120ea e12196d5 
For curve P-224, SEED = bd713447 99d5c7fc dc45b59f a3b9ab8f 6a948bc5
For curve P-256, SEED = c49d3608 86e70493 6a6678e1 139d26b7 819f7e90
etcetera...
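The worry those seeds feed is not the hashing itself but the unexplained choice of input: a seed chooser with a secret criterion could simply search seed space until the derived curve landed in a class only they could exploit. A toy illustration of such a search (the criterion here is hypothetical, and this is not the actual FIPS 186 seed-to-curve derivation):

```python
import hashlib

# Toy illustration: if the seed chooser has a secret 1-in-256 criterion
# (digest starts with 0x00, standing in for "derived curve is weak in a
# way only we know"), an average of ~256 cheap hashes over counter-style
# seeds satisfies it, while the winning seed still looks arbitrary.
def search_seed(criterion):
    counter = 0
    while True:
        seed = counter.to_bytes(8, "big")
        digest = hashlib.sha1(seed).digest()  # SHA-1, as in FIPS 186
        if criterion(digest):
            return seed, counter
        counter += 1

seed, tries = search_seed(lambda d: d[0] == 0x00)
print(seed.hex(), tries)  # winning seed and the (small) number of tries
```

Nothing here proves manipulation, of course; it only shows why an unexplained seed is weaker evidence of innocence than a seed derived from, say, digits of pi.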



On Mon, Oct 13, 2014 at 10:43 AM, Krisztián Pintér <pinterkr-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
On Mon, Oct 13, 2014 at 5:38 PM, Ryan Carboni <ryacko-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
>> > However, considering one of the scenarios where these curves might be
>> > compromised (the NSA knew of weaknesses in certain curves, and
>> > engineered
>> > the NIST Prime curves to be subject to those weaknesses)
> I forget, what was the original inputs to the hash?

another unexplained constant, if i'm not mistaken. it makes no sense
in any circumstances.
coderman | 13 Oct 17:45 2014

Re: caring harder requires solving once for the most demanding threat model, to the benefit of all lesser models

On 10/13/14, ianG <iang@...> wrote:
> ...
> your welcome ;-)

a considered and insightful response to my saber rattling diatribe.

i owe you a beer, sir!

> Ah well, there is another rule we should always remember:
>
>      Do not use known-crap crypto.
>
> Dual_EC_DRBG is an example of a crap RNG.  For which we have data going
> back to 2006 showing it is a bad design.

let's try another example: Intel RDRAND or RDSEED.  depend on it as
the sole source of entropy?

in theory, the only attacks that would allow an adversary to
manipulate the output are out of scope. (e.g. the data shows them as
nation-state-level hypotheticals)

is "depending on a single entropy source" the "known-crap" part? or is
it the un-verifiable output of this specific source that is
"known-crap"?

(or am i overreaching, and you advocate direct and sole use of RDRAND
everywhere? :)
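one conservative answer to the question above: never depend on a single source. mix the hardware RNG with other sources through a hash, so that the result is unpredictable as long as any one input is, and a backdoored or broken source cannot control the output on its own. a rough sketch (the source names are stand-ins, not real RDRAND access):

```python
import hashlib
import os
import time

def mix_entropy(*sources: bytes) -> bytes:
    """Combine several entropy sources through SHA-256.

    If at least one input is unpredictable, the digest is too, so a
    single backdoored or broken source cannot control the output alone.
    """
    h = hashlib.sha256()
    for s in sources:
        h.update(len(s).to_bytes(8, "big"))  # length-prefix to avoid ambiguity
        h.update(s)
    return h.digest()

# hypothetical stand-ins for the sources being mixed
hw_rng = os.urandom(32)                             # pretend: RDRAND output
os_pool = os.urandom(32)                            # kernel entropy pool
jitter = time.perf_counter_ns().to_bytes(8, "big")  # timing jitter

seed = mix_entropy(hw_rng, os_pool, jitter)
print(len(seed))  # 32
```

this is roughly what kernel entropy pools do already; the "known-crap" part would be skipping the mixing and trusting one unverifiable source directly.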

> Others in this category include:  RC4, DES, MD5, various wifi junk
> protocols, etc.

if RC4 is known-crap, then how is a downgrade to known-crap not a problem?

>> Q: 'Should I switch away from 1024 bit strength RSA keys?'
>
> I agree with that, and I'm on record for it in the print media.  I am
> not part of the NIST lemmings craze.
>
> So, assuming you think I'm crazy, let's postulate that the NSA has a box
> that can crunch a 1024 key in a day.  What's the risk?
> ...
> WYTM?  The world that is concerned about the NSA is terrified of open
> surveillance.  RSA1024 kills open surveillance dead.

consider a service provider that i use, like Google, with a
hypothetical 1024 bit RSA key to secure TLS. they don't use forward
secrecy, so recovery of their private key can recover content.
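the difference forward secrecy makes can be sketched in a few lines: with ephemeral Diffie-Hellman, the session key is derived from secrets both sides throw away, so recording traffic and later stealing the long-term key yields nothing. toy sketch (the group below is illustration-only and offers no real security; real deployments use vetted groups or curves):

```python
import hashlib
import secrets

# Toy ephemeral Diffie-Hellman. Illustration-only group parameters.
P = 2**127 - 1   # a Mersenne prime, far too small for real use
G = 3

def ephemeral_keypair():
    x = secrets.randbelow(P - 2) + 1   # fresh secret for this session only
    return x, pow(G, x, P)

a_priv, a_pub = ephemeral_keypair()   # client side
b_priv, b_pub = ephemeral_keypair()   # server side

# Both sides derive the same session key from the exchange.
k_client = hashlib.sha256(pow(b_pub, a_priv, P).to_bytes(16, "big")).digest()
k_server = hashlib.sha256(pow(a_pub, b_priv, P).to_bytes(16, "big")).digest()
assert k_client == k_server

# Forward secrecy: delete the ephemeral secrets after the session, and a
# later compromise of the server's long-term key reveals nothing here.
del a_priv, b_priv
```

without this, the provider's long-term RSA key is a skeleton key over every recorded session.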

what is the risk that a Google-like provider key could be attacked? i
have no idea.  but certainly more than my risk as a single individual.

regarding open surveillance, this is a potential mechanism for it
despite the appearance of privacy.

at what point does an insufficient key length become "known-crap" vs.
needless lemming craziness?

said another way, "the data" is only useful if you, or those you
trust, are not outliers.  in addition, "the data" is only retrospective; by
definition class breaks and novel attacks are not worth considering
until they become known and used.  does the difficulty in migrating
away from a new-known-crap mistake factor into how you draw the line?

> Actually, I thought there was data on this which shows that auto-update
> keeps devices more secure, suffer less problems.  I think Microsoft have
> published on this, anyone care to comment?

microsoft updates are not the standard upon which to measure all
application updates. the vast majority don't check certificates or
secure digests at all, hence the hundreds of vectors in evilgrade that
provide a seamless path from MitM at coffee shop to administrator on
your laptop.

is the "not using crypto" or "not using crypto right" parts the
"known-crap" piece of this equation?

is the MitM or DNS poison dependency "low risk" enough per the data
that the "known crap" of the update itself no longer matters?
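the missing piece in the evilgrade class of updaters is small enough to sketch: refuse to install anything that does not match a digest (or, better, a signature) obtained out-of-band. minimal sketch with hypothetical names; a pinned digest stands in for full signature verification:

```python
import hashlib
import hmac

def verify_update(blob: bytes, pinned_sha256_hex: str) -> bool:
    """Minimal integrity gate an updater must pass before installing.

    A MitM at the coffee shop can tamper with the download, but cannot
    match a digest pinned out-of-band (or a signature over it).
    """
    # Constant-time compare; a plain == would do for a public digest,
    # but compare_digest is the safe habit.
    return hmac.compare_digest(hashlib.sha256(blob).hexdigest(),
                               pinned_sha256_hex)

good = b"update-v2.bin contents"                  # hypothetical artifact
pinned = hashlib.sha256(good).hexdigest()         # obtained out-of-band

print(verify_update(good, pinned), verify_update(good + b"evil", pinned))
# True False
```

the hundreds of evilgrade vectors exist because this check, or the authenticity of `pinned` itself, is simply absent.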

---

thank you for taking the time to address these points in depth so that i
can better understand your reasoning.

this is an interesting discussion because i arrived at the opposite
conclusion: given the reasonableness of long keys and secure designs,
and in view of ever improving attacks, the best course of action is to
solve _once_ for the hardest threat model, so that you don't rely on
past indicators to predict future security and all lesser threat
models can benefit from the protection provided.

i dream of a future where the sudden development of very many qubit
computers does not cause a panic to replace key infrastructure or
generate new keys. where the protocols have only one mode, and it is
secure. where applications don't need to be updated frequently for
security reasons. where entire classes of vulnerabilities don't exist.

in short, i dream of a future where the cooperative solution to the
most demanding threat models is pervasive, to the benefit of all
lesser models, now and into the future.

best regards,

P.S. part of the context for this bias is my perspective as developer
of fully decentralized systems. any peer in such a system is
potentially the highest profile target; the threat model for any peer
the most demanding threat model any one peer may operate under. the
usual "client vs. server", or "casual vs. professional" distinctions
in threat models no longer apply...
Derek Miller | 13 Oct 16:51 2014

What's the point of using non-NIST ECC Curves?

Like many people, I consider the seed values used to generate the NIST Prime curves suspicious.
However, considering one of the scenarios where these curves might be compromised (the NSA knew of weaknesses in certain curves, and engineered the NIST Prime curves to be subject to those weaknesses), does it even make sense to use ECC at all?
If the NIST curves are weak in a way that we don't understand, this means that ECC has properties that we don't understand.
Thus, if you don't trust the NIST Prime curves, does it make sense to trust any ECC curves at all?

I appreciate your responses,
D
coderman | 13 Oct 02:03 2014

RC4 Forevar! [was: RC4 is dangerous in ways not yet known - heads up on near injection WPA2 downgrade to TKIP RC4]

On 9/22/14, coderman <coderman <at> gmail.com> wrote:
> ...
>> Please elaborate.  TKIP has not been identified as a ‘active attack’
>> vector.

hi nymble,

it appears no one cares about downgrade attacks, like no one cares
about MitM (see mobile apps and software update mechanisms). [0]

> to be specific about the problems, in case not concise enough above:
> 0. lack of a way to enforce TKIP disable.
> 1. lack of visual signal of TKIP downgraded security in WPA2 to users.
> 2. insult to injury with "unspecified" bozofail TKIP transition to ON
> flaws in some hw.

i would like to clarify that #0 is a driver-domain behavior; your
"suggestions" from userspace via wpa-supplicant are meaningless
against the motivated.
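the general shape of the problem: any negotiation that keeps a weak option acceptable lets an on-path attacker delete the strong ones, which is exactly why "lack of a way to enforce TKIP disable" is fatal. a toy negotiation (suite names real, protocol shape drastically simplified):

```python
# Toy negotiation showing why a non-removable weak option is fatal:
# an on-path attacker simply strips everything stronger from the offer.
PREFERENCE = ["CCMP", "TKIP"]   # strongest first

def negotiate(client_offer, server_supported):
    for suite in PREFERENCE:
        if suite in client_offer and suite in server_supported:
            return suite
    raise ValueError("no common suite")

honest = negotiate(["CCMP", "TKIP"], ["CCMP", "TKIP"])

# Attacker on the path deletes CCMP from the client's offer...
stripped = [s for s in ["CCMP", "TKIP"] if s != "CCMP"]
downgraded = negotiate(stripped, ["CCMP", "TKIP"])

print(honest, downgraded)  # CCMP TKIP
```

the only fixes are refusing the weak option entirely (the enforcement #0 asks for) or authenticating the offer so stripping is detectable.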

also, the definitive paper at http://www.isg.rhul.ac.uk/tls/ still
insists, "For WPA/TKIP, the only reasonable countermeasure is to
upgrade to WPA2." which is either incompetently incorrect, or
intentional indirection.

best regards,

0. "no one cares" - this is not strictly true; people care a bit more
if you have done significant and detailed analysis of the sort that
eats lives by the quarter-year. i have long since quit giving freebies
freely, and instead pick my disclosures carefully with significant
limitations.

perhaps i should re-state: "no one working in the public interest
cares". there is a roaring business for silence and proprietary
development, and these people care quite a bit.
John Young | 8 Oct 13:59 2014

State Hash

http://sphincs.cr.yp.to/

Special note to law-enforcement agents: The word "state" is
a technical term in cryptography. Typical hash-based signature
schemes need to record information, called "state", after every
signature. Google's Adam Langley refers to this as a "huge
foot-cannon" from a security perspective. By saying "eliminate
the state" we are advocating a security improvement, namely
adopting signature schemes that do not need to record information
after every signature. We are not talking about eliminating other
types of states. We love most states, especially yours! Also,
"hash" is another technical term and has nothing to do with cannabis. 
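The "state" in question is easiest to see in a Lamport one-time signature, the building block of stateful hash-based schemes: signing reveals one of two secrets per message bit, so each keypair may sign exactly once, and the signer must record which keypairs are spent. A toy sketch over 8-bit messages (illustration only, not SPHINCS itself):

```python
import hashlib
import secrets

# Toy Lamport one-time signature over 8-bit messages. Signing reveals
# one secret per message bit, so a keypair must be used exactly once;
# tracking which keypairs are spent is the "state" SPHINCS eliminates.
BITS = 8

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def keygen():
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(BITS)]
    pk = [[H(s0), H(s1)] for s0, s1 in sk]
    return sk, pk

def sign(sk, msg: int):
    # Reveal, per bit of msg, one of the two secrets. A second signature
    # with the same sk reveals more secrets: forgery material.
    return [sk[i][(msg >> i) & 1] for i in range(BITS)]

def verify(pk, msg: int, sig) -> bool:
    return all(H(sig[i]) == pk[i][(msg >> i) & 1] for i in range(BITS))

sk, pk = keygen()
sig = sign(sk, 0b10110001)
print(verify(pk, 0b10110001, sig), verify(pk, 0b10110000, sig))  # True False
```

Lose track of which keypairs are spent (Langley's "huge foot-cannon") and security collapses silently, which is why a stateless scheme is a genuine improvement.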
Kevin | 4 Oct 22:12 2014

any updates on shellshock?

Hello.  I am wondering if we have any new info on shellshock?  How much
of a threat is it at this point?  Patch Tuesday anyone?
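For checking a given host, the widely circulated test still applies: on an unpatched bash (CVE-2014-6271), the function definition smuggled in through the environment executes its trailing command.

```shell
# Shellshock check: a vulnerable bash prints "vulnerable" before the echo;
# a patched bash prints only the echo line.
env x='() { :;}; echo vulnerable' bash -c 'echo shellshock test'
```

A patched system prints only `shellshock test`; note that the later follow-on CVEs (6277, 6278, 7169, ...) have their own test strings.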

-- 
Kevin
