Givon Zirkind | 20 Nov 12:08 2014

Re: new encrypted phones

This whole hullabaloo about encrypted phones: it's only the data on the
phone that's encrypted, not the conversations. Right?

Does the encryption extend to call logs?
John Young | 16 Nov 17:22 2014

Call for publication of all Snowden papers gets louder


Call for publication of all Snowden papers gets louder

The Amsterdam media professor Geert Lovink left 
no doubt that, in his opinion, the entire trove 
of the NSA whistleblower's documents must in the 
medium term be published and archived. Such an 
approach has a very different reach than the 
global 24-hour news machine. "Journalists want to 
make headlines and politics," said the founder of 
the Institute of Network Cultures. But it was important to think long-term.

According to Lovink, the Snowden papers offer 
unique insight into an "information-military 
complex in the making": they show roughly "who 
the actors are and what technologies they use." 
In addition, they make clear the differing impact 
of surveillance on individual countries and 
regions, and who was personally affected, as a 
group or as a society. Taken together, this would be 
"interesting for investigative journalists around the world for decades."


(Continue reading)

John Young | 4 Nov 23:18 2014

Wind River Security Features and Cryptography Libraries

Wind River Security Features and Cryptography Libraries (which appear to be the
basis of the $750,000 fine by BIS)
ianG | 22 Oct 10:57 2014

CFP by 24 Nov - Usable Security - San Diego 8th Feb

The Workshop on Usable Security (USEC) will be held in conjunction with
NDSS on February 8, 2015. The deadline for USEC Workshop submissions is
November 24, 2014. – In previous years, USEC has also been collocated
with FC; for example in Okinawa, Bonaire, and Trinidad and Tobago.

Additional information and paper submission instructions:


The Workshop on Usable Security invites submissions on all aspects of
human factors and usability in the context of security and privacy. USEC
2015 aims to bring together researchers already engaged in this
interdisciplinary effort with other computer science researchers in
areas such as visualization, artificial intelligence and theoretical
computer science as well as researchers from other domains such as
economics or psychology. We particularly encourage collaborative
research from authors in multiple fields.

Topics include, but are not limited to:

* Evaluation of usability issues of existing security and privacy models
or technology

* Design and evaluation of new security and privacy models or technology

* Impact of organizational policy or procurement decisions

* Lessons learned from designing, deploying, managing or evaluating
(Continue reading)

Jason Iannone | 22 Oct 04:22 2014

Define Privacy

On a fundamental level I wonder why privacy is important and why we
should care about it.  Privacy advocates commonly cite pervasive
surveillance by businesses and governments as a reason to change an
individual's behavior.  Discussions are stifled and joking references
to The List are made.  The most relevant and convincing issues are
documented cases of chilled expression from authors, artists,
activists, and average Andrews.  Other concerns deal with abuse, a la
LOVEINT, etc.  Additional arguments tend to be obfuscated by nuance
and lack any striking insight.

The usual explanations, while appropriately concerning, don't do it
for me.  After scanning so many articles, journal papers, and NSA
surveillance documents, fundamental questions remain: What is privacy?
How is it useful?  How am I harmed by pervasive surveillance?  Why do
I want privacy (to the extent that I'm willing to take operational
measures to secure it)?

I read a paper by Julie Cohen for the Harvard Law Review called What
Privacy is For[1] that introduced concepts I hadn't previously seen on
paper.  She describes privacy as a nebulous space for growth.  Cohen
suggests that in private, we can make mistakes with impunity.  We are
self-determinate and define our own identities free of external
subjective forces.  For an example of what happens without the
impunity and self-determination privacy provides, see what happens
when popular politicians change their opinions in public.  I think
Cohen's is a novel approach and her description begins to soothe some
of my agonizing over the topic.  I'm still searching.

(Continue reading)

ianG | 15 Oct 02:03 2014

SSL bug: This POODLE Bites: Exploiting The SSL 3.0 Fallback

SSL 3.0 [RFC6101] is an obsolete and insecure protocol. While for most practical purposes it has been replaced by its successors TLS 1.0 [RFC2246], TLS 1.1 [RFC4346], and TLS 1.2 [RFC5246], many TLS implementations remain backwards-compatible with SSL 3.0 to interoperate with legacy systems in the interest of a smooth user experience. The protocol handshake provides for authenticated version negotiation, so normally the latest protocol version common to the client and the server will be used.

However, even if a client and server both support a version of TLS, the security level offered by SSL 3.0 is still relevant since many clients implement a protocol downgrade dance to work around server-side interoperability bugs. In this Security Advisory, we discuss how attackers can exploit the downgrade dance and break the cryptographic security of SSL 3.0. Our POODLE attack (Padding Oracle On Downgraded Legacy Encryption) will allow them, for example, to steal “secure” HTTP cookies (or other bearer tokens such as HTTP Authorization header contents).

We then give recommendations for both clients and servers on how to counter the attack: if disabling SSL 3.0 entirely is not acceptable out of interoperability concerns, TLS implementations should make use of TLS_FALLBACK_SCSV.
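
As an illustration (my own sketch, not part of the advisory), the client-side fix is straightforward where interoperability permits: refuse SSL 3.0 outright, so there is nothing to downgrade to. Using Python's standard-library ssl module:

```python
import ssl

# A hedged sketch, not from the advisory: build a client context that
# refuses SSL 3.0 (and other legacy protocol versions) outright, the
# simplest POODLE countermeasure when legacy peers need not be served.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # SSLv3, TLS 1.0/1.1 refused

# A downgrade dance would retry the handshake with an older version on
# failure; with this context the connection simply fails instead of
# falling back to SSL 3.0.
```

Servers that must keep SSL 3.0 enabled are the case TLS_FALLBACK_SCSV addresses: it lets a client signal that a connection is already a fallback retry, so the server can reject the downgrade.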

CVE-2014-3566 has been allocated for this protocol vulnerability.

cryptography mailing list
Krisztián Pintér | 13 Oct 18:39 2014

Re: What's the point of using non-NIST ECC Curves?

Derek Miller (at Monday, October 13, 2014, 6:19:07 PM):
> However, both scenarios (NSA engineered them to be bad, NSA
> engineered them to be good) mean that the NSA knows a great deal
> more about weaknesses in Elliptic Curve Cryptography than we do.
> Doesn't that give you great pause in using the algorithm at all?

actually, you have a point. if there is any doubt, there is no doubt.
without even doing anything, just by being secretive, NSA can weaken
crypto. good job guys!
Tony Arcieri | 13 Oct 18:28 2014

Re: What's the point of using non-NIST ECC Curves?

On Mon, Oct 13, 2014 at 9:19 AM, Derek Miller <> wrote:
However, both scenarios (NSA engineered them to be bad, NSA engineered them to be good) mean that the NSA knows a great deal more about weaknesses in Elliptic Curve Cryptography than we do. Doesn't that give you great pause in using the algorithm at all?

Sure, that's why djb and friends are also working on implementing McEliece and Merkle signatures

Tony Arcieri
Derek Miller | 13 Oct 18:19 2014

Re: What's the point of using non-NIST ECC Curves?

Thanks for the additional scenario (I had not even considered trusting the NSA, so had not considered that scenario).
However, both scenarios (NSA engineered them to be bad, NSA engineered them to be good) mean that the NSA knows a great deal more about weaknesses in Elliptic Curve Cryptography than we do. Doesn't that give you great pause in using the algorithm at all?

On Mon, Oct 13, 2014 at 10:53 AM, Derek Miller <> wrote:
For curve P-192, SEED = 3045ae6f c8422f64 ed579528 d38120ea e12196d5 
For curve P-224, SEED = bd713447 99d5c7fc dc45b59f a3b9ab8f 6a948bc5
For curve P-256, SEED = c49d3608 86e70493 6a6678e1 139d26b7 819f7e90

On Mon, Oct 13, 2014 at 10:43 AM, Krisztián Pintér <> wrote:
On Mon, Oct 13, 2014 at 5:38 PM, Ryan Carboni <> wrote:
>> > However, considering one of the scenarios where these curves might be
>> > compromised (the NSA knew of weaknesses in certain curves, and
>> > engineered
>> > the NIST Prime curves to be subject to those weaknesses)
> I forget, what was the original inputs to the hash?

another unexplained constant, if i'm not mistaken. it makes no sense
in any circumstances.
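
For context on what the seed actually feeds into: as I recall the ANSI X9.62 "verifiably random" procedure, SHA-1 output expanded from the SEED yields an integer r that constrains the curve coefficient b via r·b² ≡ −27 (mod p). The constants below are the published P-256 parameters; the expansion details are my recollection of the standard, so treat this as an illustrative sketch rather than a reference implementation:

```python
import hashlib

# Published P-256 constants (curve y^2 = x^3 - 3x + b over GF(p)).
p = 2**256 - 2**224 + 2**192 + 2**96 - 1
b = 0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B
SEED = 0xC49D360886E704936A6678E1139D26B7819F7E90  # the unexplained constant

def sha1_int(x: int) -> int:
    """SHA-1 of a 160-bit big-endian integer, as an integer."""
    return int.from_bytes(hashlib.sha1(x.to_bytes(20, "big")).digest(), "big")

# X9.62-style expansion (as I recall it): t = 256 bits of output needed,
# so one partial block from SHA-1(SEED) plus one full block from SHA-1(SEED+1).
t = p.bit_length()              # 256
s = (t - 1) // 160              # 1 extra hash block
h = t - 160 * s                 # 96 bits from the first hash
c0 = sha1_int(SEED) % 2**h      # rightmost h bits of SHA-1(SEED)
w0 = c0 & (2**(h - 1) - 1)      # leftmost bit of the h-bit string zeroed
r = (w0 << 160) | sha1_int((SEED + 1) % 2**160)

# The curve is "verifiably random" iff r*b^2 = a^3 = -27 (mod p), with a = -3.
ok = (r * b * b + 27) % p == 0
```

The procedure ties b to the SEED, but says nothing about where the SEED itself came from, which is exactly the complaint in this thread.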
coderman | 13 Oct 17:45 2014

Re: caring harder requires solving once for the most demanding threat model, to the benefit of all lesser models

On 10/13/14, ianG <iang@...> wrote:
> ...
> your welcome ;-)

a considered and insightful response to my saber rattling diatribe.

i owe you a beer, sir!

> Ah well, there is another rule we should always bring remember:
>      Do not use known-crap crypto.
> Dual_EC_DRBG is an example of a crap RNG.  For which we have data going
> back to 2006 showing it is a bad design.

let's try another example: Intel RDRAND or RDSEED.  depend on it as
the sole source of entropy?

in theory, the only attacks that would allow manipulation of the output
are outside scope (e.g. the data shows them as nation state level).

is "depending on a single entropy source" the "known-crap" part? or is
it the un-verifiable output of this specific source that is the problem?

(or am i overreaching, and you advocate direct and sole use of RDRAND
everywhere? :)
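
The usual hedge against a single untrusted source (my own sketch, not something ianG proposed) is to hash several independent sources together, so the output stays unpredictable as long as any one input is honest:

```python
import hashlib
import os
import time

def mixed_random(n: int) -> bytes:
    """Combine independent entropy sources through SHA-512.

    If at least one input is unpredictable, the output is too; a
    backdoored hardware RNG alone cannot control the result.
    """
    h = hashlib.sha512()
    h.update(os.urandom(64))                              # OS entropy pool
    h.update(time.perf_counter_ns().to_bytes(16, "big"))  # timing jitter (weak, but independent)
    # a hardware source such as RDRAND would be hashed in here as well,
    # never used as the sole source
    return h.digest()[:n]

key = mixed_random(32)
```

This is roughly what OS kernels do internally; the known-crap move is skipping the mixing step and wiring one opaque source straight through.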

> Others in this category include:  RC4, DES, MD5, various wifi junk
> protocols, etc.

if RC4 is known-crap, then how is a downgrade to known-crap not a problem?

>> Q: 'Should I switch away from 1024 bit strength RSA keys?'
> I agree with that, and I'm on record for it in the print media.  I am
> not part of the NIST lemmings craze.
> So, assuming you think I'm crazy, let's postulate that the NSA has a box
> that can crunch a 1024 key in a day.  What's the risk?
> ...
> WYTM?  The world that is concerned about the NSA is terrified of open
> surveillance.  RSA1024 kills open surveillance dead.

consider a service provider that i use, like Google, with a
hypothetical 1024 bit RSA key to secure TLS. they don't use forward
secrecy, so recovery of their private key can recover content.

what is the risk that a Google-like provider key could be attacked? i
have no idea.  but certainly more than my risk as a single individual.

regarding open surveillance, this is a potential mechanism for it
despite the appearance of privacy.

at what point does an insufficient key length become "known-crap" vs.
needless lemming craziness?
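
One way to put rough numbers on that line (my own back-of-envelope, not anything from the thread) is the standard heuristic running time of the general number field sieve, the best known attack on RSA moduli:

```python
import math

def gnfs_bits(modulus_bits: int) -> float:
    """Rough symmetric-equivalent strength of an RSA modulus, from the
    heuristic GNFS cost exp((64/9)^(1/3) (ln n)^(1/3) (ln ln n)^(2/3))."""
    ln_n = modulus_bits * math.log(2)
    work = (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return work / math.log(2)  # convert e-folds of work to bits

# RSA-1024 lands in the mid-80s of "bits"; RSA-2048 well above 110,
# matching the commonly cited ~80 / ~112 bit security estimates.
strength_1024 = gnfs_bits(1024)
strength_2048 = gnfs_bits(2048)
```

The asymptotic constant hides real-world factors, but it shows why 1024-bit keys sit near the edge of nation-state feasibility while 2048-bit keys do not.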

said another way, "the data" is only useful if you or those you trust
are not outliers.  in addition, "the data" is only retrospective; by
definition class breaks and novel attacks are not worth considering
until they become known and used.  does the difficulty in migrating
away from a new-known-crap mistake factor into how you draw the line?

> Actually, I thought there was data on this which shows that auto-update
> keeps devices more secure, suffer less problems.  I think Microsoft have
> published on this, anyone care to comment?

microsoft updates are not the standard upon which to measure all
application updates. the vast majority don't check certificates or
secure digests at all, hence the hundreds of vectors in evilgrade that
provide a seamless path from MitM at coffee shop to administrator on
your laptop.
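
The minimum bar those evilgrade vectors fail to clear is easy to state (a sketch of my own, with a hypothetical pinned digest, not evilgrade's interface):

```python
import hashlib
import hmac

# Hypothetical pinned value shipped with the application itself,
# NOT fetched over the same channel as the update.
PINNED_SHA256 = hashlib.sha256(b"example update payload").hexdigest()

def update_is_authentic(blob: bytes, pinned_hex: str) -> bool:
    """Refuse any downloaded update whose digest does not match the pin.

    A real updater would verify a signature over the digest instead, so
    that new releases don't require shipping new pins, but even this
    check defeats a plain MitM swap of the payload.
    """
    digest = hashlib.sha256(blob).hexdigest()
    return hmac.compare_digest(digest, pinned_hex)
```

An attacker in the coffee-shop MitM position can still block the update entirely, but can no longer substitute their own payload.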

is the "not using crypto" or "not using crypto right" parts the
"known-crap" piece of this equation?

is the MitM or DNS poison dependency "low risk" enough per the data
that the "known crap" of the update itself no longer matters?


thank you for taking the time to address these points in depth so that
i can better understand your reasoning.

this is an interesting discussion because i arrived at the opposite
conclusion: given the reasonableness of long keys and secure designs,
and in view of ever improving attacks, the best course of action is to
solve _once_ for the hardest threat model, so that you don't rely on
past indicators to predict future security and all lesser threat
models can benefit from the protection provided.

i dream of a future where the sudden development of very many qubit
computers does not cause a panic to replace key infrastructure or
generate new keys. where the protocols have only one mode, and it is
secure. where applications don't need to be updated frequently for
security reasons. where entire classes of vulnerabilities don't exist.

in short, i dream of a future where the cooperative solution to the
most demanding threat models is pervasive, to the benefit of all
lesser models, now and into the future.

best regards,

P.S. part of the context for this bias is my perspective as developer
of fully decentralized systems. any peer in such a system is
potentially the highest profile target; the threat model for any peer
is the most demanding threat model any one peer may operate under. the
usual "client vs. server", or "casual vs. professional" distinctions
in threat models no longer apply...
Derek Miller | 13 Oct 16:51 2014

What's the point of using non-NIST ECC Curves?

Like many people, I consider the seed values used to generate the NIST Prime curves suspicious.
However, considering one of the scenarios where these curves might be compromised (the NSA knew of weaknesses in certain curves, and engineered the NIST Prime curves to be subject to those weaknesses), does it even make sense to use ECC at all?
If the NIST curves are weak in a way that we don't understand, this means that ECC has properties that we don't understand.
Thus, if you don't trust the NIST Prime curves, does it make sense to trust any ECC curves at all?

I appreciate your responses,