John Young | 28 Jul 22:30 2014

NSA Systems Abroad Query

What is NSA "WB Quad System" for GCHQ, Amberwind (PL), TIGERFIRE (IN), IBIS (AU, JP), GCSB SSO Site (NZ):

This is an NSA travel expenses report. Via Jason Leopold.
cryptography mailing list
Lodewijk andré de la porte | 28 Jul 18:23 2014

Weak random data XOR good enough random data = better random data?

Hey everyone,

If I XOR probably random data with good enough random data, does that result in at least good enough random data?

I'm working on some Javascript client side crypto. There's a cryptographic quality random generator present in modern browsers, but not in older ones. I also don't trust browsers' random generators' quality.

I'd like to ship a few KB (enough) of random data and XOR it with whatever the best-available RNG comes up with. That way the user can still verify that I didn't mess with the randomness, no MITM attacks can mess with the randomness, but given a good transport layer I can still supplement usually bad randomness.

I don't see how XORing with patterned data could reduce the randomness. If someone knows better, let me know. If I'm correct, that also means it should be okay to reuse the few KBs should they ever run out (in this system); at worst it no longer improves the randomness. I don't expect that to ever happen, and I'd prefer requesting new KBs, but it's still interesting.

Could someone confirm this whole thought-train for me? That is: is it a good idea to send some randomness* (over HTTPS) and XOR it with the best-available RNG's output for better randomness? I actually feel pretty confident about it, just asking for (a few) second opinion(s).
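Concretely, the mixing step is just a byte-wise XOR, something like this (a sketch; `serverRandom` and `localRandom` are illustrative names for the HTTPS-delivered blob and the best available local RNG output). Provided the two inputs are independent, the XOR is at least as unpredictable as the stronger of the two:

```javascript
// Sketch of the proposed mixing step. Names are illustrative only.
// XORing an attacker-unknown uniform source with any independent
// source cannot reduce entropy: the result is at least as hard to
// predict as the stronger input, as long as they are independent.
function mixRandom(serverRandom, localRandom) {
  if (serverRandom.length !== localRandom.length) {
    throw new Error("buffers must be the same length");
  }
  const out = new Uint8Array(serverRandom.length);
  for (let i = 0; i < out.length; i++) {
    out[i] = serverRandom[i] ^ localRandom[i];
  }
  return out;
}
```

The independence caveat matters: if the same party controls both inputs, the XOR guarantees nothing.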

Best regards,

* The served randomness would probably siphon down from a Linux OS, but of course the server code is portable, so on some platforms that randomness is probably low quality too.
David Jr Adamson | 27 Jul 12:49 2014

I'd like to add you to my professional network on LinkedIn

I'd like to add you to my professional network on LinkedIn.
- David Jr
Confirm that you know David Jr
David Jr Adamson
technical lead at Statoil
More og Romsdal County, Norway
Lodewijk andré de la porte | 26 Jul 17:03 2014

Browser JS (client side) crypto FUD

The linked article is surprisingly often passed around as if it were the end-all to the idea of client-side JS crypto.

TL;DR: It's a fantastic load of horse crap, mixed in with some extremely generalized cryptography issues that most people never thought about before, and that do not harm JS crypto at all.

I'm not sure why the guy wrote it. Maybe he's NSA-motivated? Maybe he's worked a lot on secure systems and this just gives him the creeps? Maybe he's the kind of guy who thinks dynamic scripted languages like JS are not real languages?

Somebody, please, give me something to say against people that claim JS client side crypto can just never work!

Aside from the fact that it's, well, fundamentally moronic to claim that something is "harmful" when you actually mean it does nothing, it's also just (almost!) never true that no attacks are prevented.

But, let's go with the flow of the article. Rants won't really settle arguments.

Two example usages are given.

The first is client-side hashing of a password, so that it's never sent in the clear. This is so legitimate it nearly makes me drop my hat, but the author decides to use HMAC-SHA1 instead of SHA-2 for reasons that are fully beyond me. Perhaps just trying to make things less secure?

The second is using AES keys to encrypt client-side. The author must've thought he was being helpful when he imagined the scheme for this. Or maybe he was drunk. "So you generate an AES key for each note, send it to the user's browser to store locally, forget the key, and let the user wrap and unwrap their data." Somehow trusting the transport layer is all back in vogue. The only key-generation problem in JS is entropy, which is a problem everywhere, tbh. If you really want to ensure entropy, send a random data blob and XOR it with whatever the client's best shot at randomness is. Whatever.

The author bluntly claims "They will both fail to secure users". In principle I agree; his methods sucked balls. He, however, blames it on JS. Okay... Let's go on.

For several reasons, including the following:
1 Secure delivery of Javascript to browsers is a chicken-egg problem.
2 Browser Javascript is hostile to cryptography.
3 The "view-source" transparency of Javascript is illusory. 
Until those problems are fixed, Javascript isn't a serious crypto research environment, and suffers for it.
(points numbered for pointwise addressing)

1 - Yeah. Duh. What do you think of delivering anything client side? There's the whole SSL infrastructure, if that doesn't cut it for you, well, welcome to the Internet. (I suggest the next article is about how the Internet is fundamentally flawed.) I would suggest, however, that once your delivery pathway is exploited you're fundamentally screwed in every way. You can't communicate anything, you can't authenticate anyone, you really can't do anything! So let's leave out the "Javascript" part of this point, and just do whatever we're already doing to alleviate this issue.

2 - This is a conclusion without any basis so far (aside from being... meaningless to a computer scientist. "Hostile"?)

3 - Then just look at what data was transferred. Does every crypto application require checkable source? Is any SSL implementation "considered harmful" because nobody is able to flawlessly read the code, no compilers are trusted, etc?

Okay, so that chapter meant absolutely nothing. The author goes on to try to defend his babble:


"If you don't trust the network to deliver a password, or, worse, don't trust the server not to keep user secrets, you can't trust them to deliver security code. The same attacker who was sniffing passwords or reading diaries before you introduce crypto is simply hijacking crypto code after you do."
A fair point against a single threat model. Interestingly, the last line absolutely does not have to hold; sniffing (after the fact) and on-the-fly rewriting are worlds apart. Take TEMPEST or XKeyscore, for example: they can't do rewrites. They need specialized programs for that. (Conclusion: nope, nothing to see here.)

The next chapter tries to justify the fallacies made earlier on. Equating a rewrite to a read, ad-homineming the JS crypto "industry" (and failing to distinguish operational security from actual security), and lastly claiming that misplaced trust is bad (which is obvious and unrelated).

The next chapter claims SSL is safe, and "real" crypto unlike JS crypto. Then firmly cements his baseless ridicule by claiming that if you use non-JS crypto to make JS crypto work, then obviously there's no point.

The next chapter, "WHAT'S HARD ABOUT DEPLOYING JAVASCRIPT CRYPTO CODE OVER SSL/TLS?", claims the whole page has to be served over SSL/TLS and that this makes it hard. It's not hard, and you should already be doing it to have /any/ security. Not to mention it's not true: only content interpreted as part of the page has to be SSL'ed (e.g., images don't need to be transported over SSL).

So, point 1 has no merit against JS whatsoever. There's also a lot of FUD-like text that denies reality. Especially the assumption that SSL and desktop programs are somehow more secure.

So point 2.

(letterized for pointwise addressing)
"In a dispiriting variety of ways, among them:

a - The prevalence of content-controlled code.
b - The malleability of the Javascript runtime.
c - The lack of systems programming primitives needed to implement crypto.
d - The crushing weight of the installed base of users.

Each of these issues creates security gaps that are fatal to secure crypto. Attackers will exploit them to defeat systems that should otherwise be secure. There may be no way to address them without fixing browsers."

a, c, d are, at first sight, all rubbish. b is a very genuine point however. With prototyping and the like it can be VERY hard to see what's going on. It's an often mentioned thing about JS that it's too powerful in some ways, and it can be true. The same goes for C and memory control.
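To make point (b) concrete, here's a minimal illustration (hypothetical attacker code, not from the article): any script that runs earlier on the page can silently replace the primitives later crypto code relies on, and nothing in the victim code's own source reveals it.

```javascript
// Hypothetical attacker code, loaded before the "victim" script.
// The mutable JS runtime lets it swap out a global primitive.
const realRandom = Math.random; // keep a reference, could restore later
Math.random = function () {
  return 0.4; // always "random" in exactly the same way
};

// Victim code that naively trusts the runtime:
function weakKeyByte() {
  return Math.floor(Math.random() * 256);
}
// Every call now yields the same byte, yet weakKeyByte's own source
// looks perfectly fine when you read it.
```

The same trick works on `crypto.getRandomValues`, `String.prototype` methods, and so on, which is why "run after untrusted code" is the real problem, not JS per se.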

Next chapter confirms that a is rubbish.

The chapter after that explains some basic Comp Sci about when you can trust something (and discredits something that can help in a lot of cases, at least if you do it correctly (which is too hard)).

Chapter after that rehashes the idea that you can't trust the environment unless you trust the whole environment, which is also the same everywhere. (I also refer to trusting the compiler)

The next chapter is titled "WELL THEN, COULDN'T I WRITE A SIMPLE BROWSER EXTENSION THAT WOULD ALLOW JAVASCRIPT TO VERIFY ITSELF?". And guess what, the author agrees. You can indeed do this. If you're just doing it for yourself, or for a single kind of crypto, you could also make a plugin for that. Which is what's being done with the HTML5 crypto extension (the W3C Web Cryptography API). He then claims crypto is to PGP as programming languages are to Lisp, which is rubbish.

The author then goes on to actually talk about random generators. Which are not always required, but who cares, right? Then secure erase, which is only important if you expect the client device to be exploited. Then timing attacks, which are even more specific and can be alleviated easily enough.

Then he tries to generalize his claim to remove JS specifically from the equation, removing his last viable (but not definitive) arguments.

Some hating on key management, which is justified but again bollocks wrt the main argument. (Not to mention it's a problem everywhere, and it can be solved like everywhere.)

Some hate on people running old browsers, which has actually been solved by background-auto-updating by now. (huzzah for all the added insecurity there)

Then something about graceful degrading. Which is fair except for him not sufficiently providing any reason JS crypto never works. (and not relevant). He apparently meant this with d. Depends greatly on the deployment situation, but in general it's FUD.


"We meant that you can't just look at a Javascript file and know that it's secure, even in the vanishingly unlikely event that you were a skilled cryptographer, because of all the reasons we just cited."

Yeah. Welcome to programming. There's absolutely no truth to this claim, btw. Vaguely referring to a large body of rubbish is not an argument.

The rest does not even try to take a direct shot anymore. Something about how users that use 100 programs are more likely to find an insecure one than people that use only 2 or 3.

He's the kind of guy that claims cracking and rewriting SSL connections is easy, whereas using AES for a secure "cryptosystem" is hard. I don't know what's up with this guy.
John Young | 24 Jul 04:01 2014

Increased and Diverse Disclosure Initiatives Needed

Cryptome has canceled the Kickstarter. In keeping with the purpose of
the Kickstarter, it urges support for increased and diverse disclosure
initiatives.

A few suggestions:

1. Many more and diverse disclosure initiatives are needed to broaden
public participation, to diversify content and to increase unpaid access.

2. They should be novel and unexpected.

3. They should evolve and avoid being static, preferably brand-free.

4. These may be online, offline or neither, inventive and variable.

5. They may be short- or long-lived or episodic and erratic.

6. Might be hit and run, for a single disclosure or unpredictable series.

7. Provided by individuals, groups or variable.

8. Funded by individuals, groups or variable.

9. Anonymous, nonymous, pseudonymous or variable.

10. Legal, extra-legal, quasi-legal, pushing against legal or variable.

11. Low-key, low-profile, low-recognition, the opposite or variable.

12. Reputable, disreputable or variable.

13. Risky, dangerous, outrageous, the opposite or variable.

14. Citizens' duty should be to disclose, resist secrecy, official 
secrecy most so.
Maarten Billemont | 22 Jul 06:46 2014

Hashing power of attackers

Is there any kind of recent estimation of what kind of hashing power we should expect identity thieves and
other attackers to possess?  Is there public research to demonstrate what kind of cost would be associated
with, say, 10B, 50B, 100B SHA-256 hashes per second?  Can we expect the cost for increasing the speed of
hashing to increase linearly for all hashes?

To get started, I found a few numbers on

Hash Type       PC1         PC2         PC3         PC4         PC5
MD4             15445M c/s  4245M c/s   19868M c/s  5718M c/s   183232M c/s
MD5             7893M c/s   2802M c/s   10436M c/s  3178M c/s   93800M c/s
SHA1            2495M c/s   879M c/s    3833M c/s   1103M c/s   29528M c/s
SHA256          1036M c/s   337M c/s    1413M c/s   406M c/s    12328M c/s
SHA512          179M c/s    103M c/s    383M c/s    90M c/s     1952M c/s
SHA-3(Keccak)   157M c/s    91M c/s     277M c/s    111M c/s    2005M c/s

The scrypt paper has a table with cost estimates:

Table 1. Estimated cost of hardware to crack a password in 1 year.

KDF             6 letters   8 letters   8 chars     10 chars    40-char text    80-char text
DES CRYPT       < $1        < $1        < $1        < $1        < $1            < $1
MD5             < $1        < $1        < $1        $1.1k       $1              $1.5T
MD5 CRYPT       < $1        < $1        $130        $1.1M       $1.4k           $1.5 × 10^15
PBKDF2 (100 ms) < $1        < $1        $18k        $160M       $200k           $2.2 × 10^17
bcrypt (95 ms)  < $1        $4          $130k       $1.2B       $1.5M           $48B
scrypt (64 ms)  < $1        $150        $4.8M       $43B        $52M            $6 × 10^19
PBKDF2 (5.0 s)  < $1        $29         $920k       $8.3B       $10M            $11 × 10^18
bcrypt (3.0 s)  < $1        $130        $4.3M       $39B        $47M            $1.5T
scrypt (3.8 s)  $900        $610k       $19B        $175T       $210B           $2.3 × 10^23

How realistic are these numbers (and are the odd drops such as $175T -> $210B typos?), how modern are they,
and is there any other reliable research in this area?  In particular, I'm interested in finding out about
the different classes of attackers and what kind of hashing power we might expect from them (script kiddie,
criminal group with e.g. a botnet, state / well-funded organization).
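As a back-of-envelope aid to the question, this is the arithmetic I'd use to turn an assumed hash rate into search time (the rate and character set below are illustrative assumptions, not measurements):

```javascript
// Exhaustive-search time for a random password, given an assumed
// hash rate. Result is in seconds; ignores salting, parallel targets,
// and the fact that real passwords are far from uniform.
function bruteForceSeconds(charsetSize, length, hashesPerSecond) {
  return Math.pow(charsetSize, length) / hashesPerSecond;
}

// Example: 8-char mixed-case alphanumeric (62 symbols) at an assumed
// 10 billion SHA-256/s: 62^8 ≈ 2.18e14 guesses, roughly 6 hours.
const secs = bruteForceSeconds(62, 8, 10e9);
```

Which is why the interesting question is the attacker's hashes-per-second figure, not the formula.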

— Maarten Billemont (lhunath) —
me: – business: http://www.lyndir.com

ianG | 20 Jul 11:37 2014

who cares about advanced persistent tracking?

From the "strange bedfellows" department, who cares about us all being
tracked everywhere?  The Chinese, that's who ;)

BEIJING  - Chinese state broadcaster CCTV has accused US technology
giant Apple of threatening national security through its iPhone's
ability to track and time-stamp a user's location.

The "frequent locations" function, which can be switched on or off by
users, could be used to gather "extremely sensitive data", and even
state secrets, said Ma Ding, director of the Institute for Security of
the Internet at People's Public Security University in Beijing.

The tool gathers information about the areas a user visits most often,
partly to improve travel advice. In an interview broadcast Friday, Ma
gave the example of a journalist being tracked by the software as a
demonstration of her fears over privacy.

"One can deduce places he visited, the sites where he conducted
interviews, and you can even see the topics which he is working on:
political and economic," she said.

Eugen Leitl | 11 Jul 13:51 2014

hashes based on lots of concatenated LUT lookups

It's hard to make a cryptocurrency hash that's ASIC-proof.

Cheap/multisource server/PC COTS hardware has large memory
size, and intrinsic random access latencies that can't be
much improved upon for physical reasons (embedded on-die memory
is limited in size due to die yield reasons, so large LUTs
must live in external memory, which is always much slower).

As such, any hash that needs lots of serial/concatenated
lookups on large (several GByte), random (same preparation as
one-time pads), memory-locked LUTs to compute is
ASIC/FPGA/GPU-proof, since it can't be parallelized without
replicating the expensive LUT. A dedicated hardware LUT has no
price advantage over a COTS-based LUT, though at very large
scales LUTs requiring no refresh are more energy-efficient.

LUT size can be variable to track technology improvements.
Distribution of several GByte LUT across participating nodes
is not too difficult with P2P protocols (Bittorrent & Co)
as it only happens once on bootstrap.

Memory-bound code, especially if run at low priority, does
not make end-user all-purpose (an ASIC is intrinsically special-purpose)
hardware unusable for other tasks the way GPU mining does.

How would you construct such a hash?
John Young | 10 Jul 22:39 2014
Lodewijk andré de la porte | 10 Jul 12:54 2014

Finally! Hyperledger is a "trust N out of a selected M" ledger system!

With this nifty little tool one can manage pools that validate transactions. So instead of a consortium of anonymous miners motivated exclusively by profit you can trust a consortium selected according to a predefined procedure.

Then, if you trust the procedure, you can probably trust the consortium. With the trust problem solved, you can very likely use money happily, as you should.

Fizz-bang Bitcoin is much less unique and useful. People will have a cheaper alternative that seems like it's just as good and more usable.

The problem is that consortia are never good enough. There's always too big an opponent that can take down too much of a consortium. Bitcoin is a shade stronger than that, but much more expensive.

I don't think it will take off though, there doesn't seem to be an early adopter advantage.

Ondrej Mikle | 29 Jun 22:25 2014

Fault attacks on Bitcoin's secp256k1

Could anyone give an example of what flaws a secp256k1 implementation needs to have
in order to succumb to the fault attack described in this tweet: ?

It mentions that an implementation is susceptible "unless the implementation
checks everything", but doesn't go into details.

I don't understand the fault attacks much, but IIRC it requires a raw point that
is not on the curve to enter an incorrectly written algorithm. I don't see where
the problematic raw point comes into play.
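For what it's worth, the usual safeguard against this class of attack is to validate every externally supplied point against the curve equation (y² = x³ + 7 mod p for secp256k1) before using it; a fault- or attacker-injected point off the curve lands in a different, often weak, group. A sketch (the constants are the published secp256k1 field prime and generator; this checks membership only, not full protocol-level validation):

```javascript
// secp256k1 field prime: 2^256 - 2^32 - 977
const P = 2n ** 256n - 2n ** 32n - 977n;

function mod(a, m) {
  const r = a % m;
  return r >= 0n ? r : r + m;
}

// Point-on-curve check: reject anything that doesn't satisfy
// y^2 = x^3 + 7 (mod p), including out-of-range coordinates.
function isOnSecp256k1(x, y) {
  if (x < 0n || x >= P || y < 0n || y >= P) return false;
  return mod(y * y, P) === mod(x * x * x + 7n, P);
}
```

An implementation that "checks everything" runs this on decoded public keys and on intermediate results after point arithmetic, so an induced fault can't silently move the computation onto a weaker curve.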