John Young | 25 Apr 16:44 2015

Re: Shamir Reveals Sisyphus Algorithm

Indeed. Inherent leakiness of digital technology is a gift of Olympus,
aka James Bidzos aka RSA. Never enough fault-rich crypto.

Digitalization might be the quintessential foolhardiness, or comsec
opportunism, which has led to endless apologies, apologias, apoplexies.

And hurrahs among investors, urging hackers and insiders to spill
secrets sufficient to assure comsec-panicky buyers. Bug bounties
hilariously, assuredly, self-defeating as backdoor warnings.

At 09:49 AM 4/25/2015, Ben Laurie wrote:
>On 22 April 2015 at 17:24, John Young <jya@...> wrote:
> > Futility of trying to eliminate every single vulnerability in a given piece
> > of software.
>
>The name of the game is to protect the secrets despite bugs. And I
>don't mean with cryptography.
John Young | 23 Apr 02:02 2015

NSA releases 52,000 pages of William F. Friedman Collection

NSA releases 52,000 pages of William F. Friedman Collection, searchable:

https://www.nsa.gov/public_info/declass/friedman_documents/index.shtml

Document index spreadsheet:

https://www.nsa.gov/public_info/_files/friedmanDocuments/Friedman_Collection_Metadata.xlsx
John Young | 22 Apr 18:24 2015

Shamir Reveals Sisyphus Algorithm

Adi Shamir at RSA Conference:

Fully secure systems don't exist now and won't exist in the future.

Cryptography won't be broken, it will be bypassed.

Futility of trying to eliminate every single vulnerability in a given 
piece of software.

https://threatpost.com/fully-secure-systems-dont-exist/112380
Fabio Pietrosanti (naif)

OpenPGP in Python: Security evaluations?

Hi all,

For any developer wanting to use OpenPGP in a Python application, the
main choice currently is python-gnupg, a wrapper on top of the GnuPG
binary (https://pythonhosted.org/python-gnupg/).

Architecturally that's a very poor choice, with plenty of constraints
(for example, you need to enable "/bin/sh" execution in the AppArmor
sandboxing profile of a Python application under Linux).

Currently there are only two pure-Python OpenPGP implementations:

* PGPy: https://github.com/SecurityInnovation/PGPy

* OpenPGP-Python: https://github.com/singpolyma/OpenPGP-Python

Both stacks rely on the Python cryptography package for their
cryptographic primitives: https://pypi.python.org/pypi/cryptography
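
For concreteness, a minimal signing round-trip with PGPy looks roughly
like the sketch below (untested; the key file name is made up and the
key is assumed to have no passphrase):

import pgpy

# Load a private key from an ASCII-armored file; from_file returns the
# key plus a map of any extra keys found in the same blob.
key, _ = pgpy.PGPKey.from_file("alice_private.asc")  # hypothetical file

# Wrap the plaintext, sign it, and attach the signature to the message.
message = pgpy.PGPMessage.new("test payload")
message |= key.sign(message)

# Verification needs only the public half of the key.
assert key.pubkey.verify(message)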

We're considering switching away from GnuPG for the server-side PGP
processing and would like to ask the list for opinions on those
implementations.

Is anyone working on metrics to evaluate the security of an OpenPGP
implementation, and/or has anyone already evaluated PGPy or
OpenPGP-Python?

-- 
Fabio Pietrosanti (naif)
HERMES - Center for Transparency and Digital Human Rights
http://logioshermes.org - https://globaleaks.org - https://tor2web.org -
https://ahmia.fi
=JeffH | 20 Apr 18:01 2015

stanford talk: Juan Garay: The Bitcoin Backbone Protocol: Analysis and Applications

This seems to be their associated paper:

https://eprint.iacr.org/2014/765.pdf

Subject: Tuesday, April 21 -- Juan Garay: The Bitcoin Backbone Protocol: Analysis and Applications
From: David Wu <dwu4@...>
Date: Thu, 16 Apr 2015 18:56:43 -0700
To: security-seminar@...

   The Bitcoin Backbone Protocol: Analysis and Applications

                          Juan Garay

                   Tuesday, April 21, 2015
                        Talk at 4:15pm
                          Gates 498

Abstract:

Bitcoin is the first and most popular decentralized cryptocurrency to date.
In this work, we extract and analyze the core of the Bitcoin protocol,
which we term the Bitcoin "backbone," and prove two of its fundamental
properties which we call "common prefix" and "chain quality" in the static
setting where the number of players remains fixed. Our proofs hinge on
appropriate and novel assumptions on the "hashing power" of the
adversary relative to network synchronicity; we show our results to be
tight under high synchronization.

Next, we propose and analyze applications that can be built "on top" of the
backbone protocol, specifically focusing on Byzantine agreement (BA)
and on the notion of a public transaction ledger. Regarding BA, we observe
that Nakamoto's suggestion falls short of solving it, and present a simple
alternative which works assuming that the adversary's hashing power is
bounded by 1/3. The public transaction ledger captures the essence of
Bitcoin's operation as a cryptocurrency, in the sense that it guarantees the
liveness and persistence of committed transactions. Based on this notion
we describe and analyze the Bitcoin system as well as a more elaborate BA
protocol, proving them secure assuming high network synchronicity and that
the adversary's hashing power is strictly less than 1/2, while the
adversarial bound needed for security decreases as the network
desynchronizes.

This is joint work with Aggelos Kiayias (U. of Athens) and Nikos Leonardos
(U. Paris Diderot -- Paris 7).
Ron Garret | 17 Apr 19:56 2015

Introducing SC4 -- feedback appreciated

TL;DR: I took tweet-NaCl-JS and wrapped a little PGP-like webapp around it.  I would like to solicit
feedback and code review from this community before I submit it for a formal audit and release it to the
general public.

Links:

Source code: https://github.com/Spark-Innovations/SC4

Live demo: https://sc4.us/sc4.html

FAQ for experts: http://sc4.us/expert_faq.html

FAQ for non-experts: http://sc4.us/regular_faq.html

Note that the FAQ links are not secure.  This will be fixed eventually.  The production push process is a work-in-progress.

Unique features of SC4:

1.  It is a standalone web application.  The server only serves static files.  You can even run SC4 from a FILE:
URL, though this requires the keys to be embedded in the code.  SC4 includes code to automatically generate
a standalone version.  This is mainly intended to be a proof-of-concept, but it does work.

2.  It’s tiny, and therefore easy to audit.  It consists of three standard libraries (tweet-NaCl, jQuery,
and purify) plus <1000 lines of additional code, and that includes the HTML and CSS.

3.  It runs in FF, Chrome and Safari.  It might even run in IE but I haven’t tried it.

SC4 aims for a point in the design space that balances security against ease of use.  PGP is bullet-proof, but
not widely deployed because there is a lot of friction in getting it up and running.  SC4 aims to eliminate
this friction while remaining reasonably secure.  It is also based on open standards so that more secure
implementations can be easily produced in the future.  (Part of my long-term plan is to build an HSM dongle
using a Teensy 3 board.)
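
For those unfamiliar with NaCl: the primitive SC4 builds on is
crypto_box, i.e. public-key authenticated encryption. A rough Python
sketch of the same operation using PyNaCl (illustrative only; SC4
itself is JavaScript):

from nacl.public import PrivateKey, Box

# Each party holds a Curve25519 keypair.
sk_alice = PrivateKey.generate()
sk_bob = PrivateKey.generate()

# Alice encrypts to Bob; the box is authenticated, so Bob also knows
# the message came from Alice's key.
box = Box(sk_alice, sk_bob.public_key)
ciphertext = box.encrypt(b"hello, bob")

# Bob opens it with the mirrored box.
assert Box(sk_bob, sk_alice.public_key).decrypt(ciphertext) == b"hello, bob"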

Feedback and constructive criticism much appreciated.  Also, I’m seeking someone to serve as a paid
consultant on this project.  If you’re interested please contact me off-line.  My SC4 key is:

---START KEY---
X-sc4-content-type: public-key
From: ron@...
Timestamp: Fri, 17 Apr 2015 17:40:56 GMT
AocfySUwQXhMGFezXFEJKPL77AoMLupwREpCeOZgRB4=
RBDrBehSHbm1x/o+yPFrpdD6kWwSV3QQI8S/y8MdeEg=
JaP8eUTkBh2OKRPJYSti9uTuB/vd8a+HV9rCCknw7l95
a9C8lRa1PfP7/rcR8qwUM3BvXkBvT8kaZMJhcCoPCw==
---END KEY---

Thanks,
Ron Garret
Thierry Moreau | 17 Apr 19:25 2015

Entropy is forever ...

Dear all:

Quoting the basic definition of entropy from Wikipedia, "In information 
theory, entropy is the average amount of information contained in each 
message received. Here, message stands for an event, sample or character 
drawn from a distribution or data stream." In applied cryptography, the 
entropy of a truly random source of "messages" is an important 
characteristic to ascertain. There are significant challenges when 
applying the information theory, probability, and statistics concepts to 
applied cryptography. The truly random source (and computations using 
random message data) must be kept secret. Also the following dilemma 
should be noted: the truly random source is needed in a digital 
processor system typically engineered with determinism as a design goal 
derived from the basic reliability requirement. Quantitatively, the 
entropy measure for applied cryptography, in the order of hundreds of 
bits, is way beyond any workable statistical analysis processes. In 
practice, a truly random source usable for applied cryptography is a 
system procurement issue that can seldom be blindly delegated as an 
ordinary operating system service. Thus, one wants a reliable source of 
uncertainty, a trustworthy one that can barely be tested, as a universal 
operating system service totally dependent on hardware configuration.

Applying the information theory to actual situations is error-prone. Is 
there a lower entropy in "Smith-725" than in "gg1jXWXh" as a password 
string? This question makes no sense as the entropy assessment applies 
to the message source. A password management policy that rejects 
"Smith-725" as a message originating from the end-user population 
actually constrains this message source with the hope of a higher 
average amount of information in user-selected passwords. From a single 
end-user perspective having to deal with an ever growing number of 
passwords, the entropy concept appears as a formalization of the 
impossible task he/she faces.

Significant conceptual deviation may occur from the common (and correct) 
system arrangement where a software-based pseudo-random number generator 
(PRNG) of a suitable type for cryptography is initially seeded from a 
secret true random source and then used for drawing secret random 
numbers. It is often inconvenient for statistical testing to apply 
directly to the true random source messages, but statistical testing of 
the PRNG output gives no clue about the true random source. The design 
of PRNG seeding logic is an involved task dependent on the true random 
source, which may be hard to model in the first place. In actual 
system operations, the inadequate seeding may have catastrophic indirect 
consequences but it may be difficult to detect, and it is certainly a 
challenging error condition for service continuity (programmers may be 
inclined to revert to insecure PRNG seeding when the proper true random 
source breaks down).

Despite these pitfalls, I assume my reader shares my endorsement of 
the true random seeding of a cryptographic PRNG as the main source of 
random secrets for a digital processor system dedicated to cryptographic 
processing. As this PRNG output is being used in various ways, chunks of 
the output sequence may be disclosed to remote parties. It is an 
essential requirement for a cryptographic PRNG that no output chunk may 
allow the recovery of its internal state (i.e. some data equivalent to 
PRNG seed data leading to the same PRNG output sequence as the secret PRNG).
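
To illustrate that requirement with a toy construction (a sketch
assuming SHA-256 is one-way; not the Linux algorithm, not a vetted
DRBG): each output block hashes the seed together with a counter, so
recovering the internal state from output chunks would require
inverting the hash.

import hashlib

class HashPRNG:
    def __init__(self, seed: bytes):
        self._seed = seed    # e.g. 250 bytes = 2000 bits of true randomness
        self._counter = 0

    def read(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            # Output block i is SHA-256(seed || i); outputs reveal the
            # seed only if SHA-256 can be inverted.
            ctr = self._counter.to_bytes(8, "big")
            out += hashlib.sha256(self._seed + ctr).digest()
            self._counter += 1
        return out[:n]

prng = HashPRNG(seed=b"\x42" * 250)  # stand-in for a true random seed
print(prng.read(32).hex())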

In this note, I challenge the view that an entropy pool maintained by an 
operating system ought to be depleted as it is used. I am referring here 
to the Linux "entropy pool." My challenge does not come through a review 
of the theory applied to the implementation. Instead, I propose a 
counter-example in the form of the above arrangement and a very specific 
example of its use.

The central question is this problem. A system is booted and receives 
2000 bits of true randomness (i.e. a 2000-bit message from a source 
with 2000 bits of entropy) that are used to seed a cryptographic PRNG 
having an internal state of 2000 bits. This PRNG is used to generate 4 
RSA key pairs with modulus sizes of 2400 bits. The private keys are kept 
secret until their use in their respective usage contexts. No data leak 
occurred during the system operation. After the key generation, the 
system memory is erased. What is the proper entropy assessment for each 
of the RSA key pairs (assume there are 2^2000 valid RSA moduli for a 
modulus size of 2400 bits, a number-theoretic assumption orthogonal to 
the entropy question)?

My answer is that each of the 4 RSA key pairs is independently backed 
by 2000 bits of entropy assurance. The entropy characterization 
(assessment) of a data element is a meta-data element indicating the 
entropy of a data source at the origin of the data, plus the implicit 
statement that no information loss occurred in the transformation of the 
original message into the assessed data element. Accordingly, my answer 
should be made more precise by referring to an unbiased RSA key 
generation process (which should not be considered a reasonable 
assumption for the endorsement of lower ranges of entropy assessments).

To summarize, the entropy assessment is a characterization of the data 
source being used as a secret true random source. It also refers to the 
probability distribution of messages from the data source and the 
quantitative measure of information contents derived from the 
probability distribution according to the information theory. This 
mathematical formalism is difficult to apply to actual arrangements 
useful for cryptography, notably because the probability distribution is 
not reflected in any message. The information theory is silent about the 
secrecy requirement essential for cryptographic applications. Maybe 
there is confusion by assuming that entropy is lost when part of the 
random message is disclosed, while only (!) data suitability for 
cryptographic usage is being lost. In applying the information theory to 
the solution of actual difficulties in applied cryptography, we should 
address secrecy requirements independently. The probability distribution 
preservation through random message transformations is an important 
lesson from the theory that might have been overlooked (at least as an 
explicit requirement).

A note about the genesis of the ideas put forward. In my efforts to 
design applied cryptography key management schemes without taking 
anything for granted and paying attention to the lessons from the 
academia and their theories, I came upon a situation very similar to the 
above problem statement. The 2000-bit random message from a truly random 
source with 2000 bits of entropy is a simplification of the actual situation 
in which a first message transformation preserves the probability 
distribution of random dice shuffling. In the above problem statement, 
the PRNG seeding is another distribution preserving transformation. The 
actual PRNG is based on the Blum-Blum-Shub x^2 mod N generator, which 
comes with two bits of entropy loss upon seeding. The above problem 
statement is thus concrete.
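
For reference, the x^2 mod N generator in toy form (absurdly small
parameters, for illustration only; real use requires large secret
primes p, q ≡ 3 (mod 4) and a seed coprime to N):

p, q = 499, 547            # both primes are ≡ 3 (mod 4)
N = p * q

def bbs_bits(seed, nbits):
    x = (seed * seed) % N  # squaring maps the seed into the quadratic residues
    bits = []
    for _ in range(nbits):
        x = (x * x) % N    # the Blum-Blum-Shub iteration
        bits.append(x & 1) # emit the least significant bit
    return bits

print(bbs_bits(12345, 16))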

Maybe the term entropy is used, more or less by consensus, with a 
definition departing from the information theory. Indeed, NIST documents 
covering the topic of secret random numbers for cryptography use 
conflicting definitions surrounding the notion of entropy.

Although my own answer to the stated problem puts into question the 
Linux "entropy pool" depletion on usage, I do not feel competent to make 
suggestions. For instance, my note hints that a PRNG algorithm selection 
should be part of the operating system service definition for 
/dev/?random offered for cryptographic purposes but I have just a vague 
idea of whether and how the open source community might move in this 
direction.

Entropy is forever ... until a data leak occurs.
A diamond is forever ... until burglars break in.

Regards,

- Thierry Moreau
Natanael | 14 Apr 15:04 2015
Picon

Feedback requested: ECC anonymous signatures with per-message revocable anonymity and pair-wise message linkability

The goal is to be able to publish signed messages anonymously and then later on prove it was you who signed them, at a time of your choosing.

NOTE: I'm not a professional cryptographer, just a hobbyist, but I think I've got a good grasp of how things work. I'm willing to learn, feel free to point out errors or flawed assumptions! It might be complete crap, or it might be useful. I'm trying to keep it reasonably formal, but there's probably some ambiguity left. I'm happy to answer questions.

Bonus features:
1: To be able to publish multiple messages without revealing at that time that they're signed by the same person.
2: To be able to selectively link together multiple sets of arbitrarily chosen messages at a later point in time.
3: Not needing to store any state, so you don't need to store any data beyond your original private key. The random numbers needed are derived from values you already have whenever you need them.

Here's the most recent version of it that I'm considering:

You start off with a preexisting ECC keypair publicly associated with you, with private key x_p and public key g^x_p (p means known publicly). You have a message m_i (there may be multiple messages; i identifies the current message). To sign a message anonymously, you create a new keypair. To be able to prove it was created by you at a later point in time, without allowing others to claim authorship, this is how it is created:

You first compute a unique per-message committed value H_ci = HMAC(m_i, x_p). From that you compute a per-message value used for derivation of a new ECC keypair, H_di = HMAC(H_ci, m_i).

To derive the new keypair (note that we are using the ECC versions of multiplication/division, etc, not arithmetic), you compute the derived private key x_i = x_p * H_di and the derived public key g^x_i = g^x_p + H_di (deriving new valid keypairs is a neat feature of many ECC implementations; also, deriving nonces from the private key + the message has precedent in deterministic signatures). This keypair is used to sign m_i.
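
To make the derivation concrete, here is a sketch using the
python-ecdsa library on secp256k1. Two assumptions on my notation: x_p
serves as the HMAC key in H_ci, and the keypair derivation is plain
scalar multiplication modulo the group order:

import hashlib, hmac
from ecdsa import SECP256k1, SigningKey

n = SECP256k1.order

def hmac_int(key: bytes, msg: bytes) -> int:
    return int.from_bytes(hmac.new(key, msg, hashlib.sha256).digest(), "big")

sk_p = SigningKey.generate(curve=SECP256k1)   # main keypair (x_p, g^x_p)
x_p = sk_p.privkey.secret_multiplier

m_i = b"anonymous statement #1"

# Per-message committed value H_ci and derivation value H_di.
H_ci = hmac.new(x_p.to_bytes(32, "big"), m_i, hashlib.sha256).digest()
H_di = hmac_int(H_ci, m_i)

# Derived keypair: x_i = x_p * H_di (mod n).
x_i = (x_p * H_di) % n
sk_i = SigningKey.from_secret_exponent(x_i, curve=SECP256k1)

# Sign the message anonymously with the derived key.
sig = sk_i.sign(m_i)
assert sk_i.get_verifying_key().verify(sig, m_i)

# Later reveal: given H_ci, m_i and g^x_p, anyone recomputes H_di and
# checks that H_di * (x_p * G) equals the derived public point.
pub_p = sk_p.get_verifying_key().pubkey.point
assert pub_p * H_di == sk_i.get_verifying_key().pubkey.point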

To later on prove that you were the signer of the original message, you compute and publish H_ci, signed with both keypairs x_p;g^x_p and x_i;g^x_i (you probably should also include m_i and g^x_p and g^x_i in the signed message). Thus anybody can compute g^x_p + HMAC(H_ci, m_i) = g^x_p + H_di = g^x_i and validate both signatures to confirm the following:

1: That the signing keypair was derived from the original keypair, using a value derived from that message (thanks to the hash commitment, it isn't merely random), so the keypairs are related (the exact origin of H_ci is irrelevant for verification, it just acts as a committed nonce/salt, H_di is the important value; H_ci is derived from the message + main private key in order to keep things stateless).
2: That as a result of 1, the message must have been known to the holder of the original keypair at the time of signing, as the signature can't be created before the keypair is created (this binds the message to your keypair such that nobody else can claim authorship in contradiction to you - assuming your original signature of the message was preserved when the message was published and known by the verifying parties). Timestamping a hash of the signed message before publishing can assist you in your claim of authorship as you can prove your signature is the earliest timestamped one.
3: That both public keys represent *valid* keypairs which you hold and can sign messages with.
4: That you hold the private keys of both keypairs *simultaneously* (decreases risk of collusion between the signer and another party).
5: That you "approve of this message" with both the signing keypair and your main keypair (you intend to prove authorship and link the keys).

It is also possible to link together pairs of messages to the same pseudonymous identity without revealing at that time that they're related to your main keypair, and you can even create a chain of these proofs for multiple messages, which in turn can be linked to your main keypair at will:

You compute l_ij = x_i/x_j = x_p * H_di / x_p * H_dj (where l means linking value, j represents the second message), where g^x_i + (x_i/x_j) = g^x_i + l_ij = g^x_j. l_ij here has the same function as H_di above, and you sign l_ij with both keypairs x_i;g^x_i and x_j;g^x_j, thus proving you hold both keypairs simultaneously and intend to link them (again, you should include g^x_i, g^x_j, m_i and m_j in the signed message).

This version does not show that you already knew one message while signing the other, as the proof of the tie between x_p;g^x_p and x_i;g^x_i does, so it proves nothing about the origins of the keypairs; but you probably don't need that in this case. You can however later on decide to link the messages to your main keypair by revealing the committed values H_ci, H_cj as above, which then *does* prove the origin of the keypairs.

Alexander Klimov | 7 Apr 13:31 2015

New algorithm for the discrete logarithm problem on elliptic curves

Although calculating the exact time/memory complexity of algorithms 
based on Gröbner bases is not easy, the new approach is 
interesting:

  A new algorithm for computing discrete logarithms on elliptic 
  curves defined over finite fields is suggested. It is based on a new 
  method to find zeroes of summation polynomials. In binary elliptic 
  curves one is to solve a cubic system of Boolean equations. Under a 
  first fall degree assumption the regularity degree of the system is 
  at most $4$. Extensive experimental data which supports the assumption is 
  provided. An heuristic analysis suggests a new asymptotical 
  complexity bound $2^{c\sqrt{n\ln n}}, c\approx 1.69$ for computing discrete 
  logarithms on an elliptic curve over a field of size $2^n$. For 
  several binary elliptic curves recommended by FIPS the new method 
  performs better than Pollard's. 

<http://eprint.iacr.org/2015/310.pdf> or
<http://arxiv.org/pdf/1504.01175v1>

--

-- 
Regards,
ASK
Erkan Uslu | 7 Apr 00:12 2015

slide attack to lightweight algorithms

Hi! Which lightweight algorithms are vulnerable to slide attacks? Specifically, can we apply the slide attack to the Tiny Encryption Algorithm (TEA) or PRESENT?
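
For context: a classic slide attack needs all rounds to be identical,
key material included. TEA mixes a running multiple of the constant
delta into every round, so consecutive rounds differ, and PRESENT's
key schedule likewise injects a round counter. A toy TEA sketch
(illustrative only) shows where the per-round variation enters:

def tea_encrypt(v, key, rounds=32):
    # v: pair of 32-bit words, key: four 32-bit words.
    v0, v1 = v
    delta, mask, s = 0x9E3779B9, 0xFFFFFFFF, 0
    for _ in range(rounds):
        s = (s + delta) & mask  # the running sum makes each round distinct
        v0 = (v0 + ((((v1 << 4) & mask) + key[0]) ^ (v1 + s)
                    ^ ((v1 >> 5) + key[1]))) & mask
        v1 = (v1 + ((((v0 << 4) & mask) + key[2]) ^ (v0 + s)
                    ^ ((v0 >> 5) + key[3]))) & mask
    return v0, v1

print(tea_encrypt((0x01234567, 0x89ABCDEF), (1, 2, 3, 4)))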

Thanks in advance for all responses.
