This repository has been archived by the owner on Dec 29, 2022. It is now read-only.

Change digital signature algorithms to use SHA256 #17

Closed
GoogleCodeExporter opened this issue Mar 7, 2015 · 23 comments

Comments

@GoogleCodeExporter

Change as per NIST recommendations

Original issue reported on code.google.com by stevew...@gmail.com on 12 Aug 2008 at 5:46

@GoogleCodeExporter

One note on this: it's probably not necessary to change the key hash algorithm 
to SHA256, since the usage model already accepts the possibility of collisions 
(indeed, they're rather easy to generate, since the hash is truncated to four 
bytes).
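
To illustrate why collisions are tolerable here, a minimal sketch (hypothetical helper, not the actual Keyczar code): the truncated hash only narrows down which keys to try, and the real cryptographic check still has to pass for exactly one of them.

// Sketch only: a 4-byte key identifier derived from the key material.
// Collisions are cheap to find, but they only add extra candidate keys to try.
import java.security.MessageDigest;
import java.util.Arrays;

final class KeyHashSketch {
  static final int KEY_HASH_SIZE = 4;

  static byte[] keyHash(byte[] keyMaterial) throws Exception {
    byte[] full = MessageDigest.getInstance("SHA-1").digest(keyMaterial);
    return Arrays.copyOf(full, KEY_HASH_SIZE); // truncation is what makes collisions easy
  }
}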

Also, rather than introducing incompatibilities by just changing the hash used 
for message HMACs, I think a better approach (actually suggested by 
bleichen@google.com, not my idea) is just to introduce new key types that have 
SHA256 as their algorithm.  That way we can seamlessly upgrade a key set by 
just creating a new key with the stronger hash algorithm (which should be the 
default).  Assuming both sides have updated versions of the library, the normal 
Keyczar key rotation support will make the transition perfectly smooth.
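
As a rough sketch of the idea (names hypothetical, not Keyczar's actual type list): the digest is implied by the key type, so old and new keys can coexist and ordinary rotation moves traffic to the stronger one.

// Sketch: a new key type whose algorithm is SHA256-based, added alongside the old one.
enum SketchKeyType {
  HMAC_SHA1("HmacSHA1"),     // existing type
  HMAC_SHA256("HmacSHA256"); // new type; would be the default for newly created keys

  final String jcaMacName;   // standard JCA Mac algorithm name
  SketchKeyType(String jcaMacName) { this.jcaMacName = jcaMacName; }
}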

Comments?

Original comment by swillden@google.com on 20 Jan 2012 at 2:30

@GoogleCodeExporter

It seems to me that new key types would be more difficult, because you can only 
have one key type per keyset, so you would not be able to rotate in the new 
algorithm. Ideally, I think, you would add a digest="SHA256" field to the RSA and DSA 
public key JSON and, like with the RSA padding, assume "SHA1" if it doesn't exist, 
with a slightly different keyhash when the newer digest is used. Then you still 
have backward compatibility, and rotation would make the transition very smooth.
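
Something like the following, roughly (the "digest" field is the hypothetical addition; the other fields mirror the existing RSA public key JSON):

{
 "modulus":"...",
 "publicExponent":"...",
 "size":2048,
 "digest":"SHA256"
}

A key without the "digest" field would keep today's SHA1 behavior and today's keyhash, so old keys and old peers are unaffected.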

As the title says, RSA and DSA signing seem like the most critical here; it's 
2013, and SHA-1 is not really the right call for a digest used with digital 
signature algorithms.

It would probably be a good idea to do this for HMAC as well. It's not critical, 
but at least there would then be a means to upgrade in the future.

Original comment by jtu...@gmail.com on 24 Jan 2013 at 3:07

@GoogleCodeExporter

First, I'm wondering if the claim "one key type per keyset" is correct for 
keyczar. So far I was under the impression it is "one purpose per keyset".

I've introduced new key types in keymaster whenever extending an existing key 
type would have led to problems. E.g. I've done this for HMACs. The main reason 
there was to consistently truncate the HMAC. We had a hack that introduced a 
digest_length earlier, with an ugly default: no digest_length, or digest_length = 
0, means digest_length = 20. That led to bugs and confusion. Adding a new key 
type simplified a lot here. For example, it was possible to require that all 
fields in the key have a value set and to abandon values with special meaning: 
keys of the new key type with digest_length = 0 are invalid and get rejected.
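
A sketch of that validation contract (hypothetical names, not keymaster code): the new type simply refuses the old "0 means 20" convention.

// Sketch: the new key type requires an explicit, positive digest length.
final class HmacKeySketch {
  final int digestLength; // in bytes, always set explicitly

  HmacKeySketch(Integer digestLength) {
    if (digestLength == null || digestLength <= 0) {
      throw new IllegalArgumentException("digest_length must be set and positive");
    }
    this.digestLength = digestLength;
  }
}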

For DSA I'm relying on the fact that the standard does not allow hashes with a 
size smaller than the size of q. Hence extending the DSA type to 2048- and 
3072-bit p's, 256-bit q's and SHA256 (i.e. I'm always using the hash that fits 
the size of q) did not add any ambiguities.
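
In other words, roughly (a sketch of the FIPS 186-3 parameter sets, not keymaster code): the size of q alone determines the digest, so no extra field is needed.

// Sketch: pick the hash that matches the DSA subgroup size q.
final class DsaDigestSketch {
  static String digestForQ(int qBits) {
    switch (qBits) {
      case 160: return "SHA-1";   // legacy 1024-bit p, 160-bit q
      case 224: return "SHA-224"; // 2048-bit p, 224-bit q
      case 256: return "SHA-256"; // 2048- or 3072-bit p, 256-bit q
      default:  throw new IllegalArgumentException("unsupported q size: " + qBits);
    }
  }
}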

For RSA signatures it is not clear to me whether adding a field for the hash or 
adding a new, extended key type is best. In keymaster I added a new field. One 
of the issues here is that an application might do its own key management and 
not use the crypto library to copy a key, and hence overlook new fields. For 
keymaster it is easier than for keyczar to get some confidence that such changes 
should not break anything, by searching our code.

Original comment by bleic...@google.com on 24 Jan 2013 at 11:32

@GoogleCodeExporter

See KeyMetadata http://code.google.com/p/keyczar/wiki/KeyMetadata and KeyType 
http://code.google.com/p/keyczar/wiki/KeyType

KeyMetadata is where the key type is stored, one per keyset. It might be better 
if it were per purpose + asymmetric/symmetric; it's a more complex JSON 
deserialization, but certainly doable, and it would make it possible to rotate 
out algorithms.
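
For reference, today's metadata looks roughly like this (values illustrative), with a single "type" shared by every version in the keyset:

{
 "name":"Testing",
 "purpose":"DECRYPT_AND_ENCRYPT",
 "type":"AES",
 "encrypted":false,
 "versions":[{"versionNumber":1,"status":"PRIMARY","exportable":false}]}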

DSA option sounds reasonable.

If you have an additional field on RSA for digest, you can still validate 
clients using SHA1 with the same keyset; if you add a new KeyType, you can't.

I think with keyczar you can have reasonable confidence that adding a field 
wouldn't break things, because keys are stored in JSON. But I think that's 
outside the scope of the stated keyczar philosophy, and right now it seems like 
keyczar is stuck, for whatever reason, with SHA1. This bug was filed in 2008, 
and it's important that it just be fixed somehow.

If you could have multiple KeyTypes in one keyset, I would be all for just 
making a new KeyType every time something new needs to be addressed; it would 
make things easier, but that's not how keyczar is implemented right now.

Original comment by jtu...@gmail.com on 24 Jan 2013 at 1:57

@GoogleCodeExporter

Thanks. I wasn't aware of that. Keymaster has a key type per version, which of 
course simplifies things a bit. My proposal above mentioned by swillden was 
based on that assumption.

Alternatively: for keymaster I'm defining what a well-defined representation of 
a key is and what kinds of legacy representations need to be supported in 
addition. Such legacy representations include, for example, keys with missing 
arguments or keys that are not compatible between implementations. There is a 
tool that checks keysets for such legacy keys and for other problems such as 
insufficient key sizes.


Original comment by bleic...@google.com on 24 Jan 2013 at 4:01

@GoogleCodeExporter

I was thinking that the KeyType would need to be stored in the key JSON to 
change to that behavior, completely overlooking the fact that it could be stored 
in the versions section of the metadata. That would actually be a pretty doable 
implementation change. Most of the compatibility work could be done in the 
KeyczarTool, and most of the usability changes would be in the KeyczarTool as 
well; everything else API-wise is pretty well encapsulated. Just add a keyset 
"format" field to identify compatibility and upgrade-ability, specify the "kind" 
of keyset it is (in addition to the purpose) to make it easy to check for 
correct KeyTypes, and move the "type" into the versions. Key deserialization 
stays the same, and defaulting and rotating in new key types in the future 
becomes easy.

{
 "format":"1",
 "name":"Testing",
 "purpose":"DECRYPT_AND_ENCRYPT",
 "kind":"SYMMETRIC",
 "encrypted":false,
 "versions":[{"type":"AES", "versionNumber":1,"status":"ACTIVE","exportable":false},
             {"type":"AES","versionNumber":2,"status":"PRIMARY","exportable":false}]}

{
 "format":"1",
 "name":"Testing",
 "purpose":"DECRYPT_AND_ENCRYPT",
 "kind":"PRIVATE",
 "encrypted":false,
 "versions":[{"type":"RSA_PRIV", "versionNumber":1,"status":"ACTIVE","exportable":false},
             {"type":"RSA_PRIV","versionNumber":2,"status":"PRIMARY","exportable":false}]}

{
 "format":"1",
 "name":"Testing",
 "purpose":"ENCRYPT",
 "kind":"PUBLIC",
 "encrypted":false,
 "versions":[{"type":"RSA_PUB", "versionNumber":1,"status":"ACTIVE","exportable":false},
             {"type":"RSA_PUB","versionNumber":2,"status":"PRIMARY","exportable":false}]}

Original comment by jtu...@gmail.com on 24 Jan 2013 at 4:43

@GoogleCodeExporter

I think putting the key type in the metadata was a mistake in the creation of 
the first versions of Keyczar.  It was supposed to be a "port" (more or less) 
of Keymaster.

While Daniel and I have some disagreements with regard to the proliferation of 
key types, I think it's clear that it makes perfect sense for key versions to 
have different types with a common purpose. I'm not sure we need the "kind" 
field, and there are reasons to avoid making the metadata larger if we don't 
have to (e.g. the metadata string is used in session data, so making it bigger 
can make it impossible to use smaller symmetric keys to encrypt the session 
data), but other than that, my opinion is that this is the right approach.

With regard to the original problem (hash algorithm), I think I'd prefer to add 
a hash algorithm specifier in the version rather than create a different key 
type. From a logical perspective the key type is a combination of:

1.  Purpose
2.  Type
3.  Hash
4.  Mode
5.  Padding

and perhaps some other bits I'm not thinking of right now. Purpose should 
clearly be inherited from the keyset, the remainder should be specified 
per-version. Daniel argues for encoding all of it into the key type; I'd prefer 
to separate the pieces.
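
As a sketch of that separation (hypothetical names, not a proposed API): purpose stays on the keyset, and each key version carries the rest.

// Sketch: what a per-version description could hold once type moves out of the metadata.
final class KeyVersionSpecSketch {
  final String type;    // e.g. "RSA_PRIV"
  final String hash;    // e.g. "SHA256"
  final String mode;    // e.g. "CBC" (block ciphers only)
  final String padding; // e.g. "OAEP" (RSA encryption only)

  KeyVersionSpecSketch(String type, String hash, String mode, String padding) {
    this.type = type;
    this.hash = hash;
    this.mode = mode;
    this.padding = padding;
  }
}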

I have been thinking that we should defer significant changes like this until 
we're ready to rev the Keyczar/Keymaster version field in the messages. 
However, upon reflection I don't think those should be tied to one another. The 
version field in the message structure should indicate the message format 
version; I think it's reasonable to change key formats separately from message 
formats.

However, I am concerned about forward and backward compatibility. I think if 
we're going to make these sorts of changes we need to ensure that an old 
version of Keyczar that does not support a new logical key type cannot fail 
silently if given a new key. It needs to error out rather than quietly 
encrypting a message incorrectly, for example.

Andrew Sacamano and I talked about this and came to the conclusion that perhaps 
the best approach is to create a new key file format, including a new metadata 
format. Still JSON, and as similar as possible to the previous format, but we 
think it's important to be deliberately incompatible. New Keyczar versions 
should be able to understand old keysets and keys, of course, but we want to 
make sure that old Keyczar versions choke on new keysets and keys. The most 
obvious way to accomplish this is to rename some required fields. Along the way 
we think we should also add a version number to the metadata.

This may be the solution to the session blob size problem as well.  If we 
continue using the old metadata and key format until we change the message 
format, then we don't have to worry about session blobs getting too large. When 
we change the message format we should switch from serialized JSON for the 
session blob to protobuf, which is a compact binary format (and the format 
Keymaster uses, so it'll improve interoperability).

Original comment by swillden@google.com on 29 Jan 2013 at 2:42

@GoogleCodeExporter

Just a note:
The main reason why I prefer just a single enum for the key type is that the 
type is in the wrong place (at least in keymaster): the type should be part of 
the key, not the metadata. The same goes for Purpose, Hash, Mode and Padding. 
The classes implementing individual key types in keymaster do not have access 
to the metadata (I believe the same might be true for keyczar too), hence such 
data must be passed during construction. Thus parameters such as hash, 
encryption mode and padding should either follow from the key type or be stored 
as part of the key material.

However, I'm pretty convinced that each type should only be suitable for one 
purpose. 
RSA seems like the only exception, though in my opinion encryption with RSA and 
signing with RSA should be treated as two separate cryptographic primitives and 
hence also have distinct key types.



Original comment by bleic...@google.com on 29 Jan 2013 at 5:24

@GoogleCodeExporter

I'm confused, Daniel (It happens a lot :-)), in comment #5 you said the 
Keymaster key type is in the version, not the metadata. Or did you mean the key 
purpose?

In Keyczar, type and purpose are currently in the metadata, while mode and 
padding are in the key versions. Hash isn't specified anywhere because there's 
only one hash algorithm supported. I'd like to rev the Keyczar key formats and 
move type to the key version where it ought to be.

Original comment by swillden@google.com on 29 Jan 2013 at 5:28

@GoogleCodeExporter

I'd like to correct the suggestion that KeyMetadata is used in the 
SessionMaterial; it really only uses the AesKey JSON specifically.

An unofficial feature of the C# keyczar is that it has an optional KeyPacker 
interface that lets session material be packed any way you want. I include a 
BSON keypacker in my Unofficial namespace, not just to reduce the size, but 
because I needed to pack it with an additional "type" field to allow the 
possibility of a different symmetric KeyType being exchanged; this in effect 
required a two-pass BSON deserialization. I definitely think that there should 
be a different SessionMaterial format, not just for size and flexibility, but 
also because SignedSessions really should sign the SessionMaterial as well as 
the ciphertext, and I agree that it should come with a message format change. 
But as long as the JSON for the very specific KeyType "AES" doesn't change, you 
don't have to worry about a JSON change affecting SignedSessions.

In issue 121, for C#, I've actually gone through and implemented moving "type" 
from the metadata to KeyVersion as a proposal. Having a "kind" field helps keep 
the logic simple for what operations you can do with a keyset with the keytool 
(specifically addkey, pubkey, import). For instance, while you may have purpose 
SIGN_AND_VERIFY on the keyset, you probably shouldn't have a keyset that 
contains both HMAC_SHA1 and DSA_PRIV keys, and you need logic to deal with that. 
I also added a KeyMetadata version identifier field I named "format", in which 
I just put an arbitrary string "1" for this metadata change, and had my JSON 
deserializer treat a missing field as "0".

With Daniel's proposal of supporting new hashes with new KeyTypes, they 
shouldn't silently fail. At this point I'm pretty familiar with each 
implementation of keyczar (C++ the least, though), and they all do a 
string-to-class mapping to choose the right class to deserialize to, so if one 
implementation of keyczar doesn't have a specific KeyType, the mapping will 
fail, non-silently. Same with my metadata proposal: removing KeyType from the 
base level of the metadata will cause old versions to choke on the keyset. 
Also, using new KeyTypes for new features of the same algorithm makes backwards 
compatibility really, really easy.
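
A sketch of that fail-loudly behavior (hypothetical names, not any of the real implementations): if no class is registered for a key type string, the keyset is rejected instead of being half-read.

import java.util.Map;

// Sketch: string-to-class mapping that refuses unknown key types outright.
final class KeyTypeRegistrySketch {
  private final Map<String, Class<?>> byName;

  KeyTypeRegistrySketch(Map<String, Class<?>> byName) { this.byName = byName; }

  Class<?> resolve(String typeName) {
    Class<?> keyClass = byName.get(typeName);
    if (keyClass == null) {
      throw new IllegalArgumentException("Unsupported key type: " + typeName);
    }
    return keyClass;
  }
}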

The important thing in this issue is that keyczar is overdue for an algorithm 
spring cleaning, and the fact that it is so overdue demonstrates that keyczar 
needs an easy way to add and rotate algorithms in the future.

Above, Daniel Bleichen pointed out that support for an updated hash algorithm 
can be added without any change to the DSA key JSON, because the larger key 
sizes were part of the same FIPS additions for DSA2 as the hashes; that's an 
easy change, great! But when I investigated how I would do it for C#, I found 
that I can support reading in the key data and verifying and signing with the 
different digest and larger key size, but there aren't any C# libraries that 
support generating the larger key data. It turns out that when the spec came 
out in 2009 it all moved very rapidly, there weren't any test vectors, there 
was a lot of confusion, and while people cared about it in 2009, the C# world 
just moved on to something else.

I think it's important to be able to change algorithms. While it's good to be 
conservative about algorithm choices, it's also good to have stronger, or even 
just safe, alternative options available for the future, even if they are not 
the default, and for the implementations to be able to add them easily, like 
Shawn said, in a forward-alerting, backwards-compatible, safe way.

Critical:
DSA needs to be DSA2, or the KeyczarTools need a different asymmetric signing 
default.
RSA for signing needs SHA256-or-better support, and certainly should not use 
SHA1 by default.

Not critical, and probably shouldn't even be the default, but keyczar should 
offer for the future:
HMAC SHA256
AES then HMAC SHA256

Just having alternative symmetric KeyType options would also help highlight 
where the keyczar code base would need to become more flexible for future 
algorithms.

Original comment by jtu...@gmail.com on 29 Jan 2013 at 5:36

@GoogleCodeExporter

I think there needs to be clarification on terms. I've been using the term 
KeyVersion, as in the wiki page, to describe an element of the "versions" 
section of the metadata, rather than the key data JSON.
Original comment by jtu...@gmail.com on 29 Jan 2013 at 5:45

@GoogleCodeExporter

Original comment by jtu...@gmail.com on 2 Mar 2013 at 8:48

  • Added labels: Implementation-Python, Implementation-Java, Implementation-Cpp

@GoogleCodeExporter

What about creating a new algorithm-agnostic KeyType, say RSA_PRIV_SIGNING, 
which is required to have an algorithm in its JSON spec? That way, we're 
guaranteed to have non-supporting implementations fail loudly on the unknown 
KeyType, and the algorithm can still change with rotations. While we're mucking 
around, we're free to also add a version field or whatever else we're 
interested in to the metadata.
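
A sketch of the key-side contract this implies (hypothetical names, not the actual patch): the new type carries its digest and refuses to load without one, so there is no silent SHA1 fallback.

// Sketch: an algorithm-agnostic signing key type that requires an explicit digest.
final class RsaSigningKeySketch {
  final java.security.interfaces.RSAPublicKey publicKey;
  final String digest; // e.g. "SHA256"; required, no default

  RsaSigningKeySketch(java.security.interfaces.RSAPublicKey publicKey, String digest) {
    if (digest == null || digest.isEmpty()) {
      throw new IllegalArgumentException("signing key must specify its digest algorithm");
    }
    this.publicKey = publicKey;
    this.digest = digest;
  }

  String jcaSignatureName() {
    return digest + "withRSA"; // e.g. "SHA256withRSA", the standard JCA Signature name
  }
}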

Original comment by psch...@google.com on 16 Jul 2013 at 7:03

@GoogleCodeExporter

I have a proposed java implementation of this, with two new keytypes 
RSA_PRIV_SIGNING and RSA_PUB_SIGNING, available at:

https://code.google.com/r/pschorf-keyczar/source/detail?r=cbd3ac52eed0f6e99023276832294c5788579b22

Original comment by psch...@google.com on 18 Jul 2013 at 9:16

@GoogleCodeExporter

Err, 
https://code.google.com/r/pschorf-keyczar/source/detail?r=dec69ce40c5dd6ba51a9bc2e919059da77536f59

Original comment by psch...@google.com on 18 Jul 2013 at 9:18

@GoogleCodeExporter

You should edit your options so others can review your code. 
https://code.google.com/p/support/wiki/GettingStarted

Original comment by dlundb...@google.com on 18 Jul 2013 at 9:36

@GoogleCodeExporter

Done, sorry.

Original comment by psch...@google.com on 18 Jul 2013 at 10:02

@GoogleCodeExporter

For DSA, the JCE only supports DSA with SHA1, and it will only generate key 
sizes in [512, 768, 1024]. PyCrypto will only work with DSA key sizes that are 
multiples of 64 between 512 and 1024. M2Crypto supports 2048-bit and 3072-bit 
keys just like C++ (since it is merely a SWIG wrapper for OpenSSL). It sounds 
like the C# libraries are in the same boat as PyCrypto and the JCE. OpenSSL 
will use 256-bit q's for any key size greater than 1024.

It looks like in order to actually support this feature for DSA, DSA code would 
have to be written by hand for each implementation, or we would have to force a 
switch to M2Crypto for Python and write our own Java version. This sounds 
unfeasible.

The options I see for DSA are: 
  1) fix the C++ implementation to use SHA256 for 2048 and 3072. This breaks compatibility, but then we can implement it in python (if M2Crypto is installed) and once the JCE is fixed, there too.
  2) Keep on using SHA1 for C++ and implement the same for python with M2Crypto.
  3) Do nothing until more support from the underlying libraries is created. 

I feel like at this point in time doing nothing is the best option. When 
building K2 we should make sure the SHA2s are used for the appropriate sizes, 
but until then we shouldn't break compatibility since there is not much to be 
gained.

One thing I do plan to do to address interoperability and security issues (to 
address issue 109) is make 1024 bit DSA the default for C++ and display a 
warning if SHA1 is used with 2048 or 3072 bit DSA. 
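
For completeness, this is roughly what DSA-2048 with SHA-256 looks like through the JCE; whether it runs depends entirely on the installed provider (the stock providers discussed above reject the key size and/or the digest, while e.g. BouncyCastle accepts them):

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

final class Dsa2Sketch {
  public static void main(String[] args) throws Exception {
    KeyPairGenerator kpg = KeyPairGenerator.getInstance("DSA");
    kpg.initialize(2048); // fails on providers capped at 1024-bit DSA
    KeyPair pair = kpg.generateKeyPair();

    Signature signer = Signature.getInstance("SHA256withDSA"); // provider-dependent
    signer.initSign(pair.getPrivate());
    signer.update("message".getBytes(StandardCharsets.UTF_8));
    byte[] sig = signer.sign();
    System.out.println("DSA-2048/SHA-256 signature: " + sig.length + " bytes");
  }
}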



Original comment by dlundb...@google.com on 30 Aug 2013 at 6:52

@GoogleCodeExporter

Issue 156 has been merged into this issue.

Original comment by devin60...@gmail.com on 14 Jan 2015 at 7:35

@wsargent

wsargent commented Oct 8, 2015

Putting together some references to track this bug...

Discussion:

SHA-1 freestart collision attack and Ars Technica link:

"Now, based on research completed last month, the international team of researchers believe that such an attack could be carried out this year for $75,000 to $120,000. The much lower cost is the result of efficiencies the researchers discovered in the way graphics cards can use a technique known as "boomeranging" to find SHA1 collisions."

SP 800-131A "SHA-1 shall not be used for digital signature generation after December 31, 2013"

@jbtule

jbtule commented Oct 8, 2015

It's very sad that this one has sat so long. While the pull request wasn't complete, it was pretty close; see the comments:
https://code.google.com/r/pschorf-keyczar/source/detail?r=06a78002feb632cd75ff423b15958eec9fcd60ef

For reference, here is a functioning KeyType in C# fixed for this issue (added in the Unofficial namespace since it doesn't exist in the other keyczars):
https://github.com/jbtule/keyczar-dotnet/blob/master/Keyczar/Keyczar/Unofficial/RsaPrivateSignKey.cs
https://github.com/jbtule/keyczar-dotnet/blob/master/Keyczar/Keyczar/Unofficial/RsaPublicSignKey.cs

@wsargent

wsargent commented Oct 9, 2015

@jbtule From the pull request it looks like stylistic changes at best. How come it didn't make it?

@jbtule

jbtule commented Oct 9, 2015

@wsargent A bit more than stylistic: there were things that needed to be adjusted security-wise, like the digest algorithm being chosen, and for interoperability, such as the key representation in the JSON file and how the key material should be hashed to identify it. However, you are right that these things were pretty trivial, were identified in the review, and were (and still are) easy to fix.

@sweis sweis closed this as completed Mar 15, 2021