When user space issues a KEYCTL_PKEY_QUERY system call for a NIST P521
key, the key_size is incorrectly reported as 528 bits instead of 521.
That's because the key size obtained through crypto_sig_keysize() is in
bytes and software_key_query() multiplies by 8 to yield the size in bits.
The underlying assumption is that the key size is always a multiple of 8.
With the recent addition of NIST P521, that's no longer the case.
Fix by returning the key_size in bits from crypto_sig_keysize() and
adjusting the calculations in software_key_query().
The ->key_size() callbacks of sig_alg algorithms now return the size in
bits, whereas the ->digest_size() and ->max_size() callbacks return the
size in bytes. This matches with the units in struct keyctl_pkey_query.
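For illustration, the adjusted conversion in software_key_query()
looks roughly like this (a sketch, not the exact kernel code;
crypto_sig_keysize() now counts bits while the other two helpers
count bytes):

    unsigned int len = crypto_sig_keysize(sig);        /* bits */

    info->key_size = len;                              /* bits, e.g. 521 */
    info->max_sig_size = crypto_sig_maxsize(sig);      /* bytes */
    info->max_data_size = crypto_sig_digestsize(sig);  /* bytes */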
Fixes: a7d45ba77d ("crypto: ecdsa - Register NIST P521 and extend test suite")
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Ignat Korchagin <ignat@cloudflare.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the broken drivers have been fixed, remove the unnecessary
inclusions from crypto/ctr.h.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rather than storing the folio as is and handling it later, convert
it to a scatterlist right away.
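For a single folio, the conversion is a minimal sg_set_folio() call,
roughly (sketch; everything besides the scatterlist helpers is
illustrative):

    struct scatterlist sg;

    sg_init_table(&sg, 1);
    sg_set_folio(&sg, folio, len, offset);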
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a new helper ACOMP_REQUEST_CLONE that will transform a stack
request into a dynamically allocated one if possible, and otherwise
switch it over to the synchronous fallback transform. The intended
usage is:
    ACOMP_REQUEST_ON_STACK(req, tfm);

    ...

    err = crypto_acomp_compress(req);
    /* The request could not complete synchronously. */
    if (err == -EAGAIN) {
            /* This will not fail. */
            req = ACOMP_REQUEST_CLONE(req, gfp);

            /* Redo operation. */
            err = crypto_acomp_compress(req);
    }
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a helper to create an on-stack fallback request from a given
request. Use this helper in acomp_do_nondma.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use the newly added request flag helpers to manage the request
flags.
Also add acomp_request_flags, which lets bottom-level users access
the request flags without the bits private to the acomp API.
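As a sketch of the intent (the mask name below is illustrative, not
necessarily the real flag):

    static inline u32 acomp_request_flags(struct acomp_req *req)
    {
            return req->base.flags & ~CRYPTO_ACOMP_REQ_PRIVATE;
    }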
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge crypto tree to pick up scompress and ahash fixes. The
scompress fix becomes mostly unnecessary as the bugs no longer
exist with the new acompress code. However, keep the NULL assignment
in crypto_acomp_free_streams so that if the user decides to call
crypto_acomp_alloc_streams again it will work.
Disable hash request chaining in case a driver that copies an
ahash_request object by hand accidentally triggers chaining.
Reported-by: Manorit Chawdhry <m-chawdhry@ti.com>
Fixes: f2ffe5a918 ("crypto: hash - Add request chaining API")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Manorit Chawdhry <m-chawdhry@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
<crypto/internal/chacha.h> is now included only by crypto/chacha.c, so
fold it into there.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Following the example of the crc32 and crc32c code, make the crypto
subsystem register both generic and architecture-optimized chacha20,
xchacha20, and xchacha12 skcipher algorithms, all implemented on top of
the appropriate library functions. This eliminates the need for every
architecture to implement the same skcipher glue code.
To register the architecture-optimized skciphers only when
architecture-optimized code is actually being used, add a function
chacha_is_arch_optimized() and make each arch implement it. Change each
architecture's ChaCha module_init function to arch_initcall so that the
CPU feature detection is guaranteed to run before
chacha_is_arch_optimized() gets called by crypto/chacha.c. In the case
of s390, remove the CPU feature based module autoloading, which is no
longer needed since the module just gets pulled in via function linkage.
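The registration logic in crypto/chacha.c then looks roughly like
this (a sketch; the array names are illustrative):

    static int __init crypto_chacha_mod_init(void)
    {
            int err;

            err = crypto_register_skciphers(algs, ARRAY_SIZE(algs));
            if (err)
                    return err;

            if (chacha_is_arch_optimized()) {
                    err = crypto_register_skciphers(arch_algs,
                                                    ARRAY_SIZE(arch_algs));
                    if (err)
                            crypto_unregister_skciphers(algs,
                                                        ARRAY_SIZE(algs));
            }
            return err;
    }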
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto_ctr_encrypt_walk() is no longer used, so remove it.
Note that some existing drivers currently rely on the transitive
includes of some other crypto headers, so retain those for the time
being.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Update the documentation to be consistent with the fact that shash
may not be used in hard IRQs.
Reported-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move the dynamic stream allocation code into acomp and make it
available as a helper for acomp algorithms.
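Algorithms then use the helper pair along these lines (a sketch;
only the two function names are taken from this series):

    err = crypto_acomp_alloc_streams(&streams);
    if (err)
            return err;
    ...
    crypto_acomp_free_streams(&streams);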
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Per-cpu buffers can be wasteful when the number of CPUs is large,
especially if the buffer itself is likely to never be used. Reduce
such wastage by only allocating them on first use of a particular
CPU.
On start-up, allocate a single buffer on the first possible CPU.
For every other CPU, a work struct will be scheduled on first use
to allocate the buffer for that CPU. Until the allocation succeeds,
simply use the first CPU's buffer, which is protected by a spin
lock.
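The first-use path is roughly (a sketch with illustrative names,
not the actual scomp code):

    buf = per_cpu_ptr(buffers, cpu);
    if (!buf->mem) {
            /* Kick off allocation for this CPU... */
            schedule_work(&buf->alloc_work);

            /* ...and borrow the first CPU's buffer meanwhile. */
            buf = per_cpu_ptr(buffers, cpumask_first(cpu_possible_mask));
            spin_lock(&buf->lock);
    }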
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Commit 9ae4577bc0 ("crypto: api - Use work queue in
crypto_destroy_instance") introduced a work struct to free an
instance after the last user goes away.
Move the delayed work from the instance into its template so that
when the template is unregistered it can ensure that all its
instances have been freed before returning.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All implementations of chacha_init_arch() just call
chacha_init_generic(), so it is pointless. Just delete it, and replace
chacha_init() with what was previously chacha_init_generic().
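After this change, chacha_init() is simply what chacha_init_generic()
used to be, roughly (sketch of the header inline):

    static inline void chacha_init(u32 *state, const u32 *key, const u8 *iv)
    {
            chacha_init_consts(state);
            state[4]  = key[0];
            state[5]  = key[1];
            state[6]  = key[2];
            state[7]  = key[3];
            state[8]  = key[4];
            state[9]  = key[5];
            state[10] = key[6];
            state[11] = key[7];
            state[12] = get_unaligned_le32(iv + 0);
            state[13] = get_unaligned_le32(iv + 4);
            state[14] = get_unaligned_le32(iv + 8);
            state[15] = get_unaligned_le32(iv + 12);
    }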
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For many users, it's easier to supply a folio rather than an SG
list since they already have them. Add support for folios to the
acomp interface.
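A user then sets a folio as the source roughly as follows (sketch;
the exact helper signature may differ):

    acomp_request_set_src_folio(req, folio, offset, len);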
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add ACOMP_REQUEST_ALLOC, which is a wrapper around acomp_request_alloc
that falls back to a synchronous stack request if the allocation
fails.
Also add ACOMP_REQUEST_ON_STACK which stores the request on the stack
only.
The request should be freed with acomp_request_free.
Finally add acomp_request_alloc_extra which gives the user extra
memory to use in conjunction with the request.
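The intended usage is along these lines (sketch; the macro arguments
shown are illustrative):

    ACOMP_REQUEST_ALLOC(req, tfm, GFP_KERNEL);

    ...

    err = crypto_acomp_compress(req);
    acomp_request_free(req);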
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As the only user of acomp/scomp uses a trivial single-page SG
list, remove support for everything else in preparation for the
addition of virtual address support.
However, keep support for non-trivial source SG lists as that
user is currently jumping through hoops in order to linearise
the source data.
Limit the source SG linearisation buffer to a single page, as that
user never exceeds it. The only other potential user, IPComp, is
also unlikely to exceed a single page and can easily do its own
linearisation if necessary.
Also keep the destination SG linearisation for IPComp.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Curiously, the Crypto API scatterwalk incremented pages by hand
rather than using nth_page. Possibly because scatterwalk predates
nth_page (the following commit is from the history tree):
    commit 3957f2b34960d85b63e814262a8be7d5ad91444d
    Author: James Morris <jmorris@intercode.com.au>
    Date:   Sun Feb 2 07:35:32 2003 -0800

        [CRYPTO]: in/out scatterlist support for ciphers.
Fix this by using nth_page.
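Concretely, the change amounts to (sketch):

    -       page = sg_page(walk->sg) + (walk->offset >> PAGE_SHIFT);
    +       page = nth_page(sg_page(walk->sg), walk->offset >> PAGE_SHIFT);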
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the address returned by scatterwalk_map() is always being
stored into the same struct scatter_walk that is passed in, make
scatterwalk_map() do so itself and return void.
Similarly, now that scatterwalk_unmap() is always being passed the
address field within a struct scatter_walk, make scatterwalk_unmap()
take a pointer to struct scatter_walk instead of the address directly.
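A caller then looks roughly like this (sketch, assuming the mapped
address is exposed through the walk's addr field):

    scatterwalk_map(&walk);
    memcpy(buf, walk.addr, nbytes);
    scatterwalk_unmap(&walk);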
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Separate out the HKDF functions into a separate module to
make them available to other callers.
Also add a testsuite to the module, with test vectors
from RFC 5869 (and additional vectors for SHA-384 and SHA-512),
to verify the implementation.
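Callers use the two-step extract/expand pair roughly as follows
(sketch; inputs and error handling abridged, and the tfm must be
re-keyed with the PRK before expanding):

    struct crypto_shash *tfm = crypto_alloc_shash("hmac(sha256)", 0, 0);
    u8 prk[SHA256_DIGEST_SIZE];

    err = hkdf_extract(tfm, ikm, ikmlen, salt, saltlen, prk);
    if (!err)
            err = crypto_shash_setkey(tfm, prk, sizeof(prk));
    if (!err)
            err = hkdf_expand(tfm, info, infolen, okm, okmlen);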
Signed-off-by: Hannes Reinecke <hare@kernel.org>
Acked-by: Eric Biggers <ebiggers@kernel.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Keith Busch <kbusch@kernel.org>
This adds request chaining and virtual address support to the
acomp interface.
It is identical to the ahash interface, except that a new flag
CRYPTO_ACOMP_REQ_NONDMA has been added to indicate that the
virtual addresses are not suitable for DMA. This is because
all existing and potential acomp users can provide memory that
is suitable for DMA so there is no need for a fall-back copy
path.
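Users supply virtual addresses roughly as follows (sketch; helper
names follow the DMA/non-DMA split described above, but the exact
signatures may differ):

    /* Memory known to be DMA-capable: */
    acomp_request_set_src_dma(req, src, slen);

    /* Memory that must not be used for DMA: */
    acomp_request_set_src_nondma(req, src, slen);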
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rather than allocating the stream memory in the request object,
move it into a per-cpu buffer managed by scomp. This relieves
users of having to manage large request objects and set up their
own per-cpu buffers to do so.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The tfm argument is completely unused and meaningless, as the
stream objects are identical across all transforms of a given
algorithm. Remove it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Mark the src.virt.addr field in struct skcipher_walk as a pointer
to const data. This guarantees that users won't modify the data;
modifications should instead go through dst.virt.addr so that
flushing is done when necessary.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reuse the addr field from struct scatter_walk for skcipher_walk.
Keep the existing virt.addr fields, but make them const; they
remain the way for users to access the mapped address.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rather than returning the address and storing the length into an
argument pointer, add an address field to the walk struct and use
that to store the address. The length is returned directly.
Change the done functions to use this stored address instead of
getting it from the caller.
Split the address into two using a union. The user should only
access the const version so that it is never changed.
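As a sketch of the split (field names illustrative, not the exact
kernel definition):

    union {
            void *addr;              /* private: written by the walk code */
            const void *const caddr; /* what users should dereference */
    };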
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move the definition of struct crypto_type into internal.h as it
is only used by API implementors and not algorithm implementors.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add an AEAD template that does hash-then-cipher (unlike authenc that does
cipher-then-hash). This is required for a number of Kerberos 5 encoding
types.
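As an illustration, the template would be instantiated along these
lines (the algorithm string is an example for a camellia-cts-cmac
encoding type):

    struct crypto_aead *aead;

    aead = crypto_alloc_aead("krb5enc(cmac(camellia),cts(cbc(camellia)))",
                             0, 0);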
[!] Note that the net/sunrpc/auth_gss/ implementation gets a pair of
ciphers, one non-CTS and one CTS, using the former to do all the aligned
blocks and the latter to do the last two blocks if they aren't also
aligned. It may be necessary to do this here too for performance reasons -
but there are considerations both ways:
(1) firstly, there is an optimised assembly version of cts(cbc(aes)) on
x86_64 that should be used instead of having two ciphers;
(2) secondly, none of the hardware offload drivers seem to offer CTS
support (Intel QAT does not, for instance).
However, I don't know if it's possible to query the crypto API to find out
whether there's an optimised CTS algorithm available.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Rather than accessing 'alg' directly in order to avoid the aliasing
issue that leads to unnecessary reloads, use the __restrict keyword
to explicitly tell the compiler that there is no aliasing.
This generates equivalent, if not superior, code on x86 with gcc 12.
Note that in skcipher_walk_virt the alg assignment is moved after
might_sleep_if because that function is a compiler barrier and
forces a reload.
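For instance, the annotated signature looks roughly like (sketch):

    int skcipher_walk_virt(struct skcipher_walk *__restrict walk,
                           struct skcipher_request *__restrict req,
                           bool atomic);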
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>