path: root/include/crypto/algapi.h
Commit message (author, date; files changed, lines -deleted/+added)
* Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 (Linus Torvalds, 2019-07-09; 1 file, -7/+0)

    Pull crypto updates from Herbert Xu:
    "Here is the crypto update for 5.3:

     API:
      - Test shash interface directly in testmgr
      - cra_driver_name is now mandatory

     Algorithms:
      - Replace arc4 crypto_cipher with library helper
      - Implement 5 way interleave for ECB, CBC and CTR on arm64
      - Add xxhash
      - Add continuous self-test on noise source to drbg
      - Update jitter RNG

     Drivers:
      - Add support for SHA204A random number generator
      - Add support for 7211 in iproc-rng200
      - Fix fuzz test failures in inside-secure
      - Fix fuzz test failures in talitos
      - Fix fuzz test failures in qat"

    * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (143 commits)
      crypto: stm32/hash - remove interruptible condition for dma
      crypto: stm32/hash - Fix hmac issue more than 256 bytes
      crypto: stm32/crc32 - rename driver file
      crypto: amcc - remove memset after dma_alloc_coherent
      crypto: ccp - Switch to SPDX license identifiers
      crypto: ccp - Validate the error value used to index error messages
      crypto: doc - Fix formatting of new crypto engine content
      crypto: doc - Add parameter documentation
      crypto: arm64/aes-ce - implement 5 way interleave for ECB, CBC and CTR
      crypto: arm64/aes-ce - add 5 way interleave routines
      crypto: talitos - drop icv_ool
      crypto: talitos - fix hash on SEC1.
      crypto: talitos - move struct talitos_edesc into talitos.h
      lib/scatterlist: Fix mapping iterator when sg->offset is greater than PAGE_SIZE
      crypto/NX: Set receive window credits to max number of CRBs in RxFIFO
      crypto: asymmetric_keys - select CRYPTO_HASH where needed
      crypto: serpent - mark __serpent_setkey_sbox noinline
      crypto: testmgr - dynamically allocate crypto_shash
      crypto: testmgr - dynamically allocate testvec_config
      crypto: talitos - eliminate unneeded 'done' functions at build time
      ...
| * crypto: algapi - remove crypto_tfm_in_queue() (Eric Biggers, 2019-05-30; 1 file, -7/+0)

    Remove the crypto_tfm_in_queue() function, which is unused.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152 (Thomas Gleixner, 2019-05-30; 1 file, -6/+1)

    Based on 1 normalized pattern(s):

        this program is free software you can redistribute it and or modify
        it under the terms of the gnu general public license as published by
        the free software foundation either version 2 of the license or at
        your option any later version

    extracted by the scancode license scanner the SPDX license identifier

        GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 3029 file(s).

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Allison Randal <allison@lohutok.net>
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: api - add a helper to (un)register an array of templates (Xiongfeng Wang, 2019-01-25; 1 file, -0/+2)

    This patch adds a helper to (un)register an array of templates. The following patches will use this helper to simplify the code.

    Signed-off-by: Xiongfeng Wang <xiongfeng.wang@linaro.org>
    Reviewed-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
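    A minimal usage sketch for the helper; the template array, create callbacks, and module hooks below are hypothetical:

        static struct crypto_template my_tmpls[] = {
            { .name = "mode-a", .create = mode_a_create, .module = THIS_MODULE },
            { .name = "mode-b", .create = mode_b_create, .module = THIS_MODULE },
        };  /* mode_a_create()/mode_b_create() are hypothetical callbacks */

        static int __init my_init(void)
        {
            /* registers every entry, unwinding already-registered ones on failure */
            return crypto_register_templates(my_tmpls, ARRAY_SIZE(my_tmpls));
        }

        static void __exit my_exit(void)
        {
            crypto_unregister_templates(my_tmpls, ARRAY_SIZE(my_tmpls));
        }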
* crypto: algapi - remove crypto_alloc_instance() (Eric Biggers, 2019-01-11; 1 file, -4/+2)

    Now that all "blkcipher" templates have been converted to "skcipher", crypto_alloc_instance() is no longer used. And it's not useful any longer, as it creates an old-style weakly typed instance rather than a new-style strongly typed instance.

    So remove it, and now that the name is freed up, rename crypto_alloc_instance2() to crypto_alloc_instance().

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: api - Introduce notifier for new crypto algorithms (Martin K. Petersen, 2018-09-04; 1 file, -0/+10)

    Introduce a facility that can be used to receive a notification callback when a new algorithm becomes available. This can be used by existing crypto registrations to trigger a switch from a software-only algorithm to a hardware-accelerated version.

    A new CRYPTO_MSG_ALG_LOADED state is introduced to the existing crypto notification chain, and the register/unregister functions are exported so they can be called by subsystems outside of crypto.

    Suggested-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
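    A sketch of how a subsystem might subscribe; the callback body and names are hypothetical, while crypto_register_notifier() and CRYPTO_MSG_ALG_LOADED come from this patch:

        static int my_alg_notify(struct notifier_block *nb,
                                 unsigned long msg, void *data)
        {
            if (msg == CRYPTO_MSG_ALG_LOADED) {
                struct crypto_alg *alg = data;

                /* e.g. switch from a software fallback to the new alg */
                pr_info("algorithm loaded: %s\n", alg->cra_name);
            }
            return NOTIFY_OK;
        }

        static struct notifier_block my_alg_nb = {
            .notifier_call = my_alg_notify,
        };

        /* crypto_register_notifier(&my_alg_nb) at init,
         * crypto_unregister_notifier(&my_alg_nb) at teardown */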
* crypto: api - Introduce generic max blocksize and alignmask (Kees Cook, 2018-09-04; 1 file, -1/+3)

    In the quest to remove all stack VLA usage from the kernel[1], this exposes a new general upper bound on crypto blocksize and alignmask (higher than for the existing cipher limits) for VLA removal, and introduces new checks.

    At present, the highest cra_alignmask in the kernel is 63. The highest cra_blocksize is 144 (SHA3_224_BLOCK_SIZE, 18 8-byte words). For the new blocksize limit, I went with 160 (20 8-byte words).

    [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

    Signed-off-by: Kees Cook <keescook@chromium.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
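    The resulting bounds in algapi.h are, roughly:

        /*
         * Maximum values for blocksize and alignmask, used to allocate
         * static buffers that are big enough for any combination of
         * algs and architectures.
         */
        #define MAX_ALGAPI_BLOCKSIZE    160
        #define MAX_ALGAPI_ALIGNMASK    63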
* crypto: api - laying defines and checks for statically allocated buffers (Salvatore Mesoraca, 2018-04-20; 1 file, -0/+8)

    In preparation for the removal of VLAs[1] from crypto code, we create two new compile-time constants: all ciphers implemented in Linux have a block size less than or equal to 16 bytes, and the most demanding hardware requires 16-byte alignment for the block buffer. We also enforce these limits in crypto_check_alg when a new cipher is registered.

    [1] http://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

    Signed-off-by: Salvatore Mesoraca <s.mesoraca16@gmail.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
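    These limits let a runtime-sized stack array become a fixed-size buffer; a minimal sketch, assuming the constants introduced here:

        #define MAX_CIPHER_BLOCKSIZE    16
        #define MAX_CIPHER_ALIGNMASK    15

        /* fits one block of any registered cipher, with worst-case
         * alignment slack, instead of a VLA sized at runtime */
        u8 buf[MAX_CIPHER_BLOCKSIZE + MAX_CIPHER_ALIGNMASK];
        u8 *block = PTR_ALIGN(buf, MAX_CIPHER_ALIGNMASK + 1);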
* crypto: api - Remove unused crypto_type lookup function (Herbert Xu, 2018-03-30; 1 file, -1/+0)

    The lookup function in crypto_type was only used for the implicit IV generators which have been completely removed from the crypto API. This patch removes the lookup function as it is now useless.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: algapi - make crypto_xor() take separate dst and src arguments (Ard Biesheuvel, 2017-08-04; 1 file, -0/+19)

    There are quite a number of occurrences in the kernel of the pattern

        if (dst != src)
            memcpy(dst, src, walk.total % AES_BLOCK_SIZE);
        crypto_xor(dst, final, walk.total % AES_BLOCK_SIZE);

    or

        crypto_xor(keystream, src, nbytes);
        memcpy(dst, keystream, nbytes);

    where crypto_xor() is preceded or followed by a memcpy() invocation that is only there because crypto_xor() uses its output parameter as one of the inputs.

    To avoid having to add new instances of this pattern in the arm64 code, which will be refactored to implement non-SIMD fallbacks, add an alternative implementation called crypto_xor_cpy(), taking separate input and output arguments. This removes the need for the separate memcpy().

    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
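    The new helper collapses the xor-plus-copy pair into one call; a sketch of the second pattern above, rewritten:

        /* before: crypto_xor() reuses its output as one of the inputs */
        crypto_xor(keystream, src, nbytes);
        memcpy(dst, keystream, nbytes);

        /* after: one output, two separate inputs */
        crypto_xor_cpy(dst, keystream, src, nbytes);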
* crypto: algapi - use separate dst and src operands for __crypto_xor() (Ard Biesheuvel, 2017-08-04; 1 file, -2/+2)

    In preparation for introducing crypto_xor_cpy(), which will use separate operands for input and output, modify the __crypto_xor() implementation, which it will share with the existing crypto_xor(), and which provides the actual functionality when not using the inline version.

    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: api - Add crypto_requires_off helper (Herbert Xu, 2017-02-27; 1 file, -1/+6)

    This patch adds crypto_requires_off which is an extension of crypto_requires_sync for similar bits such as NEED_FALLBACK.

    Cc: stable@vger.kernel.org #4.10
    Suggested-by: Marcelo Cerri <marcelo.cerri@canonical.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
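    The helper generalizes the existing sync check to any "this bit must be off" constraint; roughly:

        static inline u32 crypto_requires_off(u32 type, u32 mask, u32 off)
        {
            return (type ^ off) & mask & off;
        }

        /* crypto_requires_sync() becomes a special case */
        static inline u32 crypto_requires_sync(u32 type, u32 mask)
        {
            return crypto_requires_off(type, mask, CRYPTO_ALG_ASYNC);
        }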
* crypto: algapi - make crypto_xor() and crypto_inc() alignment agnostic (Ard Biesheuvel, 2017-02-11; 1 file, -2/+18)

    Instead of unconditionally forcing 4 byte alignment for all generic chaining modes that rely on crypto_xor() or crypto_inc() (which may result in unnecessary copying of data when the underlying hardware can perform unaligned accesses efficiently), make those functions deal with unaligned input explicitly, but only if the Kconfig symbol HAVE_EFFICIENT_UNALIGNED_ACCESS is set. This will allow us to drop the alignmasks from the CBC, CMAC, CTR, CTS, PCBC and SEQIV drivers.

    For crypto_inc(), this simply involves making the 4-byte stride conditional on HAVE_EFFICIENT_UNALIGNED_ACCESS being set, given that it typically operates on 16 byte buffers.

    For crypto_xor(), an algorithm is implemented that simply runs through the input using the largest strides possible if unaligned accesses are allowed. If they are not, an optimal sequence of memory accesses is emitted that takes the relative alignment of the input buffers into account, e.g., if the relative misalignment of dst and src is 4 bytes, the entire xor operation will be completed using 4 byte loads and stores (modulo unaligned bits at the start and end). Note that all expressions involving misalign are simply eliminated by the compiler when HAVE_EFFICIENT_UNALIGNED_ACCESS is defined.

    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
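    A simplified sketch of the fast-path idea (not the exact kernel code), assuming CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS gates the word-wide strides:

        void xor_sketch(u8 *dst, const u8 *src, unsigned int len)
        {
            if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) {
                /* largest possible strides, regardless of alignment */
                while (len >= sizeof(unsigned long)) {
                    *(unsigned long *)dst ^= *(unsigned long *)src;
                    dst += sizeof(unsigned long);
                    src += sizeof(unsigned long);
                    len -= sizeof(unsigned long);
                }
            }
            while (len--)       /* byte-wise tail (or whole buffer) */
                *dst++ ^= *src++;
        }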
* crypto: engine - move crypto engine to its own header (Corentin LABBE, 2016-09-07; 1 file, -70/+0)

    This patch moves the whole crypto engine API to its own header, crypto/engine.h.

    Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: api - Optimise away crypto_yield when hard preemption is on (Herbert Xu, 2016-07-18; 1 file, -0/+2)

    When hard preemption is enabled there is no need to explicitly call crypto_yield. This patch eliminates it if that is the case.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
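    After this change the yield compiles away entirely on hard-preempt kernels; roughly:

        static inline void crypto_yield(u32 flags)
        {
        #if !defined(CONFIG_PREEMPT) || defined(CONFIG_PREEMPT_VOLUNTARY)
            if (flags & CRYPTO_TFM_REQ_MAY_SLEEP)
                cond_resched();
        #endif
        }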
* crypto: api - Add crypto_inst_setname (Herbert Xu, 2016-07-01; 1 file, -0/+2)

    This patch adds the helper crypto_inst_setname because the current helper crypto_alloc_instance2 is no longer useful given that we now look up the algorithm after we allocate the instance object.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: hash - Remove crypto_hash interface (Herbert Xu, 2016-02-06; 1 file, -18/+0)

    This patch removes all traces of the crypto_hash interface, now that everyone has switched over to shash or ahash.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: engine - Introduce the block request crypto engine framework (Baolin Wang, 2016-02-01; 1 file, -0/+70)

    Currently, block cipher engines need to implement and maintain their own queue/thread for processing requests. Moreover, helpers are provided only for the queue itself (crypto_enqueue_request() and crypto_dequeue_request()); they don't help with the mechanics of driving the hardware (things like running the request immediately, DMA-mapping it, or providing a thread to process the queue), even though a lot of that code really shouldn't vary much from device to device.

    Thus this patch provides a mechanism, which drivers can use, for pushing requests to the hardware as it becomes free. The framework is patterned on the SPI code and has worked out well there. (https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/drivers/spi/spi.c?id=ffbbdd21329f3e15eeca6df2d4bc11c04d9d91c0)

    Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: api - Introduce crypto_queue_len() helper function (Baolin Wang, 2016-02-01; 1 file, -0/+4)

    This patch introduces the crypto_queue_len() helper function to get the length of a crypto queue.

    Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
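    The helper is a one-line accessor over the queue's qlen field; roughly:

        static inline unsigned int crypto_queue_len(struct crypto_queue *queue)
        {
            return queue->qlen;
        }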
* crypto: api - Add instance free function to crypto_type (Herbert Xu, 2015-07-14; 1 file, -0/+2)

    Currently the task of freeing an instance is given to the crypto template. However, it has no type information on the instance so we have to resort to checking type information at runtime.

    This patch introduces a free function to crypto_type that will be used to free an instance. This can then be used to free an instance in a type-safe manner.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: api - Remove unused __crypto_dequeue_request (Herbert Xu, 2015-07-14; 1 file, -1/+0)

    The function __crypto_dequeue_request is completely unused.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: aead - Convert top level interface to new style (Herbert Xu, 2015-05-13; 1 file, -32/+1)

    This patch converts the top-level aead interface to the new style. All user-level AEAD interface code has been moved into crypto/aead.h.

    The allocation/free functions have switched over to the new way of allocating tfms.

    This patch also removes the double indirection on setkey, so the indirection now exists only at the alg level.

    Apart from these there are no user-visible changes.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: api - Add crypto_grab_spawn primitive (Herbert Xu, 2015-05-13; 1 file, -0/+2)

    This patch adds a new primitive crypto_grab_spawn which is meant to replace crypto_init_spawn and crypto_init_spawn2. Under the new scheme the user no longer has to worry about reference counting the alg object before it is subsumed by the spawn.

    It is pretty much an exact copy of crypto_grab_aead.

    Prior to calling this function spawn->frontend and spawn->inst must have been set.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: api - Change crypto_unregister_instance argument type (Herbert Xu, 2015-04-03; 1 file, -1/+1)

    This patch makes crypto_unregister_instance take a crypto_instance instead of a crypto_alg. This allows us to remove a duplicate CRYPTO_ALG_INSTANCE check in crypto_unregister_instance.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: api - Move crypto_yield() to algapi.h (Marek Vasut, 2014-06-20; 1 file, -0/+6)

    It makes no sense for crypto_yield() to be defined in scatterwalk.h; move it into algapi.h, as it's a function internal to the crypto API.

    Signed-off-by: Marek Vasut <marex@denx.de>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: allow blkcipher walks over AEAD data (Ard Biesheuvel, 2014-03-10; 1 file, -0/+4)

    This adds the function blkcipher_aead_walk_virt_block, which allows the caller to use the blkcipher walk API to handle the input and output scatterlists.

    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: remove direct blkcipher_walk dependency on transform (Ard Biesheuvel, 2014-03-10; 1 file, -1/+4)

    In order to allow other uses of the blkcipher walk API than the blkcipher algos themselves, this patch copies some of the transform data members to the walk struct so the transform is only accessed at walk init time.

    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: crypto_memneq - add equality testing of memory regions w/o timing leaks (James Yonan, 2013-10-07; 1 file, -1/+17)

    When comparing MAC hashes, AEAD authentication tags, or other hash values in the context of authentication or integrity checking, it is important not to leak timing information to a potential attacker, i.e. when communication happens over a network.

    Bytewise memory comparisons (such as memcmp) are usually optimized so that they return a nonzero value as soon as a mismatch is found. E.g., on x86_64/i5 for 512 bytes this can be ~50 cyc for a full mismatch and up to ~850 cyc for a full match (cold). This early-return behavior can leak timing information as a side channel, allowing an attacker to iteratively guess the correct result.

    This patch adds a new method crypto_memneq ("memory not equal to each other") to the crypto API that compares memory areas of the same length in roughly "constant time" (cache misses could change the timing, but since they don't reveal information about the content of the strings being compared, they are effectively benign). In other words, best and worst case behaviour take the same amount of time to complete (in contrast to memcmp).

    Note that crypto_memneq (unlike memcmp) can only be used to test for equality or inequality, NOT for lexicographical order. This, however, is not an issue for its use-cases within the crypto API.

    We tried to locate all of the places in the crypto API where memcmp was being used for authentication or integrity checking, and convert them over to crypto_memneq.

    crypto_memneq is declared noinline, placed in its own source file, and compiled with optimizations that might increase code size disabled ("Os") because a smart compiler (or LTO) might notice that the return value is always compared against zero/nonzero, and might then reintroduce the same early-return optimization that we are trying to avoid. Using #pragma or __attribute__ optimization annotations to disable optimization was avoided, as it seems to have been considered broken or unmaintained for a long time in GCC [1]. Therefore, we work around that by specifying the compile flag for memneq.o directly in the Makefile, which seems most appropriate.

    As we use ("Os"), this patch also provides a loop-free "fast-path" for frequently used 16 byte digests. Similarly to kernel library string functions, we leave an option for future, even further optimized, architecture-specific assembler implementations.

    This was a joint work of James Yonan and Daniel Borkmann. Also thanks for feedback from Florian Weimer on this and earlier proposals [2].

    [1] http://gcc.gnu.org/ml/gcc/2012-07/msg00211.html
    [2] https://lkml.org/lkml/2013/2/10/131

    Signed-off-by: James Yonan <james@openvpn.net>
    Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
    Cc: Florian Weimer <fw@deneb.enyo.de>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
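    A usage sketch for authentication-tag verification (buffer names hypothetical); like memcmp used for equality, crypto_memneq returns nonzero on mismatch:

        /* timing-dependent: memcmp() may return as soon as bytes differ */
        if (memcmp(received_tag, computed_tag, authsize))
            return -EBADMSG;

        /* roughly constant-time in the contents of the buffers */
        if (crypto_memneq(received_tag, computed_tag, authsize))
            return -EBADMSG;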
* crypto: Unlink and free instances when deleted (Steffen Klassert, 2011-11-09; 1 file, -0/+1)

    We leak the crypto instance when we unregister an instance with crypto_del_alg(). Therefore we introduce crypto_unregister_instance() to unlink the crypto instance from the template's instances list and to free the resources of the instance properly.

    Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: Add a report function pointer to crypto_type (Steffen Klassert, 2011-10-21; 1 file, -0/+2)

    We add a report function pointer to struct crypto_type. This function pointer is used from the crypto userspace configuration API to report crypto algorithms to userspace.

    Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: skcipher - Add ablkcipher_walk interfaces (David S. Miller, 2010-05-19; 1 file, -0/+40)

    These are akin to the blkcipher_walk helpers. The main differences in the async variant are:

    1) Only physical walking is supported. We can't hold on to kmap mappings across the async operation to support virtual ablkcipher_walk operations anyway.

    2) Bounce buffers used for async mode need to be persistent and freed at a later point in time when the async op completes. Therefore we maintain a list of writeback buffers and require that the ablkcipher_walk user call the 'complete' operation so we can copy the bounce buffers out to the real buffers and free up the bounce buffer chunks.

    These interfaces will be used by the new Niagara2 crypto driver.

    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: hash - Remove legacy hash/digest code (Benjamin Gilbert, 2009-10-19; 1 file, -1/+0)

    6941c3a0 disabled compilation of the legacy digest code but didn't actually remove it. Rectify this. Also, remove the crypto_hash_type extern declaration from algapi.h now that the struct is gone.

    Signed-off-by: Benjamin Gilbert <bgilbert@cs.cmu.edu>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 (Linus Torvalds, 2009-09-11; 1 file, -11/+26)

    * git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (102 commits)
      crypto: sha-s390 - Fix warnings in import function
      crypto: vmac - New hash algorithm for intel_txt support
      crypto: api - Do not displace newly registered algorithms
      crypto: ansi_cprng - Fix module initialization
      crypto: xcbc - Fix alignment calculation of xcbc_tfm_ctx
      crypto: fips - Depend on ansi_cprng
      crypto: blkcipher - Do not use eseqiv on stream ciphers
      crypto: ctr - Use chainiv on raw counter mode
      Revert crypto: fips - Select CPRNG
      crypto: rng - Fix typo
      crypto: talitos - add support for 36 bit addressing
      crypto: talitos - align locks on cache lines
      crypto: talitos - simplify hmac data size calculation
      crypto: mv_cesa - Add support for Orion5X crypto engine
      crypto: cryptd - Add support to access underlying shash
      crypto: gcm - Use GHASH digest algorithm
      crypto: ghash - Add GHASH digest algorithm for GCM
      crypto: authenc - Convert to ahash
      crypto: api - Fix aligned ctx helper
      crypto: hmac - Prehash ipad/opad
      ...
| * crypto: api - Fix aligned ctx helper (Herbert Xu, 2009-07-24; 1 file, -6/+2)

    The aligned ctx helper was using a bogus alignment value that was one off from the correct value. Fortunately the current users do not require anything beyond the natural alignment of the platform, so this hasn't caused a problem.

    This patch fixes that and also removes the unnecessary minimum check, since if the alignment is less than the natural alignment then the subsequent ALIGN operation should be a no-op.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: cryptd - Switch to template create API (Herbert Xu, 2009-07-14; 1 file, -0/+3)

    This patch changes cryptd to use the template->create function instead of alloc, in anticipation of the switch to new style ahash algorithms.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: api - Remove frontend argument from extsize/init_tfm (Herbert Xu, 2009-07-14; 1 file, -4/+2)

    As the extsize and init_tfm functions belong to the frontend, the frontend argument is superfluous.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: api - Add crypto_attr_alg2 helper (Herbert Xu, 2009-07-08; 1 file, -1/+10)

    This patch adds the helper crypto_attr_alg2 which is similar to crypto_attr_alg but takes an extra frontend argument. This is intended to be used by new style algorithm types such as shash.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: api - Add new style spawn support (Herbert Xu, 2009-07-08; 1 file, -0/+6)

    This patch modifies the spawn infrastructure to support new style algorithms like shash. In particular, this means storing the frontend type in the spawn and using crypto_create_tfm to allocate the tfm.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: api - Add crypto_alloc_instance2 (Herbert Xu, 2009-07-07; 1 file, -0/+2)

    This patch adds a new argument to crypto_alloc_instance which sets aside some space before the instance for use by algorithms such as shash that place type-specific data before crypto_alg.

    For compatibility the function has been renamed so that existing users aren't affected.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: api - Add new template create function (Herbert Xu, 2009-07-07; 1 file, -0/+1)

    This patch introduces the template->create function intended to replace the existing alloc function. The intention is for create to handle the registration directly, whereas currently the caller of alloc has to handle the registration.

    This allows type-specific code to be run prior to registration.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | crypto: skcipher - Fix skcipher_dequeue_givcrypt NULL test (Herbert Xu, 2009-08-29; 1 file, -0/+1)

    As struct skcipher_givcrypt_request includes struct crypto_request at a non-zero offset, testing for NULL after converting the pointer returned by crypto_dequeue_request does not work. This can result in IPsec crashes when the queue is depleted.

    This patch fixes it by doing the pointer conversion only when the return value is non-NULL. In particular, we create a new function __crypto_dequeue_request that does the pointer conversion.

    Reported-by: Brad Bosch <bradbosch@comcast.net>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: hash - Export shash through hash (Herbert Xu, 2008-12-25; 1 file, -0/+5)

    This patch allows shash algorithms to be used through the old hash interface. This is a transitional measure so we can convert the underlying algorithms to shash before converting the users across.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: api - Rebirth of crypto_alloc_tfm (Herbert Xu, 2008-12-25; 1 file, -0/+10)

    This patch reintroduces a completely revamped crypto_alloc_tfm. The biggest change is that we now take two crypto_type objects when allocating a tfm, a frontend and a backend. In fact this simply formalises what we've been doing behind the API's back.

    For example, as it stands crypto_alloc_ahash may use an actual ahash algorithm or a crypto_hash algorithm. Putting this in the API allows us to do this much more cleanly.

    The existing types will be converted across gradually.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: api - Move type exit function into crypto_tfm (Herbert Xu, 2008-12-25; 1 file, -1/+0)

    The type exit function needs to undo any allocations done by the type init function. However, the type init function may differ depending on the upper-level type of the transform (e.g., a crypto_blkcipher instantiated as a crypto_ablkcipher). So we need to move the exit function out of the lower-level structure and into crypto_tfm itself.

    As it stands this is a no-op since nobody uses exit functions at all. However, all cases where a lower-level type is instantiated as a different upper-level type (such as blkcipher as ablkcipher) will be converted such that they allocate the underlying transform and use that instead of casting (e.g., crypto_ablkcipher cast into crypto_blkcipher). That will need to use a different exit function depending on the upper-level type.

    This patch also allows the type init/exit functions to call (or not) cra_init/cra_exit instead of always calling them from the top level.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] skcipher: Remove crypto_spawn_ablkcipher (Herbert Xu, 2008-01-10; 1 file, -8/+0)

    Now that gcm and authenc have been converted to crypto_spawn_skcipher, this patch removes the obsolete crypto_spawn_ablkcipher function.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] skcipher: Add crypto_grab_skcipher interface (Herbert Xu, 2008-01-10; 1 file, -4/+18)

    Note: From now on the collective of ablkcipher/blkcipher/givcipher will be known as skcipher, i.e., symmetric key cipher. The name blkcipher has always been something of a misnomer since it supports stream ciphers too.

    This patch adds the function crypto_grab_skcipher as a new way of getting an ablkcipher spawn. The problem is that previously we did this in two steps, first getting the algorithm and then calling crypto_init_spawn. This meant that each spawn user had to be aware of what type and mask to use for these two steps. This is difficult and also presents a problem when the type/mask changes, as they're about to be for IV generators.

    The new interface does both steps together, just like crypto_alloc_ablkcipher. As a side-effect this also allows us to be stronger on type enforcement for spawns. For now this is only done for ablkcipher but it's trivial to extend for other types.

    This patch also moves the type/mask logic for skcipher into the helpers crypto_skcipher_type and crypto_skcipher_mask.

    Finally this patch introduces the function crypto_requires_sync to determine whether the user is specifically requesting a sync algorithm.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: Add crypto_attr_alg_name (Herbert Xu, 2008-01-10; 1 file, -0/+1)

    This patch adds a new helper crypto_attr_alg_name which is basically the first half of crypto_attr_alg. That is, it returns an algorithm name parameter as a string without looking it up. The caller can then look it up immediately or defer it until later.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
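    A sketch of the deferred-lookup pattern the helper enables in template instantiation code:

        const char *cipher_name;

        cipher_name = crypto_attr_alg_name(tb[1]);
        if (IS_ERR(cipher_name))
            return PTR_ERR(cipher_name);

        /* the name can now be stashed and resolved later, instead of
         * being looked up immediately as crypto_attr_alg() does */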
* [CRYPTO] api: Add crypto_inc and crypto_xor (Herbert Xu, 2008-01-10; 1 file, -0/+4)

    With the addition of more stream ciphers we need to curb the proliferation of ad-hoc xor functions. This patch creates a generic pair of functions, crypto_inc and crypto_xor, which do big-endian increment and exclusive or, respectively.

    For optimum performance, they both use u32 operations, so the alignment must be that of u32 even though the arguments are of type u8 *.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
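    A counter-mode style sketch of the pair in use (encrypt_block() is a hypothetical stand-in for the underlying block cipher call):

        u8 ctrblk[16], keystream[16];

        while (nbytes >= 16) {
            encrypt_block(tfm, keystream, ctrblk); /* hypothetical */
            memcpy(dst, src, 16);
            crypto_xor(dst, keystream, 16);  /* dst ^= keystream */
            crypto_inc(ctrblk, 16);          /* big-endian counter += 1 */
            src += 16;
            dst += 16;
            nbytes -= 16;
        }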
* [CRYPTO] ablkcipher: Add distinct ABLKCIPHER type (Herbert Xu, 2008-01-10; 1 file, -2/+2)

    Up until now, ablkcipher algorithms have been identified as type BLKCIPHER with the ASYNC bit set. This is suboptimal because ablkcipher refers to two things. On the one hand it refers to the top-level ablkcipher interface with requests. On the other hand it refers to an algorithm type underneath.

    As it is, you cannot request a synchronous block cipher algorithm with the ablkcipher interface on top. This is a problem because we want to be able to eventually phase out the blkcipher top-level interface.

    This patch fixes this by making ABLKCIPHER its own type, just as we have distinct types for HASH and DIGEST. The type is associated with the algorithm implementation only.

    Which top-level interface is used for synchronous block ciphers is then determined by the mask that's used. If it's a specific mask then the old blkcipher interface is given, otherwise we go with the new ablkcipher interface.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] blkcipher: Added blkcipher_walk_virt_block (Herbert Xu, 2007-10-11; 1 file, -0/+4)

    This patch adds the helper blkcipher_walk_virt_block, which is similar to blkcipher_walk_virt but uses a supplied block size instead of the block size of the block cipher. This is useful for CTR, where the block size is 1 but we still want to walk by the block size of the underlying cipher.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
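    A CTR-style walk sketch, assuming the underlying cipher's block size is AES_BLOCK_SIZE:

        struct blkcipher_walk walk;
        int err;

        blkcipher_walk_init(&walk, dst, src, nbytes);
        err = blkcipher_walk_virt_block(desc, &walk, AES_BLOCK_SIZE);

        while (walk.nbytes >= AES_BLOCK_SIZE) {
            /* process walk.src.virt.addr into walk.dst.virt.addr */
            err = blkcipher_walk_done(desc, &walk,
                                      walk.nbytes % AES_BLOCK_SIZE);
        }
        /* any remaining walk.nbytes < AES_BLOCK_SIZE is the CTR tail;
         * handle it, then call blkcipher_walk_done(desc, &walk, 0) */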