Commit message | Author | Date | Files | Lines
* crypto: x86/nhpoly1305 - add AVX2 accelerated NHPoly1305 (Eric Biggers, 2018-12-13, 4 files, -0/+245)
  Add a 64-bit AVX2 implementation of NHPoly1305, an ε-almost-∆-universal hash function used in the Adiantum encryption mode. For now, only the NH portion is actually AVX2-accelerated; the Poly1305 part is less performance-critical so is just implemented in C.
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/nhpoly1305 - add SSE2 accelerated NHPoly1305 (Eric Biggers, 2018-12-13, 4 files, -0/+211)
  Add a 64-bit SSE2 implementation of NHPoly1305, an ε-almost-∆-universal hash function used in the Adiantum encryption mode. For now, only the NH portion is actually SSE2-accelerated; the Poly1305 part is less performance-critical so is just implemented in C.
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: adiantum - propagate CRYPTO_ALG_ASYNC flag to instance (Eric Biggers, 2018-12-13, 1 file, -0/+2)
  If the stream cipher implementation is asynchronous, then the Adiantum instance must be flagged as asynchronous as well. Otherwise someone asking for a synchronous algorithm can get an asynchronous algorithm. There are no asynchronous xchacha12 or xchacha20 implementations yet, which makes this largely a theoretical issue, but it should be fixed.
  Fixes: 059c2a4d8e16 ("crypto: adiantum - add Adiantum support")
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
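  A minimal sketch of the kind of fix described, for the template's create() path (the field names follow the kernel crypto API; the exact line in adiantum.c is an assumption, not the literal diff):

      /* In adiantum_create(): carry the ASYNC flag of the underlying stream
       * cipher over to the instance, so a caller requesting a synchronous
       * algorithm never ends up with an asynchronous one. */
      inst->alg.base.cra_flags = streamcipher_alg->base.cra_flags &
                                 CRYPTO_ALG_ASYNC;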
* crypto: arm64/chacha - use combined SIMD/ALU routine for more speed (Ard Biesheuvel, 2018-12-13, 2 files, -35/+239)
  To some degree, most known AArch64 micro-architectures appear to be able to issue ALU instructions in parallel to SIMD instructions without affecting the SIMD throughput. This means we can use the ALU to process a fifth ChaCha block while the SIMD is processing four blocks in parallel.
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm64/chacha - optimize for arbitrary length inputs (Ard Biesheuvel, 2018-12-13, 2 files, -37/+184)
  Update the 4-way NEON ChaCha routine so it can handle input of any length >64 bytes in its entirety, rather than having to call into the 1-way routine and/or memcpy()s via temp buffers to handle the tail of a ChaCha invocation that is not a multiple of 256 bytes. On inputs that are a multiple of 256 bytes (and thus in tcrypt benchmarks), performance drops by around 1% on Cortex-A57, while performance for inputs drawn randomly from the range [64, 1024) increases by around 30%.
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: tcrypt - add block size of 1472 to skcipher template (Ard Biesheuvel, 2018-12-13, 1 file, -1/+1)
  In order to have better coverage of algorithms operating on block sizes that are in the ballpark of a VPN packet, add 1472 to the block_sizes array.
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: cavium/nitrox - Enabled Mailbox support (Srikanth Jampala, 2018-12-13, 11 files, -54/+441)
  Enable PF->VF mailbox support. Mailbox messages are interpreted as {type, opcode, data}. Supported message types are REQ, ACK and NACK.
  Signed-off-by: Srikanth Jampala <Jampala.Srikanth@cavium.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
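  A rough illustration of what a {type, opcode, data} mailbox message layout can look like; the specific bit widths here are an assumption for illustration, not taken from the nitrox driver:

      union mbox_msg {
          u64 value;
          struct {
              u64 type:2;     /* REQ, ACK or NACK */
              u64 opcode:6;   /* message-specific command */
              u64 data:56;    /* payload */
          };
      };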
* crypto: arm64/chacha - add XChaCha12 support (Eric Biggers, 2018-12-13, 2 files, -1/+19)
  Now that the ARM64 NEON implementation of ChaCha20 and XChaCha20 has been refactored to support varying the number of rounds, add support for XChaCha12. This is identical to XChaCha20 except for the number of rounds, which is 12 instead of 20. This can be used by Adiantum.
  Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm64/chacha20 - refactor to allow varying number of rounds (Eric Biggers, 2018-12-13, 3 files, -49/+57)
  In preparation for adding XChaCha12 support, rename/refactor the ARM64 NEON implementation of ChaCha20 to support different numbers of rounds.
  Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm64/chacha20 - add XChaCha20 support (Eric Biggers, 2018-12-13, 3 files, -43/+125)
  Add an XChaCha20 implementation that is hooked up to the ARM64 NEON implementation of ChaCha20. This can be used by Adiantum. A NEON implementation of single-block HChaCha20 is also added so that XChaCha20 can use it rather than the generic implementation. This required refactoring the ChaCha20 permutation into its own function.
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm64/nhpoly1305 - add NEON-accelerated NHPoly1305 (Eric Biggers, 2018-12-13, 4 files, -0/+188)
  Add an ARM64 NEON implementation of NHPoly1305, an ε-almost-∆-universal hash function used in the Adiantum encryption mode. For now, only the NH portion is actually NEON-accelerated; the Poly1305 part is less performance-critical so is just implemented in C.
  Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> # big-endian
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: cavium/nitrox - convert to DEFINE_SHOW_ATTRIBUTE (Yangtao Li, 2018-12-07, 1 file, -39/+9)
  Use the DEFINE_SHOW_ATTRIBUTE macro to simplify the code.
  Signed-off-by: Yangtao Li <tiny.windzz@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
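  For reference, the usual pattern with this macro looks roughly as follows; the _show function and file names here are hypothetical, only the macro and debugfs call are standard kernel API:

      static int nitrox_stats_show(struct seq_file *s, void *unused)
      {
          /* print driver state into the seq_file */
          seq_printf(s, "...\n");
          return 0;
      }
      /* expands to nitrox_stats_open() and nitrox_stats_fops wrapping single_open() */
      DEFINE_SHOW_ATTRIBUTE(nitrox_stats);

      /* later, when creating the debugfs file: */
      debugfs_create_file("stats", 0400, parent, ndev, &nitrox_stats_fops);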
* crypto: chcr - ESN for Inline IPSec Tx (Atul Gupta, 2018-12-07, 2 files, -36/+148)
  Send the SPI, 64-bit sequence numbers and 64-bit IV with aadiv drop for inline crypto. This information is added to the outgoing packet after the CPL TX PKT XT and removed by hardware. The aad, auth and cipher offsets are then adjusted for an ESN-enabled tunnel.
  Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: chcr - small packet Tx stalls the queue (Atul Gupta, 2018-12-07, 1 file, -1/+4)
  Immediate packets sent to hardware should include the work request (WR) length when calculating the flits. The WR occupies one flit; if it is not accounted for, the result is an invalid request which stalls the HW queue.
  Cc: stable@vger.kernel.org
  Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: user - Add crypto_stats_init (Corentin Labbe, 2018-12-07, 2 files, -3/+10)
  This patch adds the crypto_stats_init() function, which permits removing some ifdefs from __crypto_register_alg().
  Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: user - rename err_cnt parameter (Corentin Labbe, 2018-12-07, 5 files, -58/+58)
  Now that all crypto stats live in their own per-algorithm structures, there is no need to embed the algorithm name in the err_cnt member name.
  Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: user - Split stats in multiple structures (Corentin Labbe, 2018-12-07, 3 files, -160/+210)
  Like for userspace, this patch splits the stats into multiple structures, one for each algorithm class.
  Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: user - remove intermediate variable (Corentin Labbe, 2018-12-07, 1 file, -91/+41)
  The v64 intermediate variable is unnecessary; removing it makes the code much more readable.
  Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: user - Fix invalid stat reporting (Corentin Labbe, 2018-12-07, 1 file, -3/+3)
  Some error counters are read using the wrong member name. This has not caused any reporting problem so far, since all error counters share the same union.
  Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: user - fix use_after_free of struct xxx_request (Corentin Labbe, 2018-12-07, 11 files, -276/+376)
  All crypto_stats functions use the struct xxx_request for feeding stats, but in some cases this structure may already have been freed. To fix this, the needed parameters (len and alg) are now captured before the request is executed.
  Fixes: cac5818c25d0 ("crypto: user - Implement a generic crypto statistics")
  Reported-by: syzbot <syzbot+6939a606a5305e9e9799@syzkaller.appspotmail.com>
  Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
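  A rough sketch of the capture-before-execute idea on the skcipher path; the stats helper name and exact wrapper shape are assumptions for illustration, not the literal patch:

      static inline int crypto_skcipher_encrypt(struct skcipher_request *req)
      {
          struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
          struct crypto_alg *alg = tfm->base.__crt_alg;
          unsigned int cryptlen = req->cryptlen; /* capture before req may be freed */
          int ret;

          ret = tfm->encrypt(req);
          /* report using the saved values, never by dereferencing req again */
          crypto_stats_skcipher_encrypt(cryptlen, ret, alg);
          return ret;
      }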
* crypto: tool: getstat: convert user space example to the new crypto_user_stat uapi (Corentin Labbe, 2018-12-07, 1 file, -27/+27)
  This patch converts the getstat example tool to the recent changes done in crypto_user_stat:
  - changed all stats to u64
  - separated struct stats for each crypto alg
  Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: user - split user space crypto stat structures (Corentin Labbe, 2018-12-07, 2 files, -48/+72)
  It is cleaner to have each stat in its own structure.
  Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: user - convert all stats from u32 to u64 (Corentin Labbe, 2018-12-07, 11 files, -141/+133)
  All the 32-bit fields need to be 64-bit. In some cases, UINT32_MAX crypto operations can be performed in seconds, so a 32-bit counter would quickly overflow.
  Reported-by: Eric Biggers <ebiggers@kernel.org>
  Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: user - CRYPTO_STATS should depend on CRYPTO_USER (Corentin Labbe, 2018-12-07, 1 file, -0/+1)
  CRYPTO_STATS uses CRYPTO_USER functionality, so it should depend on it.
  Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: user - made crypto_user_stat optional (Corentin Labbe, 2018-12-07, 4 files, -1/+23)
  Even if CRYPTO_STATS is set to n, some parts of the crypto stats code are still compiled. With this patch, no part of crypto_user_stat is compiled in that case.
  Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* MAINTAINERS: change NX/VMX maintainers (Paulo Flabiano Smorigo, 2018-12-07, 1 file, -4/+6)
  Add Breno and Nayna as NX/VMX crypto driver maintainers. Also change my email address to my personal account and remove Leonidas, since he's not working with the driver anymore.
  Signed-off-by: Paulo Flabiano Smorigo <pfsmorigo@linux.vnet.ibm.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* MAINTAINERS: ccree: add co-maintainer (Gilad Ben-Yossef, 2018-12-07, 1 file, -0/+1)
  Add Yael Chemla as co-maintainer of the Arm TrustZone CryptoCell REE driver.
  Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* dt-bindings: crypto: ccree: add dt bindings for ccree 703 (Gilad Ben-Yossef, 2018-12-07, 1 file, -0/+1)
  Add device tree bindings associating Arm TrustZone CryptoCell 703 with the ccree driver.
  Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
  Reviewed-by: Rob Herring <robh@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: ccree - add support for CryptoCell 703 (Gilad Ben-Yossef, 2018-12-07, 6 files, -9/+90)
  Add support for Arm TrustZone CryptoCell 703. The 703 is a variant of the CryptoCell 713 that supports only algorithms certified by the Chinese Office of the State Commercial Cryptography Administration (OSCCA).
  Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 (Herbert Xu, 2018-12-07, 3 files, -6/+12)
  Merge the crypto tree to pick up the crypto stats API revert.
| * crypto: user - Disable statistics interface (Herbert Xu, 2018-12-07, 1 file, -1/+1)
    Since this user-space API is still undergoing significant changes, this patch disables it for the current merge window.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: do not free algorithm before using (Pan Bian, 2018-11-29, 3 files, -6/+12)
    In multiple functions, the algorithm's fields are read after its reference is dropped through crypto_mod_put. In this case, the algorithm memory may be freed, resulting in use-after-free bugs. This patch delays the put operation until the algorithm is no longer used.
    Fixes: 79c65d179a40 ("crypto: cbc - Convert to skcipher")
    Fixes: a7d85e06ed80 ("crypto: cfb - add support for Cipher FeedBack mode")
    Fixes: 043a44001b9e ("crypto: pcbc - Convert to skcipher")
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Pan Bian <bianpan2016@163.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
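    The general shape of such a fix, sketched on a simplified template create() path (an illustration of the pattern, not the literal diff):

        alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_CIPHER, CRYPTO_ALG_TYPE_MASK);
        if (IS_ERR(alg))
            return PTR_ERR(alg);

        /* Buggy ordering: calling crypto_mod_put(alg) here would allow alg
         * to be freed before the reads below. */
        inst->alg.base.cra_blocksize = alg->cra_blocksize;
        inst->alg.base.cra_alignmask = alg->cra_alignmask;

        /* Fixed ordering: drop the reference only once alg is no longer
         * dereferenced, on both the success and error paths. */
        crypto_mod_put(alg);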
* | crypto: cavium/nitrox - Enable interrupts for PF in SR-IOV mode (Srikanth Jampala, 2018-11-29, 4 files, -10/+142)
    Enable the available interrupt vectors for the PF in SR-IOV mode. Only the single vector entry 192 is valid for the PF. This is used to notify of any hardware errors and mailbox messages from the VF(s).
    Signed-off-by: Srikanth Jampala <Jampala.Srikanth@cavium.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | crypto: cavium/nitrox - crypto request format changes (Nagadheeraj Rottela, 2018-11-29, 3 files, -244/+227)
    nitrox_skcipher_crypt() will do the necessary formatting/ordering of input and output sglists based on the algorithm requirements. It will also accommodate the mandatory output buffers required by the NITROX hardware, such as Output Request Headers (ORH) and Completion Headers.
    Signed-off-by: Nagadheeraj Rottela <rottela.nagadheeraj@cavium.com>
    Reviewed-by: Srikanth Jampala <Jampala.Srikanth@cavium.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | crypto: x86/chacha20 - Add a 4-block AVX-512VL variant (Martin Willi, 2018-11-29, 2 files, -0/+279)
    This version uses the same principle as the AVX2 version by scheduling the operations for two block pairs in parallel. It benefits from the AVX-512VL rotate instructions and the more efficient partial block handling using "vmovdqu8", resulting in a speedup of the raw block function of ~20%.
    Signed-off-by: Martin Willi <martin@strongswan.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | crypto: x86/chacha20 - Add a 2-block AVX-512VL variant (Martin Willi, 2018-11-29, 2 files, -0/+178)
    This version uses the same principle as the AVX2 version. It benefits from the AVX-512VL rotate instructions and the more efficient partial block handling using "vmovdqu8", resulting in a speedup of ~20%. Unlike the AVX2 version, it is faster than the single-block SSSE3 version at processing a single block, hence we engage that function for (partial) single block lengths as well.
    Signed-off-by: Martin Willi <martin@strongswan.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | crypto: x86/chacha20 - Add an 8-block AVX-512VL variant (Martin Willi, 2018-11-29, 3 files, -0/+427)
    This variant is similar to the AVX2 version, but benefits from the AVX-512 rotate instructions and the additional registers, so it can operate without any data on the stack. It uses ymm registers only to avoid the massive core throttling on Skylake-X platforms. Nonetheless, it brings a ~30% speed improvement compared to the AVX2 variant for random encryption lengths.
    The AVX2 version uses "rep movsb" for partial block XORing via the stack. With AVX-512, the new "vmovdqu8" can do this much more efficiently. The associated "kmov" instructions for working with dynamic masks are not part of the AVX-512VL instruction set, hence we depend on AVX-512BW as well. Given that the major AVX-512VL architectures provide AVX-512BW and this extension does not affect core clocking, this seems to be no problem, at least for now.
    Signed-off-by: Martin Willi <martin@strongswan.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | crypto: adiantum - add Adiantum support (Eric Biggers, 2018-11-20, 6 files, -0/+1167)
    Add support for the Adiantum encryption mode. Adiantum was designed by Paul Crowley and is specified by our paper: Adiantum: length-preserving encryption for entry-level processors (https://eprint.iacr.org/2018/720.pdf). See our paper for full details; this patch only provides an overview.

    Adiantum is a tweakable, length-preserving encryption mode designed for fast and secure disk encryption, especially on CPUs without dedicated crypto instructions. Adiantum encrypts each sector using the XChaCha12 stream cipher, two passes of an ε-almost-∆-universal (εA∆U) hash function, and an invocation of the AES-256 block cipher on a single 16-byte block. On CPUs without AES instructions, Adiantum is much faster than AES-XTS; for example, on ARM Cortex-A7, on 4096-byte sectors Adiantum encryption is about 4 times faster than AES-256-XTS encryption, and decryption about 5 times faster.

    Adiantum is a specialization of the more general HBSH construction. Our earlier proposal, HPolyC, was also an HBSH specialization, but it used a different εA∆U hash function, one based on Poly1305 only. Adiantum's εA∆U hash function, which is based primarily on the "NH" hash function like that used in UMAC (RFC 4418), is about twice as fast as HPolyC's; consequently, Adiantum is about 20% faster than HPolyC. This speed comes with no loss of security: Adiantum is provably just as secure as HPolyC, in fact slightly *more* secure. Like HPolyC, Adiantum's security is reducible to that of XChaCha12 and AES-256, subject to a security bound. XChaCha12 itself has a security reduction to ChaCha12. Therefore, one need not "trust" Adiantum; one need only trust ChaCha12 and AES-256. Note that the εA∆U hash function is only used for its proven combinatorial properties, so it cannot be "broken".

    Adiantum is also a true wide-block encryption mode, so flipping any plaintext bit in the sector scrambles the entire ciphertext, and vice versa. No other such mode is available in the kernel currently; doing the same with XTS scrambles only 16 bytes. Adiantum also supports arbitrary-length tweaks and naturally supports any input length >= 16 bytes without needing "ciphertext stealing".

    For the stream cipher, Adiantum uses XChaCha12 rather than XChaCha20 in order to make encryption feasible on the widest range of devices. Although the 20-round variant is quite popular, the best known attacks on ChaCha are on only 7 rounds, so ChaCha12 still has a substantial security margin; in fact, larger than AES-256's. 12-round Salsa20 is also the eSTREAM recommendation. For the block cipher, Adiantum uses AES-256, despite it having a lower security margin than XChaCha12 and needing table lookups, due to AES's extensive adoption and analysis making it the obvious first choice. Nevertheless, for flexibility this patch also permits the "adiantum" template to be instantiated with XChaCha20 and/or with an alternate block cipher.

    We need Adiantum support in the kernel for use in dm-crypt and fscrypt, where currently the only other suitable options are block cipher modes such as AES-XTS. A big problem with this is that many low-end mobile devices (e.g. Android Go phones sold primarily in developing countries, as well as some smartwatches) still have CPUs that lack AES instructions, e.g. ARM Cortex-A7. Sadly, AES-XTS encryption is much too slow to be viable on these devices. We did find that some "lightweight" block ciphers are fast enough, but these suffer from problems such as not having much cryptanalysis or being too controversial.

    The ChaCha stream cipher has excellent performance but is insecure to use directly for disk encryption, since each sector's IV is reused each time it is overwritten. Even restricting the threat model to offline attacks only isn't enough, since modern flash storage devices don't guarantee that "overwrites" are really overwrites, due to wear-leveling. Adiantum avoids this problem by constructing a "tweakable super-pseudorandom permutation"; this is the strongest possible security model for length-preserving encryption.

    Of course, storing random nonces along with the ciphertext would be the ideal solution. But doing that with existing hardware and filesystems runs into major practical problems; in most cases it would require data journaling (like dm-integrity), which severely degrades performance. Thus, for now, length-preserving encryption is still needed.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
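    As a hedged illustration of how an in-kernel user might instantiate the template through the crypto API (the algorithm name matches the XChaCha12 + AES-256 instantiation described above; the helper function is hypothetical and error handling is minimal):

        #include <linux/err.h>
        #include <linux/types.h>
        #include <crypto/skcipher.h>

        static struct crypto_skcipher *adiantum_alloc(const u8 key[32])
        {
            struct crypto_skcipher *tfm;

            /* XChaCha12 + AES-256 instantiation of the "adiantum" template */
            tfm = crypto_alloc_skcipher("adiantum(xchacha12,aes)", 0, 0);
            if (IS_ERR(tfm))
                return tfm;

            /* Adiantum takes a single 32-byte key; subkeys are derived internally */
            if (crypto_skcipher_setkey(tfm, key, 32)) {
                crypto_free_skcipher(tfm);
                return ERR_PTR(-EINVAL);
            }
            return tfm;
        }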
* | crypto: arm/nhpoly1305 - add NEON-accelerated NHPoly1305 (Eric Biggers, 2018-11-20, 4 files, -0/+200)
    Add an ARM NEON implementation of NHPoly1305, an ε-almost-∆-universal hash function used in the Adiantum encryption mode. For now, only the NH portion is actually NEON-accelerated; the Poly1305 part is less performance-critical so is just implemented in C.
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | crypto: nhpoly1305 - add NHPoly1305 support (Eric Biggers, 2018-11-20, 6 files, -4/+1576)
    Add a generic implementation of NHPoly1305, an ε-almost-∆-universal hash function used in the Adiantum encryption mode. CONFIG_NHPOLY1305 is not selectable by itself, since there won't be any real reason to enable it without also enabling Adiantum support.
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
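    For orientation, the NH layer referred to here is the UMAC-style NH hash (RFC 4418). In its basic form, over 32-bit message words m_i and key words k_i, a minimal statement is:

        \mathrm{NH}_K(M) = \sum_{i=0}^{\ell/2-1}
            \bigl((m_{2i} + k_{2i}) \bmod 2^{32}\bigr) \cdot
            \bigl((m_{2i+1} + k_{2i+1}) \bmod 2^{32}\bigr) \bmod 2^{64}

    Adiantum's variant differs in its exact parameters (it produces several such sums using shifted key offsets), so take this only as the general shape; the Poly1305 layer mentioned above is then evaluated over the NH outputs.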
* | crypto: poly1305 - add Poly1305 core API (Eric Biggers, 2018-11-20, 2 files, -75/+115)
    Expose a low-level Poly1305 API which implements the ε-almost-∆-universal (εA∆U) hash function underlying the Poly1305 MAC and supports block-aligned inputs only.
    This is needed for Adiantum hashing, which builds an εA∆U hash function from NH and a polynomial evaluation in GF(2^{130}-5); this polynomial evaluation is identical to the one the Poly1305 MAC does. However, the crypto_shash Poly1305 API isn't very appropriate for this because its calling convention assumes it is used as a MAC, with a 32-byte "one-time key" provided for every digest.
    But by design, in Adiantum hashing the performance of the polynomial evaluation isn't nearly as critical as NH. So it suffices to just have some C helper functions. Thus, this patch adds such functions.
    Acked-by: Martin Willi <martin@strongswan.org>
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
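    A sketch of the shape such a low-level helper API takes; the exact prototypes below are assumptions based on the description above and the key/accumulator structures from the previous patch, not copied from the diff:

        struct poly1305_key   { u32 r[5]; };   /* clamped "r" key, 26-bit limbs */
        struct poly1305_state { u32 h[5]; };   /* accumulator, 26-bit limbs */

        void poly1305_core_setkey(struct poly1305_key *key, const u8 *raw_key);
        void poly1305_core_init(struct poly1305_state *state);
        void poly1305_core_blocks(struct poly1305_state *state,
                                  const struct poly1305_key *key,
                                  const void *src, unsigned int nblocks);
        void poly1305_core_emit(const struct poly1305_state *state, void *dst);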
* | crypto: poly1305 - use structures for key and accumulator (Eric Biggers, 2018-11-20, 3 files, -37/+47)
    In preparation for exposing a low-level Poly1305 API which implements the ε-almost-∆-universal (εA∆U) hash function underlying the Poly1305 MAC and supports block-aligned inputs only, create structures poly1305_key and poly1305_state which hold the limbs of the Poly1305 "r" key and accumulator, respectively. These structures could actually have the same type (e.g. poly1305_val), but different types are preferable, to prevent misuse.
    Acked-by: Martin Willi <martin@strongswan.org>
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | crypto: arm/chacha - add XChaCha12 support (Eric Biggers, 2018-11-20, 2 files, -2/+21)
    Now that the 32-bit ARM NEON implementation of ChaCha20 and XChaCha20 has been refactored to support varying the number of rounds, add support for XChaCha12. This is identical to XChaCha20 except for the number of rounds, which is 12 instead of 20. XChaCha12 is faster than XChaCha20 but has a lower security margin, though still greater than AES-256's, since the best known attacks make it through only 7 rounds. See the patch "crypto: chacha - add XChaCha12 support" for more details about why we need XChaCha12 support.
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | crypto: arm/chacha20 - refactor to allow varying number of rounds (Eric Biggers, 2018-11-20, 3 files, -48/+56)
    In preparation for adding XChaCha12 support, rename/refactor the NEON implementation of ChaCha20 to support different numbers of rounds.
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | crypto: arm/chacha20 - add XChaCha20 support (Eric Biggers, 2018-11-20, 3 files, -49/+126)
    Add an XChaCha20 implementation that is hooked up to the ARM NEON implementation of ChaCha20. This is needed for use in the Adiantum encryption mode; see the generic code patch, "crypto: chacha20-generic - add XChaCha20 support", for more details. We also update the NEON code to support HChaCha20 on one block, so we can use that in XChaCha20 rather than calling the generic HChaCha20. This required factoring the permutation out into its own macro.
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | crypto: arm/chacha20 - limit the preemption-disabled section (Eric Biggers, 2018-11-20, 1 file, -3/+3)
    To improve responsiveness, disable preemption for each step of the walk (which is at most PAGE_SIZE) rather than for the entire encryption/decryption operation.
    Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
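    The resulting glue-code pattern looks roughly like this (a sketch only; the helper name chacha20_doneon and the exact walk bookkeeping are assumptions based on the ARM glue code):

        err = skcipher_walk_virt(&walk, req, false);

        while (walk.nbytes > 0) {
            unsigned int nbytes = walk.nbytes;

            if (nbytes < walk.total)
                nbytes = round_down(nbytes, walk.stride);

            /* NEON (and thus preemption-off) only for this <= PAGE_SIZE step */
            kernel_neon_begin();
            chacha20_doneon(state, walk.dst.virt.addr, walk.src.virt.addr, nbytes);
            kernel_neon_end();

            err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
        }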
* | crypto: chacha - add XChaCha12 support (Eric Biggers, 2018-11-20, 6 files, -6/+625)
    Now that the generic implementation of ChaCha20 has been refactored to allow varying the number of rounds, add support for XChaCha12, which is the XSalsa construction applied to ChaCha12. ChaCha12 is one of the three ciphers specified by the original ChaCha paper (https://cr.yp.to/chacha/chacha-20080128.pdf: "ChaCha, a variant of Salsa20"), alongside ChaCha8 and ChaCha20. ChaCha12 is faster than ChaCha20 but has a lower, but still large, security margin.
    We need XChaCha12 support so that it can be used in the Adiantum encryption mode, which enables disk/file encryption on low-end mobile devices where AES-XTS is too slow as the CPUs lack AES instructions. We'd prefer XChaCha20 (the more popular variant), but it's too slow on some of our target devices, so at least in some cases we do need the XChaCha12-based version.
    In more detail, the problem is that Adiantum is still much slower than we're happy with, and encryption still has a quite noticeable effect on the feel of low-end devices. Users and vendors push back hard against encryption that degrades the user experience, which always risks encryption being disabled entirely. So we need to choose the fastest option that gives us a solid margin of security, and here that's XChaCha12. The best known attack on ChaCha breaks only 7 rounds and has 2^235 time complexity, so ChaCha12's security margin is still better than AES-256's. Much has been learned about cryptanalysis of ARX ciphers since Salsa20 was originally designed in 2005, and it now seems we can be comfortable with a smaller number of rounds. The eSTREAM project also suggests the 12-round version of Salsa20 as providing the best balance among the different variants: combining very good performance with a "comfortable margin of security".
    Note that it would be trivial to add vanilla ChaCha12 in addition to XChaCha12. However, it's unneeded for now and therefore is omitted.
    As discussed in the patch that introduced XChaCha20 support, I considered splitting the code into separate chacha-common, chacha20, xchacha20, and xchacha12 modules, so that these algorithms could be enabled/disabled independently. However, since nearly all the code is shared anyway, I ultimately decided there would have been little benefit to the added complexity.
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Acked-by: Martin Willi <martin@strongswan.org>
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | crypto: chacha20-generic - refactor to allow varying number of rounds (Eric Biggers, 2018-11-20, 14 files, -215/+232)
    In preparation for adding XChaCha12 support, rename/refactor chacha20-generic to support different numbers of rounds. The justification for needing XChaCha12 support is explained in more detail in the patch "crypto: chacha - add XChaCha12 support".
    The only difference between ChaCha{8,12,20} is the number of rounds itself; all other parts of the algorithm are the same. Therefore, remove the "20" from all definitions, structures, functions, files, etc. that will be shared by all ChaCha versions.
    Also make ->setkey() store the round count in the chacha_ctx (previously chacha20_ctx). The generic code then passes the round count through to chacha_block(). There will be a ->setkey() function for each explicitly allowed round count; the encrypt/decrypt functions will be the same. I decided not to do it the opposite way (same ->setkey() function for all round counts, with different encrypt/decrypt functions) because that would have required more boilerplate code in architecture-specific implementations of ChaCha and XChaCha.
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Acked-by: Martin Willi <martin@strongswan.org>
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
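    To make the round-count plumbing concrete, here is a small, self-contained sketch of a ChaCha block function parameterized on nrounds (illustrative user-space C following the well-known algorithm, not the kernel's exact code):

        #include <stdint.h>
        #include <string.h>

        #define ROTL32(v, n) (((v) << (n)) | ((v) >> (32 - (n))))
        #define QR(a, b, c, d) (                     \
            a += b, d ^= a, d = ROTL32(d, 16),       \
            c += d, b ^= c, b = ROTL32(b, 12),       \
            a += b, d ^= a, d = ROTL32(d, 8),        \
            c += d, b ^= c, b = ROTL32(b, 7))

        static void chacha_permute(uint32_t x[16], int nrounds)
        {
            /* nrounds is 20 or 12; each iteration is one double round */
            for (int i = 0; i < nrounds; i += 2) {
                /* column round */
                QR(x[0], x[4], x[8],  x[12]);
                QR(x[1], x[5], x[9],  x[13]);
                QR(x[2], x[6], x[10], x[14]);
                QR(x[3], x[7], x[11], x[15]);
                /* diagonal round */
                QR(x[0], x[5], x[10], x[15]);
                QR(x[1], x[6], x[11], x[12]);
                QR(x[2], x[7], x[8],  x[13]);
                QR(x[3], x[4], x[9],  x[14]);
            }
        }

        /* Generate one 64-byte keystream block from the 16-word state,
         * then advance the 32-bit block counter held in state[12]. */
        static void chacha_block(uint32_t state[16], uint8_t stream[64], int nrounds)
        {
            uint32_t x[16];

            memcpy(x, state, sizeof(x));
            chacha_permute(x, nrounds);

            for (int i = 0; i < 16; i++) {
                uint32_t v = x[i] + state[i];   /* feed-forward */

                stream[4 * i + 0] = (uint8_t)(v & 0xff);
                stream[4 * i + 1] = (uint8_t)((v >> 8) & 0xff);
                stream[4 * i + 2] = (uint8_t)((v >> 16) & 0xff);
                stream[4 * i + 3] = (uint8_t)((v >> 24) & 0xff);
            }
            state[12]++;
        }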
* | crypto: chacha20-generic - add XChaCha20 support (Eric Biggers, 2018-11-20, 5 files, -42/+689)
    Add support for the XChaCha20 stream cipher. XChaCha20 is the application of the XSalsa20 construction (https://cr.yp.to/snuffle/xsalsa-20081128.pdf) to ChaCha20 rather than to Salsa20. XChaCha20 extends ChaCha20's nonce length from 64 bits (or 96 bits, depending on convention) to 192 bits, while provably retaining ChaCha20's security. XChaCha20 uses the ChaCha20 permutation to map the key and first 128 nonce bits to a 256-bit subkey. Then, it does the ChaCha20 stream cipher with the subkey and remaining 64 bits of nonce.
    We need XChaCha support in order to add support for the Adiantum encryption mode. Note that to meet our performance requirements, we actually plan to primarily use the variant XChaCha12. But we believe it's wise to first add XChaCha20 as a baseline with a higher security margin, in case there are any situations where it can be used. Supporting both variants is straightforward.
    Since XChaCha20's subkey differs for each request, XChaCha20 can't be a template that wraps ChaCha20; that would require re-keying the underlying ChaCha20 for every request, which wouldn't be thread-safe. Instead, we make XChaCha20 its own top-level algorithm which calls the ChaCha20 streaming implementation internally.
    Similar to the existing ChaCha20 implementation, we define the IV to be the nonce and stream position concatenated together. This allows users to seek to any position in the stream.
    I considered splitting the code into separate chacha20-common, chacha20, and xchacha20 modules, so that chacha20 and xchacha20 could be enabled/disabled independently. However, since nearly all the code is shared anyway, I ultimately decided there would have been little benefit to the added complexity of separate modules.
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Acked-by: Martin Willi <martin@strongswan.org>
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
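    The subkey-derivation step described above (HChaCha20) can be sketched as follows, reusing the chacha_permute() helper from the earlier sketch; the function name here is illustrative:

        /* HChaCha20: key + first 128 bits of nonce -> 256-bit subkey.
         * Same initial state layout as ChaCha20, but with no feed-forward:
         * the subkey is words 0..3 and 12..15 of the permuted state. */
        static void hchacha20(const uint32_t key[8], const uint32_t nonce[4],
                              uint32_t subkey[8])
        {
            uint32_t x[16] = {
                0x61707865, 0x3320646e, 0x79622d32, 0x6b206574, /* "expand 32-byte k" */
                key[0], key[1], key[2], key[3],
                key[4], key[5], key[6], key[7],
                nonce[0], nonce[1], nonce[2], nonce[3],
            };

            chacha_permute(x, 20);

            memcpy(&subkey[0], &x[0],  4 * sizeof(uint32_t));
            memcpy(&subkey[4], &x[12], 4 * sizeof(uint32_t));
        }

    XChaCha20 then runs ordinary ChaCha20 with this subkey and the remaining 64 bits of nonce, exactly as the description above states.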
* | crypto: chacha20-generic - don't unnecessarily use atomic walk (Eric Biggers, 2018-11-20, 1 file, -1/+1)
    chacha20-generic doesn't use SIMD instructions or otherwise disable preemption, so passing atomic=true to skcipher_walk_virt() is unnecessary.
    Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Acked-by: Martin Willi <martin@strongswan.org>
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>