path: root/fpu
Commit message | Author | Date | Files | Lines (-/+)
* softfloat: Add scaling float-to-int routines | Richard Henderson | 2018-08-24 | 1 | -73/+316
* softfloat: Add scaling int-to-float routines | Richard Henderson | 2018-08-24 | 1 | -48/+136
* softfloat: Fix missing inexact for floating-point add | Richard Henderson | 2018-08-16 | 1 | -1/+1
* fpu/softfloat: Define floatN_silence_nan in terms of parts_silence_nan | Richard Henderson | 2018-05-18 | 2 | -77/+35
* fpu/softfloat: Clean up parts_default_nan | Richard Henderson | 2018-05-18 | 1 | -7/+14
* fpu/softfloat: Define floatN_default_nan in terms of parts_default_nan | Richard Henderson | 2018-05-18 | 2 | -99/+47
* fpu/softfloat: Pass FloatClass to pickNaNMulAdd | Richard Henderson | 2018-05-18 | 2 | -47/+28
* fpu/softfloat: Pass FloatClass to pickNaN | Richard Henderson | 2018-05-18 | 2 | -93/+86
* fpu/softfloat: Make is_nan et al available to softfloat-specialize.h | Richard Henderson | 2018-05-18 | 1 | -14/+16
* fpu/softfloat: Specialize on snan_bit_is_one | Richard Henderson | 2018-05-18 | 1 | -25/+43
* fpu/softfloat: Remove floatX_maybe_silence_nan | Richard Henderson | 2018-05-18 | 1 | -63/+0
* fpu/softfloat: Use float*_silence_nan in propagateFloat*NaN | Richard Henderson | 2018-05-18 | 1 | -10/+34
* fpu/softfloat: re-factor float to float conversions | Alex Bennée | 2018-05-18 | 2 | -410/+118
* fpu/softfloat: Partial support for ARM Alternative half-precision | Alex Bennée | 2018-05-18 | 1 | -3/+16
* fpu/softfloat: Replace float_class_msnan with parts_silence_nan | Richard Henderson | 2018-05-18 | 2 | -30/+33
* fpu/softfloat: Replace float_class_dnan with parts_default_nan | Richard Henderson | 2018-05-18 | 2 | -27/+48
* fpu/softfloat: Introduce parts_is_snan_frac | Richard Henderson | 2018-05-18 | 2 | -10/+17
* fpu/softfloat: Canonicalize NaN fraction | Richard Henderson | 2018-05-18 | 1 | -1/+6
* fpu/softfloat: Move softfloat-specialize.h below FloatParts definition | Richard Henderson | 2018-05-18 | 1 | -10/+10
* fpu/softfloat: Split floatXX_silence_nan from floatXX_maybe_silence_nan | Richard Henderson | 2018-05-18 | 1 | -56/+118
* fpu/softfloat: Merge NO_SIGNALING_NANS definitions | Richard Henderson | 2018-05-18 | 1 | -60/+40
* fpu/softfloat: Fix conversion from uint64 to float128 | Petr Tesarik | 2018-05-18 | 1 | -1/+1
* fpu/softfloat: Don't set Invalid for float-to-int(MAXINT) | Peter Maydell | 2018-05-15 | 1 | -2/+2
* fpu/softfloat: int_to_float ensure r fully initialised | Alex Bennée | 2018-05-15 | 1 | -1/+1
* softfloat: Handle default NaN mode after pickNaNMulAdd, not before | Peter Maydell | 2018-05-10 | 1 | -20/+28
* fpu: Bound increment for scalbn | Richard Henderson | 2018-04-17 | 1 | -0/+6
* fpu/softfloat: check for Inf / x or 0 / x before /0 | Alex Bennée | 2018-04-16 | 1 | -5/+5
* fpu/softfloat: raise float_invalid for NaN/Inf in round_to_int_and_pack | Alex Bennée | 2018-04-16 | 1 | -0/+3
* softfloat: fix {min, max}nummag for same-abs-value inputs | Emilio G. Cota | 2018-04-13 | 1 | -8/+9
* fpu: Fix rounding mode for floatN_to_uintM_round_to_zero | Richard Henderson | 2018-04-10 | 1 | -2/+2
* softfloat: fix crash on int conversion of SNaN | Stef O'Rear | 2018-03-09 | 1 | -0/+4
* RISC-V FPU Support | Michael Clark | 2018-03-06 | 1 | -3/+4
* softfloat: use floatx80_infinity in softfloat | Laurent Vivier | 2018-03-04 | 2 | -14/+39
* softfloat: export some functions | Laurent Vivier | 2018-03-04 | 3 | -924/+11
* fpu/softfloat: re-factor sqrt | Alex Bennée | 2018-02-21 | 1 | -111/+96
* fpu/softfloat: re-factor compare | Alex Bennée | 2018-02-21 | 1 | -54/+80
* fpu/softfloat: re-factor minmax | Alex Bennée | 2018-02-21 | 1 | -107/+120
* fpu/softfloat: re-factor scalbn | Alex Bennée | 2018-02-21 | 1 | -73/+33
* fpu/softfloat: re-factor int/uint to float | Alex Bennée | 2018-02-21 | 1 | -159/+163
* fpu/softfloat: re-factor float to int/uint | Alex Bennée | 2018-02-21 | 1 | -755/+180
* fpu/softfloat: re-factor round_to_int | Alex Bennée | 2018-02-21 | 1 | -174/+145
* fpu/softfloat: re-factor muladd | Alex Bennée | 2018-02-21 | 2 | -575/+271
* fpu/softfloat: re-factor div | Alex Bennée | 2018-02-21 | 2 | -148/+136
* fpu/softfloat: re-factor mul | Alex Bennée | 2018-02-21 | 1 | -128/+81
* fpu/softfloat: re-factor add/sub | Alex Bennée | 2018-02-21 | 1 | -427/+465
* fpu/softfloat: define decompose structures | Alex Bennée | 2018-02-21 | 1 | -1/+85
* fpu/softfloat: move the extract functions to the top of the file | Alex Bennée | 2018-02-21 | 1 | -66/+54
* fpu/softfloat: improve comments on ARM NaN propagation | Alex Bennée | 2018-02-21 | 1 | -2/+3
* fpu/softfloat: implement float16_squash_input_denormal | Alex Bennée | 2018-02-21 | 1 | -0/+15
* softfloat: define floatx80_round() | Laurent Vivier | 2017-06-29 | 1 | -0/+16