On 2/24/2024 2:52 PM, Lawrence D'Oliveiro wrote:[...]
On Sat, 24 Feb 2024 12:40:52 -0800, Chris M. Thomasson wrote:
Think of the following interesting aspect:
https://forums.parallax.com/discussion/147522/dog-leg-hypotenuse-approximation
vs. "needing" to get much more accurate results for, say, medical imaging
wrt volumetric renders...
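For anyone who doesn't want to chase the link: the trick discussed there is
in the "alpha max plus beta min" family, replacing sqrt(x*x + y*y) with a
couple of cheap operations. A minimal sketch of that idea in C (my
coefficients, not necessarily the exact ones from the Parallax thread):

    #include <math.h>
    #include <stdio.h>

    /* Crude hypotenuse: |v| ~= max + min/2.  Worst-case error is roughly
     * 12% (always an overestimate), fine for game-style distance tests,
     * nowhere near good enough for a medical volumetric renderer. */
    static double hypot_dogleg(double x, double y)
    {
        double ax = fabs(x), ay = fabs(y);
        double hi = ax > ay ? ax : ay;
        double lo = ax > ay ? ay : ax;
        return hi + 0.5 * lo;
    }

    int main(void)
    {
        printf("approx %.6f  exact %.6f\n",
               hypot_dogleg(3.0, 4.0), hypot(3.0, 4.0));
        return 0;
    }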
On Thu, 21 Mar 2024 08:52:18 +0100
Terje Mathisen <terje.mathisen@tmsw.no> wrote:
MitchAlsup1 wrote:
Stefan Monnier wrote:
IIUC that was not the case before their work: it was "easy" to get
the correct result in 99% of the cases, but covering all 100% of
the cases used to be costly because those few cases needed a lot
more internal precision.
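In code terms, that strategy (a Ziv-style "retry with more bits when the
result lands too close to a rounding boundary") looks roughly like the
sketch below. This is only an illustration using GNU MPFR, with a made-up
wrapper name; it is not anybody's actual library internals.

    #include <mpfr.h>

    /* Return sin(x) correctly rounded to double.  Start with a modest
     * amount of extra precision; only the rare hard cases fall through
     * and pay for a wider retry. */
    double sin_correctly_rounded(double x)
    {
        for (mpfr_prec_t prec = 53 + 13; ; prec += 32) {
            mpfr_t t;
            mpfr_init2(t, prec);
            mpfr_set_d(t, x, MPFR_RNDN);
            mpfr_sin(t, t, MPFR_RNDN);
            /* Can the last double bit be decided from this approximation?
             * (A real library would also special-case exact results.) */
            if (mpfr_can_round(t, prec - 2, MPFR_RNDN, MPFR_RNDN, 53)) {
                double r = mpfr_get_d(t, MPFR_RNDN);
                mpfr_clear(t);
                return r;
            }
            mpfr_clear(t);   /* hard case: widen the working precision */
        }
    }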
Muller indicates one typically needs 2×n+6 to 2×n+12 bits to get
correct roundings 100% of the time. FP128 only has 2×n+3 and is
insufficient by itself.
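(Working those numbers for binary64: n = 53, so Muller's range is
2*53+6 = 112 up to 2*53+12 = 118 bits of working precision, while
binary128 carries 113 significand bits, 112 stored plus the implicit
leading one.)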
I agree with everything else you've written about this subject, but
afair, fp128 is using 1:15:112 while double is of course 1:10:53.
IEEE-754 binary64 is 1:11:52 :-)

But anyway, I am skeptical about Muller's rules of thumb.
I'd expect different transcendental functions to exercise non-trivially
different behaviors, mostly because they have different relationships
between input and output ranges: some of them compress wide input ranges
into narrow output ranges, and some do the opposite.
Yet another factor is luck.
Besides, I see nothing special about binary128 as a helper format.
It is not supported on the vast majority of hardware, and even where it is
supported, as on IBM POWER, it is slower than emulated 128-bit fixed-point
for the majority of operations. Fixed-point is more work for the coder, but
it sounds like a surer path to success.
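For concreteness, here is a minimal sketch (mine, not taken from POWER or
any particular library) of the basic primitive behind such emulated 128-bit
fixed-point, using the GCC/Clang unsigned __int128 extension. Treating both
operands as 0.128-format fractions in [0,1), the high half of the full
product is the product in the same format:

    #include <stdint.h>
    #include <stdio.h>

    typedef unsigned __int128 u128;   /* compiler extension, not ISO C */
    typedef uint64_t u64;

    /* High 128 bits of the 256-bit product a*b, built from four
     * 64x64->128 partial products. */
    static u128 mulhi128(u128 a, u128 b)
    {
        u64 al = (u64)a, ah = (u64)(a >> 64);
        u64 bl = (u64)b, bh = (u64)(b >> 64);
        u128 ll = (u128)al * bl;
        u128 lh = (u128)al * bh;
        u128 hl = (u128)ah * bl;
        u128 hh = (u128)ah * bh;
        u128 mid = (ll >> 64) + (u64)lh + (u64)hl;  /* carries out of the low half */
        return hh + (lh >> 64) + (hl >> 64) + (mid >> 64);
    }

    int main(void)
    {
        u128 half = (u128)1 << 127;                 /* 0.5 in 0.128 format */
        u128 p = mulhi128(half, half);              /* expect 0.25 */
        printf("top limb 0x%016llx (expect 0x4000000000000000)\n",
               (unsigned long long)(p >> 64));
        return 0;
    }

The comparison being made upthread is that a handful of integer operations
like these per step can beat the binary128 path for most operations, even on
hardware that implements binary128 natively.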