IMHO, the need for a common name for IEEE binary128 has existed for quite
some time. For IEEE binary256 the real need hasn't emerged yet, but it
will, hopefully in the near future.
On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:
[...]
A thought: the main advantage of binary types over decimal is supposed to
be speed. Once you get up to larger precisions like that, the speed advantage becomes less clear, particularly since hardware support doesn’t seem forthcoming any time soon. There are already variable-precision
decimal floating-point libraries available. And with such calculations, C
no longer offers a great performance advantage over a higher-level
language, so you might as well use the higher-level language.
<https://docs.python.org/3/library/decimal.html>
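The variable-precision decimal arithmetic mentioned above is available out of the box in Python; a minimal sketch using the standard decimal module (the precision value is chosen arbitrarily for illustration):

```python
from decimal import Decimal, getcontext

# Set working precision to 50 significant digits -- far beyond
# binary64, and adjustable at runtime.
getcontext().prec = 50

third = Decimal(1) / Decimal(3)
print(third)  # 0.33333333333333333333333333333333333333333333333333
```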
[...]
(And at the risk of incurring Richard's wrath, I would suggest
C++ is an even better language choice in such cases.)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
[...]
When working with such (for me, low) precisions, dynamic allocation
of memory is a major cost item, frequently more important than the
calculation itself. To avoid this cost one needs stack allocation.
That is one reason to make such types built-in, as in that case
the compiler presumably knows about them and can better manage
allocation and copies.
Also, when using a binary underlying representation, decimal rounding
is much more expensive than binary rounding, so with such a
representation the cost of decimal computation is significantly
higher. Without hardware support, making the representation decimal
makes computations of all sizes much more expensive.
Floating point computations are naturally approximate. In most
cases the exact details of rounding do not matter much. Basically,
with the round-to-even rule one gets somewhat better error
propagation, and people want a fixed rule to get reproducible
results. But insisting on decimal rounding is normally not needed.
To put it differently, decimal floating point is a marketing stunt
by IBM. Bored coders may write decimal software libraries for
various languages, but that does not mean such libraries make
much sense.
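The round-to-even behaviour mentioned above can be seen directly in Python's decimal module, which lets you pick the rounding rule per context (a sketch of the tie-breaking difference, not tied to any particular C implementation):

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

# A tie like 2.5 rounds toward the even neighbour under
# round-half-even, but always away from zero under round-half-up.
print(Decimal("2.5").quantize(Decimal("1"), rounding=ROUND_HALF_EVEN))  # 2
print(Decimal("3.5").quantize(Decimal("1"), rounding=ROUND_HALF_EVEN))  # 4
print(Decimal("2.5").quantize(Decimal("1"), rounding=ROUND_HALF_UP))    # 3
```

Round-half-even avoids the small upward bias that round-half-up accumulates over many operations, which is why it improves error propagation.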
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
[...]
I think there's an implicit assumption that, all else being equal,
decimal is better than binary. That's true in some contexts,
but not in all.

If you're performing calculations on physical quantities, decimal
probably has no particular advantages, and binary is likely to be
more efficient in both time and space.

The advantages of decimal show up if you're formatting a *lot*
of numbers in human-readable form (but nobody has time to read a
billion numbers), or if you're working with money. But for financial
calculations, particularly compound interest, there are likely to
be precise regulations about how to round results. A given decimal
floating-point format might or might not satisfy those regulations.
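The money case is the classic motivator: binary floating point cannot represent most decimal fractions exactly, which surprises people in exactly this context. A small illustration:

```python
from decimal import Decimal

# In binary floating point, 0.1 and 0.2 have no exact representation,
# so the rounded sum is not exactly equal to (the rounding of) 0.3.
print(0.1 + 0.2 == 0.3)                                   # False
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```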
On Thu, 26 Jun 2025 12:31:32 -0700
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
[...]
I think there's an implicit assumption that, all else being equal,
decimal is better than binary. That's true in some contexts,
but not in all.
My implicit assumption is that, other things being equal, binary is
better than anything else because it has the lowest variation in the
ULP-to-value ratio.
The fact that, other things being equal, binary fp also tends to be
faster is a nice secondary advantage. For example, it is easy to
imagine hardware that implements S/360-style hex floating point as fast
as or a little faster than binary fp, but its numeric properties are
much worse than sane implementations of binary fp.
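The variation in the ULP-to-value ratio (sometimes called "wobble") can be seen with Python's math.ulp: the spacing between adjacent doubles halves when you step down across a power of two, so the relative error bound varies by a factor of 2, the base of the format. In a base-16 format like S/360 hex float the same jump is a factor of 16, which is why its numeric properties are worse. This sketch shows only the binary case:

```python
import math

# ULP (spacing between adjacent doubles) at 1.0 versus just below 1.0:
# stepping down into the [0.5, 1) binade halves the spacing.
spacing_at_one = math.ulp(1.0)                       # 2**-52
spacing_below = math.ulp(math.nextafter(1.0, 0.0))   # 2**-53
print(spacing_at_one / spacing_below)  # 2.0 -- the "wobble" of base 2
```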
[...]
To put it differently, decimal floating point is a marketing stunt by
IBM.
But not all decimal floating point implementations used "hex floating point".
Burroughs medium systems had BCD floating point - one of the advantages
was that it could exactly represent any floating point number that
could be specified with a 100 digit mantissa and a 2 digit exponent.
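For readers unfamiliar with BCD: each decimal digit is stored in its own 4-bit nibble, two digits per byte, which is how a representation like the Burroughs one holds its mantissa digits. A tiny sketch (the helper names are made up for illustration):

```python
def to_bcd(n: int) -> bytes:
    """Pack a non-negative integer into BCD, two digits per byte."""
    digits = str(n)
    if len(digits) % 2:           # pad to an even number of digits
        digits = "0" + digits
    return bytes(int(digits[i]) << 4 | int(digits[i + 1])
                 for i in range(0, len(digits), 2))

def from_bcd(b: bytes) -> int:
    """Unpack BCD bytes back into an integer."""
    return int("".join(f"{byte >> 4}{byte & 0xF}" for byte in b))

print(to_bcd(1234).hex())      # '1234' -- each nibble is one decimal digit
print(from_bcd(to_bcd(1234)))  # 1234
```

Because every decimal digit round-trips exactly, any value writable in a fixed number of decimal digits is represented exactly, which is the property being described above.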
On 26/06/2025 10:01, Lawrence D'Oliveiro wrote:
C no longer offers a great performance advantage over a higher-level
language, so you might as well use the higher-level language.
Nothing is stopping you, but then comp.lang.c no longer offers you the facility to discuss your chosen language, so you might as well use the higher-level language's group.
[...]if C is going to become more suitable for such high-
precision calculations, it might need to become more Python-like.
scott@slp53.sl.home (Scott Lurndal) writes:
[...]
But not all decimal floating point implementations used "hex floating point".
Burroughs medium systems had BCD floating point - one of the advantages
was that it could exactly represent any floating point number that
could be specified with a 100 digit mantissa and a 2 digit exponent.
BCD uses 4 bits to represent values from 0 to 9. That's about 83%
efficient relative to pure binary. (And it still can't represent 1/3.)
[...]
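The 83% figure follows from information content: a decimal digit carries log2(10) ≈ 3.32 bits of information but occupies 4 bits of storage in BCD:

```python
import math

# Bits of information per decimal digit vs. bits of storage in BCD.
efficiency = math.log2(10) / 4
print(f"{efficiency:.1%}")  # 83.0%
```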
On 27/06/2025 01:39, Lawrence D'Oliveiro wrote:
[...]if C is going to become more suitable for such high-
precision calculations, it might need to become more Python-like.
C is not in search of a reason to exist. If you want Python, you know
where to find it.
On Thu, 26 Jun 2025 12:51:19 -0000 (UTC), Waldek Hebisch wrote:
When working with such (low for me) precisions dynamic allocation of
memory is major cost item, frequently more important than calculation.
To avoid this cost one needs stack allocation.
What you may not realize is that, on current machines, there is about a 100:1 speed difference between accessing CPU registers and accessing main memory.
Whether that main memory access is doing “stack allocation” or “heap allocation” is going to make very little difference to this.
Also, when using binary underlying representation decimal rounding
is much more expensive than binary one, so with such representation
cost of decimal computation is significantly higher.
This may take more computation, but if the calculation time is dominated
by memory access time to all those digits, how much difference is that
going to make, really?
Floating point computations naturally are approximate. In most cases
exact details of rounding do not matter much.
It often surprises you when they do. That’s why a handy rule of thumb is to test your calculation with all four IEEE 754 rounding modes, to ensure that the variation in the result remains minor. If it doesn’t ... then watch out.
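That rule of thumb can be mechanised. Python cannot easily flip the hardware's binary rounding mode (in C you would use fesetround from <fenv.h>), so here is the same idea sketched with decimal contexts: run the calculation under each rounding rule and check the spread. The helper name is made up for illustration.

```python
from decimal import (Decimal, localcontext, ROUND_HALF_EVEN,
                     ROUND_CEILING, ROUND_FLOOR, ROUND_UP, ROUND_DOWN)

def spread(calc, prec=8):
    """Run calc() under several rounding modes; return max - min."""
    results = []
    for mode in (ROUND_HALF_EVEN, ROUND_CEILING, ROUND_FLOOR,
                 ROUND_UP, ROUND_DOWN):
        with localcontext() as ctx:
            ctx.prec = prec
            ctx.rounding = mode
            results.append(calc())
    return max(results) - min(results)

# A benign calculation: the spread stays within one unit in the last place.
print(spread(lambda: Decimal(1) / Decimal(7)))  # 1E-8
```

If the spread comes out much larger than an ULP or two of the result, the calculation is rounding-sensitive and deserves a closer look.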
To put it differently, decimal floating point is a marketing stunt by
IBM.
Not sure IBM has any marketing power left to inflict their own ideas on
the computing industry. Decimal calculations just make sense because the results are less surprising to normal people.
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Thu, 26 Jun 2025 12:51:19 -0000 (UTC), Waldek Hebisch wrote:
When working with such (low for me) precisions dynamic allocation of
memory is major cost item, frequently more important than calculation.
To avoid this cost one needs stack allocatation.
What you may not realize is that, on current machines, there is about a
100:1 speed difference between accessing CPU registers and accessing main
memory.
Whether that main memory access is doing “stack allocation” or “heap
allocation” is going to make very little difference to this.
Did you measure it? The CPU has caches, and cache-friendly code
makes a difference. Avoiding dynamic allocation helps, and that is
measurable. The rational explanation is that stack-allocated things
do not move and have close to zero management cost. Moving stuff
around leads to cache misses.
Michael S <already5chosen@yahoo.com> writes:
On Thu, 26 Jun 2025 12:31:32 -0700
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
[...]
But not all decimal floating point implementations used "hex floating
point".
Burroughs medium systems had BCD floating point - one of the
advantages was that it could exactly represent any floating point
number that could be specified with a 100 digit mantissa and a 2
digit exponent.
This was a memory-to-memory architecture, so no floating point
registers to worry about.
For financial calculations, a fixed point format (up to 100 digits)
was used. Using an implicit decimal point, rounding was a matter of
where the implicit decimal point was located in the up to 100 digit
field; so do your calculations in mills and truncate the result field
to the desired precision.
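The mills-and-truncate scheme described above can be sketched with plain integers (the helper name and tax figures are made up for illustration): keep amounts in mills (thousandths of a dollar), do the arithmetic there, and truncate to cents at the end.

```python
MILLS_PER_CENT = 10

def to_cents_truncated(mills: int) -> int:
    """Truncate a mills amount to whole cents (no rounding)."""
    return mills // MILLS_PER_CENT

# 6% tax on $12.34: work in mills, truncate the result to cents.
price_mills = 12_340                  # $12.34 = 12340 mills
tax_mills = price_mills * 6 // 100    # 740 mills (740.4 truncated)
print(to_cents_truncated(price_mills + tax_mills))  # 1308 cents = $13.08
```

Since everything stays in integers, the only place precision is lost is the final, deliberate truncation, which mirrors moving the implicit decimal point in the fixed-point field.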
On Thu, 26 Jun 2025 21:09:37 GMT
scott@slp53.sl.home (Scott Lurndal) wrote:
[..]
For fixed point, anything "decimal" is even less useful than in floating
point. I can't find any good explanation for the use of "decimal" things
in some early computers [...]
On 27.06.2025 02:10, Keith Thompson wrote:
scott@slp53.sl.home (Scott Lurndal) writes:
[...]
BCD uses 4 bits to represent values from 0 to 9. That's about 83%
efficient relative to pure binary. (And it still can't represent 1/3.)
That's a problem of where your numbers stem from. "1/3" is a formula!