• Re: "A diagram of C23 basic types"

    From Lawrence D'Oliveiro@3:633/280.2 to All on Thu Jun 26 19:01:20 2025
    On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:

    IMHO, a need for a common name for IEEE binary128 has existed for quite
    some time. For IEEE binary256 the real need hasn't emerged yet. But it
    will emerge, hopefully in the near future.

    A thought: the main advantage of binary types over decimal is supposed to
    be speed. Once you get up to larger precisions like that, the speed
    advantage becomes less clear, particularly since hardware support doesn’t seem forthcoming any time soon. There are already variable-precision
    decimal floating-point libraries available. And with such calculations, C
    no longer offers a great performance advantage over a higher-level
    language, so you might as well use the higher-level language.

    <https://docs.python.org/3/library/decimal.html>
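    (For comparison in C itself, a minimal sketch, assuming a compiler that
    implements C23's optional decimal floating types, such as GCC with
    _Decimal64 and the DD literal suffix:

        #include <stdio.h>

        int main(void)
        {
            double     b = 0.1 + 0.2;      /* binary: neither term is exact */
            _Decimal64 d = 0.1DD + 0.2DD;  /* decimal: both terms are exact */

            /* prints "no": the binary sum rounds to 0.30000000000000004... */
            printf("binary  0.1 + 0.2 == 0.3? %s\n", b == 0.3   ? "yes" : "no");
            /* prints "yes": this decimal arithmetic is exact */
            printf("decimal 0.1 + 0.2 == 0.3? %s\n", d == 0.3DD ? "yes" : "no");
            return 0;
        }

    Whether that buys anything over the Python library is the question at
    hand.)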

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Richard Heathfield@3:633/280.2 to All on Thu Jun 26 22:27:29 2025
    On 26/06/2025 10:01, Lawrence D'Oliveiro wrote:
    C no longer offers a great performance advantage over a higher-level
    language, so you might as well use the higher-level language.

    Nothing is stopping you, but then comp.lang.c no longer offers
    you the facility to discuss your chosen language, so you might as
    well use the higher-level language's group.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within


    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: Fix this later (3:633/280.2@fidonet)
  • From Waldek Hebisch@3:633/280.2 to All on Thu Jun 26 22:51:19 2025
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:

    IMHO, a need for a common name for IEEE binary128 has existed for quite
    some time. For IEEE binary256 the real need hasn't emerged yet. But it
    will emerge, hopefully in the near future.

    A thought: the main advantage of binary types over decimal is supposed to
    be speed. Once you get up to larger precisions like that, the speed advantage becomes less clear, particularly since hardware support doesn’t seem forthcoming any time soon. There are already variable-precision
    decimal floating-point libraries available. And with such calculations, C
    no longer offers a great performance advantage over a higher-level
    language, so you might as well use the higher-level language.

    <https://docs.python.org/3/library/decimal.html>

    When working with such precisions (low, for me), dynamic allocation
    of memory is a major cost item, frequently more important than the
    calculation itself. To avoid this cost one needs stack allocation.
    That is one reason to make such types built-in: in that case the
    compiler presumably knows about them and can better manage
    allocation and copies.
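
    A minimal sketch of the kind of type this enables (the u256 type and
    helper below are hypothetical, just to illustrate a fixed-width value
    that lives entirely on the stack):

        #include <stdint.h>

        typedef struct {
            uint64_t limb[4];   /* 4 x 64 = 256 bits, fixed size, no heap */
        } u256;                 /* an ordinary value type: copied and
                                   passed around like any other struct */

        /* r = a + b with carry propagation; returns the final carry */
        static unsigned u256_add(u256 *r, const u256 *a, const u256 *b)
        {
            unsigned carry = 0;
            for (int i = 0; i < 4; i++) {
                uint64_t s = a->limb[i] + carry;
                carry = (s < (uint64_t)carry);   /* overflow from +carry */
                s += b->limb[i];
                carry += (s < b->limb[i]);       /* overflow from +b */
                r->limb[i] = s;
            }
            return carry;
        }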

    Also, when using a binary underlying representation, decimal rounding
    is much more expensive than binary rounding, so with such a
    representation the cost of decimal computation is significantly
    higher. Without hardware support, making the representation itself
    decimal makes computations at all sizes much more expensive.

    Floating point computations are naturally approximate. In most
    cases the exact details of rounding do not matter much. Basically,
    with the round-to-even rule one gets somewhat better error
    propagation, and people want a fixed rule so as to get
    reproducible results. But insisting on decimal rounding is
    normally not needed. To put it differently, decimal floating
    point is a marketing stunt by IBM. Bored coders may write
    decimal software libraries for various languages, but that
    does not mean such libraries make much sense.

    --
    Waldek Hebisch

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: To protect and to server (3:633/280.2@fidonet)
  • From David Brown@3:633/280.2 to All on Thu Jun 26 23:57:04 2025
    On 26/06/2025 11:01, Lawrence D'Oliveiro wrote:
    On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:

    IMHO, a need for a common name for IEEE binary128 has existed for quite
    some time. For IEEE binary256 the real need hasn't emerged yet. But it
    will emerge, hopefully in the near future.

    A thought: the main advantage of binary types over decimal is supposed to
    be speed. Once you get up to larger precisions like that, the speed
    advantage becomes less clear, particularly since hardware support doesn’t seem forthcoming any time soon. There are already variable-precision
    decimal floating-point libraries available. And with such calculations, C
    no longer offers a great performance advantage over a higher-level
    language, so you might as well use the higher-level language.

    <https://docs.python.org/3/library/decimal.html>

    That is certainly a valid viewpoint. Much of this depends on what you
    are doing: how big the types are, what operations you perform on them,
    how much of your code is calculation, and what else the program does.

    If you are doing lots of calculations with big numbers of various sizes,
    then Python code using numpy will often be faster than writing C code
    directly - you can concentrate on writing better algorithms instead of
    all the low-level memory management and bureaucracy you have in a lower
    level language. (Of course the hard work in libraries like numpy is
    done in code written in C, Fortran, C++, or other low-level languages.)

    But if you are using a type that is small enough to fit sensibly on the
    stack, and to have a fixed size (rather than arbitrary sized number
    types), then it is likely to be more efficient to define them as structs
    in C and use them directly. Depending on what you are doing with them,
    you might be better off using decimal-based types rather than
    binary-based types. (And at the risk of incurring Richard's wrath, I
    would suggest C++ is an even better language choice in such cases.)


    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Richard Heathfield@3:633/280.2 to All on Fri Jun 27 01:10:55 2025
    On 26/06/2025 14:57, David Brown wrote:
    (And at the risk of incurring Richard's wrath, I would suggest
    C++ is an even better language choice in such cases.)

    As you know, David, I hate to agree with you...

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    ...but operator overloading for the win.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: Fix this later (3:633/280.2@fidonet)
  • From Keith Thompson@3:633/280.2 to All on Fri Jun 27 05:31:32 2025
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:
    IMHO, a need for a common name for IEEE binary128 has existed for quite
    some time. For IEEE binary256 the real need hasn't emerged yet. But it
    will emerge, hopefully in the near future.

    A thought: the main advantage of binary types over decimal is supposed to
    be speed. Once you get up to larger precisions like that, the speed advantage becomes less clear, particularly since hardware support doesn’t seem forthcoming any time soon. There are already variable-precision
    decimal floating-point libraries available. And with such calculations, C
    no longer offers a great performance advantage over a higher-level
    language, so you might as well use the higher-level language.

    <https://docs.python.org/3/library/decimal.html>

    I think there's an implicit assumption that, all else being equal,
    decimal is better than binary. That's true in some contexts,
    but not in all.

    If you're performing calculations on physical quantities, decimal
    probably has no particular advantages, and binary is likely to be
    more efficient in both time and space.

    The advantages of decimal show up if you're formatting a *lot*
    of numbers in human-readable form (but nobody has time to read a
    billion numbers), or if you're working with money. But for financial
    calculations, particularly compound interest, there are likely to
    be precise regulations about how to round results. A given decimal
    floating-point format might or might not satisfy those regulations.
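
    A minimal sketch of that concern, using scaled integers with one
    explicit rounding rule (round-half-up; the rate and amounts here are
    made up):

        #include <stdint.h>
        #include <inttypes.h>
        #include <stdio.h>

        /* divide positive integers, rounding halves up -- the kind of
           rule a regulation might mandate (or forbid) */
        static int64_t div_round_half_up(int64_t num, int64_t den)
        {
            return (num + den / 2) / den;
        }

        int main(void)
        {
            int64_t principal = 123456;   /* $1,234.56 in cents */
            /* 3.75% interest: 123456 * 375 / 10000 = 4629.6 cents */
            int64_t interest = div_round_half_up(principal * 375, 10000);
            printf("interest: %" PRId64 " cents\n", interest);  /* 4630 */
            return 0;
        }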

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: None to speak of (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Fri Jun 27 06:23:34 2025
    On 6/26/2025 5:51 AM, Waldek Hebisch wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:

    IMHO, a need for a common name for IEEE binary128 has existed for quite
    some time. For IEEE binary256 the real need hasn't emerged yet. But it
    will emerge, hopefully in the near future.

    A thought: the main advantage of binary types over decimal is supposed to
    be speed. Once you get up to larger precisions like that, the speed
    advantage becomes less clear, particularly since hardware support doesn’t
    seem forthcoming any time soon. There are already variable-precision
    decimal floating-point libraries available. And with such calculations, C
    no longer offers a great performance advantage over a higher-level
    language, so you might as well use the higher-level language.

    <https://docs.python.org/3/library/decimal.html>

    When working with such precisions (low, for me), dynamic allocation
    of memory is a major cost item, frequently more important than the
    calculation itself. To avoid this cost one needs stack allocation.
    That is one reason to make such types built-in: in that case the
    compiler presumably knows about them and can better manage
    allocation and copies.

    Speaking of stack allocation... FWIW, here is an older stack-based
    region allocator of mine:

    https://pastebin.com/raw/f37a23918



    Also, when using a binary underlying representation, decimal rounding
    is much more expensive than binary rounding, so with such a
    representation the cost of decimal computation is significantly
    higher. Without hardware support, making the representation itself
    decimal makes computations at all sizes much more expensive.

    Floating point computations are naturally approximate. In most
    cases the exact details of rounding do not matter much. Basically,
    with the round-to-even rule one gets somewhat better error
    propagation, and people want a fixed rule so as to get
    reproducible results. But insisting on decimal rounding is
    normally not needed. To put it differently, decimal floating
    point is a marketing stunt by IBM. Bored coders may write
    decimal software libraries for various languages, but that
    does not mean such libraries make much sense.



    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Michael S@3:633/280.2 to All on Fri Jun 27 06:59:16 2025
    On Thu, 26 Jun 2025 12:31:32 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:
    IMHO, a need for a common name for IEEE binary128 has existed for
    quite some time. For IEEE binary256 the real need hasn't emerged
    yet. But it will emerge, hopefully in the near future.

    A thought: the main advantage of binary types over decimal is
    supposed to be speed. Once you get up to larger precisions like
    that, the speed advantage becomes less clear, particularly since
    hardware support doesn’t seem forthcoming any time soon. There are
    already variable-precision decimal floating-point libraries
    available. And with such calculations, C no longer offers a great
    performance advantage over a higher-level language, so you might as
    well use the higher-level language.

    <https://docs.python.org/3/library/decimal.html>
    I think there's an implicit assumption that, all else being equal,
    decimal is better than binary. That's true in some contexts,
    but not in all.

    My implicit assumption is that, other things being equal, binary is
    better than anything else because it has the lowest variation in the
    ULP-to-value ratio.
    The fact that, other things being equal, binary fp also tends to be
    faster is a nice secondary advantage. For example, it is easy to
    imagine hardware that implements S/360-style hex floating point as
    fast as or a little faster than binary fp, but its numeric properties
    are much worse than those of sane implementations of binary fp.

    Of course, historically there existed bad implementations of binary fp
    as well, most notably on many CDC machines. But they have been dead
    for eons now.

    If you're performing calculations on physical quantities, decimal
    probably has no particular advantages, and binary is likely to be
    more efficient in both time and space.
    The advantages of decimal show up if you're formatting a *lot*
    of numbers in human-readable form (but nobody has time to read a
    billion numbers), or if you're working with money. But for financial
    calculations, particularly compound interest, there are likely to
    be precise regulations about how to round results. A given decimal
    floating-point format might or might not satisfy those regulations.

    Exactly.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Fri Jun 27 07:09:37 2025
    Reply-To: slp53@pacbell.net

    Michael S <already5chosen@yahoo.com> writes:
    On Thu, 26 Jun 2025 12:31:32 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:
    IMHO, a need for a common name for IEEE binary128 has existed for
    quite some time. For IEEE binary256 the real need hasn't emerged
    yet. But it will emerge, hopefully in the near future.

    A thought: the main advantage of binary types over decimal is
    supposed to be speed. Once you get up to larger precisions like
    that, the speed advantage becomes less clear, particularly since
    hardware support doesn’t seem forthcoming any time soon. There are
    already variable-precision decimal floating-point libraries
    available. And with such calculations, C no longer offers a great
    performance advantage over a higher-level language, so you might as
    well use the higher-level language.

    <https://docs.python.org/3/library/decimal.html>

    I think there's an implicit assumption that, all else being equal,
    decimal is better than binary. That's true in some contexts,
    but not in all.

    My implicit assumption is that, other things being equal, binary is
    better than anything else because it has the lowest variation in the
    ULP-to-value ratio.
    The fact that, other things being equal, binary fp also tends to be
    faster is a nice secondary advantage. For example, it is easy to
    imagine hardware that implements S/360-style hex floating point as
    fast as or a little faster than binary fp, but its numeric properties
    are much worse than those of sane implementations of binary fp.

    But not all decimal floating point implementations used "hex floating point".

    Burroughs medium systems had BCD floating point - one of the advantages
    was that it could exactly represent any floating point number that
    could be specified with a 100 digit mantissa and a 2 digit exponent.

    This was a memory-to-memory architecture, so no floating point registers
    to worry about.

    For financial calculations, a fixed point format (up to 100 digits) was
    used. Using an implicit decimal point, rounding was a matter of where
    the implicit decimal point was located in the up to 100 digit field;
    so do your calculations in mills and truncate the result field to the
    desired precision.
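
    A minimal sketch of the scheme described (amounts carried in mills,
    1/1000 of a dollar, then truncated to cents; the numbers are made up):

        #include <stdint.h>
        #include <inttypes.h>
        #include <stdio.h>

        int main(void)
        {
            int64_t price_mills = 19995;              /* $19.995 */
            int64_t qty         = 3;
            int64_t total_mills = price_mills * qty;  /* 59985 mills */
            int64_t total_cents = total_mills / 10;   /* truncate -> 5998 */
            printf("total: $%" PRId64 ".%02" PRId64 "\n",
                   total_cents / 100, total_cents % 100);  /* $59.98 */
            return 0;
        }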


    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Fri Jun 27 09:58:39 2025
    On Thu, 26 Jun 2025 12:51:19 -0000 (UTC), Waldek Hebisch wrote:

    When working with such precisions (low, for me), dynamic allocation of
    memory is a major cost item, frequently more important than the
    calculation itself. To avoid this cost one needs stack allocation.

    What you may not realize is that, on current machines, there is about a
    100:1 speed difference between accessing CPU registers and accessing main memory.

    Whether that main memory access is doing “stack allocation” or “heap allocation” is going to make very little difference to this.

    Also, when using binary underlying representation decimal rounding
    is much more expensive than binary one, so with such representation
    cost of decimal computation is significantly higher.

    This may take more computation, but if the calculation time is dominated
    by memory access time to all those digits, how much difference is that
    going to make, really?

    Floating point computations naturally are approximate. In most cases
    exact details of rounding do not matter much.

    It often surprises you when they do. That’s why a handy rule of thumb is
    to test your calculation with all four IEEE 754 rounding modes, to ensure
    that the variation in the result remains minor. If it doesn’t ... then
    watch out.
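
    A minimal sketch of that rule of thumb with C's <fenv.h>, assuming the
    four standard rounding modes are available (the summation is just a
    stand-in for "your calculation"):

        #include <fenv.h>
        #include <stdio.h>

        #pragma STDC FENV_ACCESS ON

        int main(void)
        {
            const int   mode[] = { FE_TONEAREST, FE_UPWARD,
                                   FE_DOWNWARD, FE_TOWARDZERO };
            const char *name[] = { "to-nearest", "upward",
                                   "downward", "toward-zero" };

            for (int i = 0; i < 4; i++) {
                fesetround(mode[i]);
                volatile double tenth = 0.1;  /* volatile: defeat folding */
                double sum = 0.0;
                for (int k = 0; k < 1000000; k++)
                    sum += tenth;             /* error accumulates per mode */
                printf("%-12s sum = %.17g\n", name[i], sum);
            }
            fesetround(FE_TONEAREST);
            return 0;
        }

    If the four printed sums differ only in the last few digits, fine; if
    they diverge noticeably, watch out.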

    To put it differently, decimal floating point is a marketing stunt by
    IBM.

    Not sure IBM has any marketing power left to inflict their own ideas on
    the computing industry. Decimal calculations just make sense because the results are less surprising to normal people.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Keith Thompson@3:633/280.2 to All on Fri Jun 27 10:10:48 2025
    scott@slp53.sl.home (Scott Lurndal) writes:
    [...]
    But not all decimal floating point implementations used "hex floating point".

    Burroughs medium systems had BCD floating point - one of the advantages
    was that it could exactly represent any floating point number that
    could be specified with a 100 digit mantissa and a 2 digit exponent.

    BCD uses 4 bits to represent values from 0 to 9. That's about 83%
    efficient relative to pure binary. (And it still can't represent 1/3.)

    Another option (I think IBM has implemented this) is to use 10 bits
    to represent values from 0 to 999, taking advantage of the nice
    coincidence that 2**10 is barely bigger than 10**3. That's more
    than 99.6% efficient relative to pure binary. Of course it's still
    more complicated to implement.
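
    A minimal sketch of the space idea (plain binary 0..999 in 10 bits;
    IBM's actual densely-packed-decimal encoding uses a cleverer bit
    layout, but the storage cost is the same):

        #include <stdint.h>
        #include <stdio.h>

        /* three decimal digits -> one 10-bit value (0..999) */
        static uint16_t pack3(unsigned d2, unsigned d1, unsigned d0)
        {
            return (uint16_t)(d2 * 100 + d1 * 10 + d0);
        }

        static void unpack3(uint16_t v,
                            unsigned *d2, unsigned *d1, unsigned *d0)
        {
            *d2 = v / 100;
            *d1 = (v / 10) % 10;
            *d0 = v % 10;
        }

        int main(void)
        {
            unsigned a, b, c;
            unpack3(pack3(9, 8, 7), &a, &b, &c);
            printf("digits %u%u%u in 10 bits (BCD would need 12)\n",
                   a, b, c);
            return 0;
        }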

    [...]

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: None to speak of (3:633/280.2@fidonet)
  • From Lawrence D'Oliveiro@3:633/280.2 to All on Fri Jun 27 10:39:29 2025
    On Thu, 26 Jun 2025 13:27:29 +0100, Richard Heathfield wrote:

    On 26/06/2025 10:01, Lawrence D'Oliveiro wrote:

    C no longer offers a great performance advantage over a higher-level
    language, so you might as well use the higher-level language.

    Nothing is stopping you, but then comp.lang.c no longer offers you the facility to discuss your chosen language, so you might as well use the higher-level language's group.

    Or conversely, if C is going to become more suitable for such high-
    precision calculations, it might need to become more Python-like.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Richard Heathfield@3:633/280.2 to All on Fri Jun 27 11:40:58 2025
    On 27/06/2025 01:39, Lawrence D'Oliveiro wrote:
    [...]if C is going to become more suitable for such high-
    precision calculations, it might need to become more Python-like.

    C is not in search of a reason to exist. If you want Python, you
    know where to find it.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within


    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: Fix this later (3:633/280.2@fidonet)
  • From Janis Papanagnou@3:633/280.2 to All on Fri Jun 27 12:33:00 2025
    On 27.06.2025 02:10, Keith Thompson wrote:
    scott@slp53.sl.home (Scott Lurndal) writes:
    [...]
    But not all decimal floating point implementations used "hex floating point".

    Burroughs medium systems had BCD floating point - one of the advantages
    was that it could exactly represent any floating point number that
    could be specified with a 100 digit mantissa and a 2 digit exponent.

    BCD uses 4 bits to represent values from 0 to 9. That's about 83%
    efficient relative to pure binary. (And it still can't represent 1/3.)

    That depends on where your numbers stem from. "1/3" is a formula!
    A standard representation for a number may be "0.33" or "0.33333333",
    defined through the human-machine interface as text and representable
    (as depicted) as an exact number. The result of the formula "1/3"
    isn't representable as a finite decimal string. With a binary
    representation even a _finite_ decimal string might not be exactly
    representable in some cases; try 0.1, for example. A fixed-point
    decimal representation handles that exactly, but binary does not. I
    think that is why languages supporting decimal encodings have been
    prevalent especially in the financial sector. (I don't know about
    contemporary financial systems.)
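
    The 0.1 experiment is easy to reproduce; a minimal sketch:

        #include <stdio.h>

        int main(void)
        {
            /* the nearest double to 0.1, shown with enough digits to
               expose the binary approximation */
            printf("%.20f\n", 0.1);   /* 0.10000000000000000555... */

            /* the two sides round to different doubles */
            printf("0.1 + 0.2 == 0.3 is %s\n",
                   (0.1 + 0.2 == 0.3) ? "true" : "false");   /* false */
            return 0;
        }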

    Janis

    [...]



    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Chris M. Thomasson@3:633/280.2 to All on Fri Jun 27 12:33:48 2025
    On 6/26/2025 6:40 PM, Richard Heathfield wrote:
    On 27/06/2025 01:39, Lawrence D'Oliveiro wrote:
    [...]if C is going to become more suitable for such high-
    precision calculations, it might need to become more Python-like.

    C is not in search of a reason to exist. If you want Python, you know
    where to find it.


    ditto.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Waldek Hebisch@3:633/280.2 to All on Fri Jun 27 13:51:21 2025
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Thu, 26 Jun 2025 12:51:19 -0000 (UTC), Waldek Hebisch wrote:

    When working with such precisions (low, for me), dynamic allocation of
    memory is a major cost item, frequently more important than the
    calculation itself. To avoid this cost one needs stack allocation.

    What you may not realize is that, on current machines, there is about a 100:1 speed difference between accessing CPU registers and accessing main memory.

    Whether that main memory access is doing “stack allocation” or “heap allocation” is going to make very little difference to this.

    Did you measure things? CPUs have caches, and cache-friendly code
    makes a difference. Avoiding dynamic allocation helps, and that is
    measurable. The rational explanation is that stack-allocated things
    do not move and cost close to nothing to manage. Moving stuff around
    leads to cache misses.

    Also, when using binary underlying representation decimal rounding
    is much more expensive than binary one, so with such representation
    cost of decimal computation is significantly higher.

    This may take more computation, but if the calculation time is dominated
    by memory access time to all those digits, how much difference is that
    going to make, really?

    It makes a lot of difference for cache friendly code.

    Floating point computations naturally are approximate. In most cases
    exact details of rounding do not matter much.

    It often surprises you when they do. That’s why a handy rule of thumb is to test your calculation with all four IEEE 754 rounding modes, to ensure that the variation in the result remains minor. If it doesn’t ... then watch out.

    To put it differently, decimal floating point is a marketing stunt by
    IBM.

    Not sure IBM has any marketing power left to inflict their own ideas on
    the computing industry. Decimal calculations just make sense because the results are less surprising to normal people.

    Intelligent people quickly realise that floating point arithmetic
    produces approximate results. With binary this realisation comes
    slightly faster, which is a plus for binary. Once you realise that
    you should expect approximate results, the cases where a result
    happens to be exact are the surprising ones.
    --
    Waldek Hebisch

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: To protect and to server (3:633/280.2@fidonet)
  • From David Brown@3:633/280.2 to All on Fri Jun 27 21:44:25 2025
    On 27/06/2025 05:51, Waldek Hebisch wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Thu, 26 Jun 2025 12:51:19 -0000 (UTC), Waldek Hebisch wrote:

    When working with such precisions (low, for me), dynamic allocation of
    memory is a major cost item, frequently more important than the
    calculation itself. To avoid this cost one needs stack allocation.

    What you may not realize is that, on current machines, there is about a
    100:1 speed difference between accessing CPU registers and accessing main
    memory.

    Whether that main memory access is doing “stack allocation” or “heap
    allocation” is going to make very little difference to this.

    Did you measure things? CPUs have caches, and cache-friendly code
    makes a difference. Avoiding dynamic allocation helps, and that is
    measurable. The rational explanation is that stack-allocated things
    do not move and cost close to nothing to manage. Moving stuff around
    leads to cache misses.


    Yes. Main memory accesses are slow - access to memory in caches is a
    lot less slow, but still slower than registers. If you need to use
    dynamic memory, the allocator will have to access a lot of different
    memory locations to figure out where to allocate the memory. Most of
    those will be in cache (assuming you are doing a lot of dynamic
    allocations), but some might not be. And the memory you allocate in the
    end might force more cache allocations and deallocations.

    Stack space (near the top of the stack), on the other hand, is almost
    always in caches. So it is faster to access memory on the stack, as
    well as using far fewer instructions.

    You are of course correct to say that speeds need to be measured, but
    you are also correct that in general, stack data can be significantly
    more efficient than dynamic memory data - especially if that data is short-lived.
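
    A crude sketch of the kind of measurement being discussed (wall-clock
    timing of short-lived 256-byte buffers on the heap versus the stack; a
    serious comparison would need much more care):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <time.h>

        enum { N = 10000000 };

        int main(void)
        {
            volatile unsigned char sink = 0;   /* keep the work observable */

            clock_t t0 = clock();
            for (long i = 0; i < N; i++) {
                unsigned char *p = malloc(256);   /* heap round trip */
                if (!p) return 1;
                memset(p, (int)i, 256);
                sink ^= p[0];
                free(p);
            }

            clock_t t1 = clock();
            for (long i = 0; i < N; i++) {
                unsigned char buf[256];           /* stack: free to "allocate" */
                memset(buf, (int)i, 256);
                sink ^= buf[0];
            }
            clock_t t2 = clock();

            printf("heap:  %.2fs\nstack: %.2fs\n",
                   (double)(t1 - t0) / CLOCKS_PER_SEC,
                   (double)(t2 - t1) / CLOCKS_PER_SEC);
            return 0;
        }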



    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Michael S@3:633/280.2 to All on Fri Jun 27 21:52:42 2025
    On Thu, 26 Jun 2025 21:09:37 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    Michael S <already5chosen@yahoo.com> writes:
    On Thu, 26 Jun 2025 12:31:32 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:
    IMHO, a need for a common name for IEEE binary128 has existed for
    quite some time. For IEEE binary256 the real need hasn't emerged
    yet. But it will emerge, hopefully in the near future.

    A thought: the main advantage of binary types over decimal is
    supposed to be speed. Once you get up to larger precisions like
    that, the speed advantage becomes less clear, particularly since
    hardware support doesn’t seem forthcoming any time soon. There
    are already variable-precision decimal floating-point libraries
    available. And with such calculations, C no longer offers a great
    performance advantage over a higher-level language, so you might
    as well use the higher-level language.

    <https://docs.python.org/3/library/decimal.html>

    I think there's an implicit assumption that, all else being equal,
    decimal is better than binary. That's true in some contexts,
    but not in all.

    My implicit assumption is that, other things being equal, binary is
    better than anything else because it has the lowest variation in the
    ULP-to-value ratio.
    The fact that, other things being equal, binary fp also tends to be
    faster is a nice secondary advantage. For example, it is easy to
    imagine hardware that implements S/360-style hex floating point as
    fast as or a little faster than binary fp, but its numeric properties
    are much worse than those of sane implementations of binary fp.

    But not all decimal floating point implementations used "hex floating
    point".


    IBM's Hex floating point is not decimal. It's hex (base 16).

    Burroughs medium systems had BCD floating point - one of the
    advantages was that it could exactly represent any floating point
    number that could be specified with a 100 digit mantissa and a 2
    digit exponent.

    This was a memory-to-memory architecture, so no floating point
    registers to worry about.

    For financial calculations, a fixed point format (up to 100 digits)
    was used. Using an implicit decimal point, rounding was a matter of
    where the implicit decimal point was located in the up to 100 digit
    field; so do your calculations in mills and truncate the result field
    to the desired precision.


    For fixed point, anything "decimal" is even less useful than in
    floating point. I can't find any good explanation for the use of
    "decimal" things in some early computers except that their designers
    were, maybe, good engineers but second-rate thinkers.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Scott Lurndal@3:633/280.2 to All on Sat Jun 28 00:01:10 2025
    Reply-To: slp53@pacbell.net

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Thu, 26 Jun 2025 12:51:19 -0000 (UTC), Waldek Hebisch wrote:

    When working with such precisions (low, for me), dynamic allocation of
    memory is a major cost item, frequently more important than the
    calculation itself. To avoid this cost one needs stack allocation.

    What you may not realize is that, on current machines, there is about a
    100:1 speed difference between accessing CPU registers and accessing
    main memory.

    Depends on whether you're accessing cache (3 or 4 cycle latency for L1),
    and at what cache level. Even a DRAM access can complete in less than
    100 ns.

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: UsenetServer - www.usenetserver.com (3:633/280.2@fidonet)
  • From Janis Papanagnou@3:633/280.2 to All on Sat Jun 28 04:48:23 2025
    On 27.06.2025 13:52, Michael S wrote:
    On Thu, 26 Jun 2025 21:09:37 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:
    [..]

    For fixed point, anything "decimal" is even less useful than in
    floating point. I can't find any good explanation for the use of
    "decimal" things in some early computers [...]

    If not already obvious from the hints given in this thread you can
    search for the respective keywords.

    Janis


    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: A noiseless patient Spider (3:633/280.2@fidonet)
  • From Keith Thompson@3:633/280.2 to All on Sat Jun 28 10:56:33 2025
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 27.06.2025 02:10, Keith Thompson wrote:
    scott@slp53.sl.home (Scott Lurndal) writes:
    [...]
    But not all decimal floating point implementations used "hex floating point".

    Burroughs medium systems had BCD floating point - one of the advantages
    was that it could exactly represent any floating point number that
    could be specified with a 100 digit mantissa and a 2 digit exponent.

    BCD uses 4 bits to represent values from 0 to 9. That's about 83%
    efficient relative to pure binary. (And it still can't represent 1/3.)

    That depends on where your numbers stem from. "1/3" is a formula!

    1/3 is also a C expression with the value 0. But what I was
    referring to was the real number 1/3, the unique real number that
    yields one when multiplied by three.

    My point is that any choice of radix in a floating-point format
    means that there are going to be some useful real numbers you
    can't represent. That's as true of decimal as it is of binary.
    (Trinary can represent 1/3, but can't represent 1/2.)

    Decimal can represent any number that can be exactly represented in
    binary *if* you have enough digits (because 10 is a multiple of 2),
    and many numbers like 0.1 that can't be represented exactly in
    binary, but at a cost -- a cost that is worth paying in some contexts.
    (Scaled integers might sometimes be a good alternative.)
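
    A minimal sketch of that asymmetry (negative powers of two always
    terminate in decimal, while 1/10 never terminates in binary):

        #include <stdio.h>

        int main(void)
        {
            /* every 2^-k has a finite decimal expansion ... */
            printf("%.10f\n", 0x1p-10);   /* exactly 0.0009765625 */
            /* ... while 0.1 has no finite binary one, so the nearest
               double is slightly off */
            printf("%.20f\n", 0.1);
            return 0;
        }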

    I doubt that I'm saying anything you don't already know. I just
    wanted to clarify what I meant.

    [...]

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    void Void(void) { Void(); } /* The recursive call of the void */

    --- MBSE BBS v1.1.1 (Linux-x86_64)
    * Origin: None to speak of (3:633/280.2@fidonet)