My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that runs the
code.
What happens in this case?
The solution I can think of is emulation when evaluating constant expressions. But I don't know if any compiler does it this way.
Thiago Adams <thiago.adams@gmail.com> writes:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that runs the
code.
What happens in this case?
The solution I can think of is emulation when evaluating constant
expressions. But I don't know if any compiler does it this way.
For example, 65535u + 1u will evaluate to 0u if the target system
has 16-bit int, 65536u otherwise. (I picked an example that
doesn't depend on UINT_MAX or any other macros defined in the
standard library.)
Any compiler, cross- or not, is required to evaluate constant
expressions correctly for the target system. Whether they do so
by some sort of emulation is an implementation detail.
Even a non-cross compiler might not be implemented in exactly
the same language and configuration as the code it's compiling,
so evaluating constant expressions locally might not work correctly.
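As a self-contained illustration of that example (my own sketch, not part of the post):

#include <stdio.h>

int main(void)
{
    /* 65535u + 1u has type unsigned int, so it must be reduced modulo
     * UINT_MAX + 1 of the *target*, even when folded at compile time. */
    unsigned int x = 65535u + 1u;

    printf("%u\n", x);   /* prints 0 on a 16-bit-int target, 65536 otherwise */
    return 0;
}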
On 2025-08-29, Thiago Adams <thiago.adams@gmail.com> wrote:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that runs the code.
What happens in this case?
A constant expression must be evaluated in the way that would happen
if it were translated to code on the target machine.
Thus, if necessary, the features of the target machine's arithmetic must
be simulated on the build machine.
(Modulo issues not relevant to the debate, like if the expression
has ambiguous evaluation orders that affect the result, or undefined
behaviors; they don't have to play out the same way under different
modes of processing in the same implementation.)
The solution I can think of is emulation when evaluating constant
expressions. But I don't know if any compiler does it this way.
They have to; if a constant-folding optimization produces a different
result (in an expression which has no issue), that is then an incorrect
optimization.
GCC uses arbitrary-precision libraries (GNU GMP for integer, and GNU
MPFR for floating-point), which are in part for this issue, I think.
On 2025-08-29 16:19, Kaz Kylheku wrote:
On 2025-08-29, Thiago Adams <thiago.adams@gmail.com> wrote:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that runs the code.
What happens in this case?
A constant expression must be evaluated in the way that would happen
if it were translated to code on the target machine.
Thus, if necessary, the features of the target machine's arithmetic must
be simulated on the build machine.
The solution I can think of is emulation when evaluating constant
expressions. But I don't know if any compiler does it this way.
They have to; if a constant-folding optimization produces a different
result (in an expression which has no issue) that is then an incorrect
optimization.
Emulation is necessary only if the value of the constant expression
changes which code is generated. If the value is simply used in the
calculations, then it can be calculated at run time on the target
machine, as if done before the start of main().
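A sketch of that distinction (my own example, not from the post; the names index_type, buffer and wrapped are made up for illustration): the same constant expression can select what gets compiled, fix an object's size, or merely feed a calculation.

#include <limits.h>
#include <stdio.h>

/* Here the value selects which declarations are compiled at all, so
 * the compiler must apply the target's UINT_MAX while compiling: */
#if UINT_MAX == 65535u
typedef long index_type;      /* hypothetical: 16-bit-int target */
#else
typedef int index_type;       /* hypothetical: wider-int target */
#endif

/* Here the value fixes the size of an object, so it must be evaluated
 * at translation time for the target: 8 on a wide-int target, 4 where
 * unsigned int is 16 bits and the sum wraps to 0. */
char buffer[(65535u + 1u) ? 8 : 4];

/* Here the value is only an operand of the initializer; as the post
 * above notes, it could in principle be computed on the target itself
 * before main() starts. */
unsigned int wrapped = 65535u + 1u;

int main(void)
{
    printf("sizeof buffer = %zu, wrapped = %u\n", sizeof buffer, wrapped);
    return 0;
}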
On 29/08/2025 16:54, Keith Thompson wrote:
Thiago Adams <thiago.adams@gmail.com> writes:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that runs the
code.
What happens in this case?
The solution I can think of is emulation when evaluating constant
expressions. But I don't know if any compiler does it this way.
For example, 65535u + 1u will evaluate to 0u if the target system
has 16-bit int, 65536u otherwise. (I picked an example that
doesn't depend on UINT_MAX or any other macros defined in the
standard library.)
Any compiler, cross- or not, is required to evaluate constant
expressions correctly for the target system. Whether they do so
by some sort of emulation is an implementation detail.
Even a non-cross compiler might not be implemented in exactly
the same language and configuration as the code it's compiling,
so evaluating constant expressions locally might not work correctly.
Yes, this is the kind of example I had in mind.
So in theory it has to be the same result. This may be hard to achieve.
On 29/08/2025 22:10, Thiago Adams wrote:
On 29/08/2025 16:54, Keith Thompson wrote:
Thiago Adams <thiago.adams@gmail.com> writes:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that runs the
code.
What happens in this case?
The solution I can think of is emulation when evaluating constant
expressions. But I don't know if any compiler does it this way.
For example, 65535u + 1u will evaluate to 0u if the target system
has 16-bit int, 65536u otherwise. (I picked an example that
doesn't depend on UINT_MAX or any other macros defined in the
standard library.)
Any compiler, cross- or not, is required to evaluate constant
expressions correctly for the target system. Whether they do so
by some sort of emulation is an implementation detail.
Even a non-cross compiler might not be implemented in exactly
the same language and configuration as the code it's compiling,
so evaluating constant expressions locally might not work correctly.
Yes, this is the kind of example I had in mind.
So in theory it has to be the same result. This may be hard to achieve.
Yes, it can be hard to achieve in some cases. For things like integer
arithmetic, it's no serious challenge - floating point is the biggie for
the challenge of getting the details correct when the host and the
target are different. (And even if the compiler is native, different
floating point options can lead to significantly different results.)
Compilers have to make sure they can do compile-time evaluation that is
bit-perfect to run-time evaluation before they can use it as an
optimisation - any risk of an error is a bug in the compiler. I don't
know about other compilers, but gcc has a /huge/ library that is used to
simulate floating point on a wide range of targets and options,
precisely so that it can get this right.
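As an illustration of why floating point is the hard case (my own sketch, not from the post): whether the sum below is folded at compile time or computed at run time, the bits must match for the fold to be a valid optimisation, and rounding details (excess precision, contraction, fast-math options) can change them.

#include <stdio.h>

int main(void)
{
    /* Neither 0.1, 0.2 nor 0.3 is exactly representable in binary
     * floating point, so the outcome depends on exactly how each
     * operation is rounded. */
    double a = 0.1;
    double b = 0.2;

    printf("%d\n", a + b == 0.3);   /* 0 under strict IEEE double */
    printf("%.17g\n", a + b);       /* 0.30000000000000004 */
    return 0;
}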
On 9/1/2025 5:10 AM, David Brown wrote:
On 29/08/2025 22:10, Thiago Adams wrote:
On 29/08/2025 16:54, Keith Thompson wrote:
Thiago Adams <thiago.adams@gmail.com> writes:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that runs the
code.
What happens in this case?
The solution I can think of is emulation when evaluating constant
expressions. But I don't know if any compiler does it this way.
For example, 65535u + 1u will evaluate to 0u if the target system
has 16-bit int, 65536u otherwise. (I picked an example that
doesn't depend on UINT_MAX or any other macros defined in the
standard library.)
Any compiler, cross- or not, is required to evaluate constant
expressions correctly for the target system. Whether they do so
by some sort of emulation is an implementation detail.
Even a non-cross compiler might not be implemented in exactly
the same language and configuration as the code it's compiling,
so evaluating constant expressions locally might not work correctly.
Yes, this is the kind of example I had in mind.
So in theory it has to be the same result. This may be hard to achieve.
Yes, it can be hard to achieve in some cases. For things like integer
arithmetic, it's no serious challenge - floating point is the biggie
for the challenge of getting the details correct when the host and the
target are different. (And even if the compiler is native, different
floating point options can lead to significantly different results.)
Compilers have to make sure they can do compile-time evaluation that
is bit-perfect to run-time evaluation before they can use it as an
optimisation - any risk of an error is a bug in the compiler. I don't
know about other compilers, but gcc has a /huge/ library that is used
to simulate floating point on a wide range of targets and options,
precisely so that it can get this right.
Interesting.
Yes, I think for integers it is not so difficult.
If the compiler has the range int8_t ... int64_t, then it is just a
matter of selecting the fixed size that corresponds to the
abstract type for that platform.
For floating point, I think at least for "desktop" computers the result
may be the same.
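A minimal sketch of that idea for unsigned integer arithmetic (my own code; fold_add_uint is a made-up helper, and uint64_t stands in for whatever wide type a compiler uses internally): compute in a wide host type, then reduce modulo 2^N, where N is the target's unsigned int width.

#include <stdint.h>
#include <stdio.h>

/* Fold a + b as the target's unsigned int would, regardless of the
 * host's own int width. */
static uint64_t fold_add_uint(uint64_t a, uint64_t b, unsigned target_bits)
{
    uint64_t mask = (target_bits >= 64) ? UINT64_MAX
                                        : (UINT64_C(1) << target_bits) - 1;
    return (a + b) & mask;
}

int main(void)
{
    printf("%llu\n", (unsigned long long)fold_add_uint(65535u, 1u, 16)); /* 0 */
    printf("%llu\n", (unsigned long long)fold_add_uint(65535u, 1u, 32)); /* 65536 */
    return 0;
}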
David Brown <david.brown@hesbynett.no> writes:
[...]
Compilers have to make sure they can do compile-time evaluation that
is bit-perfect to run-time evaluation before they can use it as an
optimisation - any risk of an error is a bug in the compiler. I don't
know about other compilers, but gcc has a /huge/ library that is used
to simulate floating point on a wide range of targets and options,
precisely so that it can get this right.
What library are you referring to? Is it something internal to gcc?
I know gcc uses GMP, but that's not really floating-point emulation.
On 2025-09-01, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
David Brown <david.brown@hesbynett.no> writes:
[...]
Compilers have to make sure they can do compile-time evaluation
that is bit-perfect to run-time evaluation before they can use it
as an optimisation - any risk of an error is a bug in the
compiler. I don't know about other compilers, but gcc has a
/huge/ library that is used to simulate floating point on a wide
range of targets and options, precisely so that it can get this
right.
What library are you referring to? Is it something internal to gcc?
I know gcc uses GMP, but that's not really floating-point
emulation.
GCC uses not only GNU GMP but also GNU MPFR.
On Tue, 2 Sep 2025 06:40:43 -0000 (UTC)
Kaz Kylheku <643-408-1753@kylheku.com> wrote:
On 2025-09-01, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
David Brown <david.brown@hesbynett.no> writes:
[...]
Compilers have to make sure they can do compile-time evaluation
that is bit-perfect to run-time evaluation before they can use it
as an optimisation - any risk of an error is a bug in the
compiler. I don't know about other compilers, but gcc has a
/huge/ library that is used to simulate floating point on a wide
range of targets and options, precisely so that it can get this
right.
What library are you referring to? Is it something internal to gcc?
I know gcc uses GMP, but that's not really floating-point
emulation.
GCC also uses not only GNU GMP but also GNU MPFR.
MPFR is of no help when you want to emulate the exact behavior of a
particular hardware format.
On 2025-09-02, Michael S <already5chosen@yahoo.com> wrote:
On Tue, 2 Sep 2025 06:40:43 -0000 (UTC)
Kaz Kylheku <643-408-1753@kylheku.com> wrote:
On 2025-09-01, Keith Thompson <Keith.S.Thompson+u@gmail.com>
wrote:
David Brown <david.brown@hesbynett.no> writes:
[...]
Compilers have to make sure they can do compile-time evaluation
that is bit-perfect to run-time evaluation before they can use
it as an optimisation - any risk of an error is a bug in the
compiler. I don't know about other compilers, but gcc has a
/huge/ library that is used to simulate floating point on a wide
range of targets and options, precisely so that it can get this
right.
What library are you referring to? Is it something internal to
gcc? I know gcc uses GMP, but that's not really floating-point
emulation.
GCC also uses not only GNU GMP but also GNU MPFR.
MPFR is of no help when you want to emulate the exact behavior of a
particular hardware format.
Then why is it there?
David Brown <david.brown@hesbynett.no> wrote:
On 29/08/2025 22:10, Thiago Adams wrote:
On 29/08/2025 16:54, Keith Thompson wrote:
Thiago Adams <thiago.adams@gmail.com> writes:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that
runs the code.
What happens in this case?
The solution I can think of is emulation when evaluating constant
expressions. But I don't know if any compiler does it this way.
For example, 65535u + 1u will evaluate to 0u if the target system
has 16-bit int, 65536u otherwise. (I picked an example that
doesn't depend on UINT_MAX or any other macros defined in the
standard library.)
Any compiler, cross- or not, is required to evaluate constant
expressions correctly for the target system. Whether they do so
by some sort of emulation is an implementation detail.
Even a non-cross compiler might not be implemented in exactly
the same language and configuration as the code it's compiling,
so evaluating constant expressions locally might not work
correctly.
Yes, this is the kind of example I had in mind.
So in theory it has to be the same result. This may be hard to
achieve.
Yes, it can be hard to achieve in some cases. For things like
integer arithmetic, it's no serious challenge - floating point is
the biggie for the challenge of getting the details correct when
the host and the target are different. (And even if the compiler
is native, different floating point options can lead to
significantly different results.)
Compilers have to make sure they can do compile-time evaluation
that is bit-perfect to run-time evaluation before they can use it
as an optimisation - any risk of an error is a bug in the compiler.
I don't know about other compilers, but gcc has a /huge/ library
that is used to simulate floating point on a wide range of targets
and options, precisely so that it can get this right.
AFAIK in normal mode gcc does not consider differences between
compile-time and run-time evaluation of floating point constants as a
bug. And they may differ, with compile-time evaluation usually giving
more accuracy. OTOH they care very much that cross-compiler and
native compiler produce the same results.
So they do not use native
floating point arithmetic to evaluate constants. Rather, both
native compiler and cross compiler use the same portable library
(that is, MPFR). One can probably request a more strict mode.
If it is available, then I do not know how it is done. One
possible approach is to delay anything non-trivial to runtime.
Non-trivial for floating point likely means transcendental
functions: different libraries almost surely will produce different
results, and for legal reasons alone the compiler cannot assume access
to the target library.
Ordinary four arithmetic operations for IEEE are easy: rounding is
handled by MPFR and things like overflow, infinities, etc., are just
a bunch of tedious special cases. But transcendental functions
usually do not have well specified rounding behaviour, so exact
rounding in MPFR is of no help when trying to reproduce results
from runtime libraries.
Old (and possibly some new) embedded targets are in a sense more
"interesting", as they implemented basic operations in software,
frequently taking some shortcuts to gain speed.
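A hedged sketch of the "easy" case described above (my own code, not GCC's; it assumes MPFR and GMP are installed and the program is linked with -lmpfr -lgmp): perform one correctly rounded operation at exactly the precision of the target format, here IEEE binary64.

#include <stdio.h>
#include <mpfr.h>

int main(void)
{
    mpfr_t a, b, r;

    /* 53 bits of significand == IEEE binary64 ('double'). */
    mpfr_inits2(53, a, b, r, (mpfr_ptr) 0);

    mpfr_set_d(a, 0.1, MPFR_RNDN);        /* exact: 0.1 is already a double */
    mpfr_set_d(b, 0.2, MPFR_RNDN);
    mpfr_add(r, a, b, MPFR_RNDN);         /* one correctly rounded addition */

    /* For in-range results this reproduces the bits a strict IEEE target
     * would compute at run time; exponent-range corners (overflow,
     * subnormals) need the extra care mentioned above. */
    printf("%.17g\n", mpfr_get_d(r, MPFR_RNDN));

    mpfr_clears(a, b, r, (mpfr_ptr) 0);
    return 0;
}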
On Tue, 2 Sep 2025 16:58:45 -0000 (UTC)
Kaz Kylheku <643-408-1753@kylheku.com> wrote:
On 2025-09-02, Michael S <already5chosen@yahoo.com> wrote:
On Tue, 2 Sep 2025 06:40:43 -0000 (UTC)
Kaz Kylheku <643-408-1753@kylheku.com> wrote:
On 2025-09-01, Keith Thompson <Keith.S.Thompson+u@gmail.com>
wrote:
David Brown <david.brown@hesbynett.no> writes:
[...]
Compilers have to make sure they can do compile-time evaluation
that is bit-perfect to run-time evaluation before they can use
it as an optimisation - any risk of an error is a bug in the
compiler. I don't know about other compilers, but gcc has a
/huge/ library that is used to simulate floating point on a wide
range of targets and options, precisely so that it can get this
right.
What library are you referring to? Is it something internal to
gcc? I know gcc uses GMP, but that's not really floating-point
emulation.
GCC also uses not only GNU GMP but also GNU MPFR.
MPFR is of no help when you want to emulate the exact behavior of a
particular hardware format.
Then why is it there?
Most likely because people who think that compilers make an
extraordinary effort to match FP results evaluated at compile time with
those evaluated at run time do not know what they are talking about.
As suggested above by Waldek Hebisch, compilers are quite happy to do
compile-time evaluation at higher (preferably much higher) precision
than at run time.
On 01/09/2025 23:11, Keith Thompson wrote:
David Brown <david.brown@hesbynett.no> writes:
[...]
Compilers have to make sure they can do compile-time evaluation that
is bit-perfect to run-time evaluation before they can use it as an
optimisation - any risk of an error is a bug in the compiler. I don't
know about other compilers, but gcc has a /huge/ library that is used
to simulate floating point on a wide range of targets and options,
precisely so that it can get this right.
What library are you referring to? Is it something internal to gcc?
I know gcc uses GMP, but that's not really floating-point emulation.
I am afraid I don't know the details here, and to what extent it is
internal to the GCC project or external. I /think/, but I could easily
be wrong, that general libraries like GMP are used for the actual
calculations, while there is GCC-specific stuff to make sure things
match up with the target details.
On Tue, 2 Sep 2025 17:32:56 -0000 (UTC)
antispam@fricas.org (Waldek Hebisch) wrote:
David Brown <david.brown@hesbynett.no> wrote:
On 29/08/2025 22:10, Thiago Adams wrote:
On 29/08/2025 16:54, Keith Thompson wrote:
Thiago Adams <thiago.adams@gmail.com> writes:
My curiosity is the following:
Given a constant expression, the result may be different on the
machine that compiles the code compared with the machine that
runs the code.
What happens in this case?
The solution I can think of is emulation when evaluating constant
expressions. But I don't know if any compiler does it this way.
For example, 65535u + 1u will evaluate to 0u if the target system
has 16-bit int, 65536u otherwise. (I picked an example that
doesn't depend on UINT_MAX or any other macros defined in the
standard library.)
Any compiler, cross- or not, is required to evaluate constant
expressions correctly for the target system. Whether they do so
by some sort of emulation is an implementation detail.
Even a non-cross compiler might not be implemented in exactly
the same language and configuration as the code it's compiling,
so evaluating constant expressions locally might not work
correctly.
Yes, this is the kind of example I had in mind.
So in theory it has to be the same result. This may be hard to
achieve.
Yes, it can be hard to achieve in some cases. For things like
integer arithmetic, it's no serious challenge - floating point is
the biggie for the challenge of getting the details correct when
the host and the target are different. (And even if the compiler
is native, different floating point options can lead to
significantly different results.)
Compilers have to make sure they can do compile-time evaluation
that is bit-perfect to run-time evaluation before they can use it
as an optimisation - any risk of an error is a bug in the compiler.
I don't know about other compilers, but gcc has a /huge/ library
that is used to simulate floating point on a wide range of targets
and options, precisely so that it can get this right.
AFAIK in normal mode gcc does not consider differences between
compile-time and run-time evaluation of floating point constants as a
bug. And they may differ, with compile-time evaluation usually giving
more accuracy. OTOH they care very much that cross-compiler and
native compiler produce the same results.
For the majority of "interesting" targets, native compilers do not exist.
So they do not use native
floating point arithmetic to evaluate constants. Rather, both
native compiler and cross compiler use the same portable library
(that is, MPFR). One can probably request a more strict mode.
If it is available, then I do not know how it is done. One
possible approach is to delay anything non-trivial to runtime.
I certainly would not be happy if a compiler that I am using for embedded
targets, which typically do not have hardware support for 'double',
failed to evaluate DP constant expressions at compile time.
Luckily, that never happens.
Non-trivial for floating point likely means transcendental
functions: different libraries almost surely will produce different
results, and for legal reasons alone the compiler cannot assume access
to the target library.
Right now in C, including C23, transcendental functions cannot be part
of a constant expression.
Ordinary four arithmetic operations for IEEE are easy: rounding is
handled by MPFR and things like overflow, infinities, etc., are just
a bunch of tedious special cases. But transcendental functions
usually do not have well specified rounding behaviour, so exact
rounding in MPFR is of no help when trying to reproduce results
from runtime libraries.
Old (and possibly some new) embedded targets are in a sense more
"interesting", as they implemented basic operations in software,
frequently taking some shortcuts to gain speed.
Why "some new"? The overwhelming majority of microcontrollers, both old
and new, do not implement double precision FP math in hardware.
Michael S <already5chosen@yahoo.com> wrote:
On Tue, 2 Sep 2025 17:32:56 -0000 (UTC)
antispam@fricas.org (Waldek Hebisch) wrote:
David Brown <david.brown@hesbynett.no> wrote:
On 29/08/2025 22:10, Thiago Adams wrote:
On 29/08/2025 16:54, Keith Thompson wrote:
Thiago Adams <thiago.adams@gmail.com> writes:
My curiosity is the following:
Given a constant expression, the result may be different on
the machine that compiles the code compared with the machine
that runs the code.
What happens in this case?
The solution I can think of is emulation when evaluating
constant expressions. But I don't know if any compiler does
it this way.
For example, 65535u + 1u will evaluate to 0u if the target
system has 16-bit int, 65536u otherwise. (I picked an example
that doesn't depend on UINT_MAX or any other macros defined in
the standard library.)
Any compiler, cross- or not, is required to evaluate constant
expressions correctly for the target system. Whether they do
so by some sort of emulation is an implementation detail.
Even a non-cross compiler might not be implemented in exactly
the same language and configuration as the code it's compiling,
so evaluating constant expressions locally might not work
correctly.
Yes, this is the kind of example I had in mind.
So in theory it has to be the same result. This may be hard to
achieve.
Yes, it can be hard to achieve in some cases. For things like
integer arithmetic, it's no serious challenge - floating point is
the biggie for the challenge of getting the details correct when
the host and the target are different. (And even if the compiler
is native, different floating point options can lead to
significantly different results.)
Compilers have to make sure they can do compile-time evaluation
that is bit-perfect to run-time evaluation before they can use it
as an optimisation - any risk of an error is a bug in the
compiler. I don't know about other compilers, but gcc has a
/huge/ library that is used to simulate floating point on a wide
range of targets and options, precisely so that it can get this
right.
AFAIK in normal mode gcc does not consider differences between
compile-time and run-time evaluation of floating point constants
as a bug. And they may differ, with compile-time evaluation
usually giving more accuracy. OTOH they care very much that
cross-compiler and native compiler produce the same results.
For the majority of "interesting" targets, native compilers do not exist.
So they do not use native
floating point arithmetic to evaluate constants. Rather, both
native compiler and cross compiler use the same portable library
(that is, MPFR). One can probably request a more strict mode.
If it is available, then I do not know how it is done. One
possible approach is to delay anything non-trivial to runtime.
I certainly would not be happy if a compiler that I am using for
embedded targets, which typically do not have hardware support for
'double', failed to evaluate DP constant expressions at compile
time. Luckily, that never happens.
Non-trivial for floating point likely means transcendental
functions: different libraries almost surely will produce different
results, and for legal reasons alone the compiler cannot assume access
to the target library.
Right now in C, including C23, transcendental functions cannot be
part of a constant expression.
That is irrelevant to the current question. Computing constants at
compile time is an optimization, and compilers do this also for
transcendental constants.
Ordinary four arithmetic operations for IEEE are easy: rounding is
handled by MPFR and things like overflow, infinities, etc., are just
a bunch of tedious special cases. But transcendental functions
usually do not have well specified rounding behaviour, so exact
rounding in MPFR is of no help when trying to reproduce results
from runtime libraries.
Old (and possibly some new) embedded targets are in a sense more
"interesting", as they implemented basic operations in software,
frequently taking some shortcuts to gain speed.
Why "some new"? The overwhelming majority of microcontrollers, both old
and new, do not implement double precision FP math in hardware.
Old libraries often took shortcuts to make them faster. For new
targets there is pressure to implement precise rounding, so I do
not know if new targets are doing "interesting" things. Also,
while a minority of embedded targets has hardware floating point,
the need for _fast_ floating point is limited, and when needed an
application may use a processor with hardware floating point.
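For concreteness (my own snippet, not from the post): the initializer below is not a constant expression in ISO C, yet an optimizing compiler is free to compute it at translation time, which is exactly where the reproducibility question discussed above comes back.

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* sin(1.0) is not a constant expression in ISO C (not even in C23),
     * but a compiler may still fold the call at compile time as an
     * optimization - the case where compile-time and run-time results
     * can diverge, as discussed above. */
    double x = sin(1.0) * 2.0;

    printf("%.17g\n", x);
    return 0;
}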