Irregular reminder: floating point calculations are not exact.
Take your program to different #hardware and your tests may start failing:
https://codeberg.org/libobscura/libobscura/src/branch/master/crates/conv/tests/convtest.rs#L40
https://codeberg.org/libobscura/libobscura/actions/runs/30#jobstep-7-431
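A minimal Rust sketch of the baseline problem (Rust, since the linked repository is; the values are the classic textbook ones):

    fn main() {
        // 0.1, 0.2 and 0.3 are not exactly representable in binary,
        // so the sum of the first two picks up rounding error
        assert!(0.1 + 0.2 != 0.3);
        println!("{:.17}", 0.1 + 0.2); // prints 0.30000000000000004
    }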
@pavel As far as I understand, it's even worse than that: depending on how you calculate the same value even on the same hardware, it can differ.
-ffast-math, for example (see the sketch after this post).
Then you have differing hardware support for functions, so an operation might be polyfilled in software, computing a slightly different value.
I've heard of floating point being a problem for console emulators.
Then you have my case: GPUs, where the precision is not even exactly defined.
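A hedged illustration of what -ffast-math-style reassociation does: Rust itself won't reorder these, but the two groupings below are exactly the kind of rewrite such flags license, and they produce different results:

    fn main() {
        let (a, b, c) = (1e16_f64, -1e16_f64, 1.0_f64);
        // float addition is not associative: the grouping decides
        // whether the 1.0 survives or is absorbed by the big terms
        let left = (a + b) + c;  // 0.0 + 1.0 == 1.0
        let right = a + (b + c); // -1e16 + 1.0 rounds back to -1e16, so 0.0
        assert_ne!(left, right);
    }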
@pavel I just assume all floating point operations are approximations, and they may differ. Otherwise you're going to run into puzzles like this:
https://chaos.social/@epilys/113538172289599584
Exactness of floats is best handled with error analysis techniques.
That's why the shader case bothers me, it's supposed to be exact to 8 bits :/
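On the "exact to 8 bits" point: even a one-ulp difference in the value the hardware hands back can flip the quantized byte when it lands near a rounding boundary. A small Rust sketch (the boundary value is illustrative):

    fn main() {
        // a scaled pixel value sitting exactly on the 100/101 boundary
        let at_boundary = 100.5_f64;
        // the same value one ulp lower, e.g. from hardware that rounded differently
        let one_ulp_below = f64::from_bits(at_boundary.to_bits() - 1);
        // Rust's round() rounds ties away from zero
        assert_eq!(at_boundary.round() as u8, 101);
        assert_eq!(one_ulp_below.round() as u8, 100);
    }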
@dcz Misleading as written. An arbitrary sequence of FP operations on IEEE-compliant hardware produces bit-exact results on all of them. The list of non-compliant hardware is short(ish): older GPUs in the mainstream (not sure about the state of the embedded world). On compliant hardware you'll see different results only if different sequences are being executed: different libm's and/or different compiler options (including some default choices).
@mbr Maybe if you're a hardware programmer who bangs out assembly.
For us mortals who operate on sequences no more low-level than 0.2*0.9, floats are uncertain (see the sketch after this post).
Even then, it's exact only if you avoid transcendental functions:
I think exactness is more of an exception, all things considered.
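The 0.2*0.9 case above, spelled out in Rust: the nearest doubles to 0.2 and 0.9 multiply to a value one ulp away from the double that the literal 0.18 parses to.

    fn main() {
        // the individual rounding errors of 0.2 and 0.9 don't cancel
        assert!(0.2 * 0.9 != 0.18);
        println!("{:.17}", 0.2 * 0.9); // prints 0.18000000000000002
    }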
@dcz That's an example of something implemented in libm. If you want the same results everywhere you have to either choose and include a sin implementation in the software or fix a libm version used everywhere (the latter is certainly easier). And the same holds for all "software" functions. The SO link is a different topic...about how accurate a specific implementation is.
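One way to do the "include a sin implementation in the software" option in Rust is the libm crate, a pure-software port of musl's libm: since it never touches the platform math library, it returns the same bits on every target. A sketch, assuming libm = "0.2" as a dependency:

    fn portable_sin(x: f64) -> f64 {
        // pure-Rust implementation, identical results on all platforms
        libm::sin(x)
    }

    fn main() {
        let x = 0.1_f64;
        // x.sin() may route through the platform's libm and differ in the
        // last bits between systems; libm::sin won't
        println!("std:  {:.17}", x.sin());
        println!("libm: {:.17}", portable_sin(x));
    }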
@mbr That's what I mean: you can make it work exactly, but you have to pay extra attention to it. Taken to an extreme, you can emulate anything exactly, but on the level most people operate, it doesn't matter.
What different topic do you mean? To me, different hardware having different rounding is what my original post was about: different hardware, different results.
@dcz The short version of what I'm saying is: there's no black magic. The same sequence of hardware ops: multiply, divide, add, sub, sqrt, fma (assuming available) on all compliant hardware gives bit-identical (used exact before...identical is more clear) results (including changing rounding modes...but pretty much only so-called experts goof around with stuff like that). Not getting bit-identical results (in a high level language) is a problem related to the language, using different
@dcz libraries and/or compiler option choices. Notably, C/C++ compilers default to automatically using fma if the hardware target has a fast fma. This will cause targets with and without fast fma to produce different sequences and therefore different results (see the sketch after this post). Striving for bit-exact results in either C or C++ (as an example) is a PITA but it's a language and tools problem.
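The fma point is easy to reproduce in Rust, where the choice is explicit: f64::mul_add is a fused multiply-add (one rounding) while a * b + c rounds twice, and there are inputs where the two disagree:

    fn main() {
        let a = 1.0_f64 + f64::EPSILON;          // 1 + 2^-52
        let c = -(1.0_f64 + 2.0 * f64::EPSILON); // -(1 + 2^-51)
        // fused: a*a + c is computed exactly, then rounded once -> 2^-104
        let fused = a.mul_add(a, c);
        // unfused: a*a first rounds to 1 + 2^-51, which c then cancels -> 0.0
        let unfused = a * a + c;
        assert_ne!(fused, unfused);
    }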
@mbr I agree on one level. On another level, it's a mathematics problem. Nobody needs floats, people want real numbers. Floats are a compromise without the same properties, especially around associativity.
To get useful results, you can either pay attention to the calculation, like you point out, or pay attention to the error bars on the result.
I made this post because I believe most people don't want to pay a lot of attention. To them, floats are inexact.
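For the error-bars option, the usual move in tests is to replace bit-exact assertions with a tolerance sized by analyzing the computation. A minimal sketch (the tolerance here is an arbitrary placeholder, not derived from any real error analysis):

    fn approx_eq(a: f64, b: f64, tol: f64) -> bool {
        // mixed absolute/relative tolerance, so it behaves sensibly
        // both near zero and for large magnitudes
        (a - b).abs() <= tol * 1.0_f64.max(a.abs().max(b.abs()))
    }

    fn main() {
        let computed = 0.1 + 0.2;
        // a bit-exact comparison with 0.3 would fail; a tolerance doesn't
        assert!(approx_eq(computed, 0.3, 1e-12));
    }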