On 2020-05-13 11:44 -0700, Jeff Newmiller wrote:
> Depending on reproducibility in the least
> significant bits of floating point
> calculations is a bad practice. Just
> because you decide based on this one
> example that one implementation of BLAS is
> better than another does not mean that will
> be true for all specific examples. IMO you
> are drawing conclusions on data that is
> effectively random and should change your
> definition of "sufficient to the task".

Dear Jeff,

Right, so I really would have wanted OpenBLAS to be as reproducible as the reference BLAS in this one random example, but my hands remain tied here since I do not know anything about BLAS ...

More interestingly, could you dream up any idea as to what might cause this difference?

Best,
Rasmus
In general, any time you deal with floating point numbers having different magnitudes, you risk pushing some low-precision bits out of the result. Simply changing the sequence of calculations, such as literal polynomial evaluation versus Horner's method, can yield different results. Take a course in Numerical Analysis to learn more.

[1] https://en.m.wikipedia.org/wiki/Horner%27s_method
[2] https://en.m.wikipedia.org/wiki/Numerical_analysis

On May 13, 2020 11:57:09 AM PDT, Rasmus Liland <jral at posteo.no> wrote:
> On 2020-05-13 11:44 -0700, Jeff Newmiller wrote:
>> Depending on reproducibility in the least
>> significant bits of floating point
>> calculations is a bad practice. Just
>> because you decide based on this one
>> example that one implementation of BLAS is
>> better than another does not mean that will
>> be true for all specific examples. IMO you
>> are drawing conclusions on data that is
>> effectively random and should change your
>> definition of "sufficient to the task".
>
> Dear Jeff,
>
> Right, so I really would have wanted OpenBLAS
> to be as reproducible as the reference BLAS
> in this one random example, but my hands
> remain tied here since I do not know anything
> about BLAS ...
>
> More interestingly, could you dream up any
> idea as to what might cause this difference?
>
> Best,
> Rasmus

--
Sent from my phone. Please excuse my brevity.
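The order-of-operations effect Jeff describes is easy to demonstrate directly. Below is a minimal sketch (in Python for brevity; the same behaviour holds for R's doubles, since both use IEEE 754 double-precision arithmetic). The coefficient values and the point of evaluation are arbitrary choices for illustration, not taken from the thread.

```python
# Floating-point addition is not associative: regrouping a sum can
# change the result when the operands have very different magnitudes.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)  # 1.0  -- the large terms cancel first
print(a + (b + c))  # 0.0  -- the 1.0 is absorbed into -1e16 and lost

# Likewise, literal polynomial evaluation and Horner's method perform
# the same arithmetic in a different order, so near a point of heavy
# cancellation they may disagree in the low-order bits.
coeffs = [1.0, -3.0, 3.0, -1.0]   # x^3 - 3x^2 + 3x - 1 = (x - 1)^3
x = 1.0000001

# Literal evaluation: compute each power of x, multiply, then sum.
literal = sum(k * x ** (len(coeffs) - 1 - i) for i, k in enumerate(coeffs))

# Horner's method: nested multiply-add, one pass over the coefficients.
horner = 0.0
for k in coeffs:
    horner = horner * x + k

print(literal, horner)  # may differ in the last few bits
```

This is exactly why two BLAS implementations (which sum products in different orders, possibly across threads) can legitimately return results that differ in the least significant bits while both being correct to working precision.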
On 2020-05-13 13:13 -0700, Jeff Newmiller wrote:
> In general, any time you deal with floating
> point numbers having different magnitudes,
> you risk pushing some low precision bits
> out of the result. Simply changing the
> sequence of calculations such as a literal
> polynomial evaluation versus Horner's
> method can yield different results. Take a
> course in Numerical Analysis to learn
> more.
>
> [1] https://en.m.wikipedia.org/wiki/Horner%27s_method
> [2] https://en.m.wikipedia.org/wiki/Numerical_analysis

Right, it seems fairly interesting. I'll look into it at some point.

/JR