Undefined Behavior in C and C++ is often exploited by compilers for optimization. It is therefore widely believed that permitting Undefined Behavior makes such programs faster, even though today's hardware is wildly different from the tiny, bare-bones CPUs these languages were designed for fifty years ago.
This study examines whether this is the case by disabling many such optimizations in clang and running a wide range of “optimized” and “unoptimized” benchmarks on Intel, AMD, and ARM architectures.
The general result is that, while there are differences, the benchmarks do not run significantly faster with “optimizations”. A slowdown was observed on ARM, but only when link-time optimization (LTO) was not used. Averaged differences between enabling and disabling all “UB-based optimizations” were typically below 2%, which is at the noise threshold.
There were also many cases where “UB-optimizations” made programs perform worse.
Exceptions to this general pattern were found, and their causes were tracked down and explained.
Considering all that, the performance impact of permitting UB appears to be… a myth?