Some have, but it rarely goes beyond "try a few different approaches and pick the best one" (or rather, pick the one that happened to work best on the chosen datasets; it could well be the worst of the options when dealing with live data).
that's exactly what i've been saying: it takes purpose-built hardware to provide fast-running compilers capable of generating truly efficient output.
That's because that's not the job of an (optimizing) compiler. You expect the compiler to produce an object/binary in an acceptable amount of time, and one that runs the program with a reasonable level of performance.
Within those constraints, compilers have become incredibly good at what they do. But in the end, they produce 'general-purpose code', just as most libraries provide 'general-purpose containers and routines'.
Specialising always makes code faster, but compilers and libraries can't afford to go that route, unless the compiler itself is specialised for a particular task (and those do exist). There are compilers out there that will produce fascinating, blindingly fast code for math problems, but which won't do well in other problem domains.
as i said before, an if-based version would be good too, but it can't beat the 'if-reduced' one. let's look at a simple example:
And you will never know unless you write your fsort WITH ifs, compile both with optimizations turned on, and run them side by side on the same computer with the same datasets.
I highly doubt removing a couple of ifs will produce improvements of the same order as the fsort/qsort comparison (which is comparing apples to oranges).
Micro-optimizing doesn't provide that kind of percentage improvement, and 'handholding the compiler' definitely doesn't.