The Problem
We have a mid-sized program for a simulation task that we need to optimize. We have already done our best to optimize the source, to the limit of our programming skills, including profiling with gprof and Valgrind.
Once it is finally finished, we want to run the program on several systems, probably for some months. Therefore we are really interested in pushing the optimization to the limits.
All systems will run Debian/Linux on relatively new hardware (Intel i5 or i7).
The Question
What optimization options are available, using a recent version of g++, that go beyond -O3/-Ofast?
We are also interested in costly minor optimizations that will pay off in the long run.
What we use right now
Right now we use the following g++ optimization options:
-Ofast: the highest "standard" optimization level. The included -ffast-math did not cause any problems in our calculations, so we decided to go for it, despite the non-compliance with the standard.
-march=native: enables the use of all CPU-specific instructions.
-flto: allows link-time optimization across different compilation units.

The Answer
Most of the answers suggest alternative solutions, such as different compilers or external libraries, which would most likely bring a lot of rewriting or integration work. I will try to stick to what the question is asking, and focus on what can be done with GCC alone, by activating compiler flags or making minimal changes to the code, as requested by the OP. This is not a "you must do this" answer, but rather a collection of GCC tweaks that have worked out well for me and that you can try if they are relevant in your specific context.
Warnings regarding original question
Before going into the details, a few warnings regarding the question, mainly for people who will come along, read it and say "the OP is optimising beyond -O3, I should use the same flags as he does!".

-march=native enables the use of instructions specific to a given CPU architecture, which are not necessarily available on a different architecture. The program may not work at all if run on a system with a different CPU, or may be significantly slower (as this also enables -mtune=native), so be aware of this if you decide to use it. More information here.

-Ofast, as you stated, enables some non-standard-compliant optimisations, so it should be used with caution as well. More information here.

Other GCC flags to try out
The details for the different flags are listed here.
-Ofast enables -ffast-math, which in turn enables -fno-math-errno, -funsafe-math-optimizations, -ffinite-math-only, -fno-rounding-math, -fno-signaling-nans and -fcx-limited-range. You can go even further on floating-point optimisations by selectively toggling the individual sub-flags such as -fno-signed-zeros, -fno-trapping-math and others (note that on recent GCC these two are in fact already pulled in via -funsafe-math-optimizations, so check the documentation for your version). Flags of this kind can give some additional performance increases on calculations, but you must check whether they actually benefit you and don't break any calculations; a small illustration of the kind of breakage to check for follows below.

-frename-registers has never produced unwanted results for me and tends to give a noticeable performance increase (i.e. one that can be measured when benchmarking). This is the type of flag that is very dependent on your processor though. -funroll-loops also sometimes gives good results (and also implies -frename-registers), but it depends on your actual code.
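As a concrete illustration of the kind of breakage to check for, here is a minimal sketch (the file name is a placeholder and the exact outcome depends on your GCC version and flags): -ffinite-math-only, which -Ofast already implies, lets GCC assume that NaN and Inf never occur, so a NaN check like the one below may be compiled down to a constant false.

```cpp
// nan_check.cpp -- compile once with -O2 and once with -Ofast and compare the output.
#include <cmath>
#include <cstdio>
#include <cstdlib>

int main() {
    double zero = std::atof("0");  // runtime zero, so the division below is not constant-folded
    double x = zero / zero;        // produces a NaN at run time
    // Under -ffinite-math-only the compiler may assume x can never be NaN
    // and replace this check with a constant 'false'.
    if (std::isnan(x))
        std::puts("NaN detected");
    else
        std::puts("NaN went undetected");
}
```

PGO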
GCC has Profile-Guided Optimisation (PGO) features. There isn't a lot of precise GCC documentation about them, but nevertheless getting PGO to run is quite straightforward:

first compile and run your program with -fprofile-generate, exercising it on data representative of a typical run so that profiling information is collected;
then recompile it with -fprofile-use. If your application is multi-threaded, also add the -fprofile-correction flag.

PGO with GCC can give amazing results and really significantly boost performance (I've seen a 15-20% speed increase on one of the projects I was recently working on). Obviously the issue here is to have data that is sufficiently representative of your application's execution, which is not always available or easy to obtain.
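To make the workflow concrete, here is a minimal sketch for a single translation unit; the file and program names are placeholders, and the flags are simply the ones from the question plus the PGO flags above.

```cpp
// pgo_demo.cpp -- hypothetical PGO workflow:
//   1. g++ -Ofast -march=native -flto -fprofile-generate pgo_demo.cpp -o pgo_demo
//   2. ./pgo_demo                      (run on representative data; writes *.gcda profile files)
//   3. g++ -Ofast -march=native -flto -fprofile-use pgo_demo.cpp -o pgo_demo
//      (add -fprofile-correction in step 3 if the program is multi-threaded)
#include <cstddef>
#include <cstdio>
#include <vector>

// A branchy function: the profiling run teaches GCC which branch is the
// common one, and that knowledge is fed back into the optimiser in step 3.
static long classify(const std::vector<int>& v) {
    long acc = 0;
    for (int x : v) {
        if (x % 97 == 0)   // rarely true for this data
            acc += 3L * x;
        else
            acc += x;
    }
    return acc;
}

int main() {
    std::vector<int> data(10000000);
    for (std::size_t i = 0; i < data.size(); ++i)
        data[i] = static_cast<int>(i);
    std::printf("%ld\n", classify(data));
}
```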
GCC's Parallel Mode
GCC features a Parallel Mode, which was first released around the time of the GCC 4.2 compiler.
Basically, it provides you with parallel implementations of many of the algorithms in the C++ Standard Library. To enable them globally, you just have to add the -fopenmp and -D_GLIBCXX_PARALLEL flags to the compiler. You can also selectively enable each algorithm when needed, but this will require some minor code changes.
All the information about this parallel mode can be found here.
If you frequently use these algorithms on large data structures and have many hardware thread contexts available, these parallel implementations can give a huge performance boost. I have only made use of the parallel implementation of sort so far, but to give a rough idea, I managed to reduce sorting time from 14 to 4 seconds in one of my applications (testing environment: a vector of 100 million objects with a custom comparator function, on an 8-core machine).
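For reference, here is a minimal sketch of the selective approach mentioned above (GCC-specific; the struct, sizes and file name are made up for illustration). It only needs -fopenmp at compile time; compiling everything with -fopenmp -D_GLIBCXX_PARALLEL instead makes plain std::sort calls dispatch to the parallel implementation without any code change.

```cpp
// parallel_sort_demo.cpp -- build with: g++ -O3 -fopenmp parallel_sort_demo.cpp
#include <parallel/algorithm>  // GCC parallel mode: __gnu_parallel::sort
#include <cstddef>
#include <random>
#include <vector>

struct Sample {
    double value;
    int id;
};

int main() {
    std::vector<Sample> v(10000000);
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    for (std::size_t i = 0; i < v.size(); ++i)
        v[i] = Sample{dist(rng), static_cast<int>(i)};

    // Explicit parallel sort with a custom comparator, similar to the benchmark above.
    __gnu_parallel::sort(v.begin(), v.end(),
                         [](const Sample& a, const Sample& b) { return a.value < b.value; });
}
```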
Extra tricks
Unlike the previous sections, this part does require some small changes in the code. The techniques are also GCC-specific (some of them work on Clang as well), so compile-time macros should be used to keep the code portable across other compilers. This section contains some more advanced techniques, and should not be used if you don't have some assembly-level understanding of what's going on. Also note that processors and compilers are pretty smart nowadays, so it may be tricky to get any noticeable benefit from the constructs described here.
__builtin_expect can help the compiler make better optimisation decisions by providing it with branch prediction information. Other constructs such as __builtin_prefetch bring data into a cache before it is accessed and can help reduce cache misses.

The hot and cold function attributes: the former indicates to the compiler that a function is a hotspot of the program, so it is optimised more aggressively and placed in a special subsection of the text section for better locality; the latter optimises the function for size and places it in another special subsection of the text section.

A small sketch combining these hints follows below.
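Here is a hedged sketch that combines these hints (GCC/Clang only; the function names, constants and prefetch distance are made up for illustration). Whether any of this measurably helps is highly workload- and CPU-dependent, so benchmark before and after.

```cpp
// hints_demo.cpp -- build with: g++ -O3 hints_demo.cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Rare error path: 'cold' optimises it for size and moves it out of the hot text.
__attribute__((cold, noinline))
static void report_bad_value(std::size_t i) {
    std::fprintf(stderr, "bad value at index %zu\n", i);
}

// Hot loop: 'hot' asks for more aggressive optimisation and better placement.
__attribute__((hot))
static double accumulate(const double* data, std::size_t n) {
    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        // Hint: prefetch data a few iterations ahead (read access, low temporal locality).
        if (i + 16 < n)
            __builtin_prefetch(data + i + 16, 0, 1);
        // Tell the compiler that negative values are expected to be rare.
        if (__builtin_expect(data[i] < 0.0, 0)) {
            report_bad_value(i);
            continue;
        }
        sum += data[i];
    }
    return sum;
}

int main() {
    std::vector<double> v(1000000, 1.0);
    std::printf("%f\n", accumulate(v.data(), v.size()));
}
```

I hope this answer will prove useful for some developers, and I will be glad to consider any edits or suggestions.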