What purpose do #ifdef and #if serve in C++?

The meaning of #ifdef is that the code inside the block will be included in the compilation only if the named preprocessor macro is defined. Similarly, #if means that the block will be included only if the expression evaluates to true (with any undefined macros that appear in the expression replaced by 0).
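As a small illustration (MY_FEATURE and VERBOSITY are made-up macro names), the two directives could be used like this:

#ifdef MY_FEATURE
// compiled only when MY_FEATURE is defined; its value does not matter
void feature_setup();
#endif

#if VERBOSITY >= 2
// compiled only when the expression is true; an undefined VERBOSITY counts as 0
#define LOG(msg) log_verbose(msg)
#else
#define LOG(msg) ((void)0)
#endif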

One important point here is that the preprocessor processes the source before it is compiled, and if a block is not included, it is never parsed by the actual compiler at all. This is an important feature of the construct.
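A minimal sketch of what that means in practice: the skipped block below refers to a type and a function that are never declared anywhere (PlatformHandle and legacy_init are made-up names), yet the file still compiles, because the compiler proper never sees those lines.

#if 0
// never parsed by the compiler: this would be an error otherwise
PlatformHandle h = legacy_init(42);
#endif

int main() { return 0; }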

Now, there is a reason C and C++ use this. These languages process a source file in linear order, so things that appear further down in the file are not yet known, and, more importantly, neither are things that appear in other source files. This means there is no (good) automatic way for a symbol defined in one source file to be referred to from another, especially if you want the types to be correct. So you need prototypes and extern declarations to be able to refer to them, and the same goes for data types (structs and enums) that two source files should share. A sketch of that sharing follows below.
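Here is what that looks like without headers (the file and symbol names are just examples): one translation unit defines a function and a variable, and the other has to repeat their declarations in order to use them.

// util.cpp
int counter = 0;                     // definition
int next_id() { return ++counter; }

// main.cpp
extern int counter;                  // declaration: the variable lives elsewhere
int next_id();                       // prototype: the function lives elsewhere

int main() { return next_id() + counter; }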

To make this practical, one puts these declarations in a header file that each source file can #include (which basically means inserting the header file into what the actual compiler sees). This in turn easily leads to one header file including another, and you may run into the situation where the same file would be included twice. Since it is invalid to repeat struct definitions, one needs to make sure the same header is not processed twice – that is where #ifndef comes in handy, in the include guard:

#ifndef HEADER_INCLUDED_
#define HEADER_INCLUDED_

// actual payload of the header file

#endif

In addition, back when parsing and compiling files took a long time, this could also give a speedup, since the payload of the header could be skipped quickly (the preprocessing phase processes the source much faster than the actual compilation phase).

Another reason macros were “needed” is that early C compilers tended to translate the code into assembler rather directly. By using a macro instead of a function, its expansion would be inserted at the call site and generate code right there, avoiding a function call. The same applies to constants, which would otherwise be variables that had to be fetched from somewhere else instead of being placed directly into the generated code.
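A sketch of that style (the names are just examples): the macro is expanded textually wherever it is used, so no call instruction or variable load is generated for it.

#define BUFFER_SIZE 512                    // constant placed directly into the code
#define MAX(a, b) ((a) > (b) ? (a) : (b))  // expanded in place, no function call

char buffer[BUFFER_SIZE];

int larger(int a, int b)
{
    return MAX(a, b);   // becomes ((a) > (b) ? (a) : (b)) after preprocessing
}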

A third reason is the possibility of conditional compilation. Most compilers predefine a set of macros that give information about the system being compiled for. For example, the macro _WIN32 is only defined if you are compiling for Windows. This makes it possible to have one code snippet that is included only on Windows and another that is included instead on other platforms. Most compilers also allow custom macros to be set from the command line (in Visual Studio you can change them in the project settings as well), which means you can control from the command line which parts will be compiled. The most striking such macro is NDEBUG, which, if defined, disables all asserts – it is normal to add /DNDEBUG when compiling release builds.
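A sketch of both uses (the sleep wrapper is just an example): one branch is compiled on Windows and the other everywhere else, and the assert disappears entirely from a build compiled with /DNDEBUG (or -DNDEBUG with gcc/clang).

#include <cassert>

#ifdef _WIN32
#include <windows.h>
void sleep_ms(unsigned ms) { Sleep(ms); }
#else
#include <unistd.h>
void sleep_ms(unsigned ms) { usleep(ms * 1000); }
#endif

void wait_half_second()
{
    assert(500u > 0);   // removed from the build when NDEBUG is defined
    sleep_ms(500);
}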
