Doesn't separate CPP files for every script bloat the EXE?

Last time I checked, the compiler replaces the line #include <somefile.h> with the actual code found in that file.

ScriptMgr.h = approx 1000 lines (give or take due to comments)

ScriptedCreature.h = approx 368 lines (give or take due to comments)

So that means for every “boss_xyz.cpp” that uses those two headers we’re including approx. 1368 lines of code over and over. Shouldn’t we really be putting more than one boss script in a CPP file to cut down on the bloat in the final compiled EXE?

You can always separate boss blocks with comments and then name the file something like “boss_a.cpp” for all bosses beginning with “A”, “boss_b.cpp” for all bosses beginning with “B”, etc.

That’s just there for the compiler to validate the code. You won’t see duplicated code in the compiled EXE.

I believe most modern C/C++ compilers, anything released since 2000, deal with this for you. The organization of the linked executable bears very little resemblance to the way you collate your source files.

Includes are preprocessor statements that paste a file’s contents into the translation unit being compiled. There is no rule that the included file must be a header, or anything in particular.

So even if compilers didn’t deal with your proposed problem, you could create a single .cpp file that looked like the following:

#include "header.h"

#include "boss_script.cpp"

#include "command_script.cpp"

You wouldn’t list boss_script.cpp, or the others, in CMake, because the linker would complain about duplicate symbols. Historically many programs have used the preprocessor to their advantage to do such things. Two tools that come to mind are lex and yacc.

I don’t recommend abusing the preprocessor like this, though, because it makes debugging and static code analysis difficult: you end up debugging the preprocessed file instead of the actual source. That’s probably one of the reasons there has been a backlash against preprocessors and macros in recent programming language design. They are very powerful, but prone to abuse.

Edit: Bonus - The duplication you mention used to be a major pain for compile times, since each header really is reprocessed for every file that includes it. Today we have precompiled headers to bring compile times back down.

These are a few of the articles I found regarding #include before I posted:

http://www.cplusplus.com/forum/articles/10627/

The #include statement is basically like a copy/paste operation. The compiler will “replace” the #include line with the actual contents of the file you’re including when it compiles the file.

http://msdn.microsoft.com/en-us/library/36k2cdd4%28v=vs.71%29.aspx

#include <stdio.h>
This example adds the contents of the file named STDIO.H to the source program.

The article on precompiled headers says they exist to make compiling faster, but it doesn’t actually say that their contents stay out of the resulting code. Not all the headers in TrinityCore are precompiled, either.

All I’m going to add to this is that if the C++ compiler did not optimize out redundant class definitions from header includes, their programmers would be laughed out of the industry.

Compilers will remove even code that is NOT redundant if it finds the code cannot be reached (i.e. a method that you write but do not actually call from anywhere).

Don’t forget that the compiler COMPILES the code into object code. The linker then comes through and turns it into an EXE. During this process symbols are resolved, tables are set up, and optimizations such as stripping out redundancies are applied.

I guess my brain was just hung up on the “replace the #include” part. In my mind I was picturing the compiler stuffing all the header file contents into the top of each CPP file before it started compiling.