If two C++ source files contain different definitions of a class with the same name, then when they are compiled and linked together, one of the definitions is silently discarded without so much as a warning. For example,
// a.cc
#include <iostream>
#include <string>

class Student {
public:
    std::string foo() { return "A"; }
};

void foo_a()
{
    Student stu;
    std::cout << stu.foo() << std::endl;
}
// b.cc
#include <iostream>
#include <string>

class Student {
public:
    std::string foo() { return "B"; }
};

void foo_b()
{
    Student stu;
    std::cout << stu.foo() << std::endl;
}
When these files are compiled and linked together using g++, both foo_a() and foo_b() will output "A" (if a.cc precedes b.cc in the command-line order).
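For example, with a small driver like the following (main.cc is just a sketch to make the example runnable, not part of the original two files):
// main.cc -- hypothetical driver
void foo_a();
void foo_b();

int main()
{
    foo_a();  // prints "A"
    foo_b();  // also prints "A" when a.cc comes first on the command line
    return 0;
}
// Build and run, e.g.: g++ a.cc b.cc main.cc && ./a.out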
A similar topic is discussed here. I see that namespaces will solve this problem, but I don't understand why the linker doesn't even emit a warning. And what if one definition of the class has an extra function that isn't defined in the other, say if b.cc is updated as:
// b.cc
#include <iostream>
#include <string>

class Student {
public:
    std::string foo() { return "B"; }
    std::string bar() { return "K"; }
};

void foo_b()
{
    Student stu;
    std::cout << stu.foo() << stu.bar() << std::endl;
}
Then stu.bar() works fine. Thanks to anyone who can explain how the compiler and linker behave in such a situation.
As an extra question, if classes are defined in header files, should they always be wrapped in an unnamed namespace to avoid this situation? Are there any side effects?
This is a violation of the one definition rule (C++03, 3.2/5 "One definition rule"), which says (among other things):
There can be more than one definition of a class type (clause 9), ... in a program provided that each definition appears in a different translation unit, and provided the definitions satisfy the following requirements. Given such an entity named D defined in more than one translation unit, then
- each definition of D shall consist of the same sequence of tokens;
If you violate the one definition rule, the behavior is undefined (which means that strange things can happen).
The linker sees multiple definitions of Student::foo(): one in a.cc's object file and one in b.cc's. However, it doesn't complain about this; it just selects one of the two (as it happens, the first one it comes across). This 'soft' handling of duplicate functions apparently happens only for inline functions. For non-inline functions, the linker will complain about multiple definitions and will refuse to produce an executable (there may be options that relax this restriction). Both GNU ld and MSVC's linker behave this way.
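To see the non-inline case, here is a sketch of a variation on the question's files (the out-of-class definition is my own change): if Student::foo() is defined outside the class body, it is no longer implicitly inline, and linking the two object files fails with a multiple-definition error instead of silently picking one copy.
// a.cc -- variation with a non-inline definition
#include <string>

class Student {
public:
    std::string foo();  // declared only; no longer implicitly inline
};

std::string Student::foo() { return "A"; }  // non-inline definition

// b.cc -- same pattern, with its own non-inline Student::foo() returning "B".
// Linking the two now fails with an error along the lines of
// "multiple definition of `Student::foo()'".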
The behavior makes some sense; inline functions need to be available in every translation unit they're used in, and in the general case they need non-inline versions available as well (in case a call isn't inlined or the function's address is taken). inline is really just a free pass around the one-definition rule, but for it to work, all the inline definitions need to be the same.
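That is why the usual pattern, sketched below (the header name is arbitrary), is to define the class once in a header and include it everywhere, so every translation unit sees token-for-token identical inline definitions:
// student.h -- one shared definition; identical tokens in every TU
#ifndef STUDENT_H
#define STUDENT_H

#include <string>

class Student {
public:
    std::string foo() { return "A"; }  // implicitly inline: fine in many TUs,
};                                     // provided the definition is identical

#endif

// a.cc and b.cc both #include "student.h", and the linker is free to keep
// any one copy of Student::foo().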
When I look at dumps of the object files, I don't see anything obvious that explains to me how the linker knows that one function is permitted to have multiple definitions and others aren't, but I'm sure there's some flag or record which does just that. Unfortunately, I find that the workings of the linker and object file details aren't particularly well documented, so the precise mechanism will probably remain a mystery to me.
As for your second question:
As an extra question, if classes are defined in header files, should they always be wrapped in an unnamed namespace to avoid this situation? Are there any side effects?
You almost certainly don't want to do this: each class would be a distinct type in each translation unit, so technically instances of the class couldn't be passed from one translation unit to another (by pointer, reference, or copy). Also, you'd end up with multiple instances of any static members. That probably wouldn't work well.
Put them in different, named namespaces.
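Here is a sketch of that fix applied to the question's files (the namespace names a and b are my own choice):
// a.cc
#include <iostream>
#include <string>

namespace a {
class Student {
public:
    std::string foo() { return "A"; }
};
} // namespace a

void foo_a()
{
    a::Student stu;  // a::Student and b::Student are distinct types,
                     // so the two files no longer clash under the ODR
    std::cout << stu.foo() << std::endl;
}

// b.cc follows the same pattern inside namespace b, and foo_b() then prints "B".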