Warnings are one of the few weapons a compiler has at its disposal to draw your attention to potential defects. They work well until there are too many of them, which is why some developers eliminate them with extreme prejudice. If your team naturally gravitates toward a warning-free codebase, good for you, but not many teams are like that. Let’s talk about a team and codebase where warnings got out of control and there is now a flood of them. At that point the one new warning that just popped up won’t get noticed at all. This effect is amplified by a complex build where many of the warnings are not relevant to the code you are working on, and/or by a build tool that doesn’t distinguish warnings from other output well. Eventually someone gets fed up with the situation and decides to suppress the majority of the warnings so that at least something is visible.
I see you have important warnings in your build output… It would be a shame if someone suppressed them.
You can’t even be mad at that person, because looking at the build output is now arguably much more pleasant. It’s so easy to give up. But quitters never win. Over time more warnings pop up, and no one cares. Some of them turn into defects. And then you spend 20 hours fixing a defect caused by a line the compiler had been complaining about for the past month. Or would have been complaining about, had the warning not been suppressed. Now you get it, quitters indeed never win. But how do we fix it all now? There are several options.
The end goal here is a warning-free codebase with a “Treat Warnings as Errors”-like feature turned on. No one is going to introduce new warnings with that thing on. The problem is getting to that state.
Option 1: Naturally Gravitate to Warning-free Codebase
This works best, but if your current team is not like that, you have to replace it to make it work, which is a costly process. It can be a legitimate option if the team is just bad in general.
Option 2: Thermonuclear War against Warnings
You just flip the switches and start treating all warnings as errors. You have also just stopped all development until all of that is fixed, and that will take a while. You’ll miss deadlines and then revenue. So that’s not a good approach for most projects, but if you are in a position where that is not a problem, totally go for it. Alternatively, new contractual obligations or state regulations may require you to do this whether you like it or not. This can happen in an industry where a small error can lead to catastrophic damage, such as the explosion of an orbital launch vehicle or the loss of human life.
Option 3: Divide and Conquer
This approach is less drastic, but if your codebase is heavily infested, you’ll need to allocate development time for this, otherwise it just won’t happen. Also, ever heard of the serpent Ouroboros? ‘Cause you might have just turned yourself into one. As you fix one section of the codebase after another, warnings may get reintroduced into previously cleaned sections. I’ve seen our architect reintroduce warnings into a project I had cleaned before. Usually this can be prevented, but let’s not forget this aspect when planning our approach. There are multiple ways to divide the codebase.
A: Clean individual projects
This should work with most languages/compilers, since you can usually turn on a “Treat warnings as errors”-like feature on a per-project basis.
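As a sketch of what this looks like in a Visual Studio C++ project, the per-project switch lives in the .vcxproj file (it can also be flipped in the project’s property pages; the element names below are the standard MSBuild ones for the C++ toolset):

```xml
<!-- Fragment of a .vcxproj: treat warnings as errors in this one project only. -->
<ItemDefinitionGroup>
  <ClCompile>
    <WarningLevel>Level4</WarningLevel>
    <!-- Equivalent to passing /WX to the compiler for this project. -->
    <TreatWarningAsError>true</TreatWarningAsError>
  </ClCompile>
</ItemDefinitionGroup>
```

Clean one project, flip its switch, move on to the next one. The cleaned projects can no longer regress.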
B: Fix one class of warnings at a time
Some languages/compilers let you treat specific warnings as errors; Visual Studio has this feature (the /we&lt;n&gt; compiler option, or #pragma warning(error: n)). If that is your case, you can turn into errors only the warning classes you have already eliminated.
A&B: One class of warnings in one project at a time
If both previous options leave sections of the codebase too heavily infested to be cleaned in a single swoop, you can combine them to get a finer division.
Option 4: Order No. 227
This is Stalin’s famous “Not one step back!” order. The point here is to stop introducing new warnings and let developers take care of the existing ones opportunistically as they work on new features and defect fixes. This has the advantage that you don’t have to allocate development time beyond the initial setup. How can we achieve that? We set up the project so that newly introduced warnings are treated as errors while the legacy warnings remain mere warnings. Bonus points if the legacy warning spam in the build log is reduced.
For example, in Visual Studio/C++ we can enable treating warnings as errors but wrap the lines containing legacy warnings in a “just a warning, once” macro. This way we prevent new warnings from being introduced and cut the warning spam in the compiler output to the bare minimum – only the first warning encountered is reported. The legacy warnings are now very noticeable in the code (you don’t have to look at the build output to see them), so developers are more likely to fix them opportunistically. Of course, wrapping all the lines is a job for a program; if it were doable by a human in reasonable time, you would just fix the warnings, right?
#define LEGACY_WARNING(code) \
    __pragma( warning( push ) ) \
    __pragma( warning( once : 4244 ) ) \
    code \
    __pragma( warning( pop ) )

void test(void)
{
    int a = 1;

    // we wrap existing instances:
    LEGACY_WARNING( a *= 2.1; ) // reported
    LEGACY_WARNING( a *= 2.1; ) // suppressed
    LEGACY_WARNING( a *= 2.1; ) // suppressed

    // later, a hasty developer introduces more of the same lossy conversions
    a *= 2.1; // error
    a *= 2.1; // error
    a *= 2.1; // error
}
The blue pill or the red pill
So you have done it; your codebase now compiles without warnings. But do you remember the person at the beginning who suppressed the majority of the warnings? The clean build is likely just an illusion. Time to unsuppress them now and do it all again.
Deprecation
As software evolves, the old is replaced by the new. But sometimes you can’t just remove the old API/method/class, because there’s too much old-but-still-used code depending on it. At the same time, you don’t want to keep using the old stuff in new code, so what now? You mark the old API as deprecated. This causes a warning whenever it is used, but the code still compiles successfully.
Don’t treat deprecation as error
The point of deprecation is to not cause an error, so don’t treat deprecation warnings as errors. If you want errors instead, just remove the old API. It’s your call whether you choose deprecation or removal; use deprecation only if removal is not feasible (too much code would need updating, it would affect third parties, etc.). The decision to deprecate is not necessarily permanent – you can and should remove the old API once removal becomes feasible.
Just because deprecation is a warning doesn’t mean you are not supposed to fix it. Quite the opposite – by fixing deprecation warnings you are making the actual removal more viable. Sometimes a fix may seem impossible; for example, the deprecated method is replaced by a new method with an extra parameter and you don’t have that parameter available at the call site. What now? One solution is to declare the calling method deprecated too and create a new one with that extra parameter. Then suppress the deprecation warning inside the deprecated method, since it is redundant there. You will probably end up with more warnings than before, but it’s still a step in the right direction.
void foo( wchar_t * buffer )
{
    wcscpy( buffer, L"bar" ); // warning - unsafe method
}

// You can’t just fix the wcscpy() call since you do not know the size of the buffer.
// What you can do is define a new foo() and deprecate the old one:

__declspec(deprecated("Unsafe, use the overload with bufferSize parameter."))
void foo( wchar_t * buffer )
{
    __pragma( warning( push ) )
    __pragma( warning( disable : 4996 ) )
    wcscpy( buffer, L"bar" ); // no need to emit the deprecation warning here - the whole method is deprecated
    __pragma( warning( pop ) )

    // alternatively you can avoid having to suppress the warning by using a dirty move :)
    // wcscpy_s( buffer, SIZE_MAX, L"bar" );
}

void foo( wchar_t * buffer, size_t bufferSize )
{
    wcscpy_s( buffer, bufferSize, L"bar" );
}
BTW I am not advocating the use of (buffer, bufferSize)-style functions. Use std::vector or something like that 🙂