I disagree - if something can be checked automatically, then ideally it should be. Sure, it's possible in theory to never make a mistake about which headers to include, just like it's possible to write object-oriented code in C. A particular compiler's headers may happen to satisfy certain dependencies on their own, but if the standard doesn't guarantee them, there should (ideally) be a check so those incidental inclusions don't hide the omission of headers that are actually required.
I think a simple way to check is to search the source file for strings associated with particular header files and then verify that the corresponding header is actually included - for example, if the file contains "numeric_limits", then <limits> should be included. I'm not sure this scheme could be made completely reliable, but tools like lint already flag a lot of perfectly good code, so it wouldn't be any worse than that. This is basically what a programmer does anyway - there's a mental map between tokens and header files. Less experienced programmers just haven't committed the whole map to memory yet.
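That token-to-header map could be sketched as something like this (the map here is deliberately tiny and illustrative, and real tools would need to parse tokens properly rather than do substring matching, but it shows the idea):

```python
import re

# Illustrative (not exhaustive) map from identifiers to the standard
# headers that declare them.
TOKEN_HEADERS = {
    "numeric_limits": "limits",
    "std::vector": "vector",
    "std::cout": "iostream",
    "memcpy": "cstring",
}

def missing_headers(source: str) -> list[str]:
    """Return headers whose associated tokens appear in `source`
    but which are not #included."""
    included = set(re.findall(r"#\s*include\s*<([^>]+)>", source))
    missing = []
    for token, header in TOKEN_HEADERS.items():
        if token in source and header not in included:
            missing.append(header)
    return missing

code = """
#include <iostream>
int main() {
    std::cout << std::numeric_limits<int>::max();
}
"""
print(missing_headers(code))  # flags the missing <limits>
```

Like lint, this would produce false positives (e.g. the token appearing in a comment, or the header arriving via another project header), which is exactly why it belongs in a checker that reports things that _may_ be mistakes.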
Edit: I think that if there exists an algorithm that can determine for certain whether a particular kind of mistake has been made, then ideally the compiler should implement it. Tools like lint should only be needed for things that _may_ be mistakes.