It has little to do with your CPU and more to do with the size of your project. Most rebuilds I work with take around 5 to 10 minutes. I have had some take 30 minutes, but it depends on whether pre-compiled headers are used and how many static libraries are being brought in.
Quote:
Builds for me are otherwise nearly instant, so I have almost no time to cancel. Three seconds is about the longest I have for builds (it may be 2 seconds now with my higher-powered CPU).
In 2005 you had to enable threaded builds and in 2008 it is automatic. I'm assuming as much in 2010. Again, this depends on the size of your project. When you are compiling 10K separate files it will choose to thread the build on its own, and often one of the threads exits prematurely and leaves a file locked. After this happens, all subsequent builds fail and MSVS must be restarted. In rare cases I have had to reboot because it simply never let go of the file... or MSVS would not shut down at all, even when trying to kill it from the Task Manager.
Quote:
I haven't experienced the mt.exe problem.
In multiple-DLL projects, or projects that use third-party libs or source, it is often nice to have them all in the same solution for debugging purposes. Even the simplest of my projects here at home contains at least 15 to 20 separate DLL projects. This is where the IDE gets confused about which include paths to use. 2008 tends to use the startup project's include path for all of the projects in the solution, which is incorrect. Each project in the solution has its own path settings, and each one is opened from a different location. The IDE seems to forget that location and only uses the location of the startup project, which causes the auto-complete in the various project-settings dialogs to be incorrect. Changing the paths so they look correct in MSVS will actually cause the project you are changing to fail to build in its own isolated environment. Very bad mojo. So in short, the auto-complete for include paths in multi-project solutions or multi-solution projects appears to be broken.
Quote:
To change projects, extremely rare that I ever do, I just open up another instance of MSVC.
On a side note, solution folders allow you to add projects from other solutions into your solution, and you can set the entire thing to build on a clean/rebuild. They also help you organize your environment when you have 50 to 100 projects for one build. These are not supported in the Express editions of MSVS.
Filters are the source, include, make, and whatever other folders you have in your project. You can create more if you want, and you can drag/drop files into them. They help you organize your project. Having the ability to lock one so you could not remove/add files in it would be nice. Note that even though they appear to be folders in the IDE, the filters in a project have nothing to do with the filesystem on your system and do not represent actual folders.
Quote:
I don't use filters (didn't know they existed). I put all of my code in one file, with the exception of certain headers with a highly repetitive nature (a long list of defines for example). I don't use C++ and I otherwise haven't had much trouble with intellisense.
IntelliSense is quite iffy in 2003, 2005, and 2008. I've used it a bit in 2010 and it seems to be improved quite a bit, as Elysia has stated.
Quote:
The closest I get to trouble with intellisense is not seeing the variable type when I put the mouse over it, even when I have it correct (e.g., having SampleVar instead of SampleVars wouldn't cause anything to appear; even after changing it to SampleVars, sometimes still nothing does).
They are commands available from the toolbar in MSVS. If you click into one source file, go to a line, then go to another line in another source file and hit Back, it will go back to where you were, and vice versa. Very helpful in debugging and when code spelunking to figure out what is going on. Half the time it ends up in the weeds.
Quote:
What do you mean by "forward and back"?
These are options so that the IDE can execute batch files, scripts, etc. when these events occur. Often they seem to fire at the wrong time or not fire at all. Particularly troublesome in multi-project solutions.
Quote:
I haven't encountered problems with pre-build, pre-link, and post-build events that I'm aware of (could this be why, by simply adding another variable, all drawing stops and adding still more variables makes drawing resume again?).
That one flag would silence all the warnings in 2005 but not in 2008 or 2010. They added more flags and changed the one you speak of to another flag. Deprecating the CRT by default is beyond annoying.
Quote:
I only need to add the "CRT_SECURE_NO_DEPRECATE" type thing once for sprintf and related functions. Once added, I don't have to change it again.
They added a feature in 2005 whereby, when you use an #ifdef with an #else or #elif, the IDE greys out the portion that is not going to be compiled. This is a good way to understand what is going to be compiled and whether or not your pre-processor settings are correct for the build. Often it fails to grey anything out, or greys it all out. Not reliable at all.
Quote:
Defines work without problems for me that I've seen.
And now to the direct coding comment:
While it is true that heavily optimized code can be hard to read and therefore hard to spot bugs in, poorly optimized code can have the same problem. In short, you should never assume your coding style will prevent bugs. The only thing that prevents bugs is thorough testing of your code prior to submission, and/or using test-driven development or other software development practices designed to limit bugs, such as peer code reviews, group code reviews, pair programming, etc. Even with all of that, bugs can and will still happen, albeit much less frequently.
The hardest bugs to spot are the ones that do not have any obvious errors associated with them, i.e., logic errors. These beasties can ruin your day pretty quickly. They like to rear their heads when one specific code path is taken that may only be taken 1 out of 1000 times... and that 1 time brings the system down. Code coverage tools such as Bullseye can definitely help you spot areas of your code that are not exercised much and need to be tested. Most really nasty errors seem to occur in error-handling code. This is because 9 times out of 10 the error never happens. So it is just as important to test the error handling of your code as it is to test the other, more frequently exercised parts. Many times it is the less frequently exercised parts of the code that cause the biggest problems.
Logic errors are great candidates for unit testing. The problem is writing good tests that really cover them. I've been getting knee-deep in unit tests (not test-driven development, as I simply don't appreciate the paradigm at all) for a while now, and it has always been a struggle for me to write good tests that require zero (or close to zero) maintenance. The problem is made more complicated when you consider that most of these tests actually depend on external code (a database, a web service, ...). Stubs and mocks solve that problem, but they don't alleviate the complexity they bring into unit testing, and they sometimes require us to refactor our own code to better accommodate our tests (there must be something quite evil about it when one has to adapt their code to unit testing instead of the other way around).
I'm being stubborn about it, however, and am still trying to learn how to write good tests. But more and more I'm convinced this is a science in itself, and one that is very hard to master.
I agree that writing good test code is difficult. Personally I have not used TDD much yet and am still on the fence about it. I work in graphics and rendering, and using TDD for that just seems strange and/or improbable. I do like automated unit tests and have written many for various components. With those tests I used Bullseye to show me which paths were not taken, which allowed me to tailor my tests to force those code paths to be taken and thus tested. A good unit test is one where one particular piece of functionality is tested and no more. It is quite tedious and time-consuming, and it really makes you think about your code differently. I definitely believe in unit tests, but that said, graphics is extremely hard to unit test and it is not automated. Direct3D can report back that everything is ok and yet nothing will render. The only type of unit test for graphics or renderers I can see being useful is one where dialogs or on-screen text are used (since dialogs do not work well in full-screen D3D or OGL), asking whether or not said functionality was observed to be functioning based on the results on the screen. Automated unit tests for graphics systems are probably next to impossible. I'm sure this is true for other types of components as well, but 9 times out of 10 automated unit tests can be written for most software.
Yep. That is the same page that I read some time ago. I did get my facts confused. It is in 2008, not in 2010, and will be in the next version. Thanks for clearing that up. I will not be moving to 2010 and will probably skip it and go to 2012. It is customary for me to skip an MSVS version which means I upgrade compilers every 4 to 5 years depending on the release time frames.
Were you able to make portable programs with Visual C++ Express 2010? How do you do it? We have had no success to date.
@Bronx68 - STOP IT. GO TO YOUR OWN THREAD ON THE C++ BOARD.
Now, if Microsoft once and for all respects their own development guidelines and actually implements "mouse cursor hiding when typing" in VS for anyone who enables this system-wide setting, that will be the cherry on top. As is, it can quickly become a royal pain in the butt for anyone who uses their mouse as a navigation tool within source files.
Yes, the I-beam cursor has a great way of hiding text, and it is almost always over the text you are writing, since you clicked on that line in order to write on it.