Ansi vs. Wide (std::string/std::wstring)
I have no specific question here, but I would like to know what others prefer/think/do. I believe everyone on earth should learn English, and whenever I create something I do it in English.
Since C++ does not care much about encodings, I have always used std::string, but I started considering wide chars recently and immediately ran into trouble, as usual.
Because I think that support for ANSI is mandatory, only conditional compilation can be considered. No "std::wstring everywhere" solution.
The following are the major annoyances:
1. The first one is lack of generics. I would like to have tstring (and tchar) typedef'ed so that it is std::wstring in a UNICODE build and std::string in an ANSI one. Because I sometimes need C strings, I am forced to create conditional macros for strcpy/wcscpy, strlen/wcslen, etc. There are no predefined ones in standard C++.
2. Lack of wchar_t support. The standard stream-opening functions take only const char*, so if I want to open a Windows file with Unicode characters in its path I still need to use CreateFileW (somehow...?). The same applies to std::exception, whose what() returns const char*.
3. UTF madness. One implementation might use UTF-16 for wchar_t (a 2-byte wchar_t, as on Windows) while another uses UTF-32 (a 4-byte wchar_t, as on Linux). For plain chars they will all use regular ASCII codes. I do not know how this relates to English-only characters (whether they have the same codes everywhere), but I assume that once I decide to use wide chars I can expect anything. This is especially a problem when writing portable file formats.
4. I have no idea whether file streams will have problems reading Unicode/ANSI text files. Will they convert automatically (for example, read 8-bit characters and put them into a 16-bit string)?
I am a bit new to Unicode, and I think I will go back to std::string.