Your terminal must be able to decode UTF-8 as well. Take a look at this file:
Code:
Test: áãçô
On Linux (which usually uses UTF-8) and on Windows, this is encoded as:
Code:
$ hd test-linux.txt
00000000  54 65 73 74 3a 20 c3 a1  c3 a3 c3 a7 c3 b4 0a     |Test: .........|
0000000f
$ hd test-windows.txt
00000000  54 65 73 74 3a 20 e1 e3  e7 f4 0d 0a              |Test: ......|
0000000c
Notice that the "special" accented chars are encoded differently: "á" in UTF-8 is "\xc3\xa1", but in Windows-1252 (a superset of ISO-8859-1, a.k.a. Latin-1) it is "\xe1".
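If you want to reproduce those hexdumps yourself, here is a minimal C sketch (the file names match the ones above) that writes both files byte-for-byte:
Code:
/* Writes "Test: áãçô" twice: once as UTF-8 + LF, once as
 * Windows-1252 + CRLF, matching the hexdumps above. */
#include <stdio.h>

int main(void)
{
    /* UTF-8: each accented char takes two bytes (c3 xx) */
    const char utf8[]   = "Test: \xc3\xa1\xc3\xa3\xc3\xa7\xc3\xb4\n";
    /* Windows-1252: one byte per char, CRLF line ending */
    const char cp1252[] = "Test: \xe1\xe3\xe7\xf4\r\n";

    FILE *f = fopen("test-linux.txt", "wb");
    if (!f) return 1;
    fwrite(utf8, 1, sizeof utf8 - 1, f);    /* -1: don't write the NUL */
    fclose(f);

    f = fopen("test-windows.txt", "wb");
    if (!f) return 1;
    fwrite(cp1252, 1, sizeof cp1252 - 1, f);
    fclose(f);
    return 0;
}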
If you try to print test-linux.txt on Windows, you'll get "Test: Ã¡Ã£Ã§Ã´" (each two-byte UTF-8 sequence is displayed as two Windows-1252 chars). If you try to print test-windows.txt on Linux (or any UTF-8 terminal), you'll get "Test: ����" (the Windows-1252 bytes are not valid UTF-8 sequences, so the terminal shows replacement chars).
PS: By the way, did you ever notice some sites showing those black-diamond question marks (�)? Usually the HTML is configured as UTF-8 (using "<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />") but the page is actually encoded in ISO-8859-1 (or Windows-1252) -- edited, of course, on Windows!
By default, on Linux systems, editors like Vim and Emacs create UTF-8-encoded files (the same encoding as the terminal).
If you are using Windows and saving your file as UTF-8, your chars will be encoded in UTF-8, but this doesn't mean your terminal is capable of displaying UTF-8 encoded chars -- hence the MultiByteToWideChar() Windows API call I showed before.
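For reference, here is a minimal sketch of that idea (not the exact code from before): decode the UTF-8 bytes to UTF-16 with MultiByteToWideChar() and hand them to the console with WriteConsoleW(), which doesn't depend on the console's ANSI code page:
Code:
/* Prints UTF-8 text on the Windows console by converting it to
 * UTF-16 first, bypassing the console's (non-UTF-8) code page. */
#include <windows.h>

int main(void)
{
    const char utf8[] = "Test: \xc3\xa1\xc3\xa3\xc3\xa7\xc3\xb4\n";
    WCHAR wide[64];

    /* cbMultiByte = -1: the source is NUL-terminated; the returned
     * count then includes the terminating NUL. */
    int n = MultiByteToWideChar(CP_UTF8, 0, utf8, -1, wide, 64);
    if (n == 0)
        return 1;   /* conversion failed */

    DWORD written;
    WriteConsoleW(GetStdHandle(STD_OUTPUT_HANDLE),
                  wide, n - 1, &written, NULL);   /* n - 1: skip the NUL */
    return 0;
}
WriteConsoleW() gets the chars to the console correctly; whether the console font actually has glyphs for them is a separate problem.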