What does “Size in TCHARs” mean?

So we all know that char is 8-bit, and wchar_t is 16-bit. (This isn’t always true, but it is on Windows with Microsoft compilers.)

Many (nearly all) of the Windows APIs are implemented under the hood in two versions: one supports Unicode (16-bit wide characters) and the other supports 8-bit national character sets. The two functions actually have slightly different names: typically the 8-bit one ends in “A” and the 16-bit one ends in “W”. But your code typically doesn’t reference either one directly. The function your code calls has no ending letter, and in <windows.h> there’s a #define that points that symbol at the appropriate function name depending on whether the UNICODE symbol is defined. When you declare your strings, you can declare them as type TCHAR, which is #defined to be either char or wchar_t depending on whether UNICODE is defined or not.
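Here’s a minimal sketch of how that mapping looks in practice (MessageBox is just a convenient example of my choosing; any A/W pair works the same way):

```cpp
#include <windows.h>
#include <tchar.h>   // TCHAR-friendly _T() macro

int main()
{
    // With UNICODE defined, TCHAR is wchar_t and MessageBox expands to MessageBoxW;
    // without it, TCHAR is char and MessageBox expands to MessageBoxA.
    const TCHAR* caption = _T("Demo");
    const TCHAR* text    = _T("Hello from a TCHAR build");
    MessageBox(nullptr, text, caption, MB_OK);

    // The suffixed names can also be called explicitly when you want one
    // specific character width regardless of the UNICODE setting.
    MessageBoxW(nullptr, L"Always wide", L"W version", MB_OK);
    MessageBoxA(nullptr, "Always 8-bit", "A version", MB_OK);
    return 0;
}
```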

The original purpose of this was to allow developers to offer two versions of their software, one that is Unicode-compliant and calls the Unicode APIs, and one that is not and just calls the 8-bit APIs, both built from the same source code. This was important in the days when some widely-installed versions of Windows did not support the Unicode versions. [However, now nearly the entire installed base of Windows is Unicode-compliant, so you should be using the Unicode (wide-character) versions everywhere.]

So, the size in TCHARs is the same as strlen() (or sometimes strlen()+1; check the docs) if you’re building for 8-bit characters, but it’s wcslen() (or wcslen()+1) if you’re using wide characters (Unicode). As part of the same generic-text mapping scheme, Microsoft provides _tcslen() in <tchar.h>, which maps to whichever string-length function is appropriate.
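As a quick sketch of what that means for a buffer-size parameter, here is GetModuleFileName (my choice of example; its buffer-size argument is documented as a count of TCHARs):

```cpp
#include <windows.h>
#include <tchar.h>
#include <stdlib.h>   // _countof

int main()
{
    TCHAR buffer[MAX_PATH];

    // The third parameter is a size in TCHARs: the number of characters the
    // buffer can hold, not its size in bytes.
    if (GetModuleFileName(nullptr, buffer, _countof(buffer)) == 0)
        return 1;   // call failed

    size_t lengthInTchars = _tcslen(buffer);                       // characters, no terminator
    size_t sizeInBytes    = (lengthInTchars + 1) * sizeof(TCHAR);  // bytes, including terminator

    _tprintf(_T("%zu TCHARs, %zu bytes\n"), lengthInTchars, sizeInBytes);
    return 0;
}
```

Passing sizeof(buffer) there instead would be wrong in a Unicode build: that’s a byte count, twice the buffer’s actual capacity in TCHARs.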

https://msdn.microsoft.com/en-us/library/vstudio/78zh94ax%28v=vs.110%29.aspx
