Why does MS make the deviation between text and binary File I/O?
is there any logical reasoning for this?
BTW: this is not a bash MS thread, this is a serious question please reserve irrelevant comments.
ADVISORY: This user's posts are rated CP-MA, for Mature Audiences only.
Maybe for unicode implementations? That might be part of it, I guess... hmm... good question.
Originally posted by no-one
Why does MS make the deviation between text and binary File I/O?
is there any logical reasoning for this?
BTW: this is not a bash MS thread, this is a serious question please reserve irrelevant comments.
but doesn't *nix support unicode, without this difference?
Yeah, true...
Originally posted by no-one
but doesn't *nix support unicode, without this difference?
Interpretation of control characters.
Also, Java separates byte streams and char streams, which I find to be a pain in the rear.
>Interpretation of control characters.
This doesn't make sense to me; shouldn't interpretation be left to the application, depending on the file's data?
That's a rhetorical question, not aimed at you.
Not if you want to use a standard set of control characters to achieve something and they don't correspond to the binary values expected by an o/s. It'll need to know that you're using these characters to represent control characters rather than their actual binary values.
Perhaps I've misunderstood your point, but at a lower level (CreateFile, ReadFile, WriteFile, etc) Windows doesn't differentiate between text and binary file I/O anyway. It's just a runtime library feature.
I thought it was because of the carriage return/linefeed thing in DOS. I don't know anything for sure, but I know I had to make some changes to my data compression program to fix some encoding problems.
>It'll need to know that you're using these characters to represent a control character rather than their actual binary value.
Yes, but shouldn't this be done by context? (Not as in device contexts or anything like that.)
Excuse me if I'm being a bit slow here; it just doesn't seem like enough to justify the feature. Am I mistaken, or is it just a mere convenience?
>yes but shouldn't this be done by context?
It could be but you'd have to be able to dynamically change the control characters that applications/os are looking out for. If an application has been hard coded to look for 0xd 0xa to indicate a new line and you write your text file in binary just using the C control char '\n' (0xa) then it can't tell if you wanted a new line or not (0xa by itself could have a completely different native meaning).
You could probably add some runtime code into your own apps to check for either 0xd 0xa or 0xa by itself, but why bother when you can just set a compile time flag? Alternatively you could write all your Windows text I/O in binary using the native control chars, but this would break source code portability, as compilers for other o/s's will still accept the flags for other I/O modes even if they have no effect.
I get it, but I still don't agree with it; it looks to me like pure convenience. It might help with certain issues, but I just don't think it's a necessity. That's just my opinion...
Thanks for the answer(s), though.
The difference is irrelevant. Just treat everything as binary (it is anyway). I mean, most storage devices are not character based devices, they are block devices. Work in big chunks and use RAM for character-based work.
It is not the spoon that bends, it is you who bends around the spoon.