What? Preprocessor? How?
I think he means that MS could have gone the route where defining a preprocessor flag would turn the standard CRT functions into the 'safe' ones. However, this is not possible, since most of the 'safe' CRT functions require an extra count parameter for copying strings/data.
Functions like this are not going to protect you from everything; there are far more subtle bugs, particularly relating to integers, that can bite you no matter what (in almost any language.) Stack overflows in particular are becoming harder to pull off with OS-level security constructs, and while people are finding smarter ways around them, things like W^X are particularly hard to beat, so people must search elsewhere for exploitation. (And while we're on this subject, I personally believe language-enforced security is better than OS-enforced security, e.g. through a type system. You can encode security invariants in your types that *must* be checked by the compiler, and this ensures correctness. Techniques like this have been used in code that, for example, statically checks that all pointers are on word-aligned boundaries, or that no array access can go beyond the bounds of the array - and these things are determined at compile time. This is something you can be sure is correct, thanks to some very important properties of type systems that I won't go into here.)
While I assume most of you are quite up-to-date on the standards, things like this can be easily overlooked:
(The fix is obviously to replace "i >= MAX_LEN" with "i >= sizeof(buf)".)
Code:
#define MAX_LEN 1024

int func(const char *src, int i)
{
    char buf[MAX_LEN];
    if (i >= MAX_LEN)
        return 1;
    memcpy(buf, src, i);
    return 0;
}
Of course, this is an extremely simple example, but very subtle bugs like this have crept up in e.g. OpenBSD at the kernel level. These are things that are not as easily checked statically, especially in a language such as C.
Because I don't use Windows at all, I really can't use these safe functions. Regardless of that though, I can't really see how much they buy you in general; while they may perform some 'extra' checks, and that's certainly useful, many things are still in your hands, and problems like the above can't be checked by the compiler and have it warn you. But they very well could help a lot - all I'm saying I guess is don't place too much of an investment in them (especially because they aren't standard.)
Red herring. And virtual machines are slow -_-
operating systems: mac os 10.6, debian 5.0, windows 7
editor: back to emacs because it's more awesomer!!
version control: git
website: http://0xff.ath.cx/~as/
And it is broken (at least in VS2005): from time to time the program database files get corrupted while /MP is enabled, and the creation of those .pdb files seems to be the thing that complicates parallel builds.
Under Tools/Options/build settings one can find a documented setting to use more than one thread, but unlike /MP this applies only to compilation units from different projects that don't depend on each other.
here is a free VS plugin which promises to do better:
http://www.todobits.es/mpcl.html
I partially agree. But language-enforced security can only achieve so much in a programming language that one wants compiled, generic, and portable.
On the other hand a debug mode operating system is what we are missing.
...
Meanwhile, my gripe with MS "deprecated" features is the "deprecate" wording. The "_s" doesn't bother me. The functions themselves are mostly useless: they help catch a few errors that should be in the mind of the programmer anyway, and not much else. Useful, maybe, if you tend to get sloppy, or are in a hurry, and you are coding for Windows. Useless if you plan on using MSVC for portable code.
I'd be very impressed indeed if these functions would make their way into the C++ Standard in anything more meaningful than a couple of additions.
Originally Posted by brewbuck:
Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.
But as the implementor of the C library, MS are allowed to use _x names - that is exactly who those names are for, so that they do not collide with user-provided codez.
--
Mats
Compilers can produce warnings - make the compiler programmers happy: Use them!
Please don't PM me for help - and no, I don't do help over instant messengers.
We agree on that. I think you misread, although I may not have been clear enough either.
What bothers me is the "deprecate" wording. As if Microsoft could deprecate the C++ standard on which it must base its compiler. The "_s" suffix meanwhile doesn't bother me at all.
It's a suffix. As such, actually, it's not within MS's rights to use those names. But I can't blame them too much for not caring - it's not like the POSIX standard ever cared about adding to the C standard headers.
All the buzzt!
CornedBee
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
Okay, I admit, I do not get it. As far as I can tell MAX_LEN == sizeof(buf), since sizeof(char) is guaranteed to be 1, so what is wrong with the original code?
Look up a C++ Reference and learn How To Ask Questions The Smart Way
Originally Posted by Bjarne Stroustrup (2000-10-14)
A type system and a garbage-collected language can really go a long way toward helping a *lot* of things, though. Something I have never quite understood is why implementers of high-level languages don't target something that's everywhere, like ISO C: you get the compiler-verified static type checks, and the resulting code works everywhere. Currently I'm working on a compiler for a HLL (Haskell in particular) that does in fact target ISO C, meaning applications it compiles can run just about anywhere. Having a defining feature like this is really nice, because it becomes far more feasible to write critical code and get good static guarantees at the same time, since the generated code will run anywhere (think embedded devices, or even a kernel!)
But this isn't some kind of wailing on C for having a fairly weak type system or anything, and I think this little rant is off topic anyway (I just mentioned it as an aside.)
By default, ints are treated as signed integers - meaning they can be negative. If you pass a negative integer to the original code, it will bypass the >= MAX_LEN check, since that's expanded to simply '1024' by the preprocessor. It will then call memcpy with i as the size parameter, which converts it to an unsigned integer (memcpy's 3rd parameter is a size_t), and in the conversion the resulting value will be *huge*, so memcpy copies way too much data (and yes, in practice things like this can be feasibly exploitable.)
The fix works because sizeof(buf) yields a size_t, which is unsigned - the standard defines that a comparison between a signed and an unsigned integer proceeds by converting both to unsigned and then comparing. The negative value becomes unsigned (and huge), so the function will just 'return 1'.
Ah yes. However, I think that the real fix is to change the parameter to be an unsigned int instead, or perhaps more appropriately, size_t.
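The suggested fix would look something like this (a sketch of the earlier example, with a source pointer added for illustration and the length parameter changed to size_t):

```c
#include <string.h>

#define MAX_LEN 1024

/* With a size_t parameter, a 'negative' length can't even be
 * expressed: a caller passing -1 produces a huge unsigned value
 * at the call site, which the bounds check below rejects. */
int func(const char *src, size_t i)
{
    char buf[MAX_LEN];
    if (i >= sizeof(buf))
        return 1;
    memcpy(buf, src, i);
    return 0;
}
```

Now -1 from a careless caller converts to SIZE_MAX before func even runs, and the i >= sizeof(buf) check catches it, instead of it sneaking past a signed comparison.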
What does C being a low level language have to do with a compiler that goes from HLL -> C? Assembly is pretty low level too, but it's not stopping anybody from targeting it. And why would C++ being higher level make it easier to write a compiler from HLL -> C++?
Yes, but that was simply an illustration of the point that really nasty bugs like this can crawl around, and 'safe functions' like this can't do anything to help you; in many cases, no analysis the compiler implements can help you either (like I said, bugs of this vein have crept into the worst places, like the OpenBSD and Linux kernels.)