Thread: Java vs C to make an OS

  1. #31
    Disrupting the universe Mad_guy
    Join Date
    Jun 2005
    Posts
    258
    Quote Originally Posted by DavidP View Post
    Regardless of what JVM you are using, and regardless of what platform you are developing for, Java does not let you have hands on access to memory in any way.

    Therefore, even if you compiled Java code to a flat binary, instead of interpreted byte code, the fact is that at the code level, when you wrote it, you still didn't have that hands on access to memory.

    So, in order to write any type of OS in Java, there would have to be a specialized library written which allows you to access those protected things that Java normally keeps from you.
    What point are you trying to argue here? I'm not saying it's practical or the best choice, I'm just saying it's not impossible. It's provable by the most fundamental theories and elements of computer science (see: Turing machines.)

    I would expect no less of having to bootstrap your own memory management libraries and the like in C, same with other parts of the OS. I'm not arguing any of that (I even said that almost no OS qualifies as an OS if you have to write it in one single language to fit that specification.) So what point are you trying to make? Or are you simply trying to make the task seem as masochistic as possible?
    operating systems: mac os 10.6, debian 5.0, windows 7
    editor: back to emacs because it's more awesomer!!
    version control: git

    website: http://0xff.ath.cx/~as/

  2. #32
    Registered User
    Join Date
    Feb 2006
    Posts
    65
    Quote Originally Posted by zacs7 View Post
    The Solaris front-end (Java Desktop System) is written entirely in Java.
    Actually, the JDS is simply a branded version of the GNOME desktop, which is written entirely in C, bundled with programs most of which are also written in C (StarOffice, Evolution, Mozilla). Despite the name, it has nothing to do with Java.
    http://www.sun.com/software/javadesk...em/details.xml

    Besides, if it's the professor who claims that you can make an operating system in Java, shouldn't it be him/her who has to prove it? He might as well argue that unicorns exist and let you prove that they don't.
    Last edited by joni; 05-21-2007 at 02:18 AM. Reason: added link to JDS details

  3. #33
    Registered User
    Join Date
    May 2007
    Posts
    147
    Interesting debates running in this thread...just had to chime in myself

    There was an experiment at some Scottish university some years ago (sorry, references lacking; not much was ever really released about it) that produced an object-oriented OS based on an object-oriented CPU. C couldn't have been used as an application development tool because the OS exposed its API as a set of objects. C++ didn't exist at the time, as I recall.

    The project evaporated into obscurity, but the concepts were interesting, and demonstrated it could be done. Now, there's .NET, and one could argue the viability of C++ there (MS had to mangle C++ into C++/CLI after the failed 'managed C++' they produced).

    I had the great privilege of meeting and working with one of the handful of people involved in developing the Alto at Xerox (the first GUI-based system, which inspired both Jobs and Gates in their own attempts). It seems it was largely based on and written in Lisp, an object-oriented Lisp if this man's description is to be believed (I'll withhold his name as he's not consented to my discussion of him in public).

    A lot of this question about Java (a two-part question; here I address Java applied to OS development) really boils down to what you define as an operating system. The popular operating systems (including the now-defunct OS/2, AmigaOS, BeOS, etc.) expose their API as a set of C-oriented functions (perhaps some used an alternate calling convention, like the Pascal convention, but still a set of function calls). A Java OS would probably expose the API as a set of Java objects, meaning the only language suitable for development of applications would be Java or something compatible with Java. Attempting to fit Java with C-like memory exposure would be sacrilege to Java programmers, and largely useless, violating one of the tenets that Java developers identify as the very reason for the language's superiority. That limit alone, however, would deprive the platform of applications developed in any other language, and while it's debatable whether that would really matter, the result would be an abandonment of all legacy applications. The platform would be starting from 'under' the level of Linux today. It may prove an interesting experiment, but I doubt the result could escape academia.

    I worked on a few proprietary systems where the OS was written in Pascal. That was long before Windows was what it is, and since it competed with the Unix of the day, there really wasn't much interest, but at least its OS exposure was similar enough that tools from other systems could be used, even C compilers.

    To hear Torvalds say it, C++ is crap. I'm only repeating what I read. His opinion, drawn from an interview, was that C++ couldn't be used for an OS, largely because, in his unsubstantiated opinion, it would be slow and buggy. He lamented that C++ programmers were (I paraphrase for lack of memory of the exact word) snobs who should admit they're really using C.

    So much for the presumed authority of the great Linus. He may know what he's doing in C, but it seems he really doesn't understand C++, because while I don't recall the rest of the nonsense he muttered in the interview, I do recall repeatedly wondering how on earth anyone could say such things with a straight face and conviction of meaning.

    It is from that attitude that most have thought Windows was written entirely in C. Be assured, the guts of Linux are in C, as Linus apparently would have it no other way. I suppose, too, it depends on what you call an operating system. Gates would have us believe IE was an integral part of the OS, and I have little doubt much of that is C++ code. Since the OS is exposed, until .NET, as a set of C function calls (given the caveat of the calling convention for a particular version), I submit that those portions we generally regard as OS code within the kernel are largely C, and almost certainly the early versions of NT were all C, because the C++ compilers were so weak at that time, and code written in 1994-96 is quite different from the C++ we expect today.

    .NET is an attempt, among a great many other things, to expose the operating system as a set of objects rather than a set of functions. Laudable, but better if the objects were C++ objects, IMO - or at least there should have been thin C++ wrappers for that kind of exposure. I'm with Stroustrup on this point.

    Now, I switch gears to the claim that Java is the best language. Ooooo, them's fightin' words in some quarters!

    There's little doubt that C++ is a tad messy, which is largely a relic of history (its basis upon C, chosen to invite an existing population of developers into a new paradigm). The sentiment already expressed here, that 'the best tool for the job' is up to you, most certainly applies. However, a blanket claim that Java is the best language is hardly backed up by history, example or experiment.

    RAII is one prime point that Java missed entirely. As an OOP concept, I find it irreplaceable. Its absence from C# is lamentable, though there is a nearly satisfactory workaround; if I'm considered a member of the jury, I still haven't voted for C# as a superior language.

    Aside from RAII, though, Java has much to offer. Indeed, the absence of pointers and of memory management issues tied to the machine, which is the C legacy imposed upon C++ (some might prefer to say shared with C++), is laudable. There are suitable C++ means by which similar effects are achieved, but as Java fans will no doubt counter, they're optional. Well, curse words are optional in spoken languages - would an insulting idea be poorly conveyed without them? I think it's still possible to sting one's opponent with language clear of the FCC rules, but there is a deliciousness to their use when the time is right, isn't there? I'm told Italian is the best for curses, but I'm not so educated. That famous line in the Matrix 2 about cursing in French certainly tickles my fancy (the villain claimed it was like wiping one's, ahem, behind with silk).

    The pejoratives of C++ aren't so nasty, but they do have side effects; seasoned C++ developers respect that and avoid them where prudent. Smart pointers, STL containers and a good knowledge of development patterns go a long way toward making solid code. Bugs are of many kinds, though, and Java has no immunity.
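    To make that point concrete, here is a minimal sketch (illustrative only; the Widget type is invented for the example, and it uses the modern std::unique_ptr spelling) of how smart pointers and STL containers remove the manual bookkeeping:

        #include <memory>
        #include <string>
        #include <vector>

        // Hypothetical type, purely for illustration.
        struct Widget {
            std::string name;
            explicit Widget(std::string n) : name(n) {}
        };

        std::vector<std::unique_ptr<Widget> > make_widgets() {
            std::vector<std::unique_ptr<Widget> > widgets;
            widgets.push_back(std::unique_ptr<Widget>(new Widget("first")));
            widgets.push_back(std::unique_ptr<Widget>(new Widget("second")));
            return widgets;   // ownership moves out; nothing to delete by hand
        }   // if anything above throws, whatever was already built is released

    No delete appears anywhere, and no cleanup path has to be written or tested; that's the kind of solid-by-default code the paragraph above is describing.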

    Java proponents also point to its 'pure' object-oriented nature as a point in its favor, but I submit that after an application object is opened, there's no reason a novice can't continue to write more procedural code than good design demands.

    I'd retort with a simple question: what ambitious, successful, popular applications were written in Java, and which ones were written in C++? That list alone should say something, though just what it says will still leave the professor's nose in the air in a pose of obstinate defiance as he fashions a retort backed by his authority as the one in charge of the current forum. It wouldn't be the first time a professor waxed for a career about something with which few others ever agree.

    Java is superior if your intent is to provide rapid development results from engineers less competent than would be required to do the same thing in C++. If the Java-produced application were compared by consumers to the C++ version, I dare say the consumer couldn't explain why they prefer the C++ version (assuming it was built by competent developers), but they probably would prefer it. The lower the ambition of the product compared, the less likely the consumer could tell any difference. It's in products of higher complexity, where feature and performance expectations are critical, that the Java version would fail to shine. This is less due to the language itself than to the exposure it gives the developer to the underlying operating system and the speed it runs at for a given hardware spec, which itself is highly dependent on the quality of the JVM.

    In that way, C# is also superior to C++ - if your intent is to provide productive tools to less competent, and therefore less expensive, personnel. An entire industry depends on this sort of thing - I call them "bag of fields" applications. I did that work for years, years and years ago. I certainly don't profess to demean those thus currently employed - I was one of them for many years - but you must admit, it's quite different to be a developer on a team making products as ambitious as, say, 3DS Max or Photoshop than it is to be a developer creating forms upon forms for a corporate IT department.

    C++ is for those that operate without a net (as I write that I'm also thinking without .NET, but I'm just making myself chuckle). C is for those walking without a tightrope and without a net - floating in air, so to speak. For that, I suppose, Torvalds is owed his due. I wrote in C for years before C++ appeared, and I personally would never go back. I've toiled in Java and in C#, but only when required. Once you develop a considerable expertise in an area, you just tend to dive in deeper, becoming even more specialized. It's like the difference between a surgeon and a physician, or the general physician and the hematologists that are curing my son. Specialties can make for fascination in one's career. Torvalds is an example, I'm sure.
    Last edited by JVene; 05-22-2007 at 04:55 PM.

  4. #34
    Lean Mean Coding Machine KONI
    Join Date
    Mar 2007
    Location
    Luxembourg, Europe
    Posts
    444
    He probably meant that Java is compiled into bytecode instead of machine code and therefore needs a virtual machine (a piece of software) to interpret and execute it, or to compile it into machine code.

  5. #35
    Registered User
    Join Date
    Jan 2007
    Posts
    330
    Quote Originally Posted by JVene View Post
    Interesting debates running in this thread...just had to chime in myself

    There was an experiment at some Scottish university some years ago (sorry, references lacking; not much was ever really released about it) that produced an object-oriented OS based on an object-oriented CPU. C couldn't have been used as an application development tool because the OS exposed its API as a set of objects. C++ didn't exist at the time, as I recall.
    Would love to know what is meant by an object-oriented CPU.
    I'm pretty sure C could have been used though, because you can write OO code in C easily.

  6. #36
    Lean Mean Coding Machine KONI
    Join Date
    Mar 2007
    Location
    Luxembourg, Europe
    Posts
    444
    Quote Originally Posted by KIBO View Post
    I'm pretty sure C could have been used though, because you can write OO code in C easily.
    No. According to the definition,

    An object-oriented programming language (also called an OO language) is one that allows or encourages, to some degree, object-oriented programming techniques such as encapsulation, inheritance, interfaces, and polymorphism.
    C is definitely NOT an object-oriented language.

  7. #37
    Registered User
    Join Date
    May 2007
    Posts
    147
    Would love to know what is meant by an object oriented CPU.
    Since the project never reached far beyond Scotland, there's only scant material, and I don't think any of the terms and concepts were formalized; everything was experimental.

    From what I got out of the paper I read, the idea was like this.

    The CPU was implemented partly in a logic array, which means most of it was simulated by circuits designed to study example CPU configurations. The language of the CPU was grouped into objects, like machine I/O, math operations, string operations, etc.

    The paper didn't include many coding examples, but I recall that each process or thread had its own object held in a table within the CPU, such that what we currently think of as registers were never actually pushed out and popped back in a task switch. It's as if what we think of as the registers of an x86 CPU were an object, and the circuits which operate upon them (the ALU, for example) switched between these objects to service threads.

    The PowerPC does something like this - perhaps we might shrug a shoulder and say it's just a machine instruction that 'does a task switch'.

    However, from the design of the CPU language, through its general plan for circuitry, out to the language used to create applications, objects were the key theme throughout.

    There was a section on a VM management object that described VM trackers that were part of each thread object. It was like making a CPU specifically for an operating system based on an object-oriented language related to Smalltalk - it might even work for Java.

    This project was probably started around or not long after John Cocke 'invented' RISC computing, which actually turned out to be more important in the short term than an object-oriented CPU, at least at that time. It may be that the idea was so far ahead of what was possible (remember, these were days when 10 million transistors on a chip was either new or still in the future) that it is only now that such a notion would be practicable.

    The paper's conclusion did convey the sense that the project was deemed successful, in that development reached the point of a functioning operating system, an application development language, the machine itself, and a handful of user applications - which ones, I don't recall.

    Cocke's idea was to consider the electronics of the CPU and the operation and intelligence of the compiler as two ends of a single engineering puzzle, which was the opposite of previous practice for decades (slow progress, too). So we currently have Cocke's notions taken out of IBM and moved into non-RISC mainstream designs - the two most important aspects of which were the idea of increasing circuit complexity so that one clock tick does more real work, and that of making the compiler an integral part of how that's exploited. When Cocke started, I think it was '78, the Z80 didn't have half a million transistors. By the time Cocke's idea became a product, the CPU wasn't a single chip - it was two (the CPU was in two parts, not dual CPUs) - and chips had maybe 1 or 2 million, somewhere around the time this object-oriented CPU was being experimented upon.

    That project died out after, presumably, the graduation of the doctoral candidate who drove it around that time. Two million transistors would never even come close for a 32-bit version of this design notion. Now, however, with the 400 million plus we can put on a core, someone might, someday, prove the concept worthwhile - but then, the monopoly that is Microsoft/Intel would have to either be willing to change or let go, or it will have to wait for a 'revolution' of its own self-proving.

    Cocke's idea proved practical because it could be implemented within existing streams of industrial/technical practice.


    KIBO: Please pardon me for an opposing view, but....

    KONI: I second your opinion about

    I'm pretty sure C could have been used though, because you can write OO code in C easily.
    I hear this now and then, and it has never made sense to me.

    Obviously the objects CAN be written out for C to digest, that's how C++ was first implemented (and there are still C++ compilers available that do this).

    The word easily is the part that I just can't swallow. I see examples of how some of the tenets of OOD and OOP may be implemented in C, but consider these two...

    There is without a doubt no possible way to implement RAII in C, unless the concept is trivialized down to an integer on the stack or something like that.

    There is no way to implement an equivalent to smart pointers that provides any leverage remotely analogous to Java's references, which most C++ smart pointers do.

    These two alone are capable of magnifying the solidity of code.

    Now, I actually agree one can generate objects of a rudimentary kind, but as the complexity of design increases, it becomes a burden, not a leverage, to continue with it in C.

    I've yet to find anyone provide convincing examples that demonstrate the contrary.

    To that extent, there is a way to code objects in assembler - the way the compiler does it when it processes C++ code, for example


    Joni:

    Besides, if it's the professor who claims that you can make an operating system in Java, shouldn't it be him/her who has to prove it? He might as well argue that unicorns exist and let you prove that they don't.

    You know, that's entirely true - and it never really occurred to me.


    Then, too, I read, here and there, that Java WAS a system, embedded within the host system. Think about it - Java presents to application code an API that's common to all platforms, and perhaps it parasitically depends upon that host to do the mundane work, but there's no reason a Java-specific CPU or computer (which I recall was planned at one point), which would only run Java applications BTW, couldn't be devised (ironically, the JVM would probably be implemented in C++ in a commercial implementation, just like the best JVMs are now).

    Wasn't that the whole point all along though? Not just run on a virtual CPU, but run as if in a virtual operating system/environment, independent of the host? To me, making that into an operating system, depending on one's definition of that, is trivial - just take the current endpoints of the attachment to the host, like Linux, and write drivers from there to the real hardware. Done!

    Of course, it's Java. Whoopty do, nothing new - same as if it were in Linux or Windows, so what's that worth on the market?

    Now, in a smart phone, sure!

    Refrigerator controller, yeah!

    Don't they have these things already?

  8. #38
    Reverse Engineer maxorator
    Join Date
    Aug 2005
    Location
    Estonia
    Posts
    2,318
    Quote Originally Posted by Mad_guy
    No, you just can't read:
    Oops!

    But still, critical kernel components shouldn't be made in high-level languages.
    "The Internet treats censorship as damage and routes around it." - John Gilmore

  9. #39
    Registered User
    Join Date
    Sep 2001
    Posts
    752
    Quote Originally Posted by JVene View Post
    I hear this now and then, and it has never made sense to me.

    Obviously the objects CAN be written out for C to digest, that's how C++ was first implemented (and there are still C++ compilers available that do this).

    The word easily is the part that I just can't swallow. I see examples of how some of the tenets of OOD and OOP may be implemented in C, but consider these two...

    There is without a doubt no possible way to implement RAII in C, unless the concept is trivialized down to an integer on the stack or something like that.

    There is no way to implement an equivalent to smart pointers that provides any leverage remotely analogous to Java's references, which most C++ smart pointers do.

    These two alone are capable of magnifying the solidity of code.

    Now, I actually agree one can generate objects of a rudimentary kind, but as the complexity of design increases, it becomes a burden, not a leverage, to continue with it in C.

    I've yet to find anyone provide convincing examples that demonstrate the contrary.

    To that extent, there is a way to code objects in assembler - the way the compiler does it when it processes C++ code, for example
    This hardly seems fair. You're creating a definition of OO in terms of features based on
    1) Not being able to do them in C.
    2) Them being good.

    Why even mention RAII? You've crafted an argument one could use to say Java can not perform OO programming.

    The claim was that you can easily perform OO programming in C. OO programming boils down to
    [1] Abstraction -- providing some form of classes and objects.
    [2] Inheritance -- providing the ability to build new abstractions out of existing ones.
    [3] Runtime polymorphism -- providing some form of runtime binding.
    I pulled this definition from http://www.research.att.com/~bs/oopsla.pdf; it begins with a good discussion on defining OO, and the importance of having a historically reasonable definition of words if you are trying to communicate.

    All of these can be done in C. All three of these are easy to do. C code which does any of these properly is not by nature complicated or hard to follow. The reason C++ compilers could compile out to C is because it is for the most part a simple transformation.

    No, C is not an OO language. That is a straw man. It wasn't designed around these ideas. Polymorphism, for example, has no native means of being expressed in C. It is the programmer's job to build that sort of functionality out of what C does provide. I am not claiming C is an OO language. KIBO never made that claim either.
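    As a rough illustration of [3] (a sketch only; the Shape/Circle names are invented, and this is the usual hand-rolled vtable idiom rather than anyone's production code), runtime binding in C looks something like this:

        /* "Base class": data plus a table of function pointers. */
        struct Shape;
        struct ShapeOps {
            double (*area)(const struct Shape *self);
        };
        struct Shape {
            const struct ShapeOps *ops;   /* the hand-rolled vtable */
        };

        /* "Derived class": inheritance by embedding the base as the first member. */
        struct Circle {
            struct Shape base;
            double radius;
        };

        static double circle_area(const struct Shape *self) {
            const struct Circle *c = (const struct Circle *)self;
            return 3.14159265358979 * c->radius * c->radius;
        }
        static const struct ShapeOps circle_ops = { circle_area };

        /* Runtime polymorphism: the caller never names the concrete type. */
        static double total_area(const struct Shape *shapes[], int n) {
            double sum = 0.0;
            for (int i = 0; i < n; ++i)
                sum += shapes[i]->ops->area(shapes[i]);
            return sum;
        }

        double example(void) {
            struct Circle c = { { &circle_ops }, 2.0 };
            const struct Shape *list[1] = { &c.base };
            return total_area(list, 1);
        }

    Whether wiring the ops table by hand counts as 'easy' is exactly the disagreement in this thread, but the transformation is mechanical, which is why CFront-style compilers could emit C.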
    Callou collei we'll code the way
    Of prime numbers and pings!

  10. #40
    Crazy Fool Perspective
    Join Date
    Jan 2003
    Location
    Canada
    Posts
    2,640
    >All three of these are easy to do

    Well that certainly depends on your definition of easy. They are certainly easier to do in Java.

  11. #41
    Registered User
    Join Date
    Jan 2007
    Posts
    330
    Quote Originally Posted by QuestionC View Post
    This hardly seems fair. You're creating a definition of OO in terms of features based on
    1) Not being able to do them in C.
    2) Them being good.
    Indeed, not being able to write smart pointers has nothing to do with OO; it has more to do with features like automatic destructors and copy constructors etc. that C++ has and C doesn't.

    You can do data hiding, inheritance and polymorphism in C; in fact, even low-level driver code I've seen uses this through function pointers.

  12. #42
    Disrupting the universe Mad_guy
    Join Date
    Jun 2005
    Posts
    258
    Quote Originally Posted by maxorator View Post
    But still, critical kernel components shouldn't be made in high-level languages.
    Aside from perhaps a speed difference (generally negligible in most cases, I'd wager), why not? High-level languages offer plenty in themselves; Haskell is probably one of the best examples of a high-level, functional language that offers programming precision and correctness while staying simple in essence. Yes, you may lose 2 seconds of run time, but you'd be willing to throw away benefits such as code correctness, reusability and modularity? For two seconds? I'm pretty sure the users wouldn't notice a difference anyway (also, in a microkernel architecture such as MINIX, or an extremely simple yet flexible kernel such as Plan 9's, a lot of your code is in userspace anyway, so you don't have much to lose by writing your user-space system components in a higher-level language.)


    Again, I'm not exactly saying things such as "operating systems should be written in Haskell" or anything like that; but I think you're looking at the subject too narrowly, or simply not looking or caring about the other side of the fence. It'd be worth it to look into your options, rather than disregard them just so you can go with the norm.
    operating systems: mac os 10.6, debian 5.0, windows 7
    editor: back to emacs because it's more awesomer!!
    version control: git

    website: http://0xff.ath.cx/~as/

  13. #43
    Registered User
    Join Date
    May 2007
    Posts
    147
    Why even mention RAII? You've crafted an argument one could use to say Java can not perform OO programming.
    I think you exaggerate the interpretation of my statements.

    I didn't say something along the lines that it's not OOP if there's no RAII.

    We're not writing a dissertation here; these are relatively short, casual posts compared to something like that.

    I think it safe to assert that RAII is an OOP concept. It's true that you can't use RAII in Java, and for that Java is lacking an important feature. That doesn't mean Java isn't object oriented. RAII is an illustration of a point I think you missed in my statement:

    The word easily is the part that I just can't swallow.
    I asserted several times that I recognize how object oriented techniques can be implemented in C (I've read the papers on OOPC too - I even found a few on OO assembler - really!), but I do not accept that it's easy to do in C, and since Java has become part of the comparison here, RAII is not easy to do in Java. For every situation in which RAII is used to advantage in C++, that same act can be performed in Java but without OOP leverage - it must be done explicitly, manually (i.e. the hard way).

    As the ironically named Perspective replied,

    Well that certainly depends on your definition of easy.
    I read in one of Stroustrup's interviews (maybe it was just a reply to something on his website; I must admit I'm not sure where, and I'm liberally paraphrasing here from an old memory) that his design motivation is one of supporting the thought process of the programmer, providing tools that aid in the psychological act of thinking about programming. To that end, his aim was to provide leverage for the expression of one's thoughts in code. Please don't look for a quote of him in that - I'm re-summarizing my memory, which itself is a summarized paraphrase of Stroustrup's point.

    My point is illustrated by, not defined by, RAII. Here's a concept that can only be given leverage by an object that has a definite initialization and a definite destruction sequence. Java objects don't have that, and for that matter, C# objects can only provide this through a small extra bit of effort. As such, there's a lack of leverage on this one point when one writes in Java. That doesn't remove the OO moniker from Java (note, for example, that I referred to them as Java objects); it just means that there's one example of something it doesn't support within the language.

    Obviously you can still initialize a resource and shut it down in Java, C or most any language. Without the leverage of a deterministic destructor, however, there is a mental effort required of the programmer to manage the resource, which opens the possibility of an error.

    Obviously a C++ author is free to avoid using RAII, and as such can disregard this leverage and engage in the same mental effort of performing the destruction of a resource manually. It is an extra effort in Java, as it is in C. The option of gaining the leverage exists in C++, and it is a reduction of mental effort that allows one's attention (and time) to be devoted to other matters - making the management of those resources easier (and the resulting code more solid).
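    A tiny sketch of that leverage, for concreteness (the File wrapper is invented for the example, written in pre-C++11 style; std::fstream already does this job in practice):

        #include <cstdio>
        #include <stdexcept>

        // RAII: acquisition in the constructor, release in the destructor.
        class File {
            std::FILE *f;
            File(const File &);              // not copyable (declared, never defined)
            File &operator=(const File &);
        public:
            explicit File(const char *path) : f(std::fopen(path, "r")) {
                if (!f) throw std::runtime_error("open failed");
            }
            ~File() { std::fclose(f); }      // deterministic, automatic release
            std::FILE *get() const { return f; }
        };

        void use(const char *path) {
            File in(path);
            // ... work with in.get(); any early return or exception from here
            // still runs ~File(), so the handle cannot leak.
        }

    The Java or C version of use() has to remember the finally block or the matching fclose() on every exit path; that is the manual effort being described above.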

    Here again, I return to my previous self quote - it's the easy part I just can't accept.

    I assert the reverse. The more OOP techniques one attempts to deploy when writing in C, the more mental effort that programmer must exert in order to manage them because the language doesn't support those features.

    We are free to exert what mental effort we wish, and in my repeated disclaimer I agree it's certainly possible to implement objects when writing in C. My point is that these are manual efforts, requiring more attention from the programmer than counterparts would require if the programmers availed themselves of OOP features inherent in an OOP language.

    At some point along that continuum, from rudimentary object-oriented concepts to the more advanced and subtle, easy becomes not so easy, and finally not easy at all.

    RAII is one point where easy no longer applies, though some may argue it's trivial for something like a file handle. As the notion of shutdown becomes more involved - like a threaded system that requires an epilogue of activity to properly conclude a concept, like a cache or something - the advantage of RAII returns more powerful benefits.

    Smart pointers are an example of RAII, from the viewpoint that an acquisition of memory is made that eventually must be released. While RAII and the use of smart pointers are not the definition of OOP (and I wasn't attempting to elucidate a definition), both can only be implemented using OOP techniques, and the features of OOD needed to implement them easily are lacking in C. I agree that one could create a struct in C that represents a reference-counted pointer (in fact, I've seen them in a few C frameworks). However, there's nothing automatic about it - everything from the automatic initialization to NULL to the automatic destruction at the last reference's release requires manual intervention, which is to say mental effort on the part of the developer. In C frameworks where I've seen this, it is entirely transparent to the C programmer, which does exhibit the concept of encapsulation, but here again - I never said it wasn't possible, and I didn't say it couldn't be done. What I said was (or at least a reasonable re-interpretation of my quote) that it isn't easy. In C++, it can be implemented so as to be automatic. There is nothing easier than something that's automatic.
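    Roughly what that manual C counterpart looks like (all names invented; this is a sketch of the pattern, not any particular framework's code):

        #include <stdlib.h>

        /* C: every step is the programmer's responsibility. */
        struct RefBuf {
            int   refs;
            char *data;
        };

        struct RefBuf *refbuf_create(size_t n) {
            struct RefBuf *b = (struct RefBuf *)malloc(sizeof *b);
            if (!b) return NULL;
            b->refs = 1;
            b->data = (char *)malloc(n);
            return b;
        }
        void refbuf_retain(struct RefBuf *b)  { ++b->refs; }
        void refbuf_release(struct RefBuf *b) {
            if (b && --b->refs == 0) { free(b->data); free(b); }
        }
        /* Forget one refbuf_release() on an error path and the buffer leaks;
           call it once too often and it is freed while still in use. */

    In C++ the count and the final release ride along automatically with a reference-counted smart pointer (boost::shared_ptr in the terms of this thread, std::shared_ptr today): copying the pointer retains, destruction releases, and no call can be forgotten.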

    I do think it's fair, when analyzing the ease with which OOD techniques are implemented, to use examples like RAII. If I were attempting to define a language as OOP or non OOP, then your complaint would be quite fair. I wasn't.

    I'm talking about support within the language, the leverage the language features provide, which, by example, illustrates the ease or lack of ease with which a concept that's dependent upon OOP can be implemented.

    To re-illustrate what I said:

    Now, I actually agree one can generate objects of a rudimentary kind, but as the complexity of design increases, it becomes a burden, not a leverage, to continue with it in C.

    Burden, as in, not easy. Leverage, as in making it easy.

    I'm measuring the mental effort required to implement the OOP concept, which is itself a difficult thing to measure, but that ease probably relates most directly to how much time is absorbed implementing a concept in one manner as opposed to another, the probability of reducing or increasing errors in the results, and a host of other potential metrics.

    I can use a wrench to drive a nail into wood - but that doesn't make the wrench a hammer, even though I may be able to use it as one. Even when we all agree the wrench isn't a hammer, many of us have had an occasion to use a wrench, or some other non-hammer object, to drive a nail - using the wrench AS a hammer.

    My point wasn't about the wrench being a hammer. My point was that using a wrench as a hammer isn't easy, but sometimes nails are not that hard to drive into soft wood.

    As the force required to drive the nail increases, the wrench becomes a less suitable hammer, and this is my point.

    OOPC is possible, and I never asserted it wasn't. Simple OOP in C may be simple to illustrate and deploy. But the more examples of OOD you attempt to implement in OOPC, the more work you'll have to exert to make things work, and that to me means it's not AS easy, and there comes a point in that comparison where it simply is NOT easy.


    Let me extend this discussion with yet another assertion about OOP in C++ vs C. I take this again from a Stroustrup interview, but from a more recent memory.

    A matrix library was written as a template class in C++. A counterpart was written in C. These were doctoral candidates or other PhDs like Stroustrup, so we must assume they were written by competent hands.

    The C++ version outperforms the C version, and that might be surprising. Stroustrup explained why. It appears that when creating generic code, it's possible to describe a problem such that there's greater opportunity for the compiler to 'understand' the code and optimize it. Without the example in hand I can't say why, but his assertion was that the C compiler didn't get as much information as the C++ compiler did, by virtue of the way the problem is expressed. As such, the compiler optimized the code better in the C++ version.

    Here again, I think it safe to say, it was easier for the compiler to optimize - because the expression of the problem was better. Certainly, the C version was possible.

    Another illustration that's much clearer, sorting. The C runtime qsort was compared to a sort in the STL. The STL sort is faster in some situations, and here again I rely on Stroustrup's explanation. qsort requires a function pointer as the comparison predicate. Sometimes that's the only way to describe the predicate. However, there are many sort situations that depend on primitive types, in which case the comparison is trivial. The STL provides a means of describing the sort predicate such that the compiler can optimize that into a primitive comparison without generating a function call - inlining the comparison. qsort's dependence on a function pointer makes this optimization impossible.

    Even Stroustrup pointed out that it's certainly possible to write a qsort that specifically optimizes the comparison. Certainly a CFront output would have done so from the template example, too. The point is, with the STL, the optimization is easy. To make one's own qsort optimization is not.

    The expression of the problem in the STL gives the compiler information unavailable to qsort. It was the summary point of Stroustrup's response that there's a richer expressive potential in C++, which gives more information to the compiler for optimization, and I submit that makes things a little easier, too.
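    A condensed sketch of that qsort/STL comparison (no timing numbers reproduced here; the point is only what the compiler can see at the call site):

        #include <algorithm>
        #include <cstdlib>

        // C: qsort can reach the predicate only through a function pointer.
        static int cmp_int(const void *a, const void *b) {
            int x = *(const int *)a, y = *(const int *)b;
            return (x > y) - (x < y);
        }
        void sort_c(int *v, std::size_t n) {
            std::qsort(v, n, sizeof *v, cmp_int);   // an indirect call per comparison
        }

        // C++: the element type and the comparison (operator< on int) are visible
        // to the compiler at the call site, so the comparison can be inlined.
        void sort_cpp(int *v, std::size_t n) {
            std::sort(v, v + n);
        }

    Both produce the same sorted array; the difference described above is that sort_cpp hands the optimizer the whole comparison, while sort_c hides it behind a pointer.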
    Last edited by JVene; 05-23-2007 at 04:25 PM.

  14. #44
    Registered User
    Join Date
    Jan 2007
    Posts
    330
    Quote Originally Posted by JVene View Post
    Let me extend this discussion with yet another assertion about OOP in C++ vs C. I take this again from a Stroustrup interview, but from a more recent memory.

    A matrix library was written as a template class in C++. A counterpart was written in C. These are doctoral candidates or other PhD's like Stroustrup, so we must assume they were written by competent hands.

    The C++ version outperforms the C version, and that might be surprising. Stroustrup explained why. It appears that when creating generic code, it's possible to describe a problem such that there's greater opportunity for the compiler to 'understand' the code, and optimize it. Without the example in hand I can't say why, but his assertion was the C compiler didn't get as much information as the C++ compiler did, by virtue of the way the problem is expressed. As such, the compiler optimized the code better in the C++ version.

    Here again, I think it safe to say, it was easier for the compiler to optimize - because the expression of the problem was better. Certainly, the C version was possible.

    Another illustration that's much clearer, sorting. The C runtime qsort was compared to a sort in the STL. The STL sort is faster in some situations, and here again I rely on Stroustrup's explanation. qsort requires a function pointer as the comparison predicate. Sometimes that's the only way to describe the predicate. However, there are many sort situations that depend on primitive types, in which case the comparison is trivial. The STL provides a means of describing the sort predicate such that the compiler can optimize that into a primitive comparison without generating a function call - inlining the comparison. qsort's dependence on a function pointer makes this optimization impossible.

    Even Stroustrup pointed out that it's certainly possible to write a qsort that specifically optimizes the comparison. Certainly a CFront output would have done so from the template example, too. The point is, with the STL, the optimization is easy. To make one's own qsort optimization is not.

    The expression of the problem in the STL gives the compiler information unavailable to qsort. It was the summary point of Stroustrup's response that there's a richer expressive potential in C++, which gives more information to the compiler for optimization, and I submit that makes things a little easier, too.
    Well, first of all, in the games programming world the STL is avoided like the plague because of its insistence on doing things by value rather than by reference.
    Yes, I believe right away that in some cases inlining and templates can generate faster code, but real-world applications don't end there.
    In a real C++ app someone will at some point introduce exception handling around the matrix code and generate unnecessary copy constructor calls, because the STL only works by value. And changing everything from raw pointers into smart pointers won't help performance either; many, many calls are generated that in C never would be. But you never see that in those examples, because they just want to make the point that it *can* be faster. And if you want fast matrix math, use Fortran, which is what is still used a lot in the scientific world.

  15. #45
    Registered User VirtualAce
    Join Date
    Aug 2001
    Posts
    9,607
    There is so much misinformation in this post it is absurd.

    Well, first of all, in the games programming world the STL is avoided like the plague because of its insistence on doing things by value rather than by reference.
    And what would make you an authority on this topic? So instead of using a pre-built, pre-tested, and pre-optimized library a game company would code their own container classes? Even when the more time they take the more money they spend? I highly doubt it. Reinventing the wheel is not what game companies will do and nearly every single job application for game programming mentions familiarity with the STL.

    Fact is you absolutely 100% cannot code an OS in any high level language. It is not technically possible, feasible, nor worth debating about. Before talking about operating systems perhaps it would do some good to actually research them and how they are coded. This is not simple programming and it most certainly goes far beyond simple items such as how a language allocates memory or does not allocate memory. There is so much to an operating system I could not even begin to scratch the surface here.

    I recommend buying a lot of books about operating system design, downloading the Intel/AMD tech refs (if that is your target platform), and reading the sections on systems programming and the different ways to go about handling process address spaces, memory management, device management, hard drive management, process control, multitasking approaches, task management, and about a ton of other concepts that are... well, far beyond my experiences thus far.

    There are those here that have tried to code small OSes with some degree of success. I, myself, never got beyond the bootstrap process, because loading the kernel into memory would have only been compatible with FAT32. Since then, more information has become available about different file systems, so one could feasibly support NTFS or some other type of file system. Microsoft WILL charge royalties, however, if your system can read FAT32 and/or NTFS and you attempt to release or sell the OS to the public. This was announced by MS some time ago and is probably just one more way they ensure they have a monopoly in the retail PC OS market.

    If you really want to see how difficult and low-level an OS is I recommend looking at Linux and the Linux kernel. The goal in any OS is to get from assembly to C as fast as possible. Usually after you load your kernel from the drive into some portion of memory you would then begin to load OS drivers followed by user drivers, etc, etc. You would need to provide some type of way to call those drivers and the driver manufacturers would need to know what function you would be calling or what interface to use so that their drivers can be called at the appropriate time by the OS. There is the topic of process and thread privileges, protected mode BIOS support, protected mode interrupt support (it is possible and at times needed), setting up the IDT, LDT, and GDT, and about a ton of other 'things' most of this thread has completely skimmed over.

    No amount of waxing eloquent or hogwash is ever going to change the fact that this is simply not possible in ANY mid- to high-level language, much less Java. You use the right tool for the job or you don't get the job done. You could have Java as the GUI, but you most certainly would not be able to use Java as-is to get any type of OS code to do anything close to managing the entire system. Doing that would require a nearly complete rewrite of Java, and it would also need to run without a virtual machine. How can you run a virtual machine when your OS is the 'machine'? Cannot be done. Java would run on top of your code and interface with your kernel, but more likely with the higher portions of your system, like your underlying API, and not the kernel at all. You need the power of assembly and C to create an OS. Good luck even getting to C, since this would require a completely new C compiler to even get it to work with your OS.
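    For a flavor of how low-level that assembly-to-C hand-off is, here is a minimal sketch of one of the chores mentioned above, building and loading a flat GDT (a sketch only, assuming 32-bit x86 and GCC-style inline assembly; the far jump to reload CS, the IDT, paging and everything else are omitted):

        #include <stdint.h>

        struct GdtEntry {
            uint16_t limit_low;
            uint16_t base_low;
            uint8_t  base_mid;
            uint8_t  access;        // present bit, privilege level, segment type
            uint8_t  granularity;   // top limit bits plus flags (4 KiB pages, 32-bit)
            uint8_t  base_high;
        } __attribute__((packed));

        struct GdtPointer {
            uint16_t limit;
            uint32_t base;
        } __attribute__((packed));

        static GdtEntry   gdt[3];
        static GdtPointer gdtr;

        static GdtEntry make_entry(uint32_t base, uint32_t limit,
                                   uint8_t access, uint8_t gran) {
            GdtEntry e;
            e.limit_low   = limit & 0xFFFF;
            e.base_low    = base  & 0xFFFF;
            e.base_mid    = (base >> 16) & 0xFF;
            e.access      = access;
            e.granularity = ((limit >> 16) & 0x0F) | (gran & 0xF0);
            e.base_high   = (base >> 24) & 0xFF;
            return e;
        }

        void init_gdt() {
            gdt[0] = make_entry(0, 0x00000, 0x00, 0x00);   // mandatory null descriptor
            gdt[1] = make_entry(0, 0xFFFFF, 0x9A, 0xCF);   // ring-0 code, flat 4 GiB
            gdt[2] = make_entry(0, 0xFFFFF, 0x92, 0xCF);   // ring-0 data, flat 4 GiB
            gdtr.limit = sizeof gdt - 1;
            gdtr.base  = reinterpret_cast<uint32_t>(gdt);
            asm volatile("lgdt %0" : : "m"(gdtr));
        }

    Nothing like this exists in Java's model at all, which is the point being made: some layer underneath any managed language still has to be written this close to the metal.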
    Last edited by VirtualAce; 05-24-2007 at 09:09 AM.
