And, of course, if you're a good coder, you'd figure that errors such as syntax errors would be avoided altogether.
But error is a human thing... it doesn't exist for machines.
Teacher: "You connect with Internet Explorer, but what is your browser? You know, Yahoo, Webcrawler...?" It's great to see the educational system moving in the right direction.
Hi
It's interesting that the one person here who has used a similar system says it's not annoying. And, yes, I would love to have the computer telling me all the time if I have got something wrong.
> If you are a competent programmer and know what you're doing
Humans make mistakes. You make it sound like a really good programmer never makes any. I have been programming for 20 years now, and I make mistakes all the time: everything from simple spelling errors and missing semicolons, to stupid things like if (a=b).
Why should the computer wait until I compile to tell me about things like that? Why not just be helpful and tell me now?
Hugo
I wouldn't be surprised if, as new heuristics are developed, processing power increases, and smarter people write compilers, the future turns out to be in dynamic compilation.
One of the problems of modern major systems is precisely compilation time. It is sometimes almost prohibitive, to the point of coders hacking around the need to recompile, or deciding to delay maintenance tasks.
Somewhat similar techniques are already in use, like JIT compilation, which shifts part of the compilation burden onto the application user. Although the emphasis there is on improving application performance, not minimizing compilation times, there is no denying that compiling to bytecode is simpler than compiling to machine code. For one, some optimization techniques can be left out of the initial compilation pass.
If you ask me, you bet I would like to have my files compiled "as I type" (loosely speaking, of course. Some in here seem to be taking this too literally).
I think it's obvious to anyone who gives a little thought to this that it would be a wonderful weapon (and a job savior on the extreme cases).
Does anyone really believe the opposite? And why?
Last edited by Mario F.; 11-26-2006 at 07:35 AM.
Originally Posted by brewbuck:
Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.
> One of the problems of modern major systems is exactly compilation times
Compiling individual source files takes almost no time at all if you've made a decent enough job of modularising your code and setting up your project.
If you have a
#include <everything.h>
and it keeps changing, causing the whole system to be recompiled (like what can happen with poorly setup precompiled headers), then you have a problem with your development strategy which software isn't going to fix.
I'll just stick with pressing ctrl-f7 every so often to save and compile the current file, and deal with any issues which might be present.
Syntax highlighting - OK I suppose, it just about works reliably enough after about 10 years of development.
Auto completion - I can type faster than that even on a fast machine. It's about half way between annoying and useful. That's if it has enough sense to pick the right context. By the time I've finished with the alt-ctrl-shift magic to make it do the right thing, I could have typed it in.
When you're writing in Word, do you slavishly pay attention to all the spelling and style nags whilst you're writing, or do you just forge ahead to get everything written "first draft", then go back and start fixing things up?
I can hardly wait for the "My IDE won't let me type in any more code" support calls
> I reckon there's a bit of fuddyduddyness going on here. The fear of the new
Or more likely, just see the smoke and mirrors for what it is.
Looks pretty, but is it actually solving any real problems?
No one's stopping you from implementing it yourself. Isn't that why you're here, to validate your big idea?
You asked for opinions, you got 'em.
To me, it will be nothing more than another set of training wheels on a bicycle. Yes I suppose showing syntax errors to "hunt and peck" noobs might be worth something.
But for the pros, it had better do something pretty darn useful, otherwise it will just get turned off!
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
I've used Java and Eclipse, and I love the compile-as-you-code feature. It really is a time-saver. For best results, use a good development machine.
Hi Salem,
Thanks for the opinions
> Syntax highlighting - OK I suppose, it just about works reliably enough after about 10 years of development.
Huh? I saw it working just fine many years ago on our old 086 in Turbo Pascal.
> Auto completion - I can type faster than that even on a fast machine. It's about half way between annoying and useful.
Agreed.
> When you're writing in Word, do you slavishly pay any attention at all to all the spelling and style nags whilst you're writing, or do you just forge ahead to get everything written "first draft", then go back and start fixing things up.
A: Yes. And
B: It's a completely different context. I can personally understand a misspelt word, but my compiler will not tolerate it in the slightest.
> I can hardly wait for the "My IDE won't let me type in any more code" support calls
I don't understand how you get from Compile As You Type to that. But it's really not what I had in mind. The idea is that the computer should help you in any way it can, and in a non-irritating way. So, I'm not talking about pop-up messages you have to click on, and not talking about the irritating paperclip.
What I do mean is that, if the computer could feasibly see an error which I have missed, why should it sit there, smug as a bug, waiting until I compile? Why not tell me now? Especially when it's wasting billions of instructions per second doing nothing.
At the moment, the computer is a little like one of those horrible jobsworth people at an embassy, who watches you fill in a form, then sends you a letter two weeks later saying you've filled it in incorrectly and will have to come back and try again.
> No one's stopping you from implementing it yourself. Isn't that why you're here, to validate your big idea?
It's something I would like to try. But I'm in the middle of validating several of my other big ideas, and I shouldn't really be taking on any new ones, especially ones outside my expertise. What I was hoping was for someone else to exclaim "What a good idea, I'll implement it myself" - then I could reap the benefits of someone else's hard work.
> But for the pros, it had better do something pretty darn useful otherwise it will just get turned off!
Well, as someone who does this professionally, I do find myself wanting this every time I see this:
Syntax error: (373) missing semicolon
Build Failed, 1 errors, 0 warnings
(Seriously, you dumb computer. You couldn't have told me that ages ago??)
Hugo
Originally Posted by Wraithan:
yeah, rather than spitting out a block full of errors you get non-intrusive underlines where your code is incomplete. Works with the CDT as well.
> Huh? I saw it working just fine many years ago on our old 086 in Turbo Pascal.
Well, for one, I would like code completion to go one step further and start coloring based on scope. swap() and MyClass::swap() are two very different things.
> Compiling individual source files takes almost no time at all if you've made a decent enough job of modularising your code and setting up your project.
The easy argument. The "if you had done it right you wouldn't need this" type of argument. Unfortunately, the real world out there is full of the "I did it wrong and now it's either too late or too expensive to fix it" kind.
> Unfortunately, the real world out there is full of the "I did it wrong and now it's either too late or too expensive to fix it" argument.
I agree. There's too much fantasy idealism in the computer science world. Too much talk about Big-O when, in the real world, it's frequently irrelevant. And too much about "doing it properly" when, in the real world, you're dealing with a 15-year-old system that you didn't start and now have to maintain.
The answer from some programmers to many problems seems to be: "well you should just be a better person".
> yeah, rather than spitting out a block full of errors you get non-intrusive underlines where your code is incomplete.
Great, this is exactly what I'm talking about. Now, who fancies implementing it?
Hugo
Last edited by Rocketmagnet; 11-27-2006 at 09:59 AM.
> Great, this is exactly what I'm talking about. Now, who fancies implementing it?
But that has little to do with a possible compile-as-you-type feature. I thought (and that has been my understanding from the start of this thread) that the general idea was to speed up the compilation process.
What is being discussed instead is a context-sensitive parser that can detect code errors as you type. Not that I see anything wrong with that; I just don't see how it fits into a discussion about compilation.
It would also be highly tied to the editor interface. I'm not sure how it could be easily integrated into an existing IDE without the makers of the IDE implementing it themselves.
> But that has little to do with a possible compile-as-you-type feature. I thought (and that has been my understanding from the start of this thread) that the general idea was to speed up the compilation process.
Well, the original idea of Compile-As-You-Type (and also Lint-as-you-type) was twofold: to speed up the process, but also to provide the user with the information gained from that process as soon as possible, using things like syntax colouring, underlining, and unobtrusive messages relating to the code under the cursor.
For example, the user might as well be told sooner, rather than later that they have missed a semicolon, written if(a=b), or done something else stupid.
As well as code errors, the user can also be told about other fruits of the compilation process. Admittedly, these may only apply to people writing very time-critical code, e.g. on microcontrollers. But I think they would provide other valuable insights into what's going on under the hood, especially in C++, where there are a lot of gotchas. I'm often not 100% sure whether my compiler is creating a temporary or not, and I have to go digging in the asm output.
I think it would be very useful to see the results of the first pass. For example, exactly what sequence of events happens when I do:
a = b->f() + c;
What temporaries were created? Were any constructors called, etc.? And what about the optimisation? Did that line optimise nicely? What if I try this instead:
a = b->f();
a += c;
Does that look better?
I know, I know. If I were a super-geek, I'd just know all this stuff off the top of my head. Perhaps 5% of programmers are good enough to really know this stuff. Maybe 20% are pretty good, but have a lot to learn about the results of their code. There are probably loads of mediocre programmers around. As well as being useful, compile-as-you-type could help us programmers learn what's really going on when we program. I keep bothering my guru friends with questions that could easily have been answered by the compiler, if it just knew how to tell me.
Hugo
> It would also be highly tied to the editor interface. I'm not sure how it could be easily integrated into an existing IDE without the makers of the IDE implementing it themselves.
A good point. I'd wondered about this, and I think it's a hard question. Here are two possibilities:
1: The compilers and IDEs could agree on a standard file format for communicating.
2: Compilers could be loaded into memory, and passed code through standardised API calls.
But, yes, there are hurdles to overcome.
Hugo
> What temporaries were created? Were any constructors called etc? And what about the optimisation? Did that line optimise nicely?
Hmm... the language specification answers all those questions except the last. And the problem with the last is that you can't expect trustworthy results. On the other hand, compilers these days are smart enough not to give you many options regarding optimization. I understood the point of your example; it was meant just as an illustration. But the problem is that the compiler will not care about such small changes. It will optimize them equally. Most optimization decisions are taken at a more complex level, involving a larger span of code and the context in which it is being evaluated.
What I was expecting this conversation to be about was dynamic compilation models for C++ (akin to those existing for other programming languages). I found interesting the concept of my file being compiled as I type (which I don't think any language supports) into, perhaps, bytecode, considerably speeding up the build process once I hit the button.
> Hmm... the language specification answers all those questions except the last.
Aah, the old "RTFM". In this case, the manual is more than 400 pages long. I'd be pretty impressed by anyone who could answer any such question I had about the language.
It's best to ask an expert, and what better expert than the compiler itself? Compare:
"compiler what happens if I do this ... or this ... thanks"
with
"hmm, what do I have to look up, read and interpret to find out if this ... might be better than this ... ?"
I partly agree about the compiler being so good at optimising. The day I threw away my Pentium assembler datasheet and stopped writing Pentium-optimised asm was the day my compiler totally shamed me in a speed contest. I had spent the week perfecting my inner loop, re-ordering code, calculating cycle times, and measuring them to check. For a laugh, I decided to see how much faster it was than the original C code. Guess what: the C was faster.
I don't have an intimate knowledge of compilers, so I could easily be wrong about this, but I was under the impression that the compiler can't always do the optimisation you might expect. For example when working with pointers. Sometimes the compiler simply can't know you're not doing something daft with the pointers which would mess up its optimisation.
Hugo