What is the meaning of stable applications anyway?
E.g. AppXXX v999.009 STABLE
How about a stable algorithm?
E.g. Someone said, "merge-sort is fast, but unstable"...
Stability for applications means that the application has (mostly) been thoroughly tested before releasing it and has no known issues.
Stability with regard to sorting algorithms means algorithms that preserve the ordering of elements with equal values.
Suppose you are sorting objects on the basis of one of their attributes, say sorting students by height. If you have two students with the same height, an unstable sort may change their relative order: if A comes after B in your original data and both have the same height, your final sorted list might have B coming after A. A stable sort, on the other hand, guarantees that this ordering is preserved; A will still come after B.
And remember, version numbers are usually just a marketing ploy.
Bump it to version 2.0 and wow, I've got to have that -- it's only $200 extra.
I'd have to disagree, at least as far as libraries/SDKs are concerned. The major version number usually indicates a consistent interface which developers can rely on not to change between minor releases. In other words, version 2.1 (for instance) should be link-compatible with version 2.0. Makers of such libraries will usually roll the major version number when the changes to the library are so extensive that code linking against that library will no longer work properly with the newer version.
In systems with a major.minor.revision scheme, rolling the revision number usually indicates only bug fixes or performance improvements. Rolling the minor number might add new interfaces but leave existing interfaces backward compatible. Major number changes would indicate linkage-breaking changes.
Particularly in the open source community, major versions smaller than 1 indicate experimental code whose interfaces can't be relied on for stability even between minor releases.
Yes, hence the "usually". I'd say that in a lot of cases, take "DVD Burner Pro 3.0" (I made it up), they're used mainly for marketing. I'm talking about software where marketing looms large: commercial software, not open source software, libraries, or APIs, where version numbers usually mean something.
This makes the most sense with something like a school class list: list the children in alphabetical order first; then, if you re-sort them by birthday, two children with the same birthday will still appear in alphabetical order. With an unstable sort, the order between "equals" is undefined. Of course, if no two children share a birthday, the difference never shows.
The definition of stable application is in a grey area, to say the least.
There's an unwritten rule of thumb that defines a stable application as one that remains usable for an extended period of time, and across several start-and-close cycles. As such, a stable application is one that you fire up on your computer and can use for hours without any noticeable degradation of performance, crashing, or halting. It is also an application that you can start and shut down several times during the course of an operating system session without any of those side effects.
Applications that are self-contained and make only minimal use of operating system functionality are good candidates to be stable. So, certainly, are applications that are well developed: code that contains memory leaks, or that accesses memory that doesn't belong to it, is sooner or later going to crash or degrade the operating system, and with it the application itself.
...
So we come to "stable" as seen on many software releases like the one you pointed out, and wonder about the validity of such a claim (I sometimes do, at least). Here the intent is as PING described. But many of these releases are anything but stable; they are just the authors' admission that said version of their software marks a point in the development process and is... OK for general use.
The point being that "stable" here does not define the version as a good, finished product, but as a version the authors feel should be available for consumption: maybe the nasty bugs have been ironed out, the application performs well and, more likely, the features added since the last stable version are now usable and mostly (if not entirely) bug-free, and there is a need to show them off.
During the development cycle of many applications (and you can witness this in Open Source software) markers are often placed ahead of time: "we will release a new version every 6 months", or "with this new feature we will release a new version". Of the two, I personally argue against the first, as it is a clear capitulation to marketing and is particularly confusing in an Open Source environment where you don't have absolute control of the development cycle; you, the project manager, depend on the goodwill and free time of the programmers. More often than not the so-called stable version is anything but, and it's version X.01 that is. I mean, look to the version after the release version for a truly stable one, no matter what they say.
Take Ubuntu. They have this strange release scheme (not even mentioning the names they give to each release) in that they force themselves to put out a new release every 6 months. Looking at the SVN, these releases are called stable. And yet, in most circumstances it's only after a couple of months of updates that I feel safe finally upgrading. Actually, I don't; I prefer to stick to their LTS releases (Long Term Support, meaning that particular release will be supported for 5 years)... but the same happens to those. Because they obviously cannot provide stable releases while forcing themselves onto a schedule, the best release is always the next one, or the one after that. Or, in the case of Ubuntu, the release plus a month or two of updates.
Another example (this one already a mantra) is KDE releases. Stable x.0 releases of KDE are never stable; a smart KDE user will wait for version x.1.
...
So... why this long post? Because I took a shower, got dressed, and I'm waiting for my wife to come home from visiting her mom, and she's taking forever. We are supposed to go to the movies, and she'll make me miss this session; the next one is 4 hours from now. *annoyed*
And to let you know that the meaning of "stable" in release cycles is zero, nothing, nada. The meaning of a stable application is probably more interesting and open for debate.
I kind of agree with brewbuck, except when it comes to Microsoft. If they used a normal versioning system, few of their production pieces of software would actually have merited a full version number upon release. DOS was stable enough. Windows 95 was too. Encarta... From there on out it gets kind of blurry. Some versions of Office seem worthy of being considered their own version, while most versions of everything else are quite incomplete. MSVS 2008 is fine in my opinion, though. And so is Office 2007, though the need to move everything into an entirely unfindable location was a tad unnecessary.
There's also a distinction between internal and external versions. It's not uncommon for a company with a commercial product to employ external version numbers for public appeal, but internal, more consistent ones to keep developers on the same page. I've worked at such a company and, so long as you don't have to communicate much with groups outside of R&D (i.e. marketing or customer support), it's a pretty effective solution.
Office is buggy, buggy, buggy, and has been for at least all the versions I can remember. Perhaps it is stable, in the sense that it works fine for daily use without problems, but it certainly is not of quality. Some things will still crash it, and I can reproduce those every time. And, of course, there are some things that trigger a bug, and I can reproduce those every time, too.
It seems to me like Microsoft is ignoring them because they are such obvious issues.
Stop it. Most huge software solutions have hundreds or even thousands of little bugs, no matter who develops them. If an application consists of many different parts, they must all be compatible with each other. Think of it like this: if 5 teams all have to play every other team, the number of games is pretty small (10), but with 500 teams it balloons to 124,750. The number of pairwise interactions grows quadratically, and so does the number of potential problems and bugs as the project gets big.
If it is usable, it's stable, that's my point of view...
"The Internet treats censorship as damage and routes around it." - John Gilmore
Is it stable? Is it quality?
And I seriously don't care if they are small-fry bugs; they can take a little time and put out a minor release to fix them.
And I don't know about you, but I wouldn't call an application that crashes stable. No matter what the cause.
And small, unreproducible bugs are one thing, but minor bugs that can be reproduced are bugs that can be fixed. If there is not enough time in the current major release, then they should be fixed by the next major release or in a minor release.
Nobody will pay for a release that fixes some misspelled words on page 186 of the User Guide.
People will pay for a new release if it contains some new, highly needed features. So the Product Manager always has to choose whether the time is spent on developing new features or on reproducing, fixing and testing old known bugs that add nothing to the functionality of the product... and that will probably be irrelevant after two more major releases anyway...
Talking about bugs, I'm just curious about console and flash games (yeah, mostly flash).
When I play NES, Sega, PSX, PS2, PSP, XBOX, etc. games,
I never find anything strange (bugs) in them.
How do they make something so bug-free?
And I think there is no big difference between games and applications; they are all the same: programmed.