Your views on computer specs


  1. #1
    Epy
    Epy is offline
    Fortran lover Epy's Avatar
    Join Date
    Sep 2009
    Location
    California, USA
    Posts
    960

    Your views on computer specs

    Currently doing some pointless internet arguing with two idiots on Autodesk discussion groups about computer specs. Among the nonsense I'm hearing are gems like "XP is low-end, Vista is faster" (which has been shown to be false) and "Vista uses 1.5GB of RAM just from booting to the desktop" (yeah, right).

    What is your philosophy on computer specs? Are you one to try and squeeze every dollar and get by on as little as possible? Do you like to get some of the "necessities" and call it good? Do you go all out and buy the newest processor, which will no doubt be obsolete and a quarter of the price in a year?

  2. #2
    Registered User whiteflags's Avatar
    Join Date
    Apr 2006
    Location
    United States
    Posts
    7,673
    Quote Originally Posted by Epy View Post
    What is your philosophy on computer specs? Are you one to try and squeeze every dollar and get by on as little as possible?
    Absolutely. Buying your own hardware is supposed to be cheap, and it is, but I don't feel good dropping $1000 or more on many things, including computers. It's not like I make a lot to begin with.

    As far as processors go, I'd like to get off single-core soon, but that is the extent of my thoughts. I looked at some AMD processors which are like $60, so that's pretty good. They're pretty power-efficient as well.

  3. #3
    Malum in se abachler's Avatar
    Join Date
    Apr 2007
    Posts
    3,189
    I'd switch to AMD if they would stop using a different socket from Intel. I'd stay with Intel if they stopped charging $300 for $5 worth of silicon. I used to use Cyrix processors specifically because they were cheaper, but both AMD and Intel now use proprietary sockets to ensure vendor lock-in, which is bad for the customer and, by extension, bad for AMD and Intel. As it is, I don't think I'll be buying a new computer for a couple more years, and the system I'm on now is already several years old. Back when I was working steadily, I used to buy a new computer every 9-12 months; that just isn't economically feasible with the inflated prices for components. In many cases you can't build a computer cheaper than you can buy a prefab anymore, especially for the higher-end equipment. Much of this is due to Intel's and AMD's illegal tiered pricing.
    Last edited by abachler; 12-16-2009 at 01:10 PM.
    Until you can build a working general purpose reprogrammable computer out of basic components from radio shack, you are not fit to call yourself a programmer in my presence. This is cwhizard, signing off.

  4. #4
    Registered User
    Join Date
    Dec 2006
    Location
    Canada
    Posts
    3,183
    I buy $50 CPUs and overclock them to the level of $300 CPUs. My current C2D E6300 (stock 1.8ghz) is running at 3.3ghz, with a $15 aftermarket heatsink/fan.

  5. #5
    Epy
    Epy is offline
    Fortran lover Epy's Avatar
    Join Date
    Sep 2009
    Location
    California, USA
    Posts
    960
    Quote Originally Posted by cyberfish View Post
    I buy $50 CPUs and overclock them to the level of $300 CPUs. My current C2D E6300 (stock 1.8ghz) is running at 3.3ghz, with a $15 aftermarket heatsink/fan.
    You can really overclock it that much with just an aftermarket heatsink and fan? I'd think you'd need liquid cooling for that much.

  6. #6
    Registered User jeffcobb's Avatar
    Join Date
    Dec 2009
    Location
    Henderson, NV
    Posts
    875

    IMHO

    As a programmer I would suggest dual-core at a minimum. Getting practice writing and debugging multithreaded/multiprocess systems is crucial in this day and age. Use it to get experience in what is the next major frontier in computing and software engineering...

    my 0.02

    Peace
    Last edited by jeffcobb; 12-16-2009 at 06:35 PM. Reason: s/code/core
    C/C++ Environment: GNU CC/Emacs
    Make system: CMake
    Debuggers: Valgrind/GDB

  7. #7
    Malum in se abachler's Avatar
    Join Date
    Apr 2007
    Posts
    3,189
    Quote Originally Posted by jeffcobb View Post
    As a programmer I would suggest dual-code at a minimum. Getting practice writing and debugging multithreaded/multiprocess systems is crucial in this day and age. Use this to get experience that is the next major frontier in computing and software engineering...

    my 0.02

    Peace
    Heh, I've been doing multithreaded/multiprocessor code since the 486 days. I still have a copy of Windows NT floating around here somewhere. Nowadays the skill to learn is GPGPU programming. Massively parallel processors are the way to go; too bad Intel and AMD haven't caught on to that fact yet. Something needs to be done about the lackadaisical attitude the memory manufacturers have towards innovation as well. Memory needs to be faster than any single processor, not slower.

    I'd rather have 256 x 486s on a chip than 4 Core2s, and it would use the same amount of silicon and power and perform at a 25GHz equivalent. All they would have to do is update the masks to implement them in 65nm. I'm sure there would be some glue logic necessary for coordinating all those processors' access to e.g. the L2/L3 cache, but it's nothing that can't be done; I even built a prototype a few years ago using old parts for the 486s and new memory. It only had 12 cores, but it still managed to saturate the 400MHz memory bandwidth.
    Last edited by abachler; 12-16-2009 at 01:37 PM.
    Until you can build a working general purpose reprogrammable computer out of basic components from radio shack, you are not fit to call yourself a programmer in my presence. This is cwhizard, signing off.

  8. #8
    Registered User jeffcobb's Avatar
    Join Date
    Dec 2009
    Location
    Henderson, NV
    Posts
    875

    Parallel coding

    Dude, you would LOVE coding for the PS3....honest...a single proc for mainstream execution and 7 usable secondary processors that are so fast that they can execute a whole block of code in less time than it takes the primary processor to execute a single instruction.
    C/C++ Environment: GNU CC/Emacs
    Make system: CMake
    Debuggers: Valgrind/GDB

  9. #9
    Woof, woof! zacs7's Avatar
    Join Date
    Mar 2007
    Location
    Australia
    Posts
    3,459
    There's a reason why $50 processors are $50. They're no longer being produced, and Intel wants to get rid of them quick smart to push the next line.

    Quote Originally Posted by abachler
    but both AMD and Intel now use proprietary sockets to ensure vendor lock in, which is bad for the customer, and by extension, bad for AMD and Intel.
    That's ridiculous. Socket design has an enormous amount to do with processor design. If you restricted them to the same pin pattern, you'd be restricting the design of processors. You really expect Intel and AMD to work together? The benefits to either of them are close to nil.

    Consider this... A universal 1024-pin socket exists that AMD and Intel both use for their quad-core processors. However, both AMD and Intel are designing oct-core processors (both in secret; they are rivals, after all). You really expect them to stay with the 1024-pin socket? Even if the design permitted it, it wouldn't be optimal. You can't expect AMD to contact Intel (or vice-versa) and say "we want to develop an oct-core processor, we need a new socket".

    Even if you did that, all it would take is for MIPS to strike a deal with <some motherboard manufacturer> and undercut the whole market.

    That being said... go MIPS.

  10. #10
    Registered User
    Join Date
    Dec 2006
    Location
    Canada
    Posts
    3,183
    You can really overclock it that much with just an aftermarket heatsink and fan? I'd think you'd need liquid cooling for that much.
    Not usually, but the Core 2 series proves to be extremely overclockable. And unlike the Netburst CPUs (Pentium 4, Pentium D), they put out very little heat for the same speed.

    The fact that my CPU is bottom-of-the-line helps, too. With a stock 2.8ghz CPU you can probably only push it up to 3.6ghz or so. The gain is always greatest for the lowest-end stuff.
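
    The headroom claim checks out arithmetically. A quick sketch, using the speeds quoted in this thread (the function name is just for illustration):

```python
def headroom_percent(stock_ghz, overclock_ghz):
    """Percentage speed gain an overclock represents over the stock clock."""
    return (overclock_ghz - stock_ghz) / stock_ghz * 100

# E6300 from this thread: 1.8 GHz stock pushed to 3.3 GHz.
print(round(headroom_percent(1.8, 3.3)))  # -> 83 (% gain)

# A higher-end chip: 2.8 GHz stock pushed to 3.6 GHz.
print(round(headroom_percent(2.8, 3.6)))  # -> 29 (% gain)
```

    So the low-end chip gains nearly three times as much, relatively, which is the point being made.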

    There's a reason why $50 processors are $50. They're no longer being produced, and Intel wants to get rid of them quick smart to push the next line.
    They need something to fill in the budget segment, too. That's where the most money is.

    That's why they clock 3.3ghz chips at 1.8ghz and sell them cheap.

    For example, if they want to sell chips at 2.0, 2.4, and 2.8 ghz, they would produce all of those chips on the same line and find each one's maximum speed (it's different for each chip). Then, those that fail to run at 2.8 will be sold as 2.4. Those that fail to run at 2.4 will be sold as 2.0. Those that fail at 2.0 would be thrown out the window. That's cheaper than trying to produce chips designed for different speeds (more manufacturing lines, with minimal savings on lower-quality silicon).

    The problem is, as the manufacturing process gets perfected, almost all chips can run at 2.8ghz. But they cannot sell all chips as 2.8ghz chips, because people still need cheap 2.0ghz chips. That's where most of the money is, so they certainly don't want to lose that segment. But at the same time, they want to make big bucks from those fast chips. So what they do is, they clock some of those chips at 2.0ghz, and sell them cheap, even though they run perfectly well at 2.8ghz. Overclocking is just trying to clock it back up to the "real" speed limit.

    It's a common practice in video cards, too. They intentionally disable working shaders in video cards to sell them as lower end cards.

  11. #11
    l'Anziano DavidP's Avatar
    Join Date
    Aug 2001
    Location
    Plano, Texas, United States
    Posts
    2,738
    Something needs to be done about the lackadaisical attitude the memory manufacturers have towards innovation as well. Memory needs to be faster than any single processor, not slower.
    Agreed!!! What's the deal with memory being so much slower than processors? We can't fully utilize the true potential of the processor with this big bottleneck on the speed of memory, even with optimizations such as caching, pipelining, etc., etc., etc.
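
    The memory bottleneck being complained about can be made concrete with a back-of-the-envelope roofline-style calculation. The peak-FLOP and bandwidth figures below are illustrative assumptions, not any specific chip:

```python
def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    """Roofline model: delivered performance is capped either by the CPU's
    peak compute rate or by how fast memory can feed it operands."""
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

# Hypothetical machine: 20 GFLOP/s peak compute, 6 GB/s memory bandwidth.
# A dot product does ~2 flops (multiply + add) per 16 bytes loaded
# (two 8-byte doubles), so its arithmetic intensity is 2/16 flops/byte.
intensity = 2 / 16
print(attainable_gflops(20.0, 6.0, intensity))  # -> 0.75, memory-bound

# With 200 GB/s of bandwidth the same code would hit the compute ceiling.
print(attainable_gflops(20.0, 200.0, intensity))  # -> 20.0, compute-bound
```

    For low-intensity code like this, the processor sits idle most of the time no matter how fast it is, which is why caching and pipelining only soften the problem rather than solve it.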
    My Website

    "Circular logic is good because it is."

  12. #12
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Portugal
    Posts
    7,439
    Quote Originally Posted by zacs7 View Post
    That's ridiculous. Socket design has an enormous amount to do with processor design. If you restricted them to the same pin pattern, you'd be restricting the design of processors. You really expect Intel and AMD to work together? The benefits to either of them are close to nil.
    What we need are processor-neutral motherboards with a standard processor interface.

    ...

    Oh! As for the initial question, I go for longevity. I don't feel the need to have the highest speed, capacity, whatever. Instead I concentrate on equipment that is known to last and that will give me an expected 2 years of usage before I notice performance issues. That's the extent of my speed concerns. I have no special use for a computer that launches Visual Studio in 2 seconds over a computer that takes 10 seconds. I don't mind waiting.
    Last edited by Mario F.; 12-16-2009 at 04:22 PM.
    The programmer’s wife tells him: “Run to the store and pick up a loaf of bread. If they have eggs, get a dozen.”
    The programmer comes home with 12 loaves of bread.


    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.

  13. #13
    Malum in se abachler's Avatar
    Join Date
    Apr 2007
    Posts
    3,189
    Quote Originally Posted by DavidP View Post
    Agreed!!! What's the deal with memory being so much slower than processors? We can't fully utilize the true potential of the processor with this big bottleneck on the speed of memory, even with optimizations such as caching, pipelining, etc., etc., etc.
    Because there is only one company in the world that manufactures memory, so there is no competition to stay ahead of. Typical DRAM in 1982 was 120 ns (about 8 MHz); typical memory today is 800 MHz (typical does not mean cutting-edge performance). That's a mere 100 times faster in 27 years, while CPUs have improved over 1000 times. That works out to a mere 18% increase per year for memory, well below the roughly 29% per year that a 1000x improvement over the same period implies for processors.
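
    The per-year figures follow from compound growth. A small sketch, taking the 100x and 1000x totals over 27 years at face value from the post above:

```python
def annual_growth(total_factor, years):
    """Compound annual growth rate implied by a total speedup factor:
    solves (1 + r)**years == total_factor for r."""
    return total_factor ** (1 / years) - 1

mem = annual_growth(100, 27)    # memory: 100x over 27 years
cpu = annual_growth(1000, 27)   # CPUs: 1000x over 27 years
print(f"memory: {mem:.1%}/yr, cpu: {cpu:.1%}/yr")
```

    Because growth compounds, even that modest-looking gap per year snowballs into a tenfold difference over the 27-year span.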

    Quote Originally Posted by zacs7 View Post
    That's ridiculous. Socket design has an enormous amount to do with processor design. If you restricted them to the same pin pattern, you'd be restricting the design of processors. You really expect Intel and AMD to work together? The benefits to either of them are close to nil.
    I hate to break the news to you, but all processor manufacturers used the same sockets until the Pentium II, when Intel decided to patent Slot 1 and consequently lost 30% of their customers. Up to that point it was common for end users to switch between companies when upgrading. With cheaper processors due to competition, users upgraded more often, driving innovation. With the advent of vendor lock-in, users upgrade less frequently due to the higher prices and the inability to switch manufacturers, which reduces overall profit through the loss of the economies of scale that come with higher production rates. When processors were $80-$100 and they all worked in the same motherboards, it was quite easy to upgrade 3-4 times a year. Now, with upgrading entailing a $300 processor plus a new motherboard, it happens far less often.

    Quote Originally Posted by jeffcobb View Post
    Dude, you would LOVE coding for the PS3....honest...a single proc for mainstream execution and 7 usable secondary processors that are so fast that they can execute a whole block of code in less time than it takes the primary processor to execute a single instruction.
    Yeah, Cell processors are pretty nifty, but coding for them is a PITA. It doesn't seem to be an efficiently scalable architecture, although its performance is very nice. One of the hospitals here in St. Louis uses a cluster of PS3s to process its 3D MRI data in real time.
    Last edited by abachler; 12-17-2009 at 02:25 AM.
    Until you can build a working general purpose reprogrammable computer out of basic components from radio shack, you are not fit to call yourself a programmer in my presence. This is cwhizard, signing off.

  14. #14
    Epy
    Epy is offline
    Fortran lover Epy's Avatar
    Join Date
    Sep 2009
    Location
    California, USA
    Posts
    960
    Because there is only one company in the world that manufactures memory, so there is no competition to stay ahead of.
    Wha?

  15. #15
    Malum in se abachler's Avatar
    Join Date
    Apr 2007
    Posts
    3,189
    Quote Originally Posted by Epy View Post
    Wha?
    One manufacturer, several 'brands'. It's very common nowadays for a company to outsource its own production. There is really only one conglomerate to deal with when having DRAM made in large quantities. It's technically several companies, to avoid the anti-trust laws in various countries, but they are all owned and run by the same group of major investors (the ones with enough ownership to set pricing policies).
    Last edited by abachler; 12-17-2009 at 07:33 AM.
    Until you can build a working general purpose reprogrammable computer out of basic components from radio shack, you are not fit to call yourself a programmer in my presence. This is cwhizard, signing off.
