
How do you know if an algorithm uses the cache heavily?

This is a discussion on How do you know if an algorithm uses the cache heavily? within the Tech Board forums, part of the Community Boards category; Incidentally, until C11 "volatile" didn't guarantee a proper memory barrier execution. It does now, so your analogy has a hidden ...

  1. #16
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Portugal
    Posts
    7,410
    Incidentally, until C11 "volatile" didn't guarantee a proper memory barrier execution. It does now, so your analogy has a hidden truth to it.

    I'll just counter-argue though, following your analogy, that the same way we see the C standard as an exposition of the language strengths and limitations, so we should look at the ISA. I look at your use of the word 'contract' as being a bit naive, or perhaps too convenient, since we are discussing the limitations of modern hardware.

    But I won't argue we would be living in a crazy world indeed if our processors could run unchecked unpredictable code. For every optimization that takes away execution correctness, a safeguard has been implemented somewhere. But I don't think anyone argued against that. It was instead your denial of the existence of this problem that spurred the discussion.
    The programmer’s wife tells him: “Run to the store and pick up a loaf of bread. If they have eggs, get a dozen.”
    The programmer comes home with 12 loaves of bread.


    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.

  2. #17
    Registered User Codeplug's Avatar
    Join Date
    Mar 2003
    Posts
    4,643
    >> Incidentally, until C11 "volatile" didn't guarantee a proper memory barrier execution. It does now ...
    No. volatile didn't get any new semantics in C[++]11. volatile still has nothing to do with multi-threading.
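    To illustrate the point, here's a minimal sketch (POSIX threads assumed) of why volatile is not a synchronization primitive in C11: it only forbids the compiler from caching the variable, but provides no atomicity and no ordering. The _Atomic types from <stdatomic.h> (which *are* new in C11) provide both. Names here are just for illustration.

    ```c
    #include <stdio.h>
    #include <pthread.h>
    #include <stdatomic.h>

    #define ITERS 100000

    volatile long plain = 0;   /* NOT thread-safe: increments can be lost */
    atomic_long counted = 0;   /* thread-safe: each increment is indivisible */

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < ITERS; i++) {
            plain = plain + 1;              /* racy read-modify-write */
            atomic_fetch_add(&counted, 1);  /* atomic read-modify-write */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);

        /* "counted" is always 200000; "plain" may come up short. */
        printf("volatile: %ld  atomic: %ld\n", plain, atomic_load(&counted));
        return 0;
    }
    ```

    Compile with -lpthread. The atomic counter always lands on 200000; the volatile one is anybody's guess under contention.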

    You've been doing too much .net

    gg

  3. #18
    Registered User
    Join Date
    Dec 2006
    Location
    Canada
    Posts
    3,167
    Ah, sorry for the confusion. My example with volatile has nothing to do with memory barriers. I just meant it in general (e.g. variables that can be changed from interrupt service routines).
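    That classic, legitimate use of volatile looks something like this sketch, with a standard C signal handler standing in for an interrupt service routine (the names are just for illustration):

    ```c
    #include <signal.h>
    #include <stdio.h>

    /* "volatile" tells the compiler the flag can change outside normal
       control flow, so the loop below must re-read it from memory every
       iteration instead of hoisting it into a register. sig_atomic_t
       guarantees the read/write itself is indivisible. */
    static volatile sig_atomic_t got_signal = 0;

    static void handler(int sig)
    {
        (void)sig;
        got_signal = 1;   /* about the only safe thing to do in a handler */
    }

    int main(void)
    {
        signal(SIGINT, handler);
        raise(SIGINT);            /* synchronous here for the demo;
                                     a real ISR fires asynchronously */

        while (!got_signal)
            ;                     /* without volatile, this could spin forever */

        puts("flag set by handler");
        return 0;
    }
    ```

    Without the volatile qualifier the compiler is free to assume got_signal never changes inside the loop and optimize the test away entirely.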

    I'll just counter-argue though, following your analogy, that the same way we see the C standard as an exposition of the language strengths and limitations, so we should look at the ISA. I look at your use of the word 'contract' as being a bit naive, or perhaps too convenient, since we are discussing the limitations of modern hardware.
    That I do agree with. I just don't agree with grumpy's comment that modern CPUs sacrifice correctness for performance. Yes, modern CPUs are a lot more difficult to write correct code for, but if the rules are clearly defined, and all code written correctly according to those rules gets executed correctly, the CPUs are correct.

    My definition of a "correct CPU" is one that works according to the "contract". The contract may be very difficult to work with, but as long as all code written according to the contract is executed correctly, the CPU is correct.

    Issues of terminology aside, I do believe it is in general a good idea to sacrifice ISA simplicity for performance in modern times, since the vast majority of code nowadays is written in high level languages (C and above), and it's really just the compiler people (and people writing very low level boilerplate code) who are inconvenienced. This was not true "back in the day", when everyone wrote assembly and a complex ISA could significantly reduce programmer efficiency and make writing correct code more difficult.

  4. #19
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Portugal
    Posts
    7,410
    Nah. It was my mistake. As Codeplug says, I've been spending too much time around C#. C and C++ both standardized a memory model, which allows high level programmers to deal with out-of-order execution. But of course the volatile keyword has nothing to do with it, whereas in C# volatile is a field-level construct used in multithreaded code.
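    For what it's worth, the C11 construct that actually addresses out-of-order execution is the atomics library. A minimal release/acquire sketch (POSIX threads assumed, names illustrative):

    ```c
    #include <stdio.h>
    #include <pthread.h>
    #include <stdatomic.h>

    /* The producer writes "data", then sets "ready" with release ordering.
       A consumer that observes ready == 1 with acquire ordering is
       guaranteed to also observe data == 42: the release/acquire pair
       forbids both the compiler and the hardware from reordering the
       publication past the flag. */
    int data = 0;          /* plain variable, published safely below */
    atomic_int ready = 0;
    int result = 0;        /* what the consumer observed */

    static void *producer(void *arg)
    {
        (void)arg;
        data = 42;
        atomic_store_explicit(&ready, 1, memory_order_release);
        return NULL;
    }

    static void *consumer(void *arg)
    {
        (void)arg;
        while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
            ;              /* spin until the release store is visible */
        result = data;     /* guaranteed to read 42, not a stale 0 */
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        printf("consumer saw data = %d\n", result);
        return 0;
    }
    ```

    Replace the explicit orderings with plain volatile accesses and the guarantee evaporates, which is exactly the distinction Codeplug was making.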

    ...

    I think the criticism was perhaps a bit too harsh. I too agree. But, at least in my case, it comes from the fact that modern architectures indicate an evolutionary trend I'm not particularly ok with. There's some truth in the statement that modern CPUs sacrifice correctness for performance because, well, this is the trend you can observe if you trace the processor history.

    Of course this does not mean we are allowing unpredictable behavior in our processors. But hardware optimizations (in the name of performance) come at the expense of correctness (predictable execution), because instead of investing in a new architecture we are sticking to the old instruction set and just adding layer upon layer of complexity as the number of optimizations increases. Those optimizations must come with the necessary safeguards to handle the inevitable sacrifices.

    Incidentally, this complexity tends to spill out even to higher level languages, as the new C and C++ standards illustrate. It's actually -- for me -- a bit enervating, since I'm already a critic of the threading model we use in many languages nowadays (Threads are evil, pdf link). So seeing what I already consider an overly complicated model become even more complicated because of hardware-imposed restraints is a bit beside the point, but it does get under the skin.

  5. #20
    Registered User MutantJohn's Avatar
    Join Date
    Feb 2013
    Posts
    1,006
    Is it a silly question to ask how you must feel about quantum computing then?

  6. #21
    Registered User
    Join Date
    Nov 2010
    Location
    Long Beach, CA
    Posts
    5,438
    Yes, it is a silly question. The uncertainty principle dictates you can not accurately measure someone's feelings about both quantum and transistor-based computing in the same thread.

  7. #22
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Portugal
    Posts
    7,410
    Quantum computers won't remove any of the predictability associated with processor calculations. One of the most common mistakes associated with quantum computers is to think that their inherent quantum characteristics will be reflected in the way we look at or work with data. Quantum computers are a matter of study for their capacity to increase computational power by several orders of magnitude. The fact that there are non-deterministic characteristics to the quantum world is what makes this increase in performance available. But those same characteristics will not be used to change our current computational paradigms. In other words, the quantum world will be hidden from us, out of reach, deep inside the quantum processor.

    Even if these computers came to be built on probabilistic automata, as opposed to the finite state machines we use in modern computers, predictable behavior would still be achieved. This is so because a non-deterministic automaton can be simulated by a deterministic one, and we will want to use the latter in almost all of our computation. It's worth mentioning that while quantum computers are a matter of research, no one is interested in building a quantum probabilistic automaton. That's not a matter of research (to my knowledge).

    Curiously enough -- to the topic at hand -- the superposition property of qubits makes quantum processors natural-born parallel processing devices that could greatly simplify our current instruction sets and remove many of the issues of unpredictable behavior we have been discussing. Again, while it is acceptable that we explore the current architecture to exhaustion for lack of a better one, we are only building better processors. A new architecture would build Better processors.

  8. #23
    Registered User MutantJohn's Avatar
    Join Date
    Feb 2013
    Posts
    1,006
    Man, I started reading about quantum computing and apparently, it's a really big thing in physics right now as to what even constitutes a 'measurement' or whatever will decohere the superposed system. 'Measuring' energy or position usually seems to do the trick though.

    It was funny because, back when I was still in school, I was totally trolling a physics class of mine while taking intro CS, and I was in a professor's office hours talking about how cool it would be if quantum computers wrote threads out of the up and down spins of electrons. You could write an entire 64-bit thread from a single atom. How cool is that? And there are even heavier elements than that. I was thinking that it'd preserve our notion of binary but be so much more efficient. And you're right, Mario, about the possibility of multithreading like never before, because you can have a lot of atoms with more than 64 electrons.

    Man, maybe I actually should've gone to graduate school.

