# How to get familiar with an algorithm

This is a discussion on How to get familiar with an algorithm within the Tech Board forums, part of the Community Boards category.

1. ## How to get familiar with an algorithm

What does it mean to know an algorithm? I thought I had known quicksort for years, but then I realized I couldn't implement it without looking at other sources. I then found that there are quite a few different implementations; in fact, quicksort is (IMO) a composite algorithm, in the sense that different partitioning algorithms can be plugged into it. The only two I know of are Hoare and Lomuto. Out of curiosity, does anyone know any others?

How do you actually learn a full implementation of an algorithm - well enough, say, to reproduce it in an interview? I tried tracing variables by hand on a piece of paper and running the program in my head. What I found was that I was memorizing the lines of code, but I still don't feel like I have an intuitive understanding of the algorithm.

2. Originally Posted by c_weed
What does it mean to know an algorithm? I thought I had known quicksort for years, but then I realized I couldn't implement it without looking at other sources. I then found that there are quite a few different implementations; in fact, quicksort is (IMO) a composite algorithm, in the sense that different partitioning algorithms can be plugged into it. The only two I know of are Hoare and Lomuto. Out of curiosity, does anyone know any others?

How do you actually learn a full implementation of an algorithm - well enough, say, to reproduce it in an interview? I tried tracing variables by hand on a piece of paper and running the program in my head. What I found was that I was memorizing the lines of code, but I still don't feel like I have an intuitive understanding of the algorithm.
Just analyze as many implementations as you can (there are often many different ways to approach a problem, so don't let that throw you off) until at least one of them makes perfect sense to you. Visualize each step as it needs to happen over and over again and eventually it will stick. Just don't forget to revisit and review periodically, of course - memory fades over time!

3. There are actually two aspects in getting familiar with an algorithm: knowing the principles and theory by which an algorithm works, and understanding one or more implementations.

A lot of developers (like Sebastiani, apparently) are satisfied with the second. That's not necessarily wrong, but can be limiting.

The principles and theory by which an algorithm works are important because they apply regardless of implementation - whether you are doing calculations by hand, designing a hardware solution, or creating a software implementation in a high-level or low-level language. Learning the principles and theory is the kind of human learning that needs no computer: learning the underlying mathematics, becoming familiar with a description of the process, and so on.

Looking at implementations is fine, but it also means your learning can be swamped by artifacts specific to the implementation rather than the algorithm itself. An implementation of an algorithm in VHDL or C - particularly a good-quality one - will have features that are specific to the language used. That's okay if you want to learn VHDL or C but, if you are required to create a native implementation in some other language, you have to unlearn the specifics of one language and learn the specifics of another to create your new implementation.

If you hunt through the forums here, you'll see numerous examples of folks getting into trouble translating simple code from one programming language to another because they don't change technique to suit the new language - and such problems are even more likely to bite when translating implementations of significant algorithms between languages. Implementations of really new algorithms might not even exist - someone, somewhere, at some point has to be the first to implement any algorithm - and then you get into the novelty aspects of software (software is often used to implement things that have never been implemented in any other way).

The thing is, any aspect of human learning is an individual thing. Different people learn in different manners, and are interested in different aspects. People with a "practical" bent will eschew learning theory and principles, and might refuse to take on work that is truly new. Theoreticians (who are often the originators of new algorithms) might not care about implementations at all because it is not necessary to implement an algorithm in order to analyse its properties. Most of us are somewhere between those extremes.

4. An algorithm is a path of execution. What you need in order to reproduce an algorithm is to memorize the path, not the actual code. That way, you can reproduce the algorithm in any programming language, once you know that language well enough to implement it.

5. Originally Posted by Mario F.
An algorithm is a path of execution.
Algorithms may consist of multiple paths of execution, which an implementation may execute concurrently or sequentially, and sometimes even reorder. This is one reason some algorithms have variants that order certain paths differently and, as a result, have different properties (parallel versus sequential execution, different numerical stability, etc.).

6. Originally Posted by grumpy
There are actually two aspects in getting familiar with an algorithm: knowing the principles and theory by which an algorithm works, and understanding one or more implementations.
True, but often enough the distinction is somewhat superficial. Even theories, after all, are subject to reformulations and simplifications (which in turn must be reflected in some implementation, of course). Unless you're talking about the more abstract, result-based kind of theory which can't really (or normally) be tied to any particular implementation, in which case I agree.

7. You're missing the point that principles and theory are independent of implementation.
Development of theory can inform implementation strategies, and vice versa, but neither is always necessary for the other.

It is also possible to have multiple implementations - with different balance of properties or specialisations - that are all based on one common algorithm.

8. Originally Posted by grumpy
You're missing the point that principles and theory are independent of implementation.
Development of theory can inform implementation strategies, and vice versa, but neither is always necessary for the other.

It is also possible to have multiple implementations - with different balance of properties or specialisations - that are all based on one common algorithm.
Fair enough, maybe I was just being too pedantic...

9. Originally Posted by grumpy
You're missing the point that principles and theory are independent of implementation. Development of theory can inform implementation strategies, and vice versa, but neither is always necessary for the other.
O_o

I see what you are saying, but I'd argue that implementation without application of existing theory is just impromptu rediscovery of that theory - and, inasmuch as "implementation" exists without code, the opposite holds as well, making both always necessary.

Soma

10. Originally Posted by c_weed
How do you actually learn a full implementation of an algorithm - well enough, say, to reproduce it in an interview? I tried tracing variables by hand on a piece of paper and running the program in my head. What I found was that I was memorizing the lines of code, but I still don't feel like I have an intuitive understanding of the algorithm.
The couple of times I was asked to write code in an interview, I said "This is probably going to contain mistakes, since I have no way of testing things when writing code on a whiteboard, but with that in mind, here is my approach..." Anyone who expects you to write perfect code on a whiteboard in a high-pressure interview situation is not somebody you should be working for.

As far as memorizing the algorithms, I find it best to memorize the basic principle of the thing and work out the specific details at implementation time. Using quick sort as an example, I remember "Move all the small things to the left, move all the big things to the right, then sort the left and right parts." The details about how to select the pivot and how to actually move elements around are situational and not worth memorizing.

For algorithms that are significantly more complicated, I will rely on someone else's implementation if possible. I don't need, want, or have the capacity to know everything in the universe.