1. ## bilinear interpolation

Well I took it upon myself to read about linear interpolation today, since I found some 'lerp' code in the Irrlicht engine and realised how useful it is in the domain of computer graphics.

I almost completely understand it, too.

Anyway, I read Wikipedia's article on bilinear interpolation and sort of understood that too... but what can you use it for in computer graphics? Can you actually use it there? There's a pretty picture in the article.

I googled 'bilinear interpolation in computer graphics'... what the hell is a 'pyramid filter'? Wikipedia has no answers and the Google results for that confuse me.

2. The subject of interpolation (in any form) in computer graphics is huge, so I'll try to give a general overview and if you want to know something specific, feel free to ask.

In general, interpolation helps us to find the value of a data set or a function between two points, using those points as indicators. In the case of linear interpolation, if you have two points (x1, y1) and (x2, y2), you can calculate the interpolation coefficient alpha = (x - x1) / (x2 - x1), which varies between 0 and 1. If the coefficient is 0, you are at the starting point; if the coefficient is 1, you are at the ending point.

Let's try a very basic example of linear interpolation (although the result would be horrific with the linear approach): you want to move an object in 2D along a predefined path that you can't or don't want to describe as a function. So you go ahead and define a limited number of points; each point has, in addition to its coordinates, a time value (to tell where the object is at what time).
When your program runs and the animation begins, each cycle of the loop you increment the time value by the time that passed since the last frame, and from there you calculate the position of the object. Let's say we have the following:

Point1: x1,y1 and t1
Point2: x2,y2 and t2

Now during your animation cycle you end up with a time value t, with t1 < t < t2 and you have to find the position of the object. If you calculate for each direction the distance the object has to travel, you'll find:

distX = x2 - x1;
distY = y2 - y1;
and your linear coefficient, which varies between 0 and 1, is: (t - t1) / (t2 - t1)

You then see that the object is at:
objectX = x1 + ((t - t1) / (t2 - t1)) * distX
objectY = y1 + ((t - t1) / (t2 - t1)) * distY

Of course there are several other ways to calculate this and way more elaborate methods but I hope this is enough as a general idea.
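As a minimal sketch of the idea above (in Python for brevity; the function and point names are just illustrative, not from any particular engine):

```python
def lerp(a, b, alpha):
    """Linearly interpolate between a and b, with alpha in [0, 1]."""
    return a + alpha * (b - a)

def position_at(p1, t1, p2, t2, t):
    """Position of the object at time t, assuming t1 <= t <= t2.
    p1 and p2 are (x, y) tuples with time stamps t1 and t2."""
    alpha = (t - t1) / (t2 - t1)   # normalised coefficient in [0, 1]
    x = lerp(p1[0], p2[0], alpha)
    y = lerp(p1[1], p2[1], alpha)
    return (x, y)
```

At alpha = 0 you get exactly p1 back, at alpha = 1 exactly p2, and anything in between moves the object along the straight segment connecting them.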

Now, let's apply this to the wonderful world of pixels. In nearly every operation (resize, rotation, affine transformation, zoom or perspective transformation), you end up with a different "point of view", as in your last image. In effect, you have transformed the camera position from which you are looking at the image. Anyway, there exists a relation R between your source and your destination image (and a relation R-inverse between the destination and the source image).

Let's take a very easy example, a rotation:
If you rotate by n*(Pi/2), it is rather simple to find the source pixel for each destination pixel in the new image. But what happens with any other rotation angle? If you take your destination coordinates and apply the R-inverse relation to find the corresponding pixel in the source image, you will most of the time end up with a fractional value, so instead of finding (2, 3) you will find (2.534, 3.15).

You have several options:

nearest neighbor:
You can discard the values behind the comma and assume that the pixel you are looking for is (2, 3) rather than the more exact value. In that case, you take an approximation, and the result is of worse quality than the source image.

Attachment 7162
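That truncation step can be sketched like this (Python for brevity; `src` is assumed to be a 2D array of pixel values, and the coordinates are assumed to lie inside the image):

```python
def nearest_neighbor(src, x, y):
    """Sample src at fractional coordinates (x, y) by simply
    discarding the fractional part, e.g. (2.534, 3.15) -> (2, 3)."""
    return src[int(y)][int(x)]
```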

bilinear interpolation:
You assume (correctly) that the more exact values have a justified meaning. In fact, they tell you that the pixel value you are looking for lies somewhere in between the pixels, which is the main problem in digital imaging: regardless of how many megapixels we have, we only sample the continuous 2D color function into an array and discard the information in between. We can partly reconstruct that missing information using interpolation, by treating the fractional part as a percentage: the percentage of color information we need to take from each surrounding pixel.

Attachment 7163

The formula for bilinear interpolation is the following:

S = (1-p)(1-q) a + (1-p) q c + p (1-q) b + p q d

where (p, q) are the fractional parts of the coordinates you find by applying the R-inverse transformation to your destination pixel, and a, b, c and d are the color values of the four surrounding pixels (you have to apply this to red, green and blue separately).

As you can see, you use the fractional values as weights for the respective surrounding pixels, which is far more accurate than the nearest neighbor method.
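A minimal sketch of that formula (Python for brevity; `src` is assumed to be a 2D array of single-channel values, so for color you would call it once per channel, and (x, y) must lie at least one pixel inside the image border):

```python
def bilinear_sample(src, x, y):
    """Sample src at fractional coordinates (x, y) with bilinear
    interpolation: S = (1-p)(1-q)a + p(1-q)b + (1-p)q c + pq d."""
    x0, y0 = int(x), int(y)
    p = x - x0              # horizontal fraction
    q = y - y0              # vertical fraction
    a = src[y0][x0]         # top-left neighbor
    b = src[y0][x0 + 1]     # top-right neighbor
    c = src[y0 + 1][x0]     # bottom-left neighbor
    d = src[y0 + 1][x0 + 1] # bottom-right neighbor
    return ((1 - p) * (1 - q) * a + p * (1 - q) * b
            + (1 - p) * q * c + p * q * d)
```

At p = q = 0 this reduces to the top-left pixel a, and at p = q = 0.5 it is the plain average of all four neighbors.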

A pyramid filter:

Now this is something completely different. A filter is the application of a filter kernel across a source image (also called convolution). Take for example the blur operation, which uses an N*N kernel whose size depends on how much you want to blur. We'll use the Gaussian filter for reference:

Attachment 7164

You create an N*N array and fill it with values using the Gaussian formula:

Attachment 7165

In this kernel, values further away from the center are smaller, which means that when you apply them to a pixel, their influence on the result is smaller. Usually, the largest value is in the center, since you want to keep the highest percentage of information from the pixel the kernel is applied to.
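Building such a kernel could look like this (a sketch in Python; the kernel is normalised so its entries sum to 1, which keeps the overall image brightness unchanged when you convolve with it):

```python
import math

def gaussian_kernel(n, sigma):
    """Build an n*n Gaussian kernel (n odd), normalised to sum to 1.
    Each entry is exp(-(dx^2 + dy^2) / (2*sigma^2)) for its offset
    (dx, dy) from the center, so weights fall off with distance."""
    half = n // 2
    kernel = [[math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))
               for dx in range(-half, half + 1)]
              for dy in range(-half, half + 1)]
    total = sum(sum(row) for row in kernel)
    return [[v / total for v in row] for row in kernel]
```

To blur, you slide this kernel over the image and replace each pixel by the weighted sum of itself and its neighbors.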

Now, if you take not just one filter of a fixed size N, but several versions of the same filter, you can construct a pyramid. Probably the most common example is the Laplacian pyramid, where you encode an image by storing only the Laplacian difference between each version and its blurred, downsampled-then-upsampled counterpart. The blurring removes sharp edges and makes it safe to downsample the image, and the successive layers of Laplacian differences form the pyramid.
Afterwards, we only have to transmit the smallest downsampled image and the Laplacian pyramid, and we can reconstruct the high-frequency, fully detailed image by iteratively doubling the size and adding the Laplacian difference back onto the upsampled image to get the original back.
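The encode/decode round trip can be sketched on a 1D signal to keep it short (Python; note this toy uses plain decimation and sample duplication instead of the Gaussian blur and proper up-filtering a real Laplacian pyramid would use, but the reconstruction is still exact by construction):

```python
def downsample(signal):
    """Keep every second sample (a real pyramid would blur first)."""
    return signal[::2]

def upsample(signal, length):
    """Duplicate each sample, then trim to the requested length."""
    doubled = []
    for v in signal:
        doubled.extend([v, v])
    return doubled[:length]

def build_laplacian_pyramid(signal, levels):
    """Return (list of difference layers, coarsest signal)."""
    pyramid, current = [], signal
    for _ in range(levels):
        small = downsample(current)
        predicted = upsample(small, len(current))
        # store only what the coarser level cannot predict
        pyramid.append([c - p for c, p in zip(current, predicted)])
        current = small
    return pyramid, current

def reconstruct(pyramid, coarsest):
    """Invert the pyramid: upsample and add the differences back."""
    current = coarsest
    for diff in reversed(pyramid):
        predicted = upsample(current, len(diff))
        current = [p + d for p, d in zip(predicted, diff)]
    return current
```

Only the coarsest signal and the (mostly small, highly compressible) difference layers need to be transmitted; the receiver runs `reconstruct` to recover the original exactly.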

3. Bubba moved to Luxembourg?

4. Originally Posted by Perspective
Bubba moved to Luxembourg?
I take that as a compliment

5. Whoa. I didn't see this coming, thanks a lot!

6. Bubba moved to Luxembourg?
Hehe. Nope, still in the USA and still using interpolation every day.