Are you going to include basic vector algebra, too?
One of my favourite problems has to do with spheres, horizons, and rendering them:
From the Pythagorean theorem we know that
h² = d² - r²
where d is the distance from the eye to the center of a sphere of radius r, and h is the length of the tangent from the eye to the horizon point on the sphere. Because the eye, the sphere center, and the tangent point form a right-angled triangle (the right angle being at the tangent point), the angle α at the eye and the angle β at the center satisfy
β = 90° - α
and
cos β = r / d
cos α = h / d
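As a quick sanity check (the numbers are mine, just a 3-4-5 triangle):

```python
import math

# Hypothetical numbers: sphere of radius r at distance d from the eye.
d, r = 5.0, 3.0

# Tangent length from the Pythagorean theorem: h² = d² - r².
h = math.sqrt(d*d - r*r)

# Half-angle α of the cone subtended by the sphere, and β at the center.
alpha = math.acos(h / d)
beta = math.acos(r / d)

# The right angle at the tangent point forces α + β = 90°.
assert abs(alpha + beta - math.pi / 2) < 1e-9
```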
The interesting bit:
When rendering images that contain lots and lots of spheres, we really want to know whether our view ray n̅ intersects the sphere (p̅, r), p̅ being the location of its center, and r its radius.
To make everything really simple and fast, we rotate and translate the universe so that the observer's eye (camera) is at the origin. (It is usually also oriented so that X increases to the right, Y increases down, and positive Z is visible.)
First, we calculate the unit vectors, i.e. scale the two vectors to length 1:
ñ = n̅ / ∥n̅∥ = n̅ / sqrt( n̅ · n̅ )
p̃ = p̅ / ∥p̅∥ = p̅ / sqrt( p̅ · p̅ )
We can now apply the dot product to get the angle φ between the view ray and the direction to the sphere center:
ñ · p̃ = cos φ
If φ ≤ α, ray n̅ intersects the sphere.
Since φ ≤ 90° and α ≤ 90°, and cosine is nonnegative and monotonically decreasing on that range, φ ≤ α is equivalent to cos φ ≥ cos α, and squaring both sides we can write the above as
If (cos φ)² ≥ (cos α)², ray n̅ intersects the sphere.
Substituting stuff back, we get
(ñ · p̃)² ≥ h² / d²
(ñ · p̅)² / ( p̅ · p̅ ) ≥ h² / d²
(ñ · p̅)² ≥ ( p̅ · p̅ ) h² / d²
(ñ · p̅)² ≥ ( p̅ · p̅ ) ( d² - r² ) / d²
Since d² = p̅ · p̅, we have
(ñ · p̅)² ≥ d² - r²
(ñ · p̅)² ≥ p̅ · p̅ - r²
You can also write it as
ñ · p̅ ≥ sqrt( p̅ · p̅ - r² )
but computers are much faster at multiplication than at square roots, so the squared form computes faster.
Note that the angles implicitly assume that you only consider spheres in the positive direction, i.e. that
ñ · p̅ ≥ 0
Otherwise you're basically considering spheres behind you.
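Put together, a minimal sketch of the test in Python (function name and NumPy use are my own choice, not part of any particular renderer):

```python
import numpy as np

def ray_hits_sphere(n, p, r):
    """Does a ray from the origin along n̅ hit the sphere of radius r
    centered at p̅?  Squared test: (ñ·p̅)² ≥ p̅·p̅ - r², with the
    ñ·p̅ ≥ 0 guard so spheres behind the eye are rejected."""
    n = np.asarray(n, dtype=float)
    p = np.asarray(p, dtype=float)
    n_hat = n / np.sqrt(n @ n)       # ñ = n̅ / ∥n̅∥
    t = n_hat @ p                    # ñ·p̅ (= d cos φ)
    return bool(t >= 0.0 and t * t >= p @ p - r * r)
```

For example, a sphere of radius 1 at (0, 0, 5) is hit by the ray along +Z, but not by a ray tilted 45° away or by one pointing backwards.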
We can now go back to Cartesian coordinates. If we use
ñ = (nX, nY, nZ)
p̅ = (pX, pY, pZ)
then we can simplify the ray-intersects-sphere test to
(nXpX + nYpY + nZpZ)² ≥ pX² + pY² + pZ² - r²
The total cost of the test is 8 multiplications, 4 additions, 1 subtraction, and 1 comparison.
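As a sketch, assuming the ray direction is already a unit vector, the same test spelled out in plain Python; the operation counts in the comments add up to the tally above:

```python
def hit(nx, ny, nz, px, py, pz, r):
    # Assumes (nx, ny, nz) is already a unit vector.
    s = nx*px + ny*py + nz*pz                      # 3 mul, 2 add
    return s*s >= px*px + py*py + pz*pz - r*r      # 5 mul, 2 add, 1 sub, 1 cmp
```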
The right-hand side is invariant under rotations, and only changes under translations. (Since the test requires nX² + nY² + nZ² = 1, we cannot cheat by translating ñ instead.)
(The form of the inequality is such that we can vectorize two (at double precision) or four (at single precision) sphere tests at once using Intel/AMD SSE vector extensions, or four (at double precision) or eight (at single precision) sphere tests using AVX. However, for optimum vectorization, each component -- say, sphere radius, or center X coordinate -- must be in its own array, with different spheres consecutive in that array.)
You can even modify your transformation matrix calculations so that, in addition to the transformed sphere center coordinates, they also produce the right-hand side above at the same time. (This means four floating-point variables per sphere, which happens to vectorize really well, too.) The ray check cost then drops to 4 multiplications, 2 additions, and 1 comparison.
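A sketch of that storage scheme in NumPy (the sphere data here is made up): each component in its own array, with the right-hand side precomputed per sphere:

```python
import numpy as np

# Structure-of-arrays sphere storage (hypothetical values):
# one array per component, spheres consecutive in each array.
px = np.array([0.0, 3.0, -2.0])
py = np.array([0.0, 0.0,  1.0])
pz = np.array([5.0, 4.0,  6.0])
r  = np.array([1.0, 0.5,  2.0])

# Precompute the translation-dependent right-hand side once per frame:
rhs = px*px + py*py + pz*pz - r*r

def hits(nx, ny, nz):
    """Test one unit ray against all spheres at once.
    Per sphere: 4 multiplications, 2 additions, 1 comparison.
    (The ñ·p̅ ≥ 0 front-facing guard is omitted for brevity.)"""
    t = nx*px + ny*py + nz*pz
    return t*t >= rhs

mask = hits(0.0, 0.0, 1.0)   # boolean array, one entry per sphere
```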
There are similar tricks, derived using vector algebra and geometry, that yield very simple Cartesian expressions for the surface normal vector at the intersection point, the distance from the eye/camera to the intersection point, and other interesting quantities needed to either ray-trace, or to apply one of the shading techniques to decide what color to paint each pixel.
Descriptive geometry is really fun, in my opinion.
I wonder if making a simple Python program that renders such sphere images, but lets the user tweak the calculations any way they wish, might engage the students? You know, instant gratification and all, but requires understanding and effort to get anything really neat done.
For example, you could let the users rotate the view using a virtual trackball. However, you accumulate the rotations into a single transformation matrix, deliberately left unnormalized. When they notice the image is getting… squished, you can explain what happens and how to fix it. ("Add transform = transform.normalize() after the rotation transformation, there.")
I prefer to convert the rotation part to a quaternion, normalize the quaternion, and then convert that back to matrix form, just to treat all directions equally. It's also better to "blend" between two rotation matrices using quaternions, as you get more reasonable transitions that way.
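A sketch of that round trip in Python (my own minimal version, handling only the trace-positive branch, i.e. rotation angles under 120°; a robust implementation adds the remaining branches):

```python
import math
import numpy as np

def renormalize_rotation(R):
    """Round-trip R → quaternion → normalize → R.
    Sketch only: assumes the matrix trace is positive."""
    t = R[0, 0] + R[1, 1] + R[2, 2]
    assert t > 0.0, "this sketch handles only the trace-positive branch"
    s = 2.0 * math.sqrt(1.0 + t)
    w = 0.25 * s
    x = (R[2, 1] - R[1, 2]) / s
    y = (R[0, 2] - R[2, 0]) / s
    z = (R[1, 0] - R[0, 1]) / s
    # Normalize the quaternion...
    n = math.sqrt(w*w + x*x + y*y + z*z)
    w, x, y, z = w/n, x/n, y/n, z/n
    # ...and convert back; a unit quaternion yields an orthonormal matrix.
    return np.array([
        [1 - 2*(y*y + z*z),     2*(x*y - z*w),     2*(x*z + y*w)],
        [    2*(x*y + z*w), 1 - 2*(x*x + z*z),     2*(y*z - x*w)],
        [    2*(x*z - y*w),     2*(y*z + x*w), 1 - 2*(x*x + y*y)]])
```

Blending two rotations is then a slerp between their quaternions; that part is left out here.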