Note: I'm in Computer Science 2 and am using Java, but that shouldn't make a difference. This is more of a math question than a programming question.
I've started programming a 3D wire-frame renderer from scratch. So far it's going well. I have 3D translation, perspective, and rotation working, and can even read from .obj files. (These things are working, though not necessarily as efficiently as they could. Right now I use matrices for rotation and vectors for translation. Do you recommend using matrices for everything, building transform matrices out of translation and rotation matrices? How is that more efficient than translating by adding 3 vector values to 3 coordinate values and then rotating with a matrix?)
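To make that first question concrete, here's a standalone sketch I put together (placeholder names, row-major matrices, not my actual code) comparing the two approaches. Both should land the point in the same place; the combined matrix just gets built once and reused for every vertex:

```java
// Sketch: translate-by-adding vs. one combined 4x4 transform matrix.
public class TransformDemo {
    // multiply two 4x4 matrices
    static double[][] mul(double[][] a, double[][] b) {
        double[][] r = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    r[i][j] += a[i][k] * b[k][j];
        return r;
    }
    // multiply a 4x4 matrix by a homogeneous point (x, y, z, 1)
    static double[] apply(double[][] m, double[] p) {
        double[] r = new double[4];
        for (int i = 0; i < 4; i++)
            for (int k = 0; k < 4; k++)
                r[i] += m[i][k] * p[k];
        return r;
    }
    static double[][] rotY(double a) {
        double c = Math.cos(a), s = Math.sin(a);
        return new double[][] {
            { c, 0, s, 0 },
            { 0, 1, 0, 0 },
            {-s, 0, c, 0 },
            { 0, 0, 0, 1 }
        };
    }
    static double[][] translate(double tx, double ty, double tz) {
        return new double[][] {
            { 1, 0, 0, tx },
            { 0, 1, 0, ty },
            { 0, 0, 1, tz },
            { 0, 0, 0, 1 }
        };
    }
    public static void main(String[] args) {
        double[] p = { 1, 2, 3, 1 };
        // separate: rotate, then add the translation to each coordinate
        double[] r = apply(rotY(0.5), p);
        double[] separate = { r[0] + 10, r[1] + 20, r[2] + 30, 1 };
        // combined: T * R built once, then applied to every vertex
        double[][] m = mul(translate(10, 20, 30), rotY(0.5));
        double[] combined = apply(m, p);
        for (int i = 0; i < 4; i++)
            System.out.println(separate[i] + " vs " + combined[i]);
    }
}
```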
So right now I can load a mesh, scale it, move it, rotate it, and render it. I have controls working for moving the camera and changing its view.
But now I'm stuck on the last remaining step to making a complete 3D wire-frame world (well, I have yet to make anything to cull non-visible stuff, but that'll come later).
When I move the mouse left or right, the model rotates around the y (up) axis, and when I move it up or down, it rotates around the x axis. Pressing the q and e keys rotates it around the z axis. The problem? Those controls are supposed to move the camera view in those ways, not the objects.
Right now, to render a point, I:
1) translate it by the negative of the camera position (the meshes are stored in world space right now, so I don't need to transform them to world space);
2) rotate it by the inverse of the camera rotation;
3) apply a perspective transform;
4) let Java take care of the view-space to clip-space transformation.
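In code, those steps look roughly like this (a standalone sketch with placeholder names, an identity camera rotation for now, and an assumed focal length `d`; my real code is structured differently):

```java
// Sketch of the per-point pipeline: translate, rotate, perspective-divide.
public class CameraPipeline {
    // multiply a 3x3 matrix by a 3-vector
    static double[] mul(double[][] m, double[] v) {
        double[] r = new double[3];
        for (int i = 0; i < 3; i++)
            for (int k = 0; k < 3; k++)
                r[i] += m[i][k] * v[k];
        return r;
    }
    public static void main(String[] args) {
        double[] worldPoint = { 2, 1, 8 };
        double[] cameraPos  = { 1, 0, 3 };
        // identity stand-in for the inverse camera rotation (the part I'm stuck on)
        double[][] invCamRot = { { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } };
        // 1) translate by the negative of the camera position
        double[] rel = { worldPoint[0] - cameraPos[0],
                         worldPoint[1] - cameraPos[1],
                         worldPoint[2] - cameraPos[2] };
        // 2) rotate by the inverse of the camera rotation
        double[] view = mul(invCamRot, rel);
        // 3) perspective: scale x and y by focal length over depth
        double d = 1.0;
        double sx = view[0] * d / view[2];
        double sy = view[1] * d / view[2];
        System.out.println(sx + ", " + sy); // prints 0.2, 0.2
    }
}
```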
I have a working axis gizmo that shows the directions of the axes and is located at the camera position. So, logically, it should always be at the center of the screen and, if rotated with the camera rotation, should always have the z axis facing directly away from the camera and the x axis parallel with the view-space top or bottom. When I move the camera, the gizmo moves relative to the geometry correctly, but it isn't rotated correctly.
The problem is the inverse camera rotation matrix.
I'm not sure how to make it. I would think you would take the negatives of the camera rotation's Euler angles*, multiply the individual rotation matrices in the opposite order to get a total camera rotation matrix, and then apply that to each point, but this doesn't work. I've read that I might also need the inverses of these matrices. I've gotten matrix inversion working and tried that, with the same result.
*Since the camera rotation angles will always be negated, I was thinking I could just store the negative values.
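Here's a little standalone test I wrote to convince myself of the math I'm describing: negating the angles only inverts the combined rotation if you also reverse the multiplication order. (The `Rz * Ry * Rx` order here is just an assumption for the example, not necessarily the order my renderer uses.)

```java
// Check: inverse of R = Rz*Ry*Rx is Rx(-ax)*Ry(-ay)*Rz(-az), i.e.
// negated angles AND reversed order. R * Rinv should be the identity.
public class InverseRotation {
    static double[][] mul(double[][] a, double[][] b) {
        double[][] r = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    r[i][j] += a[i][k] * b[k][j];
        return r;
    }
    static double[][] rotX(double a) {
        double c = Math.cos(a), s = Math.sin(a);
        return new double[][] { { 1, 0, 0 }, { 0, c, -s }, { 0, s, c } };
    }
    static double[][] rotY(double a) {
        double c = Math.cos(a), s = Math.sin(a);
        return new double[][] { { c, 0, s }, { 0, 1, 0 }, { -s, 0, c } };
    }
    static double[][] rotZ(double a) {
        double c = Math.cos(a), s = Math.sin(a);
        return new double[][] { { c, -s, 0 }, { s, c, 0 }, { 0, 0, 1 } };
    }
    public static void main(String[] args) {
        double ax = 0.3, ay = 0.7, az = 0.2;
        // camera rotation, assumed applied as R = Rz * Ry * Rx
        double[][] R = mul(rotZ(az), mul(rotY(ay), rotX(ax)));
        // inverse: negate each angle AND reverse the order
        double[][] Rinv = mul(rotX(-ax), mul(rotY(-ay), rotZ(-az)));
        double[][] I = mul(R, Rinv); // should be (numerically) the identity
        for (double[] row : I)
            System.out.printf("%.6f %.6f %.6f%n", row[0], row[1], row[2]);
    }
}
```

Negating the angles while keeping the same `Rz * Ry * Rx` order does not give the identity when multiplied by R, which I suspect is related to my bug.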
I found a page on 3D projection very helpful, but its method for the camera transform doesn't seem to work.
Can someone please explain how to build a camera rotation matrix? How is it constructed differently from a regular rotation matrix, and, if you can explain, why does that work? Thank you!
If seeing my code would be helpful, I'd be happy to post some.
In the meantime, I'll re-check everything and try different things for camera rotation.
Also, any other suggestions for anything would be appreciated!