1. ## camera rotation matrix

Note: I'm in Computer Science 2 and am using Java, but that shouldn't make a difference. This is more of a math question than a programming question.

I've started programming a 3D wire-frame renderer from scratch. So far it's going well. I have 3D translation, perspective, and rotation working, and can even read from .obj files. (These things are working, though not necessarily as efficiently as they could. Right now I use matrices for rotation and vectors for translation. Do you recommend using matrices for everything, making transform matrices from translation and rotation matrices? How is that more efficient than translating by adding 3 vector values to 3 coordinate values and then rotating with a matrix?)

So right now I can load a mesh, scale it, move it, rotate it, and render it. I have controls working for moving the camera and changing its view.

But now I'm stuck on the last remaining step to making a complete 3D wire-frame world (well, I have yet to make anything to cull non-visible stuff, but that'll come later).

When I move the mouse left or right, the model rotates around the y (up) axis, and when I move it up or down, it rotates around the x axis. Pressing the q and e keys rotates it around the z axis. The problem? Those controls are supposed to move the camera view in those ways, not the objects.

Right now, to render a point, I: 1) translate it by the negative of the camera position (the meshes are stored in world space right now, so I don't need to transform them to world space); 2) rotate it with the inverse of the camera rotation; 3) apply a perspective transform; 4) let Java take care of the view-space to clip-space transformation.
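As a sketch, those per-point steps look something like this (plain arrays instead of my actual classes; `CameraPipeline` and its method names are just placeholders):

```java
// Hypothetical sketch of the per-point pipeline described above.
final class CameraPipeline {
    // Step 1: translate by the negative of the camera position,
    // Step 2: rotate with the inverse of the camera rotation.
    static double[] toCameraSpace(double[] p, double[] camPos, double[][] invCamRot) {
        double[] t = { p[0] - camPos[0], p[1] - camPos[1], p[2] - camPos[2] };
        return mul(invCamRot, t);
    }

    // Step 3: simple perspective divide onto a plane at distance d.
    static double[] project(double[] v, double d) {
        return new double[] { v[0] * d / v[2], v[1] * d / v[2] };
    }

    // 3x3 matrix times column vector.
    static double[] mul(double[][] m, double[] v) {
        double[] r = new double[3];
        for (int i = 0; i < 3; i++)
            r[i] = m[i][0] * v[0] + m[i][1] * v[1] + m[i][2] * v[2];
        return r;
    }
}
```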

I have a working axis gizmo that shows the directions of the axes and is located at the camera position. So, logically, this should always be at the center of the screen and, if rotated with the camera rotation, will always have the z axis facing directly away from the camera, and the x axis parallel with the view-space top or bottom. But when I move the camera, I can see the gizmo moving relative to the geometry correctly, but not rotating correctly.

The problem is the inverse camera rotation matrix.

I'm not sure how to make it. I would think that you would need the negative values of the camera rotation (stored as Euler angles*), and would multiply their matrices in the opposite order to get a total camera rotation matrix, then apply that to a point, but this doesn't work. I've read that I might also need the inverses of these matrices. I've gotten matrix inverses working and tried that, with the same result.

*Since the camera rotation angles will always be negated, I was thinking I could just store the negative values.
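A quick self-contained check of the math (`RotCheck` is just a throwaway name) suggests that negated angles multiplied in the reversed order really do give the inverse, so my bug is probably somewhere else:

```java
// Demonstrates that Rz(-c)*Rx(-b)*Ry(-a) is the inverse of Ry(a)*Rx(b)*Rz(c).
final class RotCheck {
    static double[][] rx(double t) { double c = Math.cos(t), s = Math.sin(t);
        return new double[][]{{1,0,0},{0,c,-s},{0,s,c}}; }
    static double[][] ry(double t) { double c = Math.cos(t), s = Math.sin(t);
        return new double[][]{{c,0,s},{0,1,0},{-s,0,c}}; }
    static double[][] rz(double t) { double c = Math.cos(t), s = Math.sin(t);
        return new double[][]{{c,-s,0},{s,c,0},{0,0,1}}; }

    static double[][] mul(double[][] a, double[][] b) {
        double[][] r = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++) r[i][j] += a[i][k] * b[k][j];
        return r;
    }

    static boolean isIdentity(double[][] m) {
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                if (Math.abs(m[i][j] - (i == j ? 1 : 0)) > 1e-9) return false;
        return true;
    }
}
```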

I found this website very helpful
3D projection
but its method for the camera transform doesn't seem to work.

Can someone please explain how to make a camera rotation matrix? How is it constructed differently from a regular rotation matrix and, if you can explain, why does it work? Thank you!

If seeing my code would be helpful, I'd be happy to post some.

In the meantime, I'll re-check everything and try different things for camera rotation.

Also, any other suggestions for anything would be appreciated!

2. Use matrices for everything. Which type of matrices are you currently using when you are utilizing them? Are they left-handed or right-handed matrices?

The simplified transform from local to screen space is:

Local/Model * World * View * Projection
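If it helps, here is a sketch of what chaining these buys you (`Chain` is a throwaway name; this uses the row-vector convention implied by the ordering above, so composing the matrices once and applying the product is the same as applying each in sequence):

```java
// With row vectors, (v * World) * View == v * (World * View), so the whole
// chain can be pre-multiplied into one matrix and applied per point.
final class Chain {
    static double[][] mul(double[][] a, double[][] b) {
        int n = a.length;
        double[][] r = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < n; k++) r[i][j] += a[i][k] * b[k][j];
        return r;
    }

    // Row vector times matrix.
    static double[] rowTimes(double[] v, double[][] m) {
        int n = v.length;
        double[] r = new double[n];
        for (int j = 0; j < n; j++)
            for (int i = 0; i < n; i++) r[j] += v[i] * m[i][j];
        return r;
    }
}
```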

The inverse of a matrix effectively undoes the operation. So the inverse of the world matrix will bring the system back into local/model coordinates. The inverse of the view matrix will give world coordinates.

3. My matrices are rows by columns, row 0 being the topmost and column 0 being the leftmost. They're basically 2-dimensional arrays.

So, what you're saying is that the camera rotation matrix should be the inverse of a regular rotation matrix made from the camera rotation: use the regular (non-negated) camera Euler angles, multiply their matrices in the regular order, then take the inverse of that.
I guess that makes sense, though I still don't know why using the negated Eulers and the opposite multiplication order wouldn't work.

I'll try it.

Also, how are matrices more efficient than using equations derived from their multiplication? For instance, couldn't I work out what the final rotation matrix would be by multiplying the rotation matrices for each axis in the right order on paper, then use that result directly, instead of having the program make and multiply them? Why is creating a translation matrix and multiplying it by the rotation matrix to get a final transform matrix to apply to a point better than just translating the point by adding the components of a translation vector to its coordinates and then rotating with a matrix? And are quaternions even more efficient than matrices?
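To make the comparison concrete, here's the kind of thing I mean (a throwaway sketch; `Combined` is a made-up name). A 4x4 matrix with the rotation in the upper-left 3x3 and the translation in the last column gives the same answer as "rotate with a 3x3, then add the translation vector":

```java
// Compares one combined 4x4 transform against "rotate then add".
final class Combined {
    // Apply a 4x4 transform to a point (x, y, z) with w assumed to be 1.
    static double[] apply4(double[][] m, double[] p) {
        double[] r = new double[3];
        for (int i = 0; i < 3; i++)
            r[i] = m[i][0] * p[0] + m[i][1] * p[1] + m[i][2] * p[2] + m[i][3];
        return r;
    }

    // Rotate with a 3x3 matrix, then add a translation vector.
    static double[] rotateThenTranslate(double[][] rot3, double[] t, double[] p) {
        double[] r = new double[3];
        for (int i = 0; i < 3; i++)
            r[i] = rot3[i][0] * p[0] + rot3[i][1] * p[1] + rot3[i][2] * p[2] + t[i];
        return r;
    }
}
```

The win is mostly in composition: once everything is a matrix, any chain of transforms collapses to one matrix-vector multiply per point.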

Thanks!

4. I did not say any of what you just said.

Any time you wish to display an object you will need to use the transpose of what you would use for a view matrix.

5. Originally Posted by Bubba

I did not say any of what you just said.

Any time you wish to display an object you will need to use the transpose of what you would use for a view matrix.
OK, I see what you were saying. You meant that using the inverse of the view matrix* would be wrong because, where the view matrix transforms things from world space to view space, its inverse would do the opposite and transform from view space to world space. Using that on the world would mess things up.
You might use the inverse of the view matrix to find what world coordinates correspond to some view coordinates.

*However, that assumes the view matrix actually works, which mine doesn't.

O.K. Transpose matrix.
The tutorial I linked to in my first post essentially used the transpose of each individual rotation matrix for each axis, then multiplied those to get the view matrix.
The negatives of various sin values were used, which has the same effect as flipping the matrix across its top-left to bottom-right diagonal (i.e., transposing it).
For instance, the regular matrix for rotation about the x axis, where s = sin(angle) and c = cos(angle), was

1 0 0 0
0 c s 0
0 -s c 0
0 0 0 1

the transpose would be

1 0 0 0
0 c -s 0
0 s c 0
0 0 0 1

which is the same as multiplying the s values by -1.
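A quick check that this transpose really does undo the rotation (`TransposeCheck` is a throwaway name; for a pure rotation matrix, the transpose equals the inverse, so R times its transpose is the identity):

```java
// Verifies R * R^T == I for the 4x4 x-axis rotation shown above.
final class TransposeCheck {
    static double[][] rotX(double t) {
        double c = Math.cos(t), s = Math.sin(t);
        return new double[][]{
            {1,  0, 0, 0},
            {0,  c, s, 0},
            {0, -s, c, 0},
            {0,  0, 0, 1}};
    }

    static double[][] transpose(double[][] m) {
        double[][] r = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++) r[j][i] = m[i][j];
        return r;
    }

    static double[][] mul(double[][] a, double[][] b) {
        double[][] r = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++) r[i][j] += a[i][k] * b[k][j];
        return r;
    }

    static boolean isIdentity(double[][] m) {
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                if (Math.abs(m[i][j] - (i == j ? 1 : 0)) > 1e-9) return false;
        return true;
    }
}
```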

So, do I use the transpose of each axis rotation matrix to make a total rotation matrix, or use the regular axis matrices, multiply them, and then take the transpose of that?
What order do I multiply them in?
(Right now, for a normal rotation, I use y (heading), then x (attitude), then z (bank).)

Also, should I be using 3x3 matrices? Most websites I've been reading use 4x4 matrices to transform points, which results in an extra "w" value, used for texture mapping and for determining whether the point should be rendered. But I'm only doing wire-frame and have no non-visible-geometry culling right now, though I might expand this program in the future.

6. Sorry for bumping; it won't let me edit my post. But I have another question.

Right now my 3D data is stored in instances of a 3dobject class. Each 3dobject has a list of vertices and a list of faces. Each face has an array of ints which represent the vertices of the face by referencing the indexes of the points.

The problem is, for wireframe rendering, I end up drawing most, if not all, edges twice. Most of the faces share edges with other faces, so when it draws one edge of one face and goes on to the other faces, it ends up drawing that edge again, because both faces reference the same 2 points consecutively.

I know that drawing a line is a very easy thing, but I like efficiency. So the obvious solution is to either keep a note of every edge already drawn and not draw it again (bad idea), or re-format my 3D data. For instance, I could have each object store edges, not faces. There are 2 problems I see with this. First, if I wanted to store one 4-sided polygon with faces, it would store the 4 indexes of its verts; but if I wanted to store that same polygon with lines, it would store 4 lines, each with 2 vertices. That's 8 total, 2x the information as with faces. Secondly, if I wanted to do anything other than wireframe later, like texture mapping or physics, storing only edges isn't very useful. So, any ideas?
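One thing I might try: keep the faces as the primary data, and derive a set of unique edges from them once at load time. Something like this sketch (`EdgeSet` is a made-up name), where each edge is packed into a long with the smaller vertex index first so (a, b) and (b, a) count as the same edge:

```java
import java.util.HashSet;
import java.util.Set;

// Derives a de-duplicated edge set from indexed faces, so each shared
// edge is drawn only once in wireframe mode.
final class EdgeSet {
    static Set<Long> uniqueEdges(int[][] faces) {
        Set<Long> edges = new HashSet<>();
        for (int[] face : faces) {
            for (int i = 0; i < face.length; i++) {
                int a = face[i];
                int b = face[(i + 1) % face.length]; // wrap around to close the loop
                // Pack the smaller index into the high bits so order doesn't matter.
                long key = a < b ? ((long) a << 32) | b : ((long) b << 32) | a;
                edges.add(key);
            }
        }
        return edges;
    }
}
```

That way the faces stay available for texture mapping or physics later, and the wireframe pass just iterates the edge set.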

Also, the camera rotation matrix questions still stand.