Stepping Through The Stage3D Pipeline
My article Stage3D Pipeline in a Nutshell is a conceptual journey through the phases of the 3D pipeline, but today I’m going to talk about the practical side. What code do you write if you want to transform points and vectors from one space to another? This article will show you how to do that.
First, let’s list the “spaces” that are the steps of the 3D pipeline:
- Model Space – each model at the center of the coordinate system
- World Space – all models positioned relative to each other
- Camera/View Space – all models positioned relative to the camera
- Clip Space – all models relative to the view frustum
- Viewport Space – all models relative to the viewport
Model space is easy because that’s where models start. The first real transformation between coordinate systems is when points move from model space to world space. This transformation is done by applying a 4×4 matrix to each point (e.g. via Matrix3D.transformVector). This matrix is typically the result of multiplying several matrices together so that the single matrix includes all of their combined effects. For example, you might multiply a translation matrix (to position the model), a rotation matrix (to orient the model), and a scale matrix (to size it correctly) to form your model->world matrix. In any case, you’ll often create this matrix every frame as the model moves about the scene and apply it to each point of each triangle in the model’s meshes.
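For example, building a model->world matrix might look like this (the scale, rotation, and translation values are just placeholders):

import flash.geom.Matrix3D;
import flash.geom.Vector3D;

// Compose scale, then rotation, then translation into one matrix
var modelToWorld:Matrix3D = new Matrix3D();
modelToWorld.appendScale(2, 2, 2);                // size it correctly
modelToWorld.appendRotation(45, Vector3D.Y_AXIS); // orient the model
modelToWorld.appendTranslation(10, 0, -5);        // position the model

// Apply it to a model space point to get a world space point
var modelPoint:Vector3D = new Vector3D(1, 0, 0, 1);
var worldPoint:Vector3D = modelToWorld.transformVector(modelPoint);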
Next the model moves into camera/view space by multiplying points with a matrix created by the camera. In the case of my Simple 3D Camera, this is its __worldToView matrix. The camera class has done all the hard work so you don’t have to.
After camera/view space comes clip space. Again, this is handled by the camera class so you don’t have to do the math yourself. Simply multiply points in camera/view space with Camera3D.__viewToClip and you’ll have them in clip space. In practice there’s little use for points in camera/view space, so it’s very common to go straight from world space to clip space. With Camera3D, you can do this by multiplying your world space points with the Camera3D.__worldToClip matrix.
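For example, stepping a single world space point through these spaces might look like the following sketch. It assumes Camera3D exposes public getters for these matrices, along the lines of the worldToClipMatrix getter used later in this article; the exact getter names here are assumptions.

// World space -> camera/view space
var viewPoint:Vector3D = camera.worldToViewMatrix.transformVector(worldPoint);

// Camera/view space -> clip space
var clipPoint:Vector3D = camera.viewToClipMatrix.transformVector(viewPoint);

// Or skip camera/view space entirely: world space -> clip space
var clipDirect:Vector3D = camera.worldToClipMatrix.transformVector(worldPoint);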
At this point you are usually done with your transformations. In pseudo-code, your transformational journey has looked like this:
// AS3
for each model
    modelToClip = model.modelToWorld * camera.worldToClip
    upload modelToClip to vertex shader as a constant

// vertex shader
output vertex = modelToClip * vertex position
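Concretely, that might look like the following sketch, assuming the Camera3D class from my camera article and an already-initialized Context3D (model.modelToWorld is an assumed property name):

// Combine model->world and world->clip into a single model->clip matrix
var modelToClip:Matrix3D = model.modelToWorld.clone();
modelToClip.append(camera.worldToClipMatrix);

// Upload it to vertex constant registers vc0-vc3 (transposed for AGAL's m44)
context3D.setProgramConstantsFromMatrix(
	Context3DProgramType.VERTEX, 0, modelToClip, true
);

// The vertex shader then needs just one instruction:
//   m44 op, va0, vc0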
After that, the GPU will handle the clip->viewport transformation based on the parameters you’ve set in Context3D.configureBackBuffer and Stage3D.x/y to specify the X, Y, width, and height of the viewport. However, what if you wanted to do that work yourself? The first step is to take the clip space point and perform what’s called the perspective divide. This is actually very simple: since a clip space point is a 4D point, all you do is divide its X, Y, and Z components by its W component. This is such a common operation that it’s already built into the Flash API in the form of Vector3D.project. Now you will have a point that is on the “viewplane”. Here, the point is in 2D in the range of [-1:1] in both X and Y. The remaining job is to transform this into pixels in the viewport rectangle like so:
/**
 * Transform a point from clip space to viewport space
 * @param viewportRect The viewport rectangle you specified in Stage3D.x/y
 *                     and Context3D.configureBackBuffer
 * @param world World space point you want to transform to the viewport
 * @param camera Camera whose clip space is being transformed from
 */
function clipToViewport(viewportRect:Rectangle, world:Vector3D, camera:Camera3D): Vector3D
{
	// World space -> clip space
	var clip:Vector3D = camera.worldToClipMatrix.transformVector(world);

	// Perspective divide: divide X, Y, and Z by W
	var viewplane:Vector3D = clip.clone();
	viewplane.project();

	// Viewplane [-1:1] -> viewport pixels
	var viewport:Vector3D = viewplane.clone();
	viewport.x = ((viewport.x+1)/2) * viewportRect.width + viewportRect.x;
	viewport.y = ((viewport.y+1)/2) * viewportRect.height + viewportRect.y;

	return viewport;
}
Note that the above function works, but it is not optimized. For example, it allocates two new Vector3D objects in the process. Optimizing is left as an exercise for the reader.
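As a quick usage sketch, with an 800×600 viewport at the stage origin, an arbitrary world space point, and camera assumed to be an already-configured Camera3D:

var viewportRect:Rectangle = new Rectangle(0, 0, 800, 600);
var worldPoint:Vector3D = new Vector3D(5, 2, 20, 1);
var pixel:Vector3D = clipToViewport(viewportRect, worldPoint, camera);
trace("Viewport pixel: (" + pixel.x + ", " + pixel.y + ")");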
This concludes the practical steps you can take to transform points through the 3D pipeline yourself, even those normally covered by the GPU. If you’ve spotted a bug or have a question or suggestion, please leave a comment!
#1 by StimpY on June 25th, 2012 ·
I have tested this code – in my project it doesn’t work :(
#2 by ben w on June 25th, 2012 ·
Hey Jackson,
Another thing to consider with such a pipeline is that you can do the model-to-world transformation on the GPU.
While this poses the issue of a large number of technically unnecessary matrix multiplications (one additional m44 per vertex), it also has some sweet benefits.
1. In most of my tests the bottleneck is on the CPU, so any work that can be moved to the GPU is a win in most cases. Instead of doing one matrix-matrix multiplication on the CPU per object, you do it all on the GPU in the vertex shader (one matrix multiplication on the CPU vs. potentially thousands on the GPU).
2. It also allows you to modify the vertex position in local or world space in the vertex shader before it gets transformed all the way into clip space, which is another win if you want to change positions without having to update the buffers.
For static geometry or very large meshes it will probably be worthwhile to do it on the CPU, but I’m just throwing it out there as an alternative route ;)
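For illustration, the two-step transform might look something like this in the vertex shader (AGAL, with register assignments that are just an example):

// AGAL vertex shader source (assembled with e.g. Adobe's AGALMiniAssembler)
var vertexShader:String =
	"m44 vt0, va0, vc0\n" +  // model space -> world space (vc0-vc3 = modelToWorld)
	"m44 op, vt0, vc4";      // world space -> clip space (vc4-vc7 = worldToClip)

Anything you do to vt0 between those two instructions happens in world space without touching the vertex buffer.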
#3 by jackson on June 25th, 2012 ·
This is a very good point. There are many operations that you can move from the CPU to the GPU or vice versa. Additionally, having a vertex shader means you can insert all kinds of transformations into the basic pipeline I described above. For example, you can do skeletal animation on the GPU.
#4 by Martin on November 13th, 2012 ·
clipToViewport() -> this function seems pretty buggy.
1) there are 2 identically named variables for viewport. One of them is a Vector3D and the other is a Rectangle.
So I suppose in this line:
viewport.x = ((viewport.x+1)/2) * viewportRect.width + viewportRect.x;
– this should be (viewportRect.x+1) right ??
2) the function is supposed to return void, instead it returns a Vector3D?
#5 by jackson on November 16th, 2012 ·
Thanks for pointing these out. I’ve changed the return type and renamed the viewport parameter to viewportRect.
#6 by MartinV on June 6th, 2013 ·
Hey Jackson,
All this is very clear to me, but I have an additional question regarding rendering to a texture.
The shader pipeline outputs vertices in clip space, in the -1..1 range. After that, the GPU maps them to viewport space. When rendering to the backbuffer, viewport space is set using Context3D.configureBackBuffer and Stage3D.x/y.
Is there a way to set up viewport space when rendering to a texture (and not the backbuffer)? I don’t see any API for it (similar to DirectX’s IDirect3DDevice9::SetViewport function). Stage3D has setScissorRectangle, but that is only a scissor rectangle and doesn’t adjust the viewport area: when rendering to a texture, the rendered image would still be scaled as if rendering to the whole texture, but part of it would be cropped, instead of the image being scaled down to the viewport.
This is what I’m trying to do: I’m adding post-processing to my pipeline, which means I need to render everything to a texture first and then copy it (with some processing) to the backbuffer. Let’s say the backbuffer is 800×600; due to the lack of rectangular render targets in Stage3D, I need to render to a 1024×1024 texture first. Ideally, when rendering to the texture, I would set the viewport to 800×600 to get a 1:1 mapping of pixels. But I don’t see an easy way to do it.
I could:
– Adjust the projection matrix to make it see more of the world, and then crop the center I need. Easy with an ortho projection, not so much with perspective. Also, I would need a different projection matrix depending on what resolution I render to (some render targets go to smaller textures…)
– I could add a few instructions at the end of every vertex shader to scale the output in clip space. Again, complicated on multiple levels: it would make the projection matrix no longer match the view frustum or the inverse clip-to-world transformation.
There’s gotta be a better solution. Would you know of one? I still can’t believe there’s no API for this.
#7 by jackson on June 7th, 2013 ·
Nice, clear description of your problem and possible solutions. I describe a similar process in my post-processing article. Looking over that code again, it looks like it wasn’t necessary to do anything to configure rendering to the texture other than upsizing it to the next power-of-two (as you point out) and creating the texture with the optimizeForRenderToTexture flag set to true. Have you tried that out?
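For reference, a minimal sketch of that setup (sizes from MartinV’s example; context3D is assumed to be an already-initialized Context3D):

// Create a power-of-two render target with optimizeForRenderToTexture set
var target:Texture = context3D.createTexture(
	1024, 1024, Context3DTextureFormat.BGRA, true
);

// Render the scene into the texture instead of the backbuffer
context3D.setRenderToTexture(target, true); // enable depth and stencil
// ...draw the scene, then switch back and post-process to the backbuffer...
context3D.setRenderToBackBuffer();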
#8 by MartinV on June 11th, 2013 ·
Thanks for the comment, Jackson.
In the end I went with this solution:
– When rendering to a viewport in a texture (and even the main backbuffer), before uploading the worldViewProjection matrix to the shader, I append a scale/offset to it to transform clip space to viewport space. In addition to this, I also use setScissorRectangle to clip to the viewport.
This works well enough with pretty easy math.
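A sketch of that append for the 800×600-in-1024×1024 case might look like this (which corner the viewport lands in depends on your Y convention; camera.worldToClipMatrix is the getter used in the article above):

// Scale clip space so it maps to an 800x600 region of the 1024x1024 texture
var sx:Number = 800 / 1024;
var sy:Number = 600 / 1024;
var viewportFix:Matrix3D = new Matrix3D();
viewportFix.appendScale(sx, sy, 1);
// Offset the scaled region into a corner; the translation is multiplied
// by W, so it survives the perspective divide intact
viewportFix.appendTranslation(sx - 1, sy - 1, 0);

// Append to the usual world->clip matrix before uploading to the shader
var worldViewProjection:Matrix3D = camera.worldToClipMatrix.clone();
worldViewProjection.append(viewportFix);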
#9 by jackson on June 11th, 2013 ·
Glad to hear you’ve got it working.