Flash 11’s new Stage3D class introduces a whole new kind of bytecode to Flash: AGAL. Today I’m beginning a series of articles to talk about what AGAL is in the first place, how you can generate its bytecode and, more generally, how these wacky shaders work. Read on for the first article in the series and learn the basics of AGAL.
First things first, we need to clear up a bit of terminology. The term AGAL is used to mean two related things. First, it is the name of the Stage3D shader bytecode format. In this way, it’s like talking about the AVM2 bytecode that we’re used to generating when we compile our AS3, C++ (with Alchemy), or haXe. Second, it is the name of an assembly language that is assembled into AGAL bytecode. I will refer to these as “AGAL bytecode” and “AGAL assembly” to keep the two clear.
It’s worth noting that AGAL assembly is not the only way to generate AGAL bytecode. Currently, these are the ways to generate it (other than a hex editor):
|Language||Level||Compiler/Assembler||Required API|
|AGAL||Assembly||Adobe’s AGALMiniAssembler AS3 class||None (directly use Context3D)|
|Pixel Bender 3D||High-Level||Adobe’s beta compiler||Pixel Bender 3D API|
|HxSL||High-Level||haXe compiler||haXe language|
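As a quick preview of the AGALMiniAssembler route, assembling a one-instruction vertex shader looks roughly like this. This is a sketch; the shader source string itself (AGAL assembly) is the topic of the upcoming articles:

// AGALMiniAssembler ships separately from the Flash API, in com.adobe.utils
import com.adobe.utils.AGALMiniAssembler;
import flash.display3D.Context3DProgramType;
import flash.utils.ByteArray;
 
// Assemble AGAL assembly source into AGAL bytecode
var assembler:AGALMiniAssembler = new AGALMiniAssembler();
assembler.assemble(
	Context3DProgramType.VERTEX, // which kind of shader to assemble
	"m44 op, va0, vc0"           // transform the vertex position by a matrix
);
var vertexShaderAGAL:ByteArray = assembler.agalcode; // the resulting bytecode

The resulting ByteArray is what you pass to Program3D.upload(), as shown below.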
Of these, AGAL assembly is unique in that it provides you with the lowest-level code, so you can squeeze out the most performance and pack in the most features of any of the language options. It also doesn’t require any bulky APIs or switching to haXe, which is an appealing bonus. On the downside, you have to write all of your shaders in an assembly language and therefore end up typing a lot more code that is harder to read. However, shaders are virtually always performance-critical code, so this isn’t an area where you want to skimp on optimization. You also won’t be writing very much shader code compared to your app or game, so it’s not as important that you be able to bang out tens of thousands of lines at breakneck speed. As such, I will only be covering AGAL assembly, as it is currently my shader language of choice.
It’s important to understand some basics about how Stage3D shaders work before you actually start writing them. To begin with, they are split into two parts. The first part is called a “vertex shader” and it is responsible for specifying each vertex’s position. The second part is called a “fragment shader” and it is responsible for specifying the color of each fragment, which is roughly one pixel. These two parts combine to form a “shader program” that is uploaded and used via the Context3D class:
// Create a shader program. It is initially unusable.
var shaderProgram:Program3D = myContext3D.createProgram();
 
// Upload vertex shader and fragment shader AGAL bytecode. It is now usable.
shaderProgram.upload(vertexShaderAGAL, fragmentShaderAGAL);
 
// Use the shader program for subsequent draw operations
myContext3D.setProgram(shaderProgram);
 
// Draw some triangles with the shader program
myContext3D.drawTriangles(someTriangleIndexBuffer);
To be able to intelligently set vertex positions and fragment colors, we need to be able to pass data to the shader program. This typically includes camera transformation matrices, textures, colors, normal vectors, and so forth. Here is a diagram describing how this data flows from your AS3 program to the vertex shader to the fragment shader and ultimately to the target of your drawing: the screen or a texture.
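Each kind of data is bound with a different Context3D method. Here is a sketch of what that looks like, assuming myContext3D, myMatrix, myVertexBuffer, and myTexture have already been created and uploaded:

import flash.display3D.Context3DProgramType;
import flash.display3D.Context3DVertexBufferFormat;
 
// Vertex constants: e.g. a transformation matrix at vertex constant register 0
myContext3D.setProgramConstantsFromMatrix(
	Context3DProgramType.VERTEX, 0, myMatrix, true);
 
// Vertex attributes: e.g. 3-float positions at vertex attribute register 0
myContext3D.setVertexBufferAt(
	0, myVertexBuffer, 0, Context3DVertexBufferFormat.FLOAT_3);
 
// Textures: e.g. a texture at texture sampler register 0
myContext3D.setTextureAt(0, myTexture);
 
// Fragment constants: e.g. an RGBA fog color at fragment constant register 0
myContext3D.setProgramConstantsFromVector(
	Context3DProgramType.FRAGMENT, 0, Vector.<Number>([1, 1, 1, 1]));

Note that varying values have no binding call of their own: they are written by the vertex shader itself, as described below.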
Some of these types require a little explanation:
|Data Type||Bound To||Example Uses|
|Vertex Constants||Vertex Shader||Transformation matrices, bones (for skeletal animation)|
|Vertex Attributes||Vertices||Positions, normals, colors|
|Varying Values||Vertices||Texture coordinates, colors|
|Textures||Fragment Shader||Art-driven colors, normal maps|
|Fragment Constants||Fragment Shader||Fog colors, object-global alpha values|
The trickiest of these types are the “varying values”. These values are computed by the vertex shader on a per-vertex basis. When the triangle is drawn, the varying values are interpolated (linearly and with perspective correction) across the surface of the triangle. For example, if your vertex shader outputs red for one vertex of the triangle, green for another, and blue for the third, and the fragment shader simply outputs the varying value, you would get a triangle like this:
As you’ll see in future articles, this is quite useful for efficiently performing many 3D drawing operations. For example, a simple textured triangle could compute its texture coordinate at every fragment of the triangle, but this would be extremely wasteful as such a computation is quite expensive. It’s much better to compute the texture coordinate at each of the triangle’s vertices and simply interpolate between them using a varying value.
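To make the texture coordinate example concrete, here is a sketch of what the two halves of such a shader program could look like. The AGAL assembly instructions are the subject of the next article, so take this on faith for now: the vertex shader copies each vertex’s texture coordinate into varying register v0, and the fragment shader receives the interpolated value:

// Vertex shader: output the transformed position, pass the texture coordinate along
var vertexShaderSource:String =
	"m44 op, va0, vc0\n" + // position (va0) times transformation matrix (vc0)
	"mov v0, va1";         // copy texture coordinate (va1) into varying v0
 
// Fragment shader: sample the texture at the interpolated coordinate
var fragmentShaderSource:String =
	"tex oc, v0, fs0 <2d,linear,nomip>"; // output = texture fs0 sampled at v0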
That’s all for the first article in the series. Stay tuned for next time when we’ll dig into the AGAL assembly language and start writing some shaders!