In this post I’m going to explain some of my findings from playing around with Adobe’s Stage3D and particles. I have pushed the current version of the particle engine, evoSpicyParticleEngine, to GitHub. I’ll show some examples of how to play with it and explain the tricks used in it.
… but first I want to joyously announce that I do not owe you shit. Nor does anyone else who open sources stuff or releases whatever. So if you don’t like the syntax or whatever, don’t pressure me with feature requests. Get involved by forking my code.
We are the most spoiled generation ever. The notorious West. The nanosecond everything isn’t absolutely perfect we start to whine like children. One can only wonder how complaining about a train running 10 minutes late sounds from the majority’s perspective (including the four billion people who live on less than $2.50 a day).
How utterly ridiculous is it to whine when some extremely sophisticated technology (e.g. the Internet) fails to work at the precious moment one felt like using it?
Or how about whining when some refugee who arrived four months ago from a country in the middle of an oil war fails to pronounce the pizza delivery right?
Here’s a great short YouTube clip from Conan O’Brien’s show related to this.
We produce this nonsense when our perspective is fucked up. Let’s ditch the whining and preserve our efforts.
Here’s my next effort for our playground’s wellbeing (no more preaching):
The idea behind this engine was to be able to render a massive amount of moving particles, which meant some restrictions. For example, it only renders full buffers, it renders triangles instead of quads, and all particles share some properties.
All this stuff can be found on GitHub; you’ll get links at the bottom of this gigantic post. Let’s take a look from the outside. This is how you render basic particles with a sphere-explosion movement:
spicy = new EvoSpicyParticleEngine(stage, 0, false, true);
// Other options: what to do if context3d failed, info about the user's GPU,
// waiting until the engine is ready, z-sorting particles or not,
// the texture for particles and the blend mode of particles.
// How far away particles are rendered
spicy.zFar = 30000;
// Field Of View
spicy.fov = 90;
// count : int (maximum particle count = 1398080), valueClass : IValue (settings for particles: start points, end points, speeds, colors)
spicy.addParticles( 139808, // Particle count
    new ColorGradient([0x3bc1ff, 0x84481b, 0xFFc1ff, 0x3bc1ff]),
    100, 100, 100,    // Start position multipliers
    2500, 2500, 2500, // End position multipliers
    16,               // Size of a single particle
    true,             // Randomize sizes or not
    2);               // Add random value for individual speed
// Set the particle renderer.
spicy.setRenderer(renderer = new RendererLinear(
new ProgramLinear(EaseProgram.LINEAR, false, true), // Shaders
0.00008, // Speed of particles
128, // How many particles to rebirth in every frame
0x1e1b17)); // Background color
// Move the start point
renderer.startPoints.y = -500;
// add extra start point(s) (renderer.startPoints)
renderer.addStartPoint(new StartPoint3D(0, 500, 0));
// Start rendering; the camera code below runs every frame:
var time:int = getTimer();
var angle:Number = time * .0001;
var rad:Number = 1000;
spicy.camera.position.x = rad * Math.sin(angle);
spicy.camera.position.z = rad * Math.cos(angle);
spicy.camera.position.y = 0;
spicy.camera.rotationZ = time * .01;
The code above in action:
Another example, also on GitHub:
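As an aside, the camera motion in the setup code is nothing exotic: it’s a circular orbit in the XZ plane driven by `getTimer()`. The same math in plain terms (a Python sketch purely for illustration; the function name is mine):

```python
import math

def orbit(time_ms, radius=1000.0, speed=0.0001):
    """Return an (x, y, z) camera position orbiting the origin in the XZ plane."""
    angle = time_ms * speed
    return (radius * math.sin(angle), 0.0, radius * math.cos(angle))

x, y, z = orbit(5000)
# The camera stays at a constant distance from the origin:
# x*x + z*z equals radius*radius (within floating point error).
```

Feeding `getTimer()` in as `time_ms` every frame makes the speed frame-rate independent, which is why the engine example multiplies the raw timer value rather than incrementing an angle per frame.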
Looks easy enough? Let’s take a look at some of the tricks.
Like I mentioned earlier, in this engine particles are triangles. The ‘normal’ way would be to build each particle as a quad made of two triangles. That takes four vertices, which is one more than three, and a single triangle can draw a particle just as well.
Common sense, huh? ;)
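A quick back-of-the-envelope check (Python, my own arithmetic; the 64-buffer split is my guess at where the engine’s stated 1398080 maximum comes from, based on the ‘full buffers’ restriction and Stage3D’s 65535-vertices-per-buffer cap):

```python
# Stage3D vertex buffers hold at most 65535 vertices.
VERTS_PER_BUFFER = 65535

# One particle as a triangle (3 vertices) vs. as a quad (2 triangles, 4 vertices):
tri_particles_per_buffer = VERTS_PER_BUFFER // 3   # 21845
quad_particles_per_buffer = VERTS_PER_BUFFER // 4  # 16383

# Triangles fit a third more particles into the same buffer, and
# 64 full triangle buffers give exactly the engine's stated maximum:
max_particles = 64 * tri_particles_per_buffer      # 1398080
```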
Face particles to the camera
This is a trick whose idea and code examples I got from a blog post by Jean-Philippe Auclair, who works at Frima Studio. It’s a great post about particles; make sure to read it if you haven’t already.
The trick goes like this:
1. Precalculate a single triangle and store its corners in GPU constant registers (10, 11, 12). Each particle vertex keeps track of which stored corner it uses via its position component (x, y, z, w <- right there).
data = Vector.<Number>([-1, -1, 0, 1]);
context3d.setProgramConstantsFromVector(Context3DProgramType.VERTEX, 10, data, 1);
data = Vector.<Number>([0, 1, 0, 1]);
context3d.setProgramConstantsFromVector(Context3DProgramType.VERTEX, 11, data, 1);
data = Vector.<Number>([1, -1, 0, 1]);
context3d.setProgramConstantsFromVector(Context3DProgramType.VERTEX, 12, data, 1);
2. In the render method, create a matrix that’s the inverse of the camera matrix and pass it to the GPU:
transformCamera.position = camera.position;
transformCamera.pointAt(camera._renderLookAt, Vector3D.Z_AXIS, _up);
camera._doLookAt = false;
transformCamera.copyToMatrix3D(transformParticle); // Copy camera matrix
transformCamera.invert(); // Invert camera
transformModel.position = this;
context3d.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, transformModel, true);
context3d.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 13, transformParticle, true);
3. In the vertex shader, fetch the position of the stored triangle corner and rotate it with the camera inverse. Scale it to the desired size and translate it to the particle’s position in world space:
agalVertexSource += "m33 vt0.xyz, vt2.xyz, vc13 \n"; // rotate vertex with the inverted camera matrix
agalVertexSource += "mov vt0.w, vt2.w \n"; // set proper w value from the stored triangle
agalVertexSource += "mul vt0.xyz, vt0.xyz, va3.z \n"; // va3.z = size of the individual particle
agalVertexSource += "add vt0.xyz, vt0.xyz, vt1.xyz \n"; // set the new position for the vertex
agalVertexSource += "m44 op, vt0, vc0 \n"; // transform and output vertex x,y,z
Tadaa! Now every particle always faces the camera, and the trick didn’t cost a penny.
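If the matrix juggling feels abstract, here’s the trick in miniature (a Python sketch with my own stand-in names; a pure Y-rotation plays the part of the camera matrix). Rotating the stored corner by the inverse of the camera rotation means that when the view transform later applies the camera rotation, the corner comes out unchanged, i.e. still screen-aligned:

```python
import math

def rot_y(angle):
    """3x3 rotation matrix about the Y axis (stand-in for the camera matrix)."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def mat_vec(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[r][k] * v[k] for k in range(3)) for r in range(3)]

def invert_rot(m):
    """A pure rotation matrix inverts by transposing."""
    return [[m[c][r] for c in range(3)] for r in range(3)]

camera = rot_y(0.7)                 # some arbitrary camera orientation
corner = [1.0, -1.0, 0.0]           # one stored triangle corner (register 12)
rotated = mat_vec(invert_rot(camera), corner)  # what the shader's m33 with vc13 does
# The view transform then applies the camera rotation again,
# and the corner comes back out exactly as stored: still facing the screen.
back = mat_vec(camera, rotated)
```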
Moving particles in vertex shader
In an AGAL vertex shader we cannot accumulate values on vertices, like moving them a bit along the x-axis every frame; the data isn’t stored anywhere. All values live in memory, get pushed to the GPU and are then drawn. Continuous movement could be achieved by storing movement values in a texture, but we do not have access to textures in the vertex shader. This is a challenge, and there’s a trick to get around it: we store the start and end point of every vertex, give every particle a start time value, and send a constantly increasing time value to the vertex shader. Then we simply tween the particle position between those points according to the time and start time values.
This way we can ‘rebirth’ a single particle by setting its start time value to the current time.
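On the CPU the same tween would look roughly like this (a minimal Python sketch with my own names; the engine does this per vertex on the GPU, and without the clamp, since rebirth resets the time instead):

```python
def particle_position(start, end, now, birth_time, speed):
    """Lerp between start and end by elapsed time * speed, clamped to [0, 1]."""
    t = max(0.0, min(1.0, (now - birth_time) * speed))
    return tuple(a + (b - a) * t for a, b in zip(start, end))

# A particle halfway through its life:
pos = particle_position((0, 0, 0), (100, 200, 0), now=5000, birth_time=0, speed=0.0001)
# pos is (50.0, 100.0, 0.0)

# Rebirth: setting birth_time to the current time snaps t back to 0,
# so the particle jumps back to its start point:
reborn = particle_position((0, 0, 0), (100, 200, 0), now=5000, birth_time=5000, speed=0.0001)
# reborn is (0.0, 0.0, 0.0)
```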
This is how it looks in AGAL:
// va3 = individual component for every particle
// va3.x = starttime
// va3.y = speed multiplier
// va3.z = size of particle
agalVertexSource += "sub vt3.x, vc4.x, va3.x \n"; // time - start time
agalVertexSource += "mul vt3.x, vt3.x, va3.y \n"; // multiply with va3.y (individual speed)
// LINEAR MOVEMENT
agalVertexSource += "sub vt6.xyz, va4.xyz, vt1.xyz \n"; // b - a
agalVertexSource += "mul vt7.xyz, vt6.xyz, vt3.x \n"; // movement
agalVertexSource += "add vt1.xyz, vt1.xyz, vt7.xyz \n"; // add move
And that’s how we can move 1.4 million particles in Flash.
Manipulate the movement
AGAL is not a limiting language (except for that ‘no access to textures’ thing). It has a stack of mathematical opcodes, and with those a lot is possible. Here’s how particle movement is eased in an exponential fashion:
// c * (-Math.pow(2, -10 * t/d) + 1) + b;
// where t = time; b = 0; c = 1; d = 1;
// so it’s 1 * (-Math.pow(2, -10 * time/1) + 1) + 0;
// and -Math.pow(2, -10 * time) + 1;
agalVertexSource += "mul vt6.x, vc5.z, vt3.x \n"; // -10 * vt3.x
agalVertexSource += "pow vt7.x, vc5.w, vt6.x \n"; // Math.pow(2, vt6)
agalVertexSource += "neg vt7.x, vt7.x \n"; // -Math.pow(2, vt6)
agalVertexSource += "add vt3.x, vt7.x, vc5.y \n"; // -Math.pow(2, vt6) + 1
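For reference, the same easing curve in plain code (a Python sketch of the four AGAL ops above; going by the comments, vc5 is assumed to hold the constants 1, -10 and 2):

```python
def ease_out_expo(t):
    """-2**(-10 * t) + 1, mirroring the AGAL ops above: mul, pow, neg, add."""
    return -(2.0 ** (-10.0 * t)) + 1.0

# The curve starts fast and flattens out near the end of the particle's life:
# ease_out_expo(0.0) is 0.0, ease_out_expo(0.5) is already 0.96875,
# and ease_out_expo(1.0) lands just shy of 1 (1 - 2**-10).
```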
Get your forks polished
– evoSpicyParticleEngine at GitHub. Also check out the examples there.
Hopefully you’ll learn something from it. You can use it any way you like. Contributing to its development by forking it on GitHub is most welcome. I’m slowly developing it further as time goes by: post-process filters, backgrounds and overlays are on the table. Oh, and lights of course. All of those are actually in my classes already, commented out and waiting for polish.
Peace and love, Simo