Here’s the first glimpse of the application: a video shown at FITC Amsterdam 2012 in the Adobe keynote. It shows the first demo version of APEXvj. Very old, but it contains some of the features of the final app and helps in understanding this post. (UPDATE 5.3.2012: The video is no longer there :/ )
http://www.youtube.com/watch?v=ZgLbjva4O2o

Today I open a new chapter in my little quest to synthesize all online music with realtime graphics. The new APEXvj mobile application is waiting for review at the App Store and Android Market. The app will be free for all. I only ask a small fee for extra visuals. Hopefully someone buys them so that I can continue this quest. In this post I’ll open up the long story behind the development and share some technical findings and ideas I’ve come across during the process. The project was epic, and so will be the post.

Chapter 1 - Early childhood education


I was invited to Adobe’s prerelease program very early on. If I remember correctly it was early summer 2010, so something like 1.5 years ago. All this Stage3D stuff was more like a glimmer in its father’s eye back then. Not so much a real thing, but we could have learned the basics even then. All the shaders and shit felt like so much work that I couldn’t find enough stamina to get my head around it. I had other projects going on, like a demo for Assembly, I got married, and on top of that the day job. Later that year I did the Disconnected demo with the Away3D guys. Obviously we used Away3D, and you don’t have to learn much graphics card stuff to create pretty pictures with it. Especially when I was surrounded by people who knew everything.

All along it was obvious that I must create the next generation of APEXvj apps on the GPU. Finally in Spring 2011 I took the bull by the balls and started to dig in. I believe that for people just a little bit smarter than average, the Stage3D API is an easy thing to cope with. I'm not one of those. I'm a slow learner, and even worse, I'm one of those who needs to have fun while learning or it doesn't happen. So I went where I found the most fun: particles.

It was a great learning bench, since with those little bastards you always navigate very deep into the API. When optimizing one must not compromise. So I learned the API and learned how to get the most performance out of it. This phase took me something like 8 months. Obviously I drank beer, played frisbee golf and did other stuff like work in between. Oh, and one baby. My son Akseli ;) (not in the picture)


Chapter 2 - Missed the train? Take the next one


I had a plan to create the new APEXvj browser version in Stage3D and release it in the first tsunami when Adobe came out with the technology. Well, I missed that one since I couldn’t get my skill set strong enough. Luckily they couldn’t release Mobile AIR back then either ;) So I spent endless hours with those particles and such, trying different things to find perfect performance solutions. Finally in Fall 2011 I felt like I could really do something with this technology. I managed to draw a lot of stuff, and fast, on my iPhone 4.

It’s easy to copy a .js lib from the web and do as told in a tutorial. I need to know what a technology smells like in the morning without makeup before I can master it. And that happens slowly. This goes out to all young devs: it is OK to spend months on some minor detail to deeply understand it.

Those conference speakers always say that there’s time if you want there to be. Well.. this is all bullshit. There isn’t time. There’s only you and your sorry ass bending. Weekends are dedicated to my wife and son, but here’s my average weekday from the past 6 months.

1. Baby wakes me up for the last time at about 6 o’clock.
2. Some coffee and 30 or 45 minutes of personal coding.
3. To work. 8 hours of coding like a motherfucker. Advertising, yippee-ki-yay motherfucker stuff.
4. 30-90 minutes of personal coding after work.
5. Home, playing with the boy.
6. 20:00 the little man goes to sleep.
7. 20:00 - 24:00 personal coding.

If someone wants to build something good they must either risk their finances, hand over their freedom, risk their health, or just take time. All of these scenarios have their minuses. Most risk their finances. I don’t want to do that. I have a mortgage and a little kid. Also I have a feeling that an investor would take the freedom eventually. So I chose to “take time” and make it happen slowly.


Chapter 3 - Goal is empty. Why not shoot!?


A few weeks ago I sent a demo to the Adobe people and they loved it. I received a message that they would show it in their presentations, like they did in Amsterdam a couple of days ago. I thought: here’s my shot. So I took some unpaid vacation and finished the application. Yay! After that, during nights, I’ve been creating the new website for APEXvj, taking a million screenshots for different purposes, fooling around with app stores and fine-tuning the app. I’m most satisfied with the result.

Even if no one buys this stuff, I learned a LOT. Here’s some of what I can share with you. Technical mambo nachos coming up:


Mobile AIR Performance


Adobe Mobile AIR Stage3D is fast. Flash on mobile has its limitations, and the big one is the CPU. Also, obviously, the GPU is not even close to desktop performance. But when done right, a lot can be achieved. Here are a couple of things that should be kept in mind when doing particles with mobile Stage3D.

The important lesson here is to define your playground: what can be done, and what should be left to the future for better GPUs. The stuff shown here is just what I found to apply. I can possibly be wrong in some cases.


A) Avoid alpha kill, it’s slow. Do not do the following.

fragmentSource +=       "tex ft0, v1, fs0 <2d,linear> \n";              // Get texture pixel
fragmentSource +=       "sub ft0, ft0, ft0.x \n";                       // prepare for alpha kill
fragmentSource +=       "kil ft0.y \n";                                 // Alpha Kill red pixels
fragmentSource +=       "mul ft0, v2, v2 \n";                           // Add color

Instead use add blend and accept this as your playground. You can even ditch the z-sort in this approach.

// pick one of these; the second call would just override the first
context3d.setBlendFactors(Context3DBlendFactor.SOURCE_ALPHA, Context3DBlendFactor.DESTINATION_ALPHA);
// or
context3d.setBlendFactors(Context3DBlendFactor.SOURCE_ALPHA, Context3DBlendFactor.ONE_MINUS_SOURCE_ALPHA);

fragmentShader += "tex ft0, v0, fs0 <2d,linear> \n";
fragmentShader += "mul ft0, ft0, v1 \n";                                // Add Colors

Don’t try to rotate particle planes/triangles to face the camera. It’s slow.

vertexSource +=         "mov vt2, vc[va0.w] \n";
vertexSource +=         "m33 vt0.xyz, vt2.xyz, vc13     \n";
vertexSource +=         "mov vt0.w, vt2.w \n";
vertexSource +=         "mul vt0.xyz, vt0.xyz, va3.z \n";               // Size of particle
vertexSource +=         "add vt0.xyz, vt0.xyz, vt1.xyz \n";
vertexSource +=         "m44 op, vt0, vc0 \n";                          // transform and output vertex x,y,z

Simply keep your planes/triangles facing the camera closely enough all the time and do only one matrix calculation in the vertex shader. You can rotate around the z-axis as much as you like, though. Accept this as your playground.

vertexShader += "m44 op, vt1, vc0 \n";

_rotationZ = move * 50;
_rotationY =  Math.sin(move) * 25;

Still, no matter what, the big challenge in the mobile world will always be the huge variety of devices. You don’t want to build against the worst-case scenario, and you cannot build against the best case either. If you go the middle way you show your ass in every direction. Luckily there’s a solution for this: do the trick familiar from post-processing. Draw your scene into a texture and render that texture on a plane to the screen. You can now set the resolution (quality) of that texture and satisfy all needs.

// create the texture
var bitmapData:BitmapData = new BitmapData(quality, quality, false, 0x000000);
postprocess_texture = context3d.createTexture(bitmapData.width, bitmapData.height, Context3DTextureFormat.BGRA, true);
postprocess_texture.uploadFromBitmapData(bitmapData);

// create the plane
var w:int = 1;
var h:int = 1;
                       
postprocessVertexBuffer = context3d.createVertexBuffer(4, 4);
postprocessIndexBuffer = context3d.createIndexBuffer(6);
postprocessVertexBuffer.uploadFromVector(Vector.<Number>([    -w, -h, 0, 1,
                                                                w, -h, 1, 1,
                                                                w,  h, 1, 0,
                                                               -w,  h, 0, 0]), 0, 4);
postprocessIndexBuffer.uploadFromVector(Vector.<uint>([2, 1, 0, 3, 2, 0]), 0, 6);

// in every frame
context3d.clear ( bgR, bgG, bgB, 1 );                           // clear the back buffer
context3d.setRenderToTexture(postprocess_texture, true, 0, 0);
context3d.clear(bgR, bgG, bgB, 1);                              // clear the render texture
/*
Render the magic
*/

context3d.setRenderToBackBuffer();
context3d.setTextureAt( 0, postprocess_texture );
// set the quad's program and postprocessVertexBuffer here, then draw it
context3d.drawTriangles(postprocessIndexBuffer);

context3d.present();
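
The quality value above is just the size of that texture. Here's a rough sketch of how you might pick it per device; the thresholds are made up for illustration, not anything APEXvj or Stage3D dictates.

import flash.system.Capabilities;

// Pick a power-of-two render texture size based on the screen.
// The thresholds are placeholders; tune them against real devices.
var screenMax:Number = Math.max(Capabilities.screenResolutionX, Capabilities.screenResolutionY);
var quality:int = 256;                          // safe default for slow devices
if (screenMax >= 1536)
        quality = 1024;                         // big, fast screens
else if (screenMax >= 960)
        quality = 512;                          // mid-range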

Handling touches is also a performance matter. According to my tests it’s fastest to disable mouse interaction right from the start and handle mouse events only from the stage.

// At the root class simply decide the following
this.mouseEnabled = false;
this.mouseChildren = false;
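
And then listen for input on the stage only. A minimal sketch of what I mean (the handler below is just an illustration, not the actual APEXvj code):

import flash.events.MouseEvent;

// one listener on the stage instead of mouse-enabled display objects
stage.addEventListener(MouseEvent.MOUSE_DOWN, onStageDown);

private function onStageDown(event:MouseEvent):void
{
        // hit test event.stageX / event.stageY against your own view rectangles
}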

Use the final keyword wherever you think it can be used.

public final class VisualCubesTunnel extends AbstractVisual

override public final function render(event:Event):void

Avoid calling functions, no matter how painful it might be. You need to find a way to create stuff without calling methods. For example, I have created all visuals so that they take context3d and other stuff from a shared class, but in their render method they all do the same basic routines themselves, for example handling the camera, post-process rendering and background rendering. It isn’t the most elegant way to do it, but the performance gain is huge. And my visual engine doesn’t call the visual classes’ render function; it swaps the ENTER_FRAME event listener.

public function setVisual(position:int):void
{
        currentVisualPosition = position;
        if(currentVisual)
        {
                if(isPlaying)
                {
                        this.removeEventListener(Event.ENTER_FRAME, currentVisual.render);
                }
                currentVisual.deactive();
        }
        currentVisual = visuals[position];
        currentVisual.active();
        if(isPlaying)
        {
                this.addEventListener(Event.ENTER_FRAME, currentVisual.render);
        }
}
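
To make the “no function calls” idea concrete, here is roughly what a visual’s render looks like with the shared routines written out inline; this is a simplified sketch, not the actual APEXvj visual code.

override public final function render(event:Event):void
{
        // camera handling inline, instead of a shared updateCamera() call
        _rotationZ = move * 50;
        _rotationY = Math.sin(move) * 25;

        // scene into the post-process texture
        context3d.setRenderToTexture(postprocess_texture, true, 0, 0);
        context3d.clear(bgR, bgG, bgB, 1);
        // ... draw the particles here ...

        // background + post-process pass, also inline
        context3d.setRenderToBackBuffer();
        context3d.setTextureAt(0, postprocess_texture);
        context3d.drawTriangles(postprocessIndexBuffer);
        context3d.present();
}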

The APEXvj mobile UI runs at a smooth 60fps. This was achieved by following a few basic principles. First of all I set

<depthAndStencil>true</depthAndStencil>

in the application descriptor XML file. This means that every frame is sent to the GPU, and it requires sending a render command to Stage3D on every frame. So when a visual isn’t running I have a little Clear class that does just that.

override public function render(event:Event):void
{
        context3d.clear ( 0.047, 0.047, 0.047, 1 );
        context3d.present();
}

Also, all the scrolling between views is done with sprite.scrollRect, which is by far the most efficient way to scroll. On top of that, all my content has cacheAsBitmap = true, and like I said before, the elements don’t have mouse interaction enabled.
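
A rough sketch of that view setup (viewContainer and scrollY are placeholder names, not the real APEXvj classes):

import flash.geom.Rectangle;

// view content is cached as a bitmap and ignores the mouse
viewContainer.cacheAsBitmap = true;
viewContainer.mouseEnabled = false;
viewContainer.mouseChildren = false;

// scrolling just moves the visible window instead of moving the display objects
viewContainer.scrollRect = new Rectangle(0, scrollY, stage.stageWidth, stage.stageHeight);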

That’s all about performance at this point. Don’t worry, the epic post is not quite over yet :D


Reacting to sound with just leftPeak


In the old browser and desktop versions of APEXvj I used the SoundMixer.computeSpectrum() method a lot. It has a very annoying feature: it fails if you have a YouTube or other video open in another browser tab. When people get that error and are asked to close other tabs, in most cases they close APEXvj instead. I came to the decision that this method cannot be used ever again. The alternative is Sound.extract(), which is a failure too, since it takes too much CPU to calculate the FFT.


So I decided that the only thing I could use would be channel.leftPeak, which gives me a value (0-1) every frame, telling me how loud the music is in that frame. What can one do with just that? Well, it can be used visually in many ways. It can be attached to light brightness. It can also be attached to how much stuff to show on screen, and from it you can infer the intensity of the current phase of the song.

To track the intensity of a song, I add the leftPeak value to a running variable move. I can now use move, for example, in rotation. If the song has an intense phase it rolls faster, and slower when the sound cools down.

var leftPeak:Number = channel.leftPeak;
move += leftPeak;
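
For example (the brightness mapping is just one way to use it, not the exact APEXvj code):

// rotation rolls faster during intense phases, slower when the song cools down
_rotationZ = move * 50;

// leftPeak itself can drive brightness: quiet parts stay dim, loud parts light up
var brightness:Number = 0.3 + leftPeak * 0.7;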

If the leftPeak value reaches a certain peak, like 0.6, I could do a flash or something, but this doesn’t work the same in every possible song. Some songs are quieter and some are extremely intense. So I figured out the following method.

I collect the peak values from the past 10 frames and calculate the average, then check how large the current value is versus that average. This gives me the information about when the song changes dramatically. According to that difference I can do several effects: if it’s big enough I’ll do a camera cut, a flash or a post-process effect.

// averageStorage is pre-filled with averageStorageLength values
averageStorage.shift();
averageStorage.push(leftPeak);
var aveVal:Number = 0;
for(var i:int = 0; i < averageStorageLength; i++)
{
        aveVal += averageStorage[i];
}
var peakAverage:Number = aveVal / averageStorageLength;
var overaverage:Number = leftPeak - (peakAverage * 1.45);
                               
if(overaverage > agressive && time > changetime )
{
/*
do the magic
*/

}

This simple but clever method gives me perfect music-video-style cuts in the visuals for every type of song. I think I’ll do all my cuts in future demos this way. :) I was so overwhelmingly happy when I figured this out! That point where I know the song has a peak can be used in so many ways. For example, in some effects I check how long it has been since the last peak and use that value to determine how strong the shown effect will be.
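
A sketch of that last trick (lastPeakTime, flashAlpha and the scaling are mine, just to show the idea):

// inside the "do the magic" branch, when a peak has been detected
var framesSinceLastPeak:Number = time - lastPeakTime;
lastPeakTime = time;

// the longer the song has stayed calm, the harder the effect hits
var strength:Number = Math.min(framesSinceLastPeak / 120, 1);   // ~2 seconds at 60fps caps it
flashAlpha = strength;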

Anyway, a lot can be done with just leftPeak, and the best thing about it is that it works everywhere, every time. It takes zero CPU and smells like Swedish designers (good and expensive).

OK, that’s it. Hopefully you found something interesting in this exhausting post. The next one will be the grand release post. The approval process probably won’t take more than six months :)