Thursday, November 22, 2012

Decade is leaving Blogger.

After a number of years of random and sporadic blogging, Decade is leaving Blogger. If you would like to follow progress as Decade Engine continues its move towards mobile platforms, please check out the website of my consulting company at Raiblin.

Thursday, September 06, 2012

Record video from your iPhone/iPad without any expensive hardware.

In an earlier post I asked how I could record from my iPad, because my Mac Mini struggles to run the iOS simulator and Camtasia at the same time. Google is full of ideas, from building frames which hold a camera steady above your iPad to record the screen of the physical device, to hardware boxes which take an HDMI input and convert it so that it can be played on your computer screen. All of these are pretty expensive and take substantial effort. There is a simple answer: use Reflection App.

Reflection App seems to work by pretending it's an Apple TV. In the same way that you mirror your iPad display to an Apple TV, your Mac will now also appear in the list and your iPad display can be sent to it. I'm not sure how 'legal' this is under Apple's terms and conditions, but it works well and suits my needs. The free version allows you to record up to 10 minutes of video, which will meet the needs of most people wanting to record demos, but for $14.99 you can have the full version with unlimited recording.

On a side note, the same company has a product called AirParrot which allows you to stream your Mac desktop to an Apple TV. Mountain Lion has this functionality built in, but if you have an older Mac (mid-2011 build for Mac Minis) Apple's solution won't work. AirParrot will, and again costs $14.99.

Why pay for an online SVN Repository?

This is my first non-programming/Decade-related post in a long time, but before I start I would like to state that I am making no comment about the merits of SVN over Git and other source control systems. I simply prefer SVN and it has been my source control of choice for many years. I will also try to keep this post from sounding like a rant.

Until recently the Decade source has lived in a free repository on Unfuddle. Their free plan offers 512MB of storage, 1 project, 2 collaborators and so on. Now that Decade spans a desktop version (for Windows, Mac and Linux), a mobile version for iOS and Android, and is spawning some mobile games, the limits on space and allowed projects are stifling development. My instinct was to pay for one of the plans these online repositories offer. The average cost of an online repository with 3GB of space, 10 projects and up to 5 collaborators is about $15 a month. Not a huge amount of money, but it got me thinking about why it costs so much.

Dropbox offers users 3GB of backed-up online storage for free. Granted, the SVN providers need to run SVN on their servers, but it is free software, right? Admin overhead for SVN? Perhaps a little cost. The fact that providing SVN services is a niche market compared to Dropbox, which anyone can use for any media, can also add a little cost, but $180 a year versus $0?

Why limit the number of projects that I can have? If I'm paying $15 a month for 3GB of space, shouldn't I be allowed as many projects as I want so long as I stay under my storage limit? The only answer I can find is that it's business. They charge simply because they can.

I use Cornerstone on my Mac as an SVN client. (At $65 this is an expensive piece of software compared to the many free SVN clients out there, but since the benefit to me outweighs the price it's worth it. I hope this fact goes some way to dispel any suspicion that I'm simply too mean to pay the $15 a month; I just don't think the service provided warrants that cost compared to other generic online backup services.) In Cornerstone, with a few clicks of the mouse, I created an SVN repository on my Dropbox drive. Since the data is in Dropbox it is immediately backed up to the cloud. Since Dropbox doesn't care what I put in my account, I can have as many projects as I wish. If I used a Dropbox account which was specifically for the project and not for personal use, I could supply the details to others and have as many collaborators as needed.

The only issue I can see with this is that there is no level of indirection between me and the data. Deleting data from an online repository through a web portal requires some very deliberate steps and is therefore unlikely to happen by accident. Deleting files from what appears to be a disk on your local machine is very easy and could happen in error, but with this in mind I think any issues can easily be prevented.

As many projects as your allocated storage will allow, no limit on the number of collaborators, and much cheaper than $180 a year, potentially even $0? Simply use Dropbox.

Thoughts?

Tuesday, June 19, 2012

Let there be light

I remember, many years ago when first learning graphics concepts, working through 'Programming Role Playing Games with DirectX'. The CD included a model of a castle with a moat, and I thought it looked terrible. The textures were gaudy, the resolution was low and it was quite jarring and unpleasant to look at. In a later chapter, an introduction to lights and shading, the same model was reused. The difference in results was unbelievable. With the correct shading, depth perception was easier and the scene, despite its initial poor quality, took on a much more natural and realistic feel.

Here is a comparison of the same mesh, with the same texture, rendered from the same point of view with and without shading.


The picture above and the video below both use a simple diffuse lighting calculation to shade the side of the earth which is facing away from the sun. This can be implemented as per-vertex lighting or per-pixel lighting. Each method has pros and cons which I shall briefly discuss here. If I get anything wrong or leave something out, please comment.

(Instead of using the diffuse value to darken the side of the earth facing away from the sun, I use it as a ratio to mix between samples of a day and a night texture. Since the night texture is already colored to show shade and darkness the result is the same, but I also get lights from civilisation on the dark side.)

Per Vertex Lighting
  • Diffuse value is calculated in the vertex shader.
  • Faster, since the diffuse value is only calculated once per vertex (three times per triangle) and is then interpolated across the face of the polygon by the rasteriser, rather than being recalculated for every pixel.
  • Since the diffuse value is calculated per vertex and not per pixel, the value is not always correct and some shade popping occurs. Check out the video to see this.
Vertex Shader

attribute vec4 position;
attribute vec4 normal;
attribute vec2 uv0;

varying vec2 _uv0;
varying float _diffuse;

uniform mat4 modelViewProjectionMatrix;
uniform vec4 normalizedSunPosition;

void main()
{
    //Texture coordinates are needed in the fragment shader
    _uv0 = uv0;

    //Calculate diffuse value
    vec4 nor = normalize(modelViewProjectionMatrix * normal);
    _diffuse = max(dot(nor, normalizedSunPosition), 0.0);
   
    //Translate vertex
    gl_Position = modelViewProjectionMatrix * position;
} 

Fragment Shader

varying lowp vec2 _uv0;
varying lowp float _diffuse;

uniform sampler2D dayTexture;
uniform sampler2D nightTexture;

void main()
{
    gl_FragColor = (texture2D(nightTexture, _uv0) * (1.0 - _diffuse)) + (texture2D(dayTexture, _uv0) * _diffuse);
}


Per Pixel Shading
  • Diffuse value is calculated in the fragment shader.
  • Potentially slower, as there are generally a lot more pixels rendered than vertices, and therefore a lot more diffuse calculations.
  • More realistic, smooth results.
Vertex Shader

attribute vec4 position;
attribute vec4 normal;
attribute vec2 uv0;

varying vec2 _uv0;
varying vec4 _normal;

uniform mat4 modelViewProjectionMatrix;

void main()
{
    _uv0 = uv0;
    _normal = normalize(modelViewProjectionMatrix * normal);
    gl_Position = modelViewProjectionMatrix * position;
}

Fragment Shader

varying lowp vec2 _uv0;
varying lowp vec4 _normal;

uniform sampler2D dayTexture;
uniform sampler2D nightTexture;

uniform lowp vec4 normalizedSunPosition;

void main()
{
    lowp float _diffuse = max(dot(_normal, normalizedSunPosition), 0.0);
   
    gl_FragColor = (texture2D(nightTexture, _uv0) * (1.0 - _diffuse)) + (texture2D(dayTexture, _uv0) * _diffuse);
}




Sunday, June 17, 2012

Let's bring some atmosphere to the party.

Another minor update. Instead of investing a more significant amount of time in adding atmospheric scattering, I decided to simply add a cloud layer to the planet. The cloud texture came as part of the texture set I am using to render the earth.









Initially I bound the cloud textures (the image data and the transparency map) to texture units 1 and 2 (the earth texture is in unit 0) and attempted to render the clouds directly onto the planet. Since the application this software will be used in will only view the planet from high orbit, flattening the clouds directly onto the earth texture wouldn't be an issue. The results were unsatisfying. If I looked online I could probably find some correct GLSL for doing the alpha blending in the shader, rather than by setting the OpenGL state machine with glEnable(GL_BLEND) and glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA). However, I have never tried that before, so I took a quicker approach.

I've created a second sphere for the atmosphere. It is a fraction larger than the earth sphere and the cloud texture is blended onto it. This approach is more costly to render, as I am drawing two spheres instead of one and the alpha blending needs to read the color buffer, however the spheres are relatively low resolution (1681 vertices and 9600 indices, rendered with GL_TRIANGLES to make 3200 polygons per sphere; the sphere is built dynamically at run-time, allowing this resolution to be changed).
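For reference, here is a minimal sketch of that two-pass idea (shader and texture binding are omitted, and the depth-mask handling is common practice rather than anything specific to Decade):

void RenderPlanet(Sphere& earth, Sphere& atmosphere)
{
    // Pass 1: the opaque earth sphere.
    earth.Bind();
    earth.Render();

    // Pass 2: the slightly larger cloud sphere, alpha blended over the earth.
    // GL_ONE / GL_ONE_MINUS_SRC_ALPHA assumes a premultiplied-alpha cloud texture.
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);      // don't write depth for the transparent shell

    atmosphere.Bind();
    atmosphere.Render();

    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
}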

This method also allows the clouds to move easily and independently of the planet, as real clouds do. I don't want to suggest that this wouldn't be possible if I flattened the clouds onto the earth sphere; it probably would be, by doing some texture coordinate scrolling, but it would result in a more complex shader. Slower to run? Perhaps, but definitely more difficult to understand.
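If I ever do flatten the clouds back onto the earth sphere, the scrolling itself would be cheap to drive from the application side. A rough sketch, where cloudShader, SetUniform2f and elapsedSeconds are placeholder names rather than real Decade code:

// Per-frame update: scroll the cloud texture coordinates slowly around the globe.
// The cloud vertex shader would then output  _uv0 = uv0 + uvOffset.
float offset = fmodf(elapsedSeconds * 0.002f, 1.0f);   // wrap into [0, 1)
cloudShader.SetUniform2f("uvOffset", offset, 0.0f);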

There is no code worth showing for this example. The shaders used for the atmosphere layer are pretty much identical to those shown previously.

Next task is to add time of day by creating a light source (the sun) and using texture blending to render the dark side of the earth. I am confident that a standard lighting algorithm will work for this, but instead of using the diffuse lighting value to darken a color (shade that pixel) the diffuse value will be used as a ratio to sample the day and night earth textures.

*Edit*
I've made a short video showing the mobile version of the engine running on a mobile platform, or at least the simulator for that platform.






Monday, June 11, 2012

A nice segue from Decade Engine to Mobile Development.

A friend is in the process of writing a nice iPad application. I shall not go into any detail regarding the app, as it is his idea and not mine to share. The app needs to render the earth, allow the user to rotate the planet, zoom in and out to country level, and also allow the user to touch anywhere on the globe; if a country has been pressed, that country is highlighted and the information is made available to the app layer.

To date he has been using an open-source framework called WhirlyGlobe. This is a pretty impressive framework and I would recommend that you check it out, but after testing it on an iPad 2 and 'The new iPad', within the app it seemed a little slow. Vector files are used to highlight the selected country, raising it above the others. All of this is in very high detail and looks excellent, but that detail comes at a cost. The response on the iPad is sluggish and would probably be even more so with an app sitting above it.

When looking into how we could improve the performance, I suggested that I could use the concepts I developed when programming the original Decade Engine, along with the new features I have been learning while converting the original engine to OpenGL 3/OpenGL ES 2.0.

Here is the first rendering from Decade Mobile. Please note that this video was recorded off my Mac Mini, but the same code (with minor changes which I shall document in a later post) has been built and runs on an iPad and iPhone.







The textures used in this video were purchased from here. Since zooming is only required to country level, and not to the centimeter or meter level that was possible in the original Decade Engine, I thought it overkill to use the procedural sphere technique, so instead I just use a normal sphere. Some WebGL code for generating the vertices of a sphere can be found here.
_______________________________________________________________________________
Sphere Generation (Vertex and Index Buffer) Code

void Sphere::Create(const Vector3 center, GLfloat radius, GLuint precision)
{
    vector<VERTEX_POSITION_UV0> vertices;
   
    GLuint latitudeBands = precision;
    GLuint longitudeBands = precision;
   
    for (GLuint latNumber = 0; latNumber <= latitudeBands; latNumber++)
    {
        GLfloat theta = latNumber * M_PI / latitudeBands;
        GLfloat sinTheta = sinf(theta);
        GLfloat cosTheta = cosf(theta);
       
        for (GLuint longNumber = 0; longNumber <= longitudeBands; longNumber++)
        {
            GLfloat phi = longNumber * 2 * M_PI / longitudeBands;
            GLfloat sinPhi = sinf(phi);
            GLfloat cosPhi = cosf(phi);
           
            GLfloat x = cosPhi * sinTheta;
            GLfloat y = cosTheta;
            GLfloat z = sinPhi * sinTheta;
            GLfloat u = 1.0f - ((GLfloat)longNumber / (GLfloat)longitudeBands);
            GLfloat v = (GLfloat)latNumber / (GLfloat)latitudeBands;
           
            VERTEX_POSITION_UV0 vertex;
            vertex.Position = Point4(radius * x, radius * y, radius * z, 1.0f);
            vertex.U0 = u;
            vertex.V0 = 1.0f - v;
            vertices.push_back(vertex);
        }
    }
   
    vector<GLuint> indices;
    for (GLuint latNumber = 0; latNumber < latitudeBands; latNumber++)
    {
        for (GLuint longNumber = 0; longNumber < longitudeBands; longNumber++)
        {
            GLuint first = (latNumber * (longitudeBands + 1)) + longNumber;
            GLuint second = first + longitudeBands + 1;
           
            indices.push_back(first);
            indices.push_back(second);
            indices.push_back(first + 1);
           
            indices.push_back(second);
            indices.push_back(second + 1);
            indices.push_back(first + 1);
        }
    }
    

    vertexBuffer.Create((float*)&vertices[0], VERTEX_POSITION_UV0::GetFloatsInFormat(), vertices.size(), VERTEX_POSITION_UV0::GetFormat());
   
    indexBuffer.Create(indices, indices.size());
}

void Sphere::Bind()
{
    vertexBuffer.Bind();
    indexBuffer.Bind();
}

void Sphere::Render()
{
    vertexBuffer.Render(&indexBuffer, GL_TRIANGLES);
}
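
As a usage sketch (assuming Vector3 has the obvious three-float constructor), this is roughly how a sphere is created and drawn. A precision of 40 gives the 1681 vertices (41 x 41) and 9600 indices (40 x 40 x 6) mentioned in the atmosphere post above:

Sphere earth;
earth.Create(Vector3(0.0f, 0.0f, 0.0f), 1.0f, 40);   // unit sphere, 3200 triangles

// Each frame, after the shader and earth texture have been bound:
earth.Bind();
earth.Render();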
_________________________________________________________________________________
Vertex Shader

uniform mat4 mvp;

in vec4 position;
in vec2 uv0;

out vec2 textureCoord0;

void main (void)
{
    textureCoord0 = uv0;
    gl_Position = mvp * position;
}
_________________________________________________________________________________
Fragment Shader

in vec2 textureCoord0;
uniform sampler2D texture0;

out vec4 fragColor;

void main(void)
{
    fragColor = texture(texture0, textureCoord0);
}

Wednesday, May 16, 2012

Creating an OpenGL 3 application on Mac

Except for a brief adventure into the world of Ubuntu, most development of Decade was done on Windows, so to transition to Mac and iOS, new projects had to be created.

Using Xcode to create an iOS (iPhone or iPad) application, the IDE pretty much does all of the setup for you when you select an OpenGL project.


Modifying 

self.context = [[[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2] autorelease];

allows you to choose between OpenGL ES 1 and OpenGL ES 2.

Creating an OS X application does not present the same OpenGL options. The setup is your responsibility, and I found to my frustration that a lot of the tutorials online are insufficient. The standard approach seems to be to create your own view which inherits from NSOpenGLView, and Cocoa handles a lot of the setup for you. Rather than repeat what is already documented on many sites, you can find an informative and easy to follow tutorial here.

This initially worked great. Using the fixed-function pipeline I could position, rotate and render a cube. Since OpenGL ES 2 does not support the fixed-function pipeline, I needed to modify this code to remove all fixed-function calls and use GLSL, the programmable pipeline, instead.

The issue was obvious immediately: the shaders would not compile.

glGetShaderiv(shader, GL_COMPILE_STATUS, &status);

always returned 0, and

glGetShaderInfoLog(shader, logLength, &logLength, log);

the function used to retrieve the errors when compiling shaders, always returned an empty string. It took quite a long time browsing forums to find out why this was happening. It turns out that Apple has decided that when NSOpenGLView is used as above, it will always use OpenGL version 1, which doesn't support shaders. Interface Builder does not allow you to change the version of OpenGL used. This makes me question why the code even compiled and ran if it was using a version of OpenGL which did not support shaders.
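
For anyone hitting the same wall, the compile-and-check pattern looks roughly like this (a sketch, not the exact Decade code):

#include <OpenGL/gl3.h>   // OS X core profile header
#include <cstdio>
#include <vector>

GLuint CompileShader(GLenum type, const char* source)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &source, NULL);
    glCompileShader(shader);

    // With the default Interface Builder NSOpenGLView this is where it fell apart:
    // status came back 0 and the info log was empty.
    GLint status = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    if (status == 0)
    {
        GLint logLength = 0;
        glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &logLength);
        if (logLength > 0)
        {
            std::vector<char> log(logLength);
            glGetShaderInfoLog(shader, logLength, NULL, &log[0]);
            printf("Shader failed to compile:\n%s\n", &log[0]);
        }
        glDeleteShader(shader);
        return 0;
    }
    return shader;
}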

The Solution:
Create a class which inherits from NSObject and build the NSOpenGLView yourself in code.
My interface now looks like

@interface DecadeEngine : NSObject

@property (nonatomic, readwrite, retain) IBOutlet NSWindow *window;
@property (nonatomic, readwrite, retain) IBOutlet NSOpenGLView *view;

@end

and OpenGL can be initialised in

- (void)awakeFromNib
{
     NSOpenGLPixelFormatAttribute pixelFormatAttributes[] =
     {
          NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion3_2Core,
          NSOpenGLPFAColorSize, 24,
          NSOpenGLPFAAlphaSize, 8,
          NSOpenGLPFADepthSize, 32,
          NSOpenGLPFADoubleBuffer,
          NSOpenGLPFAAccelerated,
          NSOpenGLPFANoRecovery,
          0
     };
    NSOpenGLPixelFormat *pixelFormat = [[[NSOpenGLPixelFormat alloc] initWithAttributes:pixelFormatAttributes] autorelease];
    [self setView:[[[NSOpenGLView alloc] initWithFrame:[[[self window] contentView] bounds] pixelFormat:pixelFormat] autorelease]];
    [[[self window] contentView] addSubview:[self view]];
}

The shaders now compiled successfully and I could again see the rotating cubes on screen. :)

Thursday, May 10, 2012

Decade goes Mobile

The development of Decade in its original form (a desktop application, mainly developed in Visual Studio on Windows) is pretty much dead. Perhaps, like the zombies that are so popular these days, it will come back to life some day, but for now the source is sitting in an online repository gathering dust.

I have started to play around with graphics on mobile devices and OpenGL ES 2.0. I remember, many years ago when I first started Decade Engine, how much motivation I received from writing blog posts, especially when comments were left. Hoping to receive the same motivation this time, I am going to repeat the blogging process and will hopefully write some interesting and informative posts.

Staying true to my long-standing belief that C++ is the best game development language means that I can reuse a lot of Decade Engine code, which will hopefully speed things up a little. Does this limit which mobile devices I can develop for? Not really.
  • On iOS (iPhone and iPad) one can easily mix C or C++ with Objective-C. To make an Objective-C file compile as Objective-C++, so that it can contain C++, you simply rename its extension from .m to .mm.
  • On Android I am going to use the exact same C++ code that I write for the iOS game. There is an Eclipse plugin called "Sequoyah" which allows you to compile and debug native code; however, the Android emulator does not support OpenGL ES 2.0, so pretty much anything I write will not run on it. Because of this, the iOS version will take priority until I purchase an Android device.
I do not want to give too much away about the game I plan on making, firstly because I think it's a nice idea and secondly because I haven't actually figured it all out yet. In basic terms, it will be a 2D puzzle game with a slight emphasis on physics. The closest comparisons I can think of would be "Feed Me Oil" or "Where's My Water?", but that is a very distant connection.

Now to the technology... My first foray into mobile development was to implement a plasma shader. I thought that a fragment shader implementation of good old Perlin noise would be perfect for a plasma effect. Not yet being familiar with loading resources on a mobile device, I decided to use a self-contained noise implementation which does not require a permutation texture. An excellent and interactive example of such a shader can be found here.


The screenshot is taken from the demo running on an iPad 3. The frame rate is terrible on the simulator, so taking a video there isn't an option. I have tried to take a video of my iPad, but given that I use my phone camera it is very jerky and poor quality. Is there any good way to capture video of an iPad screen?

There is nothing really special about this. It runs very poorly on an iPhone 3GS and runs OK, but not great, on an iPad 2 and an iPad 3. I am also unsure why there is visible banding instead of smooth blending between colors. The color of a given pixel is calculated as

float n = noise(vec3(position, time));

//Red and yellow
vec3 col1 = vec3(n, 0.0, 0.0);
vec3 col2 = vec3(1.0 - n, 1.0 - n, 0.0);

gl_FragColor = vec4((col1 + col2) / 2.0, 1.0);

Any obvious bug here?

The next stage will be to add a permutation texture to the shader. I believe that this method is a little more work up front, but should be faster, as the permutation data (or at least an approximation of it) does not need to be calculated each time the shader runs. (The iPad 3 retina display is 2048x1536 pixels. The fragment shader is run for each pixel rendered, and my plasma is full screen, therefore the fragment shader runs 3,145,728 times per frame. That is a lot of potential calculations which can be avoided.)
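
As a rough sketch of the upload side, and only a sketch since I haven't written it yet: a 256x1 lookup texture holding a shuffled 0-255 permutation table, which the fragment shader could then fetch with texture2D instead of recomputing a hash. (Real GPU Perlin implementations usually pack gradient data into the texture as well; this just shows the idea.)

#include <OpenGLES/ES2/gl.h>
#include <cstdlib>

GLuint CreatePermutationTexture()
{
    // Fill 0..255 and shuffle it to build the permutation table.
    unsigned char perm[256];
    for (int i = 0; i < 256; ++i)
        perm[i] = (unsigned char)i;
    for (int i = 255; i > 0; --i)
    {
        int j = rand() % (i + 1);
        unsigned char tmp = perm[i];
        perm[i] = perm[j];
        perm[j] = tmp;
    }

    // Upload as a 256x1 single-channel texture. Nearest filtering and repeat
    // wrapping make texture2D behave like an array lookup that wraps at 256.
    GLuint texture = 0;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 256, 1, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, perm);
    return texture;
}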

That's pretty much all for now. It's good to be back, and I hope to post more than once or twice a year.

Ciaran