How to Embed WebGL into a WordPress Post

Embedding into WordPress

This information was originally part of WebGL Lesson One: Getting Started, but I thought it would be more useful if I also broke it out into a post on its own.

Embedding WebGL into WordPress can be a little tricky, but if you use the right tags and the HTML editor it can be done without too much trouble! You’ll need to insert a canvas in your post, script includes for any third-party libraries, and a script body for your main script (this can also be an include).

First, place a canvas tag where you would like the graphics to appear:

<pre><canvas id="canvas" width="550" height="375">Your browser does not support the canvas tag. This is a static example of what would be seen.</canvas></pre>

Then, include any necessary scripts:

<pre><script type="text/javascript" src="http://www.learnopengles.com/wordpress/wp-content/uploads/2011/06/webgl-utils.js"></script></pre>

Embed any additional scripts that you need, such as the main body for your WebGL script:

<pre><script type="text/javascript">
/**
* Lesson_one.js
*/

...

</script></pre>

The <pre> tag is important; otherwise WordPress will mangle your scripts and insert random paragraph tags and other stuff inside. Also, once you’ve inserted this code, you have to stick to using the HTML editor, as the visual editor will also mangle or delete your scripts.

If you’ve embedded your scripts correctly, then your WebGL canvas should appear, like below:

WebGL lesson one, example of what would be seen in a browser.

Hope this helps out some people out there!








Android Lesson Four: Introducing Basic Texturing

Basic texturing.

This is the fourth tutorial in our Android series. In this lesson, we’re going to build on what we learned in lesson three and learn how to add texturing. We’ll look at how to read an image from the application resources, load this image into OpenGL ES, and display it on the screen.

Follow along with me and you’ll understand basic texturing in no time flat!

Assumptions and prerequisites

Each lesson in this series builds on the lesson before it. This lesson is an extension of lesson three, so please be sure to review that lesson before continuing on.

The basics of texturing

The art of texture mapping (along with lighting) is one of the most important parts of building up a realistic-looking 3D world. Without texture mapping, everything is smoothly shaded and looks quite artificial, like an old console game from the 90s.

The first games to start heavily using textures, such as Doom and Duke Nukem 3D, were able to greatly enhance the realism of the gameplay through the added visual impact — these were games that could start to truly scare us if played at night in the dark.

Here’s a look at a scene, without and with texturing:

Per fragment lighting; centered between four vertices of a square.

Per fragment lighting with texturing; centered between four vertices of a square.

In the image on the left, the scene is lit with per-pixel lighting and colored, but otherwise appears very smooth; there aren’t many places in real life where we’d walk into a room full of smooth-shaded objects like these cubes. In the image on the right, the same scene has been textured. The ambient lighting has also been increased, because adding textures darkens the overall scene, and the boost lets you see the effect of texturing on the side cubes as well. The cubes have the same number of polygons as before, but they appear a lot more detailed with the new texture.

For those who are curious, the texture source is from public domain textures.

Texture coordinates

In OpenGL, texture coordinates are usually written as (s, t) rather than (x, y). (s, t) represents a texel on the texture, which is then mapped to the polygon. Another thing to note is that these texture coordinates behave like other OpenGL coordinates: the t (or y) axis points upward, so values increase the higher you go.

In most computer images, the y axis points downward: the top-left corner of the image is (0, 0), and y values increase as you move down. In other words, the y axis is flipped between OpenGL’s coordinate system and most computer images, and this is something you need to take into account.
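To make the flip concrete, here’s a small hypothetical helper in Java (the names pixelToTexCoord, pixelX, pixelY, imageWidth, and imageHeight are just for illustration and not part of the lesson’s code) that converts a pixel position in an image into (s, t) texture coordinates:

// Hypothetical helper: convert a pixel position in an image (y increases downward)
// into OpenGL texture coordinates (t increases upward).
public static float[] pixelToTexCoord(float pixelX, float pixelY, float imageWidth, float imageHeight)
{
	final float s = pixelX / imageWidth;
	final float t = 1.0f - (pixelY / imageHeight);	// Flip the y axis.
	return new float[] {s, t};
}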

OpenGL's texture coordinate system.

The basics of texture mapping

In this lesson, we will look at regular 2D textures (GL_TEXTURE_2D) with red, green, and blue color information (GL_RGB). OpenGL ES also offers other texture modes that let you do different and more specialized effects. We’ll look at point sampling using GL_NEAREST. GL_LINEAR and mip-mapping will be covered in a future lesson.

Let’s start getting into the code and see how to start using basic texturing in Android!

Vertex shader

We’re going to take our per-pixel lighting shader from the previous lesson, and add texturing support. Here are the new changes:

attribute vec2 a_TexCoordinate; // Per-vertex texture coordinate information we will pass in.

...

varying vec2 v_TexCoordinate;   // This will be passed into the fragment shader.

...
// Pass through the texture coordinate.
v_TexCoordinate = a_TexCoordinate;

In the vertex shader, we add a new attribute of type vec2 (an array with two components) that will take in texture coordinate information as input. This will be per-vertex, like the position, color, and normal data. We also add a new varying that will pass this data through to the fragment shader via linear interpolation across the surface of the triangle.

Fragment shader
uniform sampler2D u_Texture;    // The input texture.

...

varying vec2 v_TexCoordinate; // Interpolated texture coordinate per fragment.

...

// Add attenuation.
 diffuse = diffuse * (1.0 / (1.0 + (0.10 * distance)));

...

// Add ambient lighting
 diffuse = diffuse + 0.3;

...

// Multiply the color by the diffuse illumination level and texture value to get final output color.

gl_FragColor = (v_Color * diffuse * texture2D(u_Texture, v_TexCoordinate));

We add a new uniform of type sampler2D to represent the actual texture data (as opposed to texture coordinates). The varying passes in the interpolated texture coordinates from the vertex shader, and we call texture2D(texture, textureCoordinate) to read in the value of the texture at the current coordinate. We then take this value and multiply it with the other terms to get the final output color.

Adding in a texture this way darkens the overall scene somewhat, so we also boost up the ambient lighting a bit and reduce the lighting attenuation.

Loading in the texture from an image file
	public static int loadTexture(final Context context, final int resourceId)
	{
		final int[] textureHandle = new int[1];

		GLES20.glGenTextures(1, textureHandle, 0);

		if (textureHandle[0] != 0)
		{
			final BitmapFactory.Options options = new BitmapFactory.Options();
			options.inScaled = false;	// No pre-scaling

			// Read in the resource
			final Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resourceId, options);

			// Bind to the texture in OpenGL
			GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);

			// Set filtering
			GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
			GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);

			// Load the bitmap into the bound texture.
			GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);

			// Recycle the bitmap, since its data has been loaded into OpenGL.
			bitmap.recycle();
		}

		if (textureHandle[0] == 0)
		{
			throw new RuntimeException("Error loading texture.");
		}

		return textureHandle[0];
	}

This bit of code will read in a graphics file from your Android res folder and load it into OpenGL. I’ll explain what each part does.

We first need to ask OpenGL to create a new handle for us. This handle serves as a unique identifier, and we use it whenever we want to refer to the same texture in OpenGL.

final int[] textureHandle = new int[1];
GLES20.glGenTextures(1, textureHandle, 0);

glGenTextures() can generate multiple handles at the same time; here we generate just one.
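For example (a minimal sketch, not part of the lesson’s code), generating three handles in a single call would look like this:

// glGenTextures() can fill an array with several texture handles in one call.
final int[] textureHandles = new int[3];
GLES20.glGenTextures(3, textureHandles, 0);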

Once we have a texture handle, we use it to load the texture. First, we need to get the texture in a format that OpenGL will understand. We can’t just feed it raw data from a PNG or JPG, because it won’t understand that. The first step that we need to do is to decode the image file into an Android Bitmap object:

final BitmapFactory.Options options = new BitmapFactory.Options();
options.inScaled = false;	// No pre-scaling

// Read in the resource
final Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resourceId, options);

By default, Android applies pre-scaling to bitmaps depending on the resolution of your device and which resource folder you placed the image in. We don’t want Android to scale our bitmap at all, so to be sure, we set inScaled to false.

// Bind to the texture in OpenGL
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);

// Set filtering
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);

We then bind to the texture and set a couple of parameters. Binding to a texture tells OpenGL that subsequent OpenGL calls should affect this texture. We set the default filters to GL_NEAREST, which is the quickest and also the roughest form of filtering. All it does is pick the nearest texel at each point on the screen, which can lead to graphical artifacts and aliasing.

  • GL_TEXTURE_MIN_FILTER — This tells OpenGL what type of filtering to apply when drawing the texture smaller than the original size in pixels.
  • GL_TEXTURE_MAG_FILTER — This tells OpenGL what type of filtering to apply when magnifying the texture beyond its original size in pixels.
// Load the bitmap into the bound texture.
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);

// Recycle the bitmap, since its data has been loaded into OpenGL.
bitmap.recycle();

Android has a very useful utility to load bitmaps directly into OpenGL. Once you’ve read a resource into a Bitmap object, GLUtils.texImage2D() will take care of the rest. Here’s the method signature:

public static void texImage2D (int target, int level, Bitmap bitmap, int border)

We want a regular 2D texture, so we pass in GL_TEXTURE_2D as the first parameter. The second parameter is the mipmap level; since we’re not using mipmaps here, we pass in 0 for the base level. We then pass in the bitmap, and finally 0 for the border, which must always be 0 in OpenGL ES.

We then call recycle() on the original bitmap, which is an important step to free up memory. The texture has been loaded into OpenGL, so we don’t need to keep a copy of it lying around. Yes, Android apps run under a Dalvik VM that performs garbage collection, but Bitmap objects hold their data in native memory, and it can take several garbage collection cycles before that memory is reclaimed if you don’t recycle them explicitly. This means that you could actually crash with an out-of-memory error if you forget to do this, even if you no longer hold any references to the bitmap.
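We stick with GL_NEAREST in this lesson, but as a preview of the smoother filtering mentioned earlier, the filter setup could be swapped out roughly like this (a hedged sketch; it assumes a power-of-two texture so that mipmap generation works under OpenGL ES 2.0):

// Smoother filtering with a mipmap chain (not used in this lesson).
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR_MIPMAP_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

// Upload the base level as before, then ask OpenGL to build the rest of the mipmap chain.
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);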

Applying the texture to our scene

First, we need to add various members to the class to hold stuff we need for our texture:

/** Store our model data in a float buffer. */
private final FloatBuffer mCubeTextureCoordinates;

/** This will be used to pass in the texture. */
private int mTextureUniformHandle;

/** This will be used to pass in model texture coordinate information. */
private int mTextureCoordinateHandle;

/** Size of the texture coordinate data in elements. */
private final int mTextureCoordinateDataSize = 2;

/** This is a handle to our texture data. */
private int mTextureDataHandle;

We basically need to add new members to track what we added to the shaders, as well as hold a reference to our texture.

Defining the texture coordinates

We define our texture coordinates in the constructor:

// S, T (or X, Y)
// Texture coordinate data.
// Because images have a Y axis pointing downward (values increase as you move down the image) while
// OpenGL has a Y axis pointing upward, we adjust for that here by flipping the Y axis.
// What's more is that the texture coordinates are the same for every face.
final float[] cubeTextureCoordinateData =
{
		// Front face
		0.0f, 0.0f,
		0.0f, 1.0f,
		1.0f, 0.0f,
		0.0f, 1.0f,
		1.0f, 1.0f,
		1.0f, 0.0f,

...

The coordinate data might look a little confusing here. If you go back and look at how the position points are defined in Lesson 3, you’ll see that we define two triangles per face of the cube. The points are defined like this:

(Triangle 1)
Upper-left,
Lower-left,
Upper-right,

(Triangle 2)
Lower-left,
Lower-right,
Upper-right

The texture coordinates are pretty much the position coordinates for the front face, but with the Y axis flipped to compensate for the fact that in graphics images, the Y axis points in the opposite direction of OpenGL’s Y axis.
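The texture coordinate data then goes into the FloatBuffer declared earlier, following the same pattern used for the position and color data in the previous lessons. A minimal sketch (assuming java.nio.ByteBuffer and java.nio.ByteOrder are imported, with 4 bytes per float):

// Initialize the texture coordinate buffer.
mCubeTextureCoordinates = ByteBuffer.allocateDirect(cubeTextureCoordinateData.length * 4)
		.order(ByteOrder.nativeOrder()).asFloatBuffer();
mCubeTextureCoordinates.put(cubeTextureCoordinateData).position(0);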

Setting up the texture

We load the texture in the onSurfaceCreated() method.

@Override
public void onSurfaceCreated(GL10 glUnused, EGLConfig config)
{

	...

	// The below glEnable() call is a holdover from OpenGL ES 1, and is not needed in OpenGL ES 2.
	// Enable texture mapping
	// GLES20.glEnable(GLES20.GL_TEXTURE_2D);

	...

	mProgramHandle = ShaderHelper.createAndLinkProgram(vertexShaderHandle, fragmentShaderHandle,
			new String[] {"a_Position",  "a_Color", "a_Normal", "a_TexCoordinate"});

	...

	// Load the texture
	mTextureDataHandle = TextureHelper.loadTexture(mActivityContext, R.drawable.bumpy_bricks_public_domain);

We pass in “a_TexCoordinate” as a new attribute to bind to in our shader program, and we load in our texture using the loadTexture() method we created above.
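When the cube is drawn, the texture coordinate data also has to be hooked up to the a_TexCoordinate attribute, just like the positions and colors. The lesson’s draw code isn’t reproduced here, but a hedged sketch using the members defined above would look like this:

// Pass in the texture coordinate information.
mCubeTextureCoordinates.position(0);
GLES20.glVertexAttribPointer(mTextureCoordinateHandle, mTextureCoordinateDataSize, GLES20.GL_FLOAT, false,
		0, mCubeTextureCoordinates);
GLES20.glEnableVertexAttribArray(mTextureCoordinateHandle);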

Using the texture

We also add some code to the onDrawFrame(GL10 glUnused) method.

@Override
public void onDrawFrame(GL10 glUnused)
{

	...

	mTextureUniformHandle = GLES20.glGetUniformLocation(mProgramHandle, "u_Texture");
	mTextureCoordinateHandle = GLES20.glGetAttribLocation(mProgramHandle, "a_TexCoordinate");

	// Set the active texture unit to texture unit 0.
	GLES20.glActiveTexture(GLES20.GL_TEXTURE0);

	// Bind the texture to this unit.
	GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureDataHandle);

	// Tell the texture uniform sampler to use this texture in the shader by binding to texture unit 0.
	GLES20.glUniform1i(mTextureUniformHandle, 0);

We get the shader locations for the texture data and texture coordinates. In OpenGL, textures need to be bound to texture units before they can be used in rendering. A texture unit is what reads in the texture and actually passes it through the shader so it can be displayed on the screen. Different graphics chips have a different number of texture units, so you’ll need to check if additional texture units exist before using them.
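A minimal sketch of such a check (not part of this lesson’s code): OpenGL ES 2.0 guarantees at least 8 texture units for the fragment shader, and the actual count can be queried like this:

// Query how many texture units the fragment shader can use on this device.
final int[] maxTextureUnits = new int[1];
GLES20.glGetIntegerv(GLES20.GL_MAX_TEXTURE_IMAGE_UNITS, maxTextureUnits, 0);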

First, we tell OpenGL that we want to set the active texture unit to the first unit, texture unit 0. Our call to glBindTexture() will then bind the texture to the currently active unit. Finally, we set the sampler uniform referred to by mTextureUniformHandle (“u_Texture” in the fragment shader) to 0, which tells the shader to sample from texture unit 0.

In short:

  1. Set the active texture unit.
  2. Bind a texture to this unit.
  3. Assign this unit to a texture uniform in the fragment shader.

Repeat for as many textures as you need.

Further exercises

Once you’ve made it this far, you’re done! Surely that wasn’t as bad as you expected… or was it? 😉 As your next exercise, try to add multi-texturing by loading in another texture, binding it to another unit, and using it in the shader.
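To get you started, here is a hedged sketch of what binding a second texture might look like (u_Texture2 and mTextureDataHandle2 are hypothetical names for a second sampler uniform and a second loaded texture; they are not part of the lesson’s code):

// Bind a second texture to texture unit 1 and point a second sampler at it.
final int secondTextureUniformHandle = GLES20.glGetUniformLocation(mProgramHandle, "u_Texture2");

GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureDataHandle2);
GLES20.glUniform1i(secondTextureUniformHandle, 1);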

Review

Here is a review of the full shader code, as well as a new helper function that we added to read in the shader code from the resource folder instead of storing it as a Java String:

Vertex shader
uniform mat4 u_MVPMatrix;		// A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix;		// A constant representing the combined model/view matrix.

attribute vec4 a_Position;		// Per-vertex position information we will pass in.
attribute vec4 a_Color;			// Per-vertex color information we will pass in.
attribute vec3 a_Normal;		// Per-vertex normal information we will pass in.
attribute vec2 a_TexCoordinate; // Per-vertex texture coordinate information we will pass in.

varying vec3 v_Position;		// This will be passed into the fragment shader.
varying vec4 v_Color;			// This will be passed into the fragment shader.
varying vec3 v_Normal;			// This will be passed into the fragment shader.
varying vec2 v_TexCoordinate;   // This will be passed into the fragment shader.

// The entry point for our vertex shader.
void main()
{
	// Transform the vertex into eye space.
	v_Position = vec3(u_MVMatrix * a_Position);

	// Pass through the color.
	v_Color = a_Color;

	// Pass through the texture coordinate.
	v_TexCoordinate = a_TexCoordinate;

	// Transform the normal's orientation into eye space.
    v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));

	// gl_Position is a special variable used to store the final position.
	// Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
	gl_Position = u_MVPMatrix * a_Position;
}
Fragment shader
precision mediump float;       	// Set the default precision to medium. We don't need as high of a
								// precision in the fragment shader.
uniform vec3 u_LightPos;       	// The position of the light in eye space.
uniform sampler2D u_Texture;    // The input texture.

varying vec3 v_Position;		// Interpolated position for this fragment.
varying vec4 v_Color;          	// This is the color from the vertex shader interpolated across the
  								// triangle per fragment.
varying vec3 v_Normal;         	// Interpolated normal for this fragment.
varying vec2 v_TexCoordinate;   // Interpolated texture coordinate per fragment.

// The entry point for our fragment shader.
void main()
{
	// Will be used for attenuation.
    float distance = length(u_LightPos - v_Position);

	// Get a lighting direction vector from the light to the vertex.
    vec3 lightVector = normalize(u_LightPos - v_Position);

	// Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
	// pointing in the same direction then it will get max illumination.
    float diffuse = max(dot(v_Normal, lightVector), 0.0);

	// Add attenuation.
    diffuse = diffuse * (1.0 / (1.0 + (0.10 * distance)));

    // Add ambient lighting
    diffuse = diffuse + 0.3;

	// Multiply the color by the diffuse illumination level and texture value to get final output color.
    gl_FragColor = (v_Color * diffuse * texture2D(u_Texture, v_TexCoordinate));
  }
How to read in the shader from a text file in the raw resources folder
public static String readTextFileFromRawResource(final Context context,
			final int resourceId)
	{
		final InputStream inputStream = context.getResources().openRawResource(
				resourceId);
		final InputStreamReader inputStreamReader = new InputStreamReader(
				inputStream);
		final BufferedReader bufferedReader = new BufferedReader(
				inputStreamReader);

		String nextLine;
		final StringBuilder body = new StringBuilder();

		try
		{
			while ((nextLine = bufferedReader.readLine()) != null)
			{
				body.append(nextLine);
				body.append('\n');
			}
		}
		catch (IOException e)
		{
			return null;
		}

		return body.toString();
	}

Wrapping up

The full source code for this lesson can be downloaded from the project site on GitHub.

A compiled version of the lesson can also be downloaded directly from the Android Market:

QR code for link to the app on the Android Market.

As always, please don’t hesitate to leave feedback or comments, and thanks for stopping by!


The Project Code Has Moved to Github!

Hi guys,

It’s been ages since I last posted an update, I know. I went away during the summer and neglected the site upon coming back, and now that I’m busy with school it’s been harder than ever to find the time to write an update. Excuses, excuses, I know. 😉

In any case, many of you were asking for a tutorial and demo on texturing, and this is what I’m going to talk about next. There also seems to be a lot more interest in Android tutorials than in WebGL tutorials, so I will be focusing more of my time on Android. Let me know if you have other thoughts and suggestions.

The project source code is now moving to GitHub! The project page is located at http://learnopengles.github.com/Learn-OpenGLES-Tutorials/ and the repository is located at https://github.com/learnopengles/Learn-OpenGLES-Tutorials. The old repository at https://code.google.com/p/learn-opengles-tutorials/ will remain, but will no longer be updated going forward.

There was nothing wrong with the Google Code project site, and in fact I prefer the simplicity of Google’s interface, but I also prefer to develop using Git. Once you’ve gotten used to Git, it’s hard to go back to anything else. An advantage of GitHub is that it should be easier for others to fork and contribute to the project if they wish to.

As always, let me know your comments and thoughts. The code for Lesson 4 is already done, so I’ll start writing it up now and hopefully publish that soon!


Open Source Cross-Platform OpenGL Frameworks for Android

Android robot logo.
Image via Wikipedia

Let’s say you’ve decided to develop the next viral game for Android. You now have a choice: do you go with a pre-packaged solution, flawed and rough around the edges though it may be, or do you DIY (do it yourself), which has the disadvantage of reinventing the wheel and spending more time writing boilerplate code? You also need to decide whether to go with a commercial solution or with one of the open-source libraries available.

Here are two of the more well-known open-source libraries that won’t cost you a dime to use:

libgdx

libgdx is an open-source framework which abstracts away the job of developing graphics for Android, and it also allows you to build for the desktop with only a few lines of code. It also appears to support OpenGL 2 on the desktop, though it uses standard OpenGL 2 there rather than OpenGL ES 2.

forplay

forplay is a cross-platform library for developing games that target the desktop, HTML5, Android, and Flash. It seems to be geared toward 2D platformers rather than more intensive 3D games. Examples of forplay in action and more information can be seen in the Google IO 2011 session titled “Kick-ass Game Programming with Google Web Toolkit”.

Using a framework versus DIY

The pros

You can focus on the implementation of your app or game and save development time by not having to reinvent the wheel and rewrite boiler-plate code; being able to build for different platforms with only a few lines of code is a neat thing. Rovio reportedly used forplay in the development of the WebGL version of Angry Birds.

The cons

By using a framework, you won’t learn about the finer details of OpenGL ES and other aspects of game development, and ultimately, you’ll want to understand those finer details if you also want to understand the broader picture. You’ll also have to live with the design decisions and implementation details of the various frameworks, as well as any rough edges. If you’re targeting Android and the Android Market, it’s better to test on and develop for the phone rather than the desktop; it’s better to do well on one platform than to be mediocre on a few.

Conclusion

With the wide availability of code snippets and open-source libraries, there’s no need to go either-or. You can go with an existing framework if that’s most convenient for you, or you can start building from scratch, while taking code and math from the vast array of resources available on the Internet. Be sure to check the licenses before using code from other libraries — some open-source libraries are GPL licensed, which requires you to make your source code available for others should you incorporate it into your own code.

As always, don’t hesitate to leave your comments and feedback. 🙂


WebGL Lesson One: Getting Started

WebGL Lesson One.
This is the first tutorial for learning OpenGL ES 2 on the web, using WebGL. In this lesson, we’ll look at how to create a basic WebGL instance and display stuff to the screen, as well as what you need in order to view WebGL in your browser. There will also be an introduction to shaders and matrices.

What is WebGL?

Previously, if you wanted to do real-time 3D graphics on the web, your only real option was to use a plugin such as Java or Flash. However, there is currently a push to bring hardware-accelerated graphics to the web, called WebGL. WebGL is based on OpenGL ES 2, which means that we’ll need to use shaders. Since WebGL runs inside a web browser, we’ll also need to use JavaScript to control it.

Prerequisites

You’ll need a browser that supports WebGL, and you should also have the most recent drivers installed for your video card. You can visit Get WebGL to see if your browser supports WebGL and if not, it will tell you where you can get a browser that supports it.

The latest stable releases of Chrome and Firefox support WebGL, so you can always start there.

This lesson uses the following third-party libraries:

  • webgl-utils.js — for basic initialization of an OpenGL context and rendering on browser request.
  • glMatrix.js — for matrix operations.

Assumptions

The reader should be familiar with programming and 3D concepts on a basic level. The Khronos WebGL Public Wiki is a good place to start out.

Getting started

As I write this, I am also learning WebGL, so we’ll be learning together! We’ll look at how to get a context and start drawing stuff to the screen, and we’ll more or less follow lesson one for Android as this lesson is based on it. For those of you who followed the Android lesson, you may remember that getting an OpenGL context consisted of creating an activity and setting the content view to a GLSurfaceView object. We also provided a class which overrode GLSurfaceView.Renderer and provided methods which were called by the system.

With WebGL, it is just as easy to get things set up and running. The webgl-utils.js script provides us with two functions to get things going:

function setupWebGL(canvas, opt_attribs);

function window.requestAnimFrame(callback, element);

The setupWebGL() function takes care of initializing WebGL for us, and if WebGL isn’t available or fails to initialize, it points the user to a browser that supports it or to further troubleshooting information. More info on the optional parameters can be found at the WebGL Specification page, section 5.2.1.

The second function provides a cross-browser way of setting up a render callback. The browser will call the function provided in the callback parameter at a regular interval. The element parameter lets the browser know for which element the callback is firing.

In our script we have a function main() which is our main entry point, and is called once at the end of the script. In this function, we initialize WebGL with the following calls:

    // Try to get a WebGL context
    canvas = document.getElementById("canvas");

    // We don't need a depth buffer.
    // See https://www.khronos.org/registry/webgl/specs/1.0/ Section 5.2 
    // for more info on parameters and defaults.
    gl = WebGLUtils.setupWebGL(canvas, { depth: false });

If the calls were successful, then we go on to initialize our model data and set up our rendering callback.

Visualizing a 3D world

Like in lesson one for Android, we need to define our model data as an array of floating point numbers. These numbers can represent vertex positions, colors, or anything else that we need. Unlike OpenGL ES 2 on Android, WebGL does not support client-side buffers. This means that we need to load all of the data into WebGL using vertex buffer objects (VBOs). Thankfully, this is a pretty trivial step and it will be explained in more detail further below.

Before we transfer the data into WebGL, we’ll define it in client memory first using the Float32Array datatype. These typed arrays aim to improve JavaScript performance by adding type information.

		// Define points for equilateral triangles.
		trianglePositions = new Float32Array([
				// X, Y, Z,
	            -0.5, -0.25, 0.0,
	            0.5, -0.25, 0.0,
	            0.0, 0.559016994, 0.0]);

		// This triangle is red, green, and blue.
		triangle1Colors = new Float32Array([
  				// R, G, B, A
  	            1.0, 0.0, 0.0, 1.0,
  	            0.0, 0.0, 1.0, 1.0,
  	            0.0, 1.0, 0.0, 1.0]);

		...

All of the triangles can share the same position data, but we’ll define a different set of colors for each triangle.

Setting up initial parameters

After defining basic model data, our main function calls startRendering(), which takes care of setting up the viewport, building the shaders, and starting the rendering loop.

Setting up the viewport and projection matrix

First, we configure the viewport to be the same size as the canvas viewport. Note that this assumes a canvas that doesn’t change size, since we’re only doing this once.

	// Set the OpenGL viewport to the same size as the canvas.
	gl.viewport(0, 0, canvas.clientWidth, canvas.clientHeight);

Then we setup the projection matrix. Please see “Understanding matrices” for more information.

	// Create a new perspective projection matrix. The height will stay the same
	// while the width will vary as per aspect ratio.
	var ratio = canvas.clientWidth / canvas.clientHeight;
	var left = -ratio;
	var right = ratio;
	var bottom = -1.0;
	var top = 1.0;
	var near = 1.0;
	var far = 10.0;
		
	mat4.frustum(left, right, bottom, top, near, far, projectionMatrix);
Configuring the view matrix and default parameters

Setting up the viewport and configuring the projection matrix is something we should do whenever the canvas has changed size. The next step is to set the default clear color as well as the view matrix.

	// Set the background clear color to gray.
	gl.clearColor(0.5, 0.5, 0.5, 1.0);		
	
	/* Configure camera */
	// Position the eye behind the origin.
	var eyeX = 0.0;
	var eyeY = 0.0;
	var eyeZ = 1.5;

	// We are looking toward the distance
	var lookX = 0.0;
	var lookY = 0.0;
	var lookZ = -5.0;

	// Set our up vector. This is where our head would be pointing were we holding the camera.
	var upX = 0.0;
	var upY = 1.0;
	var upZ = 0.0;
	
	// Set the view matrix. This matrix can be said to represent the camera position.		
	var eye = vec3.create();
	eye[0] = eyeX; eye[1] = eyeY; eye[2] = eyeZ;
	
	var center = vec3.create();
	center[0] = lookX; center[1] = lookY; center[2] = lookZ;
	
	var up = vec3.create();
	up[0] = upX; up[1] = upY; up[2] = upZ;
	
	mat4.lookAt(eye, center, up, viewMatrix);
Loading in shaders

Finally, we load in our shaders and compile them. The code for this is essentially identical to Android; please see “Defining vertex and fragment shaders” and “Loading shaders into OpenGL” for more information.

In WebGL we can embed shaders in a few ways: we can embed them as a JavaScript string, we can embed them into the HTML of the page that contains the script, or we can put them in a separate file and link to that file from our script. In this lesson, we take the second approach:

<script id="vertex_shader" type="x-shader/x-vertex">
uniform mat4 u_MVPMatrix;

...
</script>

<script id="vertex_shader" type="x-shader/x-vertex">
precision mediump float;

...
</script>

We can then read in these scripts using the following code snippet:

		// Read the embedded shader from the document.
		var shaderSource = document.getElementById(sourceScriptId);
		
		if (!shaderSource)
		{
			throw("Error: shader script '" + sourceScriptId + "' not found");
		}
		
		// Pass in the shader source.
		gl.shaderSource(shaderHandle, shaderSource.text);
Uploading data into buffer objects

I mentioned a bit earlier that WebGL doesn’t support client-side buffers, so we need to upload our data into WebGL itself using buffer objects. This is actually pretty straightforward:

    // Create buffers in OpenGL's working memory.
    trianglePositionBufferObject = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, trianglePositionBufferObject);
    gl.bufferData(gl.ARRAY_BUFFER, trianglePositions, gl.STATIC_DRAW);
    
    triangleColorBufferObject1 = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, triangleColorBufferObject1);
    gl.bufferData(gl.ARRAY_BUFFER, triangle1Colors, gl.STATIC_DRAW);
    
    ... 

First we create a buffer object using createBuffer(), then we bind the buffer. Then we pass in the data using gl.bufferData() and tell OpenGL that this buffer will be used for static drawing; this hints to OpenGL that we will not be updating this buffer often.

WebGL versus OpenGL ES 2

You may have noticed that the WebGL API is a bit different than the base OpenGL ES 2 API: functions and variable names have had their “gl” or “GL_” prefixes removed. This actually makes the API a bit cleaner to use and read. At the same time, some functions have been modified a bit to mesh better with the JavaScript environment.

Setting up a rendering callback

We finally kick off the rendering loop by calling window.requestAnimFrame():

	// Tell the browser we want render() to be called whenever it's time to draw another frame.
	window.requestAnimFrame(render, canvas);
Rendering to the screen

The code to render to the screen is pretty much a direct port of the lesson one code for Android. One main difference is that we call window.requestAnimFrame() at the end to request another animation frame.

	// Request another frame
	window.requestAnimFrame(render, canvas);

Recap

If everything went well, you should end up with an animated canvas like the one just below:

WebGL lesson one, example of what would be seen in a browser.

If you would like more explanations behind the shaders or other aspects of the program, please be sure to check out lesson one for Android.

Debugging

Debugging in JavaScript in the browser is a little more difficult than within an integrated environment such as Eclipse, but it can be done using tools such as Chrome’s inspector. You can also use the WebGL Inspector, which is a plugin that lets you delve into WebGL’s internals and get a better idea of what’s going on.

Embedding into WordPress

WebGL can easily be embedded into your posts and pages! You need a canvas, script includes for any third-party libraries, and a script body for your main script (this can also be an include).

Example of a canvas:
<pre><canvas id="canvas" width="550" height="375">Your browser does not support the canvas tag. This is a static example of what would be seen.</canvas></pre>

Example of a script include:
<pre><script type="text/javascript" src="http://www.learnopengles.com/wordpress/wp-content/uploads/2011/06/webgl-utils.js"></script></pre>

Example of an embedded script:
<pre><script type="text/javascript">
/**
* Lesson_one.js
*/

...

</script></pre>

The <pre> tag is important; otherwise WordPress will mangle your scripts and insert random paragraph tags and other stuff inside. Also, once you’ve inserted this code, you have to stick to using the HTML editor, as the visual editor will also mangle or delete your scripts.

Exploring further

Try changing the animation speed, vertex points, or colors, and see what happens!

The full source code for this lesson can be downloaded from the project site on GitHub.

Don’t hesitate to ask any questions or offer feedback, and thanks for stopping by!








Android Lesson Three: Moving to Per-Fragment Lighting

Per fragment lighting; at the corner of a square.

Welcome to the third tutorial for Android! In this lesson, we’re going to take everything we learned in lesson two and learn how to apply the same lighting technique on a per-pixel basis. We will be able to see the difference, even when using standard diffuse lighting with simple cubes.

Assumptions and prerequisites

Each lesson in this series builds on the lesson before it. This lesson is an extension of lesson two, so please be sure to review that lesson before continuing on.

What is per-pixel lighting?

Per-pixel lighting is a relatively recent development in gaming, made practical by the advent of shaders. Many famous older games, such as the original Half-Life, were developed before the era of shaders and featured mainly static lighting, with some tricks for simulating dynamic lighting using either per-vertex lights (otherwise known as Gouraud shading) or other techniques, such as dynamic lightmaps.

Lightmaps can give a very nice result, sometimes even better than shaders alone because expensive light calculations can be precomputed, but the downside is that they take up a lot of memory, and dynamic lighting with them is limited to simple calculations.

With shaders, a lot of these calculations can now be offloaded to the GPU, which allows for many more effects to be done in real-time.

Moving from per-vertex to per-fragment lighting

In this lesson, we’re going to look at the same lighting code for a per-vertex solution and a per-fragment solution. Although I have referred to this type of lighting as per-pixel, in OpenGL ES we actually work with fragments, and several fragments can contribute to the final value of a pixel.

Mobile GPUs are getting faster and faster, but performance is still a concern. For “soft” lighting such as terrain, per-vertex lighting may be good enough. Ensure you have a proper balance between quality and speed.

A significant difference between the two types of lighting can be seen in certain situations. Take a look at the following screen captures:

Per vertex lighting; centered between four vertices of a square.

Per fragment lighting; centered between four vertices of a square.

In the per-vertex lighting in the left image, the front face of the cube appears as if flat-shaded, and there is no evidence of a light nearby. This is because each of the four points of the front face are more or less equidistant from the light, and the low light intensity at each of these four points is simply interpolated across the two triangles that make up the front face.

The per-fragment version shows a nice highlight in comparison.

Per vertex lighting; at the corner of a square.

Per fragment lighting; at the corner of a square.

The left image shows a Gouraud-shaded cube. As the light source moves near the corner of the front face of the cube, a triangle-like artifact can be seen. This is because the front face is actually composed of two triangles, and as the values are interpolated in different directions across each triangle, we can see the underlying geometry.

The per-fragment version shows no such interpolation errors and shows a nice circular highlight near the edge.

An overview of per-vertex lighting

Let’s take a look at our shaders from lesson two; a more detailed explanation on what the shaders do can be found in that lesson.

Vertex shader
uniform mat4 u_MVPMatrix;     // A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix;      // A constant representing the combined model/view matrix.
uniform vec3 u_LightPos;      // The position of the light in eye space.

attribute vec4 a_Position;    // Per-vertex position information we will pass in.
attribute vec4 a_Color;       // Per-vertex color information we will pass in.
attribute vec3 a_Normal;      // Per-vertex normal information we will pass in.

varying vec4 v_Color;         // This will be passed into the fragment shader.

// The entry point for our vertex shader.
void main()
{
	// Transform the vertex into eye space.
	vec3 modelViewVertex = vec3(u_MVMatrix * a_Position);

	// Transform the normal's orientation into eye space.
	vec3 modelViewNormal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));

	// Will be used for attenuation.
	float distance = length(u_LightPos - modelViewVertex);

	// Get a lighting direction vector from the light to the vertex.
	vec3 lightVector = normalize(u_LightPos - modelViewVertex);

	// Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
	// pointing in the same direction then it will get max illumination.
	float diffuse = max(dot(modelViewNormal, lightVector), 0.1);

	// Attenuate the light based on distance.
	diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance * distance)));

	// Multiply the color by the illumination level. It will be interpolated across the triangle.
	v_Color = a_Color * diffuse;

	// gl_Position is a special variable used to store the final position.
	// Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
	gl_Position = u_MVPMatrix * a_Position;
}
Fragment shader
precision mediump float;       // Set the default precision to medium. We don't need as high of a
							   // precision in the fragment shader.
varying vec4 v_Color;          // This is the color from the vertex shader interpolated across the
		  					   // triangle per fragment.

// The entry point for our fragment shader.
void main()
{
	gl_FragColor = v_Color;    // Pass the color directly through the pipeline.
}

As you can see, most of the work is being done in our vertex shader. Moving to per-fragment lighting means that our fragment shader is going to have more work to do.

Implementing per-fragment lighting

Here is what the code looks like after moving to per-fragment lighting.

Vertex shader
uniform mat4 u_MVPMatrix;      // A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix;       // A constant representing the combined model/view matrix.

attribute vec4 a_Position;     // Per-vertex position information we will pass in.
attribute vec4 a_Color;        // Per-vertex color information we will pass in.
attribute vec3 a_Normal;       // Per-vertex normal information we will pass in.

varying vec3 v_Position;       // This will be passed into the fragment shader.
varying vec4 v_Color;          // This will be passed into the fragment shader.
varying vec3 v_Normal;         // This will be passed into the fragment shader.

// The entry point for our vertex shader.
void main()
{
	// Transform the vertex into eye space.
    v_Position = vec3(u_MVMatrix * a_Position);

	// Pass through the color.
    v_Color = a_Color;

	// Transform the normal's orientation into eye space.
    v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));

	// gl_Position is a special variable used to store the final position.
	// Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
    gl_Position = u_MVPMatrix * a_Position;
}

The vertex shader is simpler than before. We have added two linearly-interpolated variables to be passed through to the fragment shader: the vertex position and the vertex normal. Both of these will be used when calculating lighting in the fragment shader.

Fragment shader
precision mediump float;       // Set the default precision to medium. We don't need as high of a
							   // precision in the fragment shader.
uniform vec3 u_LightPos;       // The position of the light in eye space.

varying vec3 v_Position;	   // Interpolated position for this fragment.
varying vec4 v_Color;          // This is the color from the vertex shader interpolated across the
		  					   // triangle per fragment.
varying vec3 v_Normal;         // Interpolated normal for this fragment.

// The entry point for our fragment shader.
void main()
{
	// Will be used for attenuation.
   	float distance = length(u_LightPos - v_Position);

	// Get a lighting direction vector from the light to the vertex.
   	vec3 lightVector = normalize(u_LightPos - v_Position);

	// Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
	// pointing in the same direction then it will get max illumination.
   	float diffuse = max(dot(v_Normal, lightVector), 0.1);

	// Add attenuation.
   	diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance * distance)));

	// Multiply the color by the diffuse illumination level to get final output color.
   	gl_FragColor = v_Color * diffuse;
}

With per-fragment lighting, our fragment shader has a lot more work to do. We have basically moved the Lambertian calculation and attenuation to the per-pixel level, which gives us more realistic lighting without needing to add more vertices.

Further exercises

Could we calculate the distance in the vertex shader instead and simply pass that on to the pixel shader using linear interpolation via a varying?

Wrapping up

The full source code for this lesson can be downloaded from the project site on GitHub.

A compiled version of the lesson can also be downloaded directly from the Android Market:

QR code for link to the app on the Android Market.

As always, please don’t hesitate to leave feedback or comments, and thanks for stopping by!

Android Lesson Two: Ambient and Diffuse Lighting

Visual output of Lesson Two.

Welcome to the second tutorial for Android. In this lesson, we’re going to learn how to implement Lambertian reflectance using shaders, otherwise known as your standard diffuse lighting. In OpenGL ES 2, we need to implement our own lighting algorithms, so we will learn how the math works and how we can apply it to our scenes.

Assumptions and prerequisites

Each lesson in this series builds on the lesson before it. Before we begin, please review the first lesson as this lesson will build upon the concepts introduced there.

What is light?

A world without lighting would be a dim one, indeed. Without light, we would not even be able to perceive the world or the objects that lie around us, except via the other senses such as sound and touch. Light shows us how bright or dim something is, how near or far it is, and what angle it lies at.

In the real world, what we perceive as light is really the aggregation of trillions of tiny particles called photons, which fly out of a light source, bounce around thousands or millions of times, and eventually reach our eye where we perceive it as light.

How can we simulate the effects of light via computer graphics? There are two popular ways to do it: ray tracing, and rasterisation. Ray tracing works by mathematically tracing actual rays of light and seeing where they end up. This technique gives very accurate and realistic results, but the downside is that simulating all of those rays is very computationally expensive, and usually too slow for real-time rendering. Due to this limitation, most real-time computer graphics use rasterisation instead, which simulates lighting by approximating the result. Given the realism of recent games, rasterisation can also look very nice, and is fast enough for real-time graphics even on mobile phones. OpenGL ES is primarily a rasterisation library, so this is the approach we will focus on.

The different kinds of light

It turns out that we can abstract the way that light works and come up with three basic types of lighting:

Ambient lighting. (Source: http://www.geograph.org.uk/photo/774138)
Ambient lighting
This is a base level of lighting that seems to pervade an entire scene. It is light that doesn’t appear to come from any light source in particular because it has bounced around so much before reaching you. This type of lighting can be experienced outdoors on an overcast day, or indoors as the cumulative effect of many different light sources. Instead of calculating all of the individual lights, we can just set a base light level for the object or scene.

An example of ambient and diffuse lighting. (Source: http://commons.wikimedia.org/wiki/File:Wikibooks_povray_colors_transparent_sphere.gif)
Diffuse lighting
This is light that reaches your eye after bouncing directly off of an object. The illumination level of the object varies with its angle to the lighting. Something facing the light head on is lit more brightly than something facing the light at an angle. Also, we perceive the object to be the same brightness no matter which angle we are at relative to the object. This is otherwise known as Lambert’s cosine law. Diffuse lighting or Lambertian reflectance is common in everyday life and can be easily seen on a white wall lit up by an indoor light.

An example of specular highlights. (Source: http://en.wikipedia.org/wiki/File:Specular_highlight.jpg)
Specular lighting
Unlike diffuse lighting, specular lighting changes as we move relative to the object. This gives “shininess” to the object and can be seen on “smoother” surfaces such as glass and other shiny objects.

Simulating light

Just as there are three main types of light in a 3D scene, there are also three main types of light sources: directional, point, and spotlight. These can also be easily seen in everyday life.

A brightly lit landscape. (Source: http://upload.wikimedia.org/wikipedia/commons/2/2a/KaliGandaki.jpg)
Directional lighting
Directional lighting usually comes from a bright source that is so far away that it lights up the entire scene evenly and to the same brightness. This light source is the simplest type as the light is the same strength and direction no matter where you are in the scene.

An example of point lights. (Source: http://commons.wikimedia.org/wiki/File:Flare_0.jpg)
Point lighting
Point lights can be added to a scene in order to give more varied and realistic lighting. The illumination of a point light falls off with distance, and its light rays travel out in all directions with the point light at the center.

Spotlight. (Source: http://www.flickr.com/photos/trektrack/24583555/)
Spot lighting
In addition to having the properties of a point light, spotlights also attenuate the light by direction, usually restricting it to a cone shape.
The math

In this lesson, we’re going to be looking at ambient lighting and diffuse lighting coming from a point source.

Ambient lighting

Ambient lighting is really indirect diffuse lighting, but it can also be thought of as a low-level light which pervades the entire scene. If we think of it that way, then it becomes very easy to calculate:

final color = material color * ambient light color

For example, let’s say our object is red and our ambient light is a dim white. Let’s assume that we store color as an array of three components (red, green, and blue), using the RGB color model:

final color = {1, 0, 0} * {0.1, 0.1, 0.1} = {0.1, 0.0, 0.0}

The final color of the object will be a dim red, which is what you’d expect if you had a red object illuminated by a dim white light. There is really nothing more to basic ambient lighting than that, unless you want to get into more advanced lighting techniques such as radiosity.
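As a quick sanity check, here is the same calculation in plain Java, multiplying the red, green, and blue components one by one:

// final color = material color * ambient light color, per channel.
final float[] materialColor = {1.0f, 0.0f, 0.0f};	// Red material.
final float[] ambientLight = {0.1f, 0.1f, 0.1f};	// Dim white light.
final float[] finalColor = new float[3];

for (int i = 0; i < 3; i++)
{
	finalColor[i] = materialColor[i] * ambientLight[i];	// Result: {0.1, 0.0, 0.0}
}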

Diffuse lighting – point light source

For diffuse lighting, we need to add attenuation and a light position. The light position will be used to calculate the angle between the light and the surface, which will affect the surface’s overall level of lighting. It will also be used to calculate the distance between the light and the surface, which determines the strength of the light at that point.

Step 1: Calculate the lambert factor.

The first major calculation we need to make is to figure out the angle between the surface and the light. A surface facing the light head-on should be illuminated at full strength, while a slanted surface should receive less illumination. The proper way to calculate this is with Lambert’s cosine law. Take two vectors: one pointing from a point on the surface toward the light, and the surface normal at that point (if the surface is a flat plane, the normal is a vector pointing straight up, orthogonal to that surface). If we normalize each vector so that it has a length of one, then the dot product of the two gives us the cosine of the angle between them. This is an operation that can easily be done in an OpenGL ES 2 shader.

Let’s call this the lambert factor, and it will have a range of between 0 and 1.

light vector = light position - object position
cosine = dot product(object normal, normalize(light vector))
lambert factor = max(cosine, 0)

First, we calculate the light vector by subtracting the object position from the light position. We then normalize the light vector, which means scaling it so that it has a length of one; the object normal should already have a length of one. Taking the dot product of the two normalized vectors gives us the cosine of the angle between them. Because the dot product can range from -1 to 1, we clamp it to a range of 0 to 1.

Here’s an example with a flat plane at the origin and the surface normal pointing straight up toward the sky. The light is positioned at {0, 10, -10}, or 10 units up and 10 units straight ahead. We want to calculate the light at the origin.

light vector = {0, 10, -10} - {0, 0, 0} = {0, 10, -10}
object normal = {0, 1, 0}

In plain English, if we move from where we are along the light vector, we reach the position of the light. To normalize the vector, we divide each component by the vector length:

light vector length = square root(0*0 + 10*10 + -10*-10) = square root(200) = 14.14
normalized light vector = {0, 10/14.14, -10/14.14} = {0, 0.707, -0.707}

Then we calculate the dot product:

dot product({0, 1, 0}, {0, 0.707, -0.707}) = (0 * 0) + (1 * 0.707) + (0 * -0.707) = 0 + 0.707 + 0 = 0.707

Here is a good explanation of the dot product and what it calculates. Finally, we clamp the range:

lambert factor = max(0.707, 0) = 0.707

OpenGL ES 2’s shading language has built in support for some of these functions so we don’t need to do all of the math by hand, but it can still be useful to understand what is going on.
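For reference, here is the worked example above in plain Java, just to make the math concrete (in practice this calculation runs inside the shader):

// Light at {0, 10, -10}, surface point at the origin, surface normal pointing straight up.
final float[] lightPos = {0.0f, 10.0f, -10.0f};
final float[] surfacePos = {0.0f, 0.0f, 0.0f};
final float[] normal = {0.0f, 1.0f, 0.0f};

// light vector = light position - object position
final float lx = lightPos[0] - surfacePos[0];
final float ly = lightPos[1] - surfacePos[1];
final float lz = lightPos[2] - surfacePos[2];

// Normalize the light vector and take the dot product with the (already normalized) normal.
final float length = (float) Math.sqrt(lx * lx + ly * ly + lz * lz);			// About 14.14
final float cosine = (normal[0] * lx + normal[1] * ly + normal[2] * lz) / length;	// About 0.707

// Clamp to the range 0 to 1.
final float lambertFactor = Math.max(cosine, 0.0f);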

Step 2: Calculate the attenuation factor.

Next, we need to calculate the attenuation. Real light attenuation from a point light source follows the inverse square law, which can also be stated as:

luminosity = 1 / (distance * distance)

Going back to our example, since we have a distance of 14.14, here is what our final luminosity looks like:

luminosity = 1 / (14.14*14.14) = 1 / 200 = 0.005

As you can see, the inverse square law can lead to a strong attenuation over distance. This is how light from a point light source works in the real world, but since our graphics displays have a limited range, it can be useful to dampen this attenuation factor so we still get realistic lighting without things looking too dark.
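Here is a quick comparison in plain Java between the physically correct inverse square law and the dampened version that this lesson’s shaders end up using (the 0.25 factor matches the shader code shown later in this lesson):

final float distance = 14.14f;

// Physically based inverse square law: very strong falloff.
final float luminosity = 1.0f / (distance * distance);				// About 0.005

// Dampened attenuation, as used in this lesson's shaders.
final float dampened = 1.0f / (1.0f + (0.25f * distance * distance));		// About 0.02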

Step 3: Calculate the final color.

Now that we have both the cosine and the attenuation, we can calculate our final illumination level:

final color = material color * (light color * lambert factor * luminosity)

Going with our previous example of a red material and a full white light source, here is the final calculation:

final color = {1, 0, 0} * ({1, 1, 1} * 0.707 * 0.005) = {1, 0, 0} * {0.0035, 0.0035, 0.0035} = {0.0035, 0, 0}

To recap, for diffuse lighting we need to use the angle between the surface and the light as well as the distance between the surface and the light in order to calculate the final overall diffuse illumination level. Here are the steps:

//Step one
light vector = light position - object position
cosine = dot product(object normal, normalize(light vector))
lambert factor = max(cosine, 0)

//Step two
luminosity = 1 / (distance * distance)

//Step three
final color = material color * (light color * lambert factor * luminosity)
Putting this all into OpenGL ES 2 shaders
The vertex shader
		  final String vertexShader =
			"uniform mat4 u_MVPMatrix;      \n"		// A constant representing the combined model/view/projection matrix.
		  + "uniform mat4 u_MVMatrix;       \n"		// A constant representing the combined model/view matrix.
		  + "uniform vec3 u_LightPos;       \n"	    // The position of the light in eye space.

		  + "attribute vec4 a_Position;     \n"		// Per-vertex position information we will pass in.
		  + "attribute vec4 a_Color;        \n"		// Per-vertex color information we will pass in.
		  + "attribute vec3 a_Normal;       \n"		// Per-vertex normal information we will pass in.

		  + "varying vec4 v_Color;          \n"		// This will be passed into the fragment shader.

		  + "void main()                    \n" 	// The entry point for our vertex shader.
		  + "{                              \n"
		// Transform the vertex into eye space.
		  + "   vec3 modelViewVertex = vec3(u_MVMatrix * a_Position);              \n"
		// Transform the normal's orientation into eye space.
		  + "   vec3 modelViewNormal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));     \n"
		// Will be used for attenuation.
		  + "   float distance = length(u_LightPos - modelViewVertex);             \n"
		// Get a lighting direction vector from the light to the vertex.
		  + "   vec3 lightVector = normalize(u_LightPos - modelViewVertex);        \n"
		// Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
		// pointing in the same direction then it will get max illumination.
		  + "   float diffuse = max(dot(modelViewNormal, lightVector), 0.1);       \n"
		// Attenuate the light based on distance.
		  + "   diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance * distance)));  \n"
		// Multiply the color by the illumination level. It will be interpolated across the triangle.
		  + "   v_Color = a_Color * diffuse;                                       \n"
		// gl_Position is a special variable used to store the final position.
		// Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
		  + "   gl_Position = u_MVPMatrix * a_Position;                            \n"
		  + "}                                                                     \n";

There is quite a bit going on here. We have our combined model/view/projection matrix as in lesson one, but we’ve also added a model/view matrix. Why? We will need this matrix in order to calculate the distance between the position of the light source and the position of the current vertex. For diffuse lighting, it actually doesn’t matter whether you use world space (model matrix) or eye space (model/view matrix) so long as you can calculate the proper distances and angles.
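
Because the shader expects u_LightPos in eye space, the light’s position has to be converted on the Java side before it’s passed in as a uniform. Here’s a rough sketch of how that could be done with Android’s Matrix.multiplyMV helper; the parameter and handle names are assumptions for the example, not necessarily what the actual lesson code uses.

import android.opengl.GLES20;
import android.opengl.Matrix;

/** Illustrative helper: converts a light position from model space to eye space
 *  and uploads it to the shader's u_LightPos uniform. */
public static void passLightPositionInEyeSpace(int lightPosHandle,
                                               float[] lightModelMatrix,
                                               float[] viewMatrix) {
    final float[] lightPosInModelSpace = { 0.0f, 0.0f, 0.0f, 1.0f };
    final float[] lightPosInWorldSpace = new float[4];
    final float[] lightPosInEyeSpace = new float[4];

    // Model space -> world space -> eye space.
    Matrix.multiplyMV(lightPosInWorldSpace, 0, lightModelMatrix, 0, lightPosInModelSpace, 0);
    Matrix.multiplyMV(lightPosInEyeSpace, 0, viewMatrix, 0, lightPosInWorldSpace, 0);

    // Upload the eye-space position to the vertex shader.
    GLES20.glUniform3f(lightPosHandle,
            lightPosInEyeSpace[0], lightPosInEyeSpace[1], lightPosInEyeSpace[2]);
}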

We pass in the vertex color and position information, as well as the surface normal. We will pass the final color to the fragment shader, which will interpolate it between the vertices. This is also known as Gouraud shading.

Let’s look at each part of the shader to see what’s going on:

		// Transform the vertex into eye space.
		  + "   vec3 modelViewVertex = vec3(u_MVMatrix * a_Position);              \n"

Since we’re passing in the position of the light in eye space, we convert the current vertex position to a coordinate in eye space so we can calculate the proper distances and angles.

		// Transform the normal's orientation into eye space.
		  + "   vec3 modelViewNormal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));     \n"

We also need to transform the normal’s orientation into eye space. Here we are just doing a regular matrix multiplication like with the position, but if the model or view matrices have been scaled or skewed, this won’t work: we’d actually have to undo the effect of the skew or scale by multiplying the normal by the transpose of the inverse of the original matrix. This website explains why we have to do this better than I could.

		// Will be used for attenuation.
		  + "   float distance = length(u_LightPos - modelViewVertex);             \n"

As shown before in the math section, we need the distance in order to calculate the attenuation factor.

		// Get a lighting direction vector from the light to the vertex.
		  + "   vec3 lightVector = normalize(u_LightPos - modelViewVertex);        \n"

We also need the light vector to calculate the Lambertian reflectance factor.

		// Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
		// pointing in the same direction then it will get max illumination.
		  + "   float diffuse = max(dot(modelViewNormal, lightVector), 0.1);       \n"

This is the same math as above in the math section, just done in an OpenGL ES 2 shader. The 0.1 at the end is just a really cheap way of doing ambient lighting (the value will be clamped to a minimum of 0.1).

		// Attenuate the light based on distance.
		  + "   diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance * distance)));  \n"

The attenuation math is a bit different than above in the math section. We scale the square of the distance by 0.25 to dampen the attenuation effect, and we also add 1.0 to the modified distance so that we don’t get oversaturation when the light is very close to an object (otherwise, when the distance is less than one, this equation will actually brighten the light instead of attenuating it).
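
To get a feel for how much gentler the modified curve is, here’s a quick throwaway Java sketch that compares raw inverse-square attenuation with the dampened version used in the shader at a few distances:

public class AttenuationComparison {
    public static void main(String[] args) {
        final float[] distances = { 0.5f, 1f, 2f, 5f, 14.14f };

        for (float d : distances) {
            // Raw inverse square law: brightens sharply when d < 1 and darkens quickly after.
            final float raw = 1.0f / (d * d);
            // Dampened version from the shader: never exceeds 1.0 and falls off more gently.
            final float dampened = 1.0f / (1.0f + 0.25f * d * d);
            System.out.printf("d=%5.2f  raw=%6.3f  dampened=%5.3f%n", d, raw, dampened);
        }
    }
}

Notice that the raw formula blows up past 1.0 for distances under one unit, while the dampened version stays at or below 1.0 and fades much more gradually.
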

		// Multiply the color by the illumination level. It will be interpolated across the triangle.
		  + "   v_Color = a_Color * diffuse;                                       \n"
		// gl_Position is a special variable used to store the final position.
		// Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
		  + "   gl_Position = u_MVPMatrix * a_Position;                            \n"

Once we have our final light color, we multiply it by the vertex color to get the final output color, and then we project the position of this vertex to the screen.

The pixel shader
		  final String fragmentShader =
			"precision mediump float;       \n"		// Set the default precision to medium. We don't need as high of a
													// precision in the fragment shader.
		  + "varying vec4 v_Color;          \n"		// This is the color from the vertex shader interpolated across the
		  											// triangle per fragment.
		  + "void main()                    \n"		// The entry point for our fragment shader.
		  + "{                              \n"
		  + "   gl_FragColor = v_Color;     \n"		// Pass the color directly through the pipeline.
		  + "}                              \n";

Because we are calculating the light on a per-vertex basis, our fragment shader looks the same as it did in the first lesson: all we do is pass the color straight through. In the next lesson, we’ll look at per-pixel lighting.

Per-Vertex versus per-pixel lighting

In this lesson we have focused on implementing per-vertex lighting. For diffuse lighting of objects with smooth surfaces, such as terrain, or for objects with many triangles, this will often be good enough. However, when your objects don’t contain many vertices (such as our cubes in this example program) or have sharp corners, vertex lighting can result in artifacts as the light level is linearly interpolated across the polygon; these artifacts also become much more apparent when specular highlights are added to the image. More can be seen at the Wiki article on Gouraud shading.

An explanation of the changes to the program

Besides the addition of per-vertex lighting, there are other changes to the program. We’ve switched from displaying a few triangles to a few cubes, and we’ve also added utility functions to load in the shader programs. There are also new shaders to display the position of the light as a point, as well as other various small changes.

Construction of the cube

In lesson one, we packed both position and color attributes into the same array, but OpenGL ES 2 also lets us specify these attributes in separate arrays:

		// X, Y, Z
		final float[] cubePositionData =
		{
				// In OpenGL counter-clockwise winding is default. This means that when we look at a triangle,
				// if the points are counter-clockwise we are looking at the "front". If not we are looking at
				// the back. OpenGL has an optimization where all back-facing triangles are culled, since they
				// usually represent the backside of an object and aren't visible anyways.

				// Front face
				-1.0f, 1.0f, 1.0f,
				-1.0f, -1.0f, 1.0f,
				1.0f, 1.0f, 1.0f,
				-1.0f, -1.0f, 1.0f,
				1.0f, -1.0f, 1.0f,
				1.0f, 1.0f, 1.0f,
				...

		// R, G, B, A
		final float[] cubeColorData =
		{
				// Front face (red)
				1.0f, 0.0f, 0.0f, 1.0f,
				1.0f, 0.0f, 0.0f, 1.0f,
				1.0f, 0.0f, 0.0f, 1.0f,
				1.0f, 0.0f, 0.0f, 1.0f,
				1.0f, 0.0f, 0.0f, 1.0f,
				1.0f, 0.0f, 0.0f, 1.0f,
				...
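
Since each attribute now lives in its own tightly packed buffer, it can be bound with a stride of zero instead of the interleaved stride from lesson one. Here’s a hedged sketch of what passing the two buffers in might look like; the handle and buffer names are assumptions for the example:

// Pass in the position information (3 floats per vertex, tightly packed, so stride = 0).
mCubePositions.position(0);
GLES20.glVertexAttribPointer(mPositionHandle, 3, GLES20.GL_FLOAT, false, 0, mCubePositions);
GLES20.glEnableVertexAttribArray(mPositionHandle);

// Pass in the color information (4 floats per vertex, also tightly packed).
mCubeColors.position(0);
GLES20.glVertexAttribPointer(mColorHandle, 4, GLES20.GL_FLOAT, false, 0, mCubeColors);
GLES20.glEnableVertexAttribArray(mColorHandle);
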
New OpenGL flags

We have also enabled culling and the depth buffer via glEnable() calls:

		// Use culling to remove back faces.
		GLES20.glEnable(GLES20.GL_CULL_FACE);

		// Enable depth testing
		GLES20.glEnable(GLES20.GL_DEPTH_TEST);

As an optimization, you can tell OpenGL to eliminate triangles that are on the back side of an object. When we defined our cube, we also defined the three points of each triangle so that they are counter-clockwise when looking at the “front” side. When we flip the triangle around so we’re looking at the “back” side, the points then appear clockwise. You can only ever see three sides of a cube at the same time so this optimization tells OpenGL to not waste its time drawing the back sides of triangles.

Later when we draw transparent objects we may want to turn culling back off, as then it will be possible to see the back sides of objects.

We’ve also enabled depth testing. If you always draw things in order from back to front, depth testing is not strictly necessary. Enabling it, however, means you don’t have to worry about the draw order (although rendering can be faster if you draw closer objects first), and some graphics hardware can use the depth buffer to speed up rendering by spending less time on pixels that would be drawn over anyway.
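
If you prefer to be explicit, the winding order and which faces get culled can also be spelled out; the following sketch simply restates the OpenGL defaults alongside the glEnable calls above:

// Treat counter-clockwise triangles as front-facing (this is the OpenGL default).
GLES20.glFrontFace(GLES20.GL_CCW);

// Cull (skip drawing) the back-facing triangles (also the default once culling is enabled).
GLES20.glCullFace(GLES20.GL_BACK);
GLES20.glEnable(GLES20.GL_CULL_FACE);

// Enable depth testing so closer fragments win, regardless of draw order.
GLES20.glEnable(GLES20.GL_DEPTH_TEST);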

Changes in loading shader programs

Because the steps to loading shader programs in OpenGL are mostly the same, these steps can easily be refactored into a separate method. We’ve also added the following calls to retrieve debug info, in case the compilation/link fails:

GLES20.glGetProgramInfoLog(programHandle);
GLES20.glGetShaderInfoLog(shaderHandle);
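
One possible shape for that refactored helper is sketched below: it compiles a shader of a given type and logs the info log if compilation fails. This is an illustration of the idea, not necessarily the exact helper used in the lesson code (it assumes android.util.Log is imported).

/** Compiles a shader and returns its handle, or throws with the info log on failure. */
private int compileShader(final int shaderType, final String shaderSource) {
    int shaderHandle = GLES20.glCreateShader(shaderType);

    if (shaderHandle != 0) {
        // Pass in the shader source and compile it.
        GLES20.glShaderSource(shaderHandle, shaderSource);
        GLES20.glCompileShader(shaderHandle);

        // Check the compilation status.
        final int[] compileStatus = new int[1];
        GLES20.glGetShaderiv(shaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);

        if (compileStatus[0] == 0) {
            // Grab the error log before deleting the shader so we can see what went wrong.
            Log.e("Shader", "Error compiling shader: " + GLES20.glGetShaderInfoLog(shaderHandle));
            GLES20.glDeleteShader(shaderHandle);
            shaderHandle = 0;
        }
    }

    if (shaderHandle == 0) {
        throw new RuntimeException("Error creating shader.");
    }

    return shaderHandle;
}
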
Vertex and fragment shaders for the light point

There is a new pair of vertex and fragment shaders specifically for drawing the point on the screen that represents the current position of the light:

        // Define a simple shader program for our point.
        final String pointVertexShader =
        	"uniform mat4 u_MVPMatrix;      \n"
          +	"attribute vec4 a_Position;     \n"
          + "void main()                    \n"
          + "{                              \n"
          + "   gl_Position = u_MVPMatrix   \n"
          + "               * a_Position;   \n"
          + "   gl_PointSize = 5.0;         \n"
          + "}                              \n";

        final String pointFragmentShader =
        	"precision mediump float;       \n"
          + "void main()                    \n"
          + "{                              \n"
          + "   gl_FragColor = vec4(1.0,    \n"
          + "   1.0, 1.0, 1.0);             \n"
          + "}                              \n";

This shader is similar to the simple shader from the first lesson. There’s a new property, gl_PointSize, which we hard-code to 5.0; this is the output point size in pixels. It’s used when we draw the point with GLES20.GL_POINTS as the mode. We’ve also hard-coded the output color to white.
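
Drawing the point itself then only takes a few calls: switch to the point program, feed the light’s position in as a constant vertex attribute, and draw a single vertex in GL_POINTS mode. Here’s a rough sketch; the handle and parameter names are assumptions for the example:

/** Illustrative sketch: draws the light source as a single white point. */
private void drawLight(final int pointProgramHandle, final float[] lightPosInModelSpace, final float[] mvpMatrix) {
    GLES20.glUseProgram(pointProgramHandle);

    final int pointMVPMatrixHandle = GLES20.glGetUniformLocation(pointProgramHandle, "u_MVPMatrix");
    final int pointPositionHandle = GLES20.glGetAttribLocation(pointProgramHandle, "a_Position");

    // Pass in the position as a constant attribute; no buffer is needed for a single point.
    GLES20.glVertexAttrib3f(pointPositionHandle,
            lightPosInModelSpace[0], lightPosInModelSpace[1], lightPosInModelSpace[2]);
    GLES20.glDisableVertexAttribArray(pointPositionHandle);

    // Upload the combined matrix and draw one vertex in GL_POINTS mode;
    // gl_PointSize in the shader makes it 5 pixels wide.
    GLES20.glUniformMatrix4fv(pointMVPMatrixHandle, 1, false, mvpMatrix, 0);
    GLES20.glDrawArrays(GLES20.GL_POINTS, 0, 1);
}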

Further Exercises

  • Try removing the “oversaturation protection” and see what happens.
  • There is a flaw with the way the lighting is done. Can you spot what it is? Hint: What is the downside of the way I did ambient lighting, and what happens to the alpha?
  • What happens if you add a gl_PointSize to the cube shader and draw it using GL_POINTS?

Further Reading

The resources in this further reading section were invaluable to me while writing this tutorial, so I highly recommend reading them for more information and explanations.

Wrapping up

The full source code for this lesson can be downloaded from the project site on GitHub.

A compiled version of the lesson can also be downloaded directly from the Android Market:

QR code for link to the app on the Android Market.

Thanks for getting through another big lesson! I learned a lot while writing it, and I hope you learned a lot by following it through as well. Feel free to ask any questions or offer feedback, and thanks for stopping by!

Android Lesson One: Getting Started

This is the first tutorial on using OpenGL ES 2 on Android. In this lesson, we’re going to go over the code step-by-step, and look at how to create an OpenGL ES 2 context and draw to the screen. We’ll also take a look at what shaders are and how they work, as well as how matrices are used to transform the scene into the image you see on the screen. Finally, we’ll look at what you need to add to the manifest to let the Android market know that you’re using OpenGL ES 2.

Prerequisites

Before we begin, you’ll want to make sure you have the following tools installed on your machine:

The Android emulator did not originally support OpenGL ES 2, so you needed access to an actual Android device to run the tutorial; recent versions of the Android SDK, however, now support OpenGL ES 2 in the emulator. Most recent devices also support OpenGL ES 2 these days, so all you need to do is enable developer mode and hook the phone up to your machine.

Assumptions

The reader should be familiar with Android and Java on a basic level. The Android Tutorials are a good place to start.

Getting started

We’ll go over all of the code below and explain what each part does. You can copy the code on a segment by segment basis by creating your own project, or you can download the completed project at the end of the lesson. Once you have the tools installed, go ahead and create a new Android project in Eclipse. The names don’t matter, but for the lesson I will be referring to the main activity as LessonOneActivity.

Let’s take a look at the code:

	/** Hold a reference to our GLSurfaceView */
	private GLSurfaceView mGLSurfaceView;

The GLSurfaceView is a special view which manages an OpenGL surface for us and draws it into the Android view system. It also adds a lot of features which make it easier to use OpenGL, including but not limited to:

  • It provides a dedicated render thread for OpenGL so that the main thread is not stalled.
  • It supports continuous or on-demand rendering.
  • It takes care of the screen setup for you using EGL, the interface between OpenGL and the underlying window system.

The GLSurfaceView makes setting up and using OpenGL from Android relatively painless.

	@Override
	public void onCreate(Bundle savedInstanceState)
	{
		super.onCreate(savedInstanceState);

		mGLSurfaceView = new GLSurfaceView(this);

		// Check if the system supports OpenGL ES 2.0.
		final ActivityManager activityManager = (ActivityManager) getSystemService(Context.ACTIVITY_SERVICE);
		final ConfigurationInfo configurationInfo = activityManager.getDeviceConfigurationInfo();
		final boolean supportsEs2 = configurationInfo.reqGlEsVersion >= 0x20000;

		if (supportsEs2)
		{
			// Request an OpenGL ES 2.0 compatible context.
			mGLSurfaceView.setEGLContextClientVersion(2);

			// Set the renderer to our demo renderer, defined below.
			mGLSurfaceView.setRenderer(new LessonOneRenderer());
		}
		else
		{
			// This is where you could create an OpenGL ES 1.x compatible
			// renderer if you wanted to support both ES 1 and ES 2.
			return;
		}

		setContentView(mGLSurfaceView);
	}

The onCreate() of our activity is the important part where our OpenGL context gets created and where everything starts happening. In our onCreate(), the first thing we do after calling the superclass is create our GLSurfaceView. We then need to figure out if the system supports OpenGL ES 2. To do this, we get an ActivityManager instance which lets us interact with the global system state. We can then use this to get the device configuration info, which will tell us if the device supports OpenGL ES 2.

Once we know whether the device supports OpenGL ES 2, we tell the surface view we want an OpenGL ES 2 compatible surface and then we pass in a custom renderer. This renderer will be called by the system whenever it’s time to adjust the surface or draw a new frame. We can also support OpenGL ES 1.x by passing in a different renderer, though we would need to write different code, as the APIs are different. For this lesson we’ll only look at supporting OpenGL ES 2.

Finally, we set the content view to our GLSurfaceView, which tells Android that the activity’s contents should be filled by our OpenGL surface. To get into OpenGL, it’s as easy as that!

	@Override
	protected void onResume()
	{
		// The activity must call the GL surface view's onResume() on activity onResume().
		super.onResume();
		mGLSurfaceView.onResume();
	}

	@Override
	protected void onPause()
	{
		// The activity must call the GL surface view's onPause() on activity onPause().
		super.onPause();
		mGLSurfaceView.onPause();
	}

GLSurfaceView requires that we call its onResume() and onPause() methods whenever the parent Activity has its own onResume() and onPause() called. We add in the calls here to round out our activity.

Visualizing a 3D world

In this section, we’ll start looking at how OpenGL ES 2 works and how we can start drawing stuff onto the screen. In the activity we passed in a custom GLSurfaceView.Renderer to the GLSurfaceView, which will be defined here. The renderer has three important methods which will be automatically called by the system whenever events happen:

public void onSurfaceCreated(GL10 glUnused, EGLConfig config)
This method is called when the surface is first created. It will also be called if we lose our surface context and it is later recreated by the system.
public void onSurfaceChanged(GL10 glUnused, int width, int height)
This is called whenever the surface changes; for example, when switching from portrait to landscape. It is also called after the surface has been created.
public void onDrawFrame(GL10 glUnused)
This is called whenever it’s time to draw a new frame.

You may have noticed that the GL10 instance passed in is referred to as glUnused. We don’t use this when drawing using OpenGL ES 2; instead, we use the static methods of the class GLES20. The GL10 parameter is only there because the same interface is used for OpenGL ES 1.x.

Before our renderer can display anything, we’ll need to have something to display. In OpenGL ES 2, we pass in stuff to display by specifying arrays of numbers. These numbers can represent positions, colors, or anything else we need them to. In this demo, we’ll display three triangles.

	// New class members
	/** Store our model data in a float buffer. */
	private final FloatBuffer mTriangle1Vertices;
	private final FloatBuffer mTriangle2Vertices;
	private final FloatBuffer mTriangle3Vertices;

	/** How many bytes per float. */
	private final int mBytesPerFloat = 4;

	/**
	 * Initialize the model data.
	 */
	public LessonOneRenderer()
	{
		// This triangle is red, green, and blue.
		final float[] triangle1VerticesData = {
				// X, Y, Z,
				// R, G, B, A
	            -0.5f, -0.25f, 0.0f,
	            1.0f, 0.0f, 0.0f, 1.0f,

	            0.5f, -0.25f, 0.0f,
	            0.0f, 0.0f, 1.0f, 1.0f,

	            0.0f, 0.559016994f, 0.0f,
	            0.0f, 1.0f, 0.0f, 1.0f};

		...

		// Initialize the buffers.
		mTriangle1Vertices = ByteBuffer.allocateDirect(triangle1VerticesData.length * mBytesPerFloat)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();

		...

		mTriangle1Vertices.put(triangle1VerticesData).position(0);

		...
	}

So, what’s all this about? If you’ve ever used OpenGL 1, you might be used to doing things this way:

glBegin(GL_TRIANGLES);
glColor3f(1.0f, 0.0f, 0.0f);    // Set the current color first; it applies to the vertices that follow.
glVertex3f(-0.5f, -0.25f, 0.0f);
...
glEnd();

Things don’t work that way in OpenGL ES 2. Instead of defining points via a bunch of method calls, we define an array instead. Let’s take a look at our array again:

final float[] triangle1VerticesData = {
				// X, Y, Z,
				// R, G, B, A
	            -0.5f, -0.25f, 0.0f,
	            1.0f, 0.0f, 0.0f, 1.0f,
	            ...

This represents one point of the triangle. We’ve set things up so that the first three numbers represent the position (X, Y, and Z), and the last four numbers represent the color (red, green, blue, and alpha (transparency)). You don’t need to worry too much about how this array is defined; just remember that when we want to draw stuff in OpenGL ES 2, we need to pass it data in chunks instead of passing it in one at a time.

Understanding buffers
// Initialize the buffers.
		mTriangle1Vertices = ByteBuffer.allocateDirect(triangle1VerticesData.length * mBytesPerFloat)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
		...
		mTriangle1Vertices.put(triangle1VerticesData).position(0);

We do our coding in Java on Android, but the underlying implementation of OpenGL ES 2 is actually written in C. Before we pass our data to OpenGL, we need to convert it into a form that it’s going to understand. Java and the native system might not store their bytes in the same order, so we use a special set of buffer classes and create a ByteBuffer large enough to hold our data, and tell it to store its data using the native byte order. We then convert it into a FloatBuffer so that we can use it to hold floating-point data. Finally, we copy our array into the buffer.

This buffer stuff might seem confusing (it was to me when I first came across it!), but just remember that it’s an extra step we need to do before passing our data to OpenGL. Our buffers are now ready to be used to pass data into OpenGL.
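
Since the same three steps (allocate, set the byte order, copy) are repeated for each triangle, they can be wrapped up in a small helper like the one sketched below; this is just an illustration, not the lesson’s actual code.

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

/** Illustrative helper: copies a float array into a native-order direct FloatBuffer. */
public static FloatBuffer createFloatBuffer(final float[] data) {
    final int bytesPerFloat = 4;

    final FloatBuffer buffer = ByteBuffer
            .allocateDirect(data.length * bytesPerFloat)  // Reserve native (off-heap) memory.
            .order(ByteOrder.nativeOrder())               // Match the platform's byte order.
            .asFloatBuffer();                             // View the bytes as floats.

    buffer.put(data).position(0);                         // Copy the data and rewind.
    return buffer;
}

With a helper like this, each member could be initialized in a single line, for example mTriangle1Vertices = createFloatBuffer(triangle1VerticesData);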

As a side note, float buffers are slow on Froyo and moderately faster on Gingerbread, so you probably don’t want to be changing them too often.

Understanding matrices
	 // New class definitions
	 /**
	 * Store the view matrix. This can be thought of as our camera. This matrix transforms world space to eye space;
	 * it positions things relative to our eye.
	 */
	private float[] mViewMatrix = new float[16];

	@Override
	public void onSurfaceCreated(GL10 glUnused, EGLConfig config)
	{
		// Set the background clear color to gray.
		GLES20.glClearColor(0.5f, 0.5f, 0.5f, 0.5f);

		// Position the eye behind the origin.
		final float eyeX = 0.0f;
		final float eyeY = 0.0f;
		final float eyeZ = 1.5f;

		// We are looking toward the distance
		final float lookX = 0.0f;
		final float lookY = 0.0f;
		final float lookZ = -5.0f;

		// Set our up vector. This is where our head would be pointing were we holding the camera.
		final float upX = 0.0f;
		final float upY = 1.0f;
		final float upZ = 0.0f;

		// Set the view matrix. This matrix can be said to represent the camera position.
		// NOTE: In OpenGL 1, a ModelView matrix is used, which is a combination of a model and
		// view matrix. In OpenGL 2, we can keep track of these matrices separately if we choose.
		Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);

		...

Another ‘fun’ topic is matrices! These will become your best friends whenever you do 3D programming, so you’ll want to get to know them well.

When our surface is created, the first thing we do is set our clear color to gray. The alpha component has also been set to 0.5, but we’re not doing alpha blending in this lesson so this value is unused. We only need to set the clear color once since we will not be changing it later.

The second thing we do is setup our view matrix. There are several different kinds of matrices we use and they all do something important:

  1. The model matrix. This matrix is used to place a model somewhere in the “world”. For example, if you have a model of a car and you want it located 1000 meters to the east, you will use the model matrix to do this.
  2. The view matrix. This matrix represents the camera. If we want to view our car which is 1000 meters to the east, we’ll have to move ourselves 1000 meters to the east as well (another way of thinking about it is that we remain stationary, and the rest of the world moves 1000 meters to the west). We use the view matrix to do this.
  3. The projection matrix. Since our screens are flat, we need to do a final transformation to “project” our view onto our screen and get that nice 3D perspective. This is what the projection matrix is used for.

A good explanation of this can be found over at SongHo’s OpenGL Tutorials. I recommend reading it a few times until you grasp the idea well; don’t worry, it took me a few reads as well!

In OpenGL 1, the model and view matrices are combined and the camera is assumed to be at (0, 0, 0) and facing the -Z direction.

We don’t need to construct these matrices by hand. Android has a Matrix helper class which can do the heavy lifting for us. Here, I create a view matrix for a camera which is positioned behind the origin and looking toward the distance.

Defining vertex and fragment shaders
		final String vertexShader =
			"uniform mat4 u_MVPMatrix;      \n"		// A constant representing the combined model/view/projection matrix.

		  + "attribute vec4 a_Position;     \n"		// Per-vertex position information we will pass in.
		  + "attribute vec4 a_Color;        \n"		// Per-vertex color information we will pass in.

		  + "varying vec4 v_Color;          \n"		// This will be passed into the fragment shader.

		  + "void main()                    \n"		// The entry point for our vertex shader.
		  + "{                              \n"
		  + "   v_Color = a_Color;          \n"		// Pass the color through to the fragment shader.
		  											// It will be interpolated across the triangle.
		  + "   gl_Position = u_MVPMatrix   \n" 	// gl_Position is a special variable used to store the final position.
		  + "               * a_Position;   \n"     // Multiply the vertex by the matrix to get the final point in
		  + "}                              \n";    // normalized screen coordinates.

In OpenGL ES 2, anything we want to display on the screen is first going to have to go through a vertex and fragment shader. The good thing is that these shaders are really not as complicated as they appear. Vertex shaders perform operations on each vertex, and the results of these operations are used in the fragment shaders which do additional calculations per pixel.

Each shader basically consists of input, output, and a program. First we define a uniform, which is a combined matrix containing all of our transformations. This is a constant across all vertices and is used to project them onto the screen. Then we define two attributes for position and color. These attributes will be read in from the buffer we defined earlier on, and specify the position and color of each vertex. We then define a varying, which interpolates values across the triangle and passes it on to the fragment shader. When it gets to the fragment shader, it will hold an interpolated value for each pixel.

Let’s say we defined a triangle with each point being red, green, and blue, and we sized it so that it will take up 10 pixels on the screen. When the fragment shader runs, it will contain a different varying color for each pixel. At one point, that varying will be red, but halfway between red and blue it may be a more purplish color.
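
The interpolation itself is just a weighted blend of the vertex values. As a rough illustration (plain Java, hypothetical helper), a fragment halfway along the edge between the red vertex and the blue vertex would receive:

/** Linearly interpolates between two RGBA colors; t = 0 gives a, t = 1 gives b. */
public static float[] lerpColor(final float[] a, final float[] b, final float t) {
    final float[] result = new float[4];
    for (int i = 0; i < 4; i++) {
        result[i] = a[i] * (1.0f - t) + b[i] * t;
    }
    return result;
}

// Halfway between red {1, 0, 0, 1} and blue {0, 0, 1, 1}:
// lerpColor(red, blue, 0.5f) -> {0.5, 0.0, 0.5, 1.0}, a purplish color.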

Aside from setting the color, we also tell OpenGL what the final position of the vertex should be on the screen. Then we define the fragment shader:

		final String fragmentShader =
			"precision mediump float;       \n"		// Set the default precision to medium. We don't need as high of a
													// precision in the fragment shader.
		  + "varying vec4 v_Color;          \n"		// This is the color from the vertex shader interpolated across the
		  											// triangle per fragment.
		  + "void main()                    \n"		// The entry point for our fragment shader.
		  + "{                              \n"
		  + "   gl_FragColor = v_Color;     \n"		// Pass the color directly through the pipeline.
		  + "}                              \n";

This is the fragment shader which will actually put stuff on the screen. In this shader, we grab the varying color from the vertex shader and just pass it straight through to OpenGL. The value has already been interpolated per pixel, since the fragment shader runs for each pixel that will be drawn.

More information can be found on the OpenGL ES 2 quick reference card.

Loading shaders into OpenGL
		// Load in the vertex shader.
		int vertexShaderHandle = GLES20.glCreateShader(GLES20.GL_VERTEX_SHADER);

		if (vertexShaderHandle != 0)
		{
			// Pass in the shader source.
			GLES20.glShaderSource(vertexShaderHandle, vertexShader);

			// Compile the shader.
			GLES20.glCompileShader(vertexShaderHandle);

			// Get the compilation status.
			final int[] compileStatus = new int[1];
			GLES20.glGetShaderiv(vertexShaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);

			// If the compilation failed, delete the shader.
			if (compileStatus[0] == 0)
			{
				GLES20.glDeleteShader(vertexShaderHandle);
				vertexShaderHandle = 0;
			}
		}

		if (vertexShaderHandle == 0)
		{
			throw new RuntimeException("Error creating vertex shader.");
		}

First, we create the shader object. If this succeeded, we’ll get a reference to the object. Then we use this reference to pass in the shader source code, and then we compile it. We can obtain the status from OpenGL and see if it compiled successfully. If there were errors, we can use GLES20.glGetShaderInfoLog(shader) to find out why. We follow the same steps to load the fragment shader.

Linking a vertex and fragment shader together into a program
		// Create a program object and store the handle to it.
		int programHandle = GLES20.glCreateProgram();

		if (programHandle != 0)
		{
			// Bind the vertex shader to the program.
			GLES20.glAttachShader(programHandle, vertexShaderHandle);

			// Bind the fragment shader to the program.
			GLES20.glAttachShader(programHandle, fragmentShaderHandle);

			// Bind attributes
			GLES20.glBindAttribLocation(programHandle, 0, "a_Position");
			GLES20.glBindAttribLocation(programHandle, 1, "a_Color");

			// Link the two shaders together into a program.
			GLES20.glLinkProgram(programHandle);

			// Get the link status.
			final int[] linkStatus = new int[1];
			GLES20.glGetProgramiv(programHandle, GLES20.GL_LINK_STATUS, linkStatus, 0);

			// If the link failed, delete the program.
			if (linkStatus[0] == 0)
			{
				GLES20.glDeleteProgram(programHandle);
				programHandle = 0;
			}
		}

		if (programHandle == 0)
		{
			throw new RuntimeException("Error creating program.");
		}

Before we can use our vertex and fragment shader, we need to bind them together into a program. This is what connects the output of the vertex shader with the input of the fragment shader. It’s also what lets us pass in input from our program and use the shader to draw our shapes.

We create a new program object, and if that succeeded, we then attach our shaders. We want to pass in the position and color as attributes, so we need to bind these attributes. We then link the shaders together.

	//New class members
	/** This will be used to pass in the transformation matrix. */
	private int mMVPMatrixHandle;

	/** This will be used to pass in model position information. */
	private int mPositionHandle;

	/** This will be used to pass in model color information. */
	private int mColorHandle;

	@Override
	public void onSurfaceCreated(GL10 glUnused, EGLConfig config)
	{
        ...

        // Set program handles. These will later be used to pass in values to the program.
        mMVPMatrixHandle = GLES20.glGetUniformLocation(programHandle, "u_MVPMatrix");
        mPositionHandle = GLES20.glGetAttribLocation(programHandle, "a_Position");
        mColorHandle = GLES20.glGetAttribLocation(programHandle, "a_Color");

        // Tell OpenGL to use this program when rendering.
        GLES20.glUseProgram(programHandle);
	}

After we successfully linked our program, we finish up with a couple more tasks so we can actually use it. The first task is obtaining references so we can pass data into the program. Then we tell OpenGL to use this program when drawing. Since we only use one program in this lesson, we can put this in the onSurfaceCreated() instead of the onDrawFrame().

Setting the perspective projection
	// New class members
	/** Store the projection matrix. This is used to project the scene onto a 2D viewport. */
	private float[] mProjectionMatrix = new float[16];

	@Override
	public void onSurfaceChanged(GL10 glUnused, int width, int height)
	{
		// Set the OpenGL viewport to the same size as the surface.
		GLES20.glViewport(0, 0, width, height);

		// Create a new perspective projection matrix. The height will stay the same
		// while the width will vary as per aspect ratio.
		final float ratio = (float) width / height;
		final float left = -ratio;
		final float right = ratio;
		final float bottom = -1.0f;
		final float top = 1.0f;
		final float near = 1.0f;
		final float far = 10.0f;

		Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
	}

Our onSurfaceChanged() is called at least once and also whenever our surface is changed. Since we only need to reset our projection matrix whenever the screen we’re projecting onto has changed, onSurfaceChanged() is an ideal place to do it.

Drawing stuff to the screen!
	// New class members
	/**
	 * Store the model matrix. This matrix is used to move models from object space (where each model can be thought
	 * of being located at the center of the universe) to world space.
	 */
	private float[] mModelMatrix = new float[16];

	@Override
	public void onDrawFrame(GL10 glUnused)
	{
		GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);

        // Do a complete rotation every 10 seconds.
        long time = SystemClock.uptimeMillis() % 10000L;
        float angleInDegrees = (360.0f / 10000.0f) * ((int) time);

        // Draw the triangle facing straight on.
        Matrix.setIdentityM(mModelMatrix, 0);
        Matrix.rotateM(mModelMatrix, 0, angleInDegrees, 0.0f, 0.0f, 1.0f);
        drawTriangle(mTriangle1Vertices);

        ...
}

This is where stuff actually gets displayed on the screen. We clear the screen so we don’t get any weird hall-of-mirrors effects, and since we want our triangles to animate smoothly, we rotate them based on time. Whenever you animate something on the screen, it’s usually better to base the animation on time rather than on frame rate.

The actual drawing is done in drawTriangle:

	// New class members
	/** Allocate storage for the final combined matrix. This will be passed into the shader program. */
	private float[] mMVPMatrix = new float[16];

	/** How many elements per vertex. */
	private final int mStrideBytes = 7 * mBytesPerFloat;

	/** Offset of the position data. */
	private final int mPositionOffset = 0;

	/** Size of the position data in elements. */
	private final int mPositionDataSize = 3;

	/** Offset of the color data. */
	private final int mColorOffset = 3;

	/** Size of the color data in elements. */
	private final int mColorDataSize = 4;

	/**
	 * Draws a triangle from the given vertex data.
	 *
	 * @param aTriangleBuffer The buffer containing the vertex data.
	 */
	private void drawTriangle(final FloatBuffer aTriangleBuffer)
	{
		// Pass in the position information
		aTriangleBuffer.position(mPositionOffset);
        GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false,
        		mStrideBytes, aTriangleBuffer);

        GLES20.glEnableVertexAttribArray(mPositionHandle);

        // Pass in the color information
        aTriangleBuffer.position(mColorOffset);
        GLES20.glVertexAttribPointer(mColorHandle, mColorDataSize, GLES20.GL_FLOAT, false,
        		mStrideBytes, aTriangleBuffer);

        GLES20.glEnableVertexAttribArray(mColorHandle);

		// This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
        // (which currently contains model * view).
        Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);

        // This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
        // (which now contains model * view * projection).
        Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);

        GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);
        GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 3);
	}

Do you remember those buffers we defined when we originally created our renderer? We’re finally going to be able to use them. We need to tell OpenGL how to use this data using GLES20.glVertexAttribPointer(). Let’s look at the first call.

		// Pass in the position information
		aTriangleBuffer.position(mPositionOffset);
        GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false,
        		mStrideBytes, aTriangleBuffer);
        GLES20.glEnableVertexAttribArray(mPositionHandle);

We set our buffer position to the position offset, which is at the beginning of the buffer. We then tell OpenGL to use this data and feed it into the vertex shader and apply it to our position attribute. We also need to tell OpenGL how many elements there are between each vertex, or the stride.

Note: The stride needs to be defined in bytes. Although we have 7 elements (3 for the position, 4 for the color) between vertices, we actually have 28 bytes, since each floating point number takes up 4 bytes. Forgetting this step may not cause any errors, but you will be wondering why you don’t see anything on the screen.
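
One way to make that mistake harder is to derive the stride from the element counts instead of hard-coding the 7; a small variation on the members defined above (note that in Java the size fields have to be declared before the stride field for this to compile):

/** Size of the position data in elements. */
private final int mPositionDataSize = 3;

/** Size of the color data in elements. */
private final int mColorDataSize = 4;

/** How many bytes per float. */
private final int mBytesPerFloat = 4;

/** Stride in bytes: (3 position + 4 color elements) * 4 bytes per float = 28 bytes. */
private final int mStrideBytes = (mPositionDataSize + mColorDataSize) * mBytesPerFloat;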

Finally, we enable the vertex attribute and move on to the next attribute. A little bit further down we build a combined matrix to project points onto the screen. We could do this in the vertex shader, too, but since it only needs to be done once we may as well just cache the result. We pass in the final matrix to the vertex shader using GLES20.glUniformMatrix4fv() and GLES20.glDrawArrays() converts our points into a triangle and draws it on the screen.

Recap

Visual output of Lesson one.

Whew! This was a big lesson, and kudos to you if you made it all the way through. We learned how to create our OpenGL context, pass shape data, load in a vertex and pixel shader, set up our transformation matrices, and finally bring it all together. If everything went well, you should see something similar to the screenshot on the right.

This lesson was a lot to digest, and you may need to go over the steps a few times to understand it well. OpenGL ES 2 takes more setup work to get going, but once you’ve been through the process a few times, you’ll know the flow like the back of your hand.

Publishing on Android Market

When developing apps, we don’t want people whose devices can’t run them to see them in the Market; otherwise we could end up with a lot of bad reviews and ratings when the app crashes on their devices. To prevent an OpenGL ES 2 app from appearing on a device which doesn’t support it, you can add this to your manifest:

<uses-feature
    android:glEsVersion="0x00020000"
    android:required="true" />

This tells the Market that your app requires OpenGL ES 2, and it will hide your app from devices which don’t support it.

Exploring further

Try changing the animation speed, vertex points, or colors, and see what happens!

The full source code for this lesson can be downloaded from the project site on GitHub.

A compiled version of the lesson can also be downloaded directly from the Android Market.

QR code for link to the app on the Android Market.

I also recommend looking at the code in ApiDemos, which can be found under the Samples folder in your Android SDK. The code in there helped me out a lot when I was preparing this lesson.

Please don’t hesitate to ask any questions or offer feedback, and thanks for stopping by!

Welcome to Learn OpenGL ES!

Hi there, and welcome to Learn OpenGL ES!

This is the beginning of a site about developing graphics in OpenGL ES for Android, WebGL, and more. According to Khronos, OpenGL ES is “the standard for embedded accelerated 3D graphics” and is entering widespread use.

What is OpenGL ES?

OpenGL ES is a graphics library that can be used for 2D and 3D graphics. It is basically a slimmed-down and optimized version of desktop OpenGL and is used on Android, iPhone, J2ME, the web via WebGL, and more. There are currently two main branches: OpenGL ES 1.x, and OpenGL ES 2.x. The first version uses a fixed-function pipeline which will be familiar to people who have worked with desktop OpenGL and for those who have followed tutorials from other sites such as Nehe’s OpenGL tutorials.

The second version is somewhat of a different beast entirely, as many of the fixed functions have been removed with the functionality replaced by small programs known as shaders. This ability to program the graphics card gives a lot more power and flexibility, but it also requires more work and understanding from the developer.

Why learn OpenGL ES?

OpenGL ES is a cross-platform library, so once you learn it, you’ll be able to take that knowledge with you and create compelling and beautiful graphics across a wide variety of platforms. Hundreds of thousands of devices are shipping with OpenGL ES every single day. If you’re interested in learning more about mobile game development or the future of graphics on the web, then you’ll definitely want to learn OpenGL ES.

About me

Like some of you, I am also a beginner when it comes to OpenGL ES. My goal with this website is not only to develop my skills, but to share what I have learned with others. I have found over time that this is a good way to learn a topic that you’re interested in. I am personally interested in Android and WebGL in particular, so I will focus on those for now. In the meantime, I have linked to some great resources over in the right sidebar.

Thanks for stopping by, and hope to see you again soon.