## Rotating An Object With Touch Events

Rotating an object in 3D is a neat way of letting your users interact with the scene, but the math can be tricky to get right. In this article, I’ll take a look at a simple way to rotate an object based on touch events, and how to work around the main drawback of this method.

#### Simple rotations.

This is the easiest way to rotate an object based on touch movement. Here is example pseudocode:

```
Matrix.setIdentity(modelMatrix);

... do other translations here ...

Matrix.rotateX(totalYMovement);
Matrix.rotateY(totalXMovement);
```

This is done every frame.

To rotate an object up or down, we rotate it around the X axis, and to rotate an object left or right, we rotate it around the Y axis. We could also rotate an object around the Z axis if we wanted it to spin around.

#### How to make the rotation appear relative to the user’s point of view.

The main problem with the simple way of rotating is that the object is being rotated in relation to itself, instead of in relation to the user’s point of view. If you rotate left and right from a point of zero rotation, the cube will rotate as you expect, but what if you then rotate it up or down 180 degrees? Trying to rotate the cube left or right will now rotate it in the opposite direction!

One easy way to work around this problem is to keep a second matrix around that will store all of the accumulated rotations.

Here’s what we need to do:

1. Every frame, calculate the delta between the last position of the pointer and the current position of the pointer. This delta will be used to update our accumulated rotation matrix.
2. Use this matrix to rotate the cube.

What this means is that drags left, right, up, and down will always move the cube in the direction that we expect.
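To see concretely why the accumulated matrix fixes the reversal, here is a small standalone sketch comparing both methods after the cube has been flipped 180 degrees. The class name is hypothetical, and it uses plain 3×3 row-major arrays rather than Android's Matrix class, but the math is the same:

```java
public class RotationDemo {
    // Multiply two 3x3 row-major matrices: r = a * b.
    static float[] mul(float[] a, float[] b) {
        float[] r = new float[9];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    r[3 * i + j] += a[3 * i + k] * b[3 * k + j];
        return r;
    }

    // Multiply a 3x3 matrix by a column vector.
    static float[] apply(float[] m, float[] v) {
        return new float[] {
            m[0] * v[0] + m[1] * v[1] + m[2] * v[2],
            m[3] * v[0] + m[4] * v[1] + m[5] * v[2],
            m[6] * v[0] + m[7] * v[1] + m[8] * v[2]
        };
    }

    static float[] rotX(double deg) {
        float c = (float) Math.cos(Math.toRadians(deg));
        float s = (float) Math.sin(Math.toRadians(deg));
        return new float[] { 1, 0, 0,  0, c, -s,  0, s, c };
    }

    static float[] rotY(double deg) {
        float c = (float) Math.cos(Math.toRadians(deg));
        float s = (float) Math.sin(Math.toRadians(deg));
        return new float[] { c, 0, s,  0, 1, 0,  -s, 0, c };
    }

    public static void main(String[] args) {
        // After flipping the cube 180 degrees about X, the model point at -Z faces the viewer.
        float[] visiblePoint = { 0, 0, -1 };

        // Simple method: model = Rx(totalY) * Ry(totalX), with a 10-degree drag to the right.
        float[] simple = mul(rotX(180), rotY(10));
        float xSimple = apply(simple, visiblePoint)[0];

        // Accumulated method: premultiply the 10-degree delta onto the accumulated flip.
        float[] accumulated = mul(rotY(10), rotX(180));
        float xAccum = apply(accumulated, visiblePoint)[0];

        System.out.println(xSimple < 0); // true: the visible face moves left (reversed)
        System.out.println(xAccum > 0);  // true: the visible face moves right, as expected
    }
}
```

With the simple method, a rightward drag moves the visible face left once the cube is flipped; premultiplying the delta onto the accumulated rotation keeps it moving right.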

##### Android Code

The code examples here are written for Android, but can easily be adapted to any platform running OpenGL ES. The code is based on Android Lesson Six: An Introduction to Texture Filtering.

In LessonSixGLSurfaceView.java, we declare a few member variables:

```
private float mPreviousX;
private float mPreviousY;

private float mDensity;
```

We store the previous pointer position on each touch event so that we can calculate the relative movement left, right, up, or down. We also store the screen density so that drags across the screen move the object a consistent amount across devices, regardless of pixel density.

Here’s how to get the pixel density:

```
final DisplayMetrics displayMetrics = new DisplayMetrics();
getWindowManager().getDefaultDisplay().getMetrics(displayMetrics);
mDensity = displayMetrics.density;
```

Note that `getWindowManager()` is a method on the activity, so the density is obtained there and then passed into the view.
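As a quick sanity check on why we divide by the density: the same physical drag covers different pixel counts on different screens, but yields the same density-independent delta. A minimal standalone sketch (hypothetical class and method names, plain arithmetic rather than the Android APIs):

```java
public class DensityDemo {
    // Convert a raw pixel delta to density-independent pixels (dp).
    static float toDp(float pixels, float density) {
        return pixels / density;
    }

    public static void main(String[] args) {
        // A one-inch drag on an mdpi screen (density 1.0) covers ~160 px;
        // the same physical drag on an xhdpi screen (density 2.0) covers ~320 px.
        System.out.println(toDp(160f, 1.0f)); // 160.0
        System.out.println(toDp(320f, 2.0f)); // 160.0 -- same rotation amount on both devices
    }
}
```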

Then we add our touch event handler to our custom GLSurfaceView:

```
@Override
public boolean onTouchEvent(MotionEvent event)
{
    if (event != null)
    {
        float x = event.getX();
        float y = event.getY();

        if (event.getAction() == MotionEvent.ACTION_MOVE)
        {
            if (mRenderer != null)
            {
                float deltaX = (x - mPreviousX) / mDensity / 2f;
                float deltaY = (y - mPreviousY) / mDensity / 2f;

                mRenderer.mDeltaX += deltaX;
                mRenderer.mDeltaY += deltaY;
            }
        }

        mPreviousX = x;
        mPreviousY = y;

        return true;
    }
    else
    {
        return super.onTouchEvent(event);
    }
}
```

On each touch event, we compare the current pointer position with the previous one and use that to calculate the delta offset. We then divide that delta offset by the pixel density and a slowing factor of 2.0f to get our final delta values. We apply those deltas directly to a couple of public variables on the renderer, which we have declared as volatile so that they can be updated across threads.

Remember that on Android, the OpenGL renderer runs on a different thread than the UI event handler, and there is a slim chance that the other thread runs in between the X and Y variable assignments (the += syntax also introduces additional points of contention, since it is a read-modify-write operation). I have left the code like this to bring up this point; as an exercise for the reader, try replacing the volatile variables with synchronized statements around the paired reads and writes.
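For reference, one possible shape of that exercise is a small holder class (the class name is hypothetical, not part of the lesson's code) where the UI thread writes both deltas under one lock and the renderer thread reads and clears them under the same lock, so X and Y always stay paired:

```java
public class TouchDeltas {
    private float mDeltaX;
    private float mDeltaY;

    // Called from the UI thread in onTouchEvent(): both deltas updated atomically.
    public synchronized void add(float deltaX, float deltaY) {
        mDeltaX += deltaX;
        mDeltaY += deltaY;
    }

    // Called from the renderer thread in onDrawFrame(): returns and resets the pair atomically.
    public synchronized float[] consume() {
        float[] result = { mDeltaX, mDeltaY };
        mDeltaX = 0f;
        mDeltaY = 0f;
        return result;
    }
}
```

The renderer would call `consume()` once per frame in place of reading and zeroing the public volatile fields.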

In the renderer, we first add a couple of matrices and initialize them:

```
/** Store the accumulated rotation. */
private final float[] mAccumulatedRotation = new float[16];

/** Store the current rotation. */
private final float[] mCurrentRotation = new float[16];
```
```
@Override
public void onSurfaceCreated(GL10 glUnused, EGLConfig config)
{
    ...

    // Initialize the accumulated rotation matrix
    Matrix.setIdentityM(mAccumulatedRotation, 0);
}
```

Here’s what our matrix code looks like in the onDrawFrame method:

```
// Draw a cube.
// Translate the cube into the screen.
Matrix.setIdentityM(mModelMatrix, 0);
Matrix.translateM(mModelMatrix, 0, 0.0f, 0.8f, -3.5f);

// Set a matrix that contains the current rotation.
Matrix.setIdentityM(mCurrentRotation, 0);
Matrix.rotateM(mCurrentRotation, 0, mDeltaX, 0.0f, 1.0f, 0.0f);
Matrix.rotateM(mCurrentRotation, 0, mDeltaY, 1.0f, 0.0f, 0.0f);
mDeltaX = 0.0f;
mDeltaY = 0.0f;

// Multiply the current rotation by the accumulated rotation, and then set the accumulated
// rotation to the result.
Matrix.multiplyMM(mTemporaryMatrix, 0, mCurrentRotation, 0, mAccumulatedRotation, 0);
System.arraycopy(mTemporaryMatrix, 0, mAccumulatedRotation, 0, 16);

// Rotate the cube taking the overall rotation into account.
Matrix.multiplyMM(mTemporaryMatrix, 0, mModelMatrix, 0, mAccumulatedRotation, 0);
System.arraycopy(mTemporaryMatrix, 0, mModelMatrix, 0, 16);
```

1. First, we translate the cube.
2. Then we build a matrix that will contain the current amount of rotation, between this frame and the preceding frame.
3. We then multiply this matrix with the accumulated rotation, and assign the accumulated rotation to the result. The accumulated rotation contains the result of all of our rotations since the beginning.
4. Now that we’ve updated the accumulated rotation matrix with the most recent rotation, we finally rotate the cube by multiplying the model matrix with our rotation matrix, and then we set the model matrix to the result.

The above code might look a bit confusing due to the parameter ordering, so remember the definitions:

```
public static void multiplyMM(float[] result, int resultOffset, float[] lhs, int lhsOffset, float[] rhs, int rhsOffset)

public static void arraycopy(Object src, int srcPos, Object dst, int dstPos, int length)
```

Note the position of source and destination for each method call.

##### Trouble spots and pitfalls

- The accumulated matrix should be set to identity once when initialized, and should not be reset to identity each frame.
- Previous pointer positions must also be set on pointer down events, not only on pointer move events.
- Watch the order of parameters, and also watch out for corrupting your matrices. The documentation for Android's Matrix.multiplyMM states that "the result element values are undefined if the result elements overlap either the lhs or rhs elements." Use temporary matrices to avoid this problem.

##### WebGL examples

The example on the left uses the simplest method of rotating, while the example on the right uses the accumulated rotations matrix.

##### Further exercises

What are the drawbacks of using a matrix to hold accumulated rotations and updating it every frame based on the movement delta for that frame? What other ways of rotation are there? Try experimenting, and see what else you can come up with!


## OpenGL ES Roundup, February 12, 2012

If you haven’t checked it out yet, I recommend taking a look at the new Android Design website. There are a lot of resources and interesting information on developing attractive apps for Ice Cream Sandwich, Android’s newest platform. With Ice Cream Sandwich come new changes, such as the deprecation of the menu bar.

##### Roundup

Question for you, dear reader: What do you all think about the current site design? I’d love to hear your thoughts and feedback, both positive and negative.

## OpenGL ES Roundup, October 4, 2011

Diney from db-in.com has a great introduction to shaders up at his site. He annotates his post with useful diagrams and also has a clear and neat introduction to tangent space.

Learning WebGL has a cool roundup of WebGL around the web.

The NeHe OpenGL site is alive! They are starting to post updates on a regular basis again.

Is 3D the future of Android? This article by The Droid Demos takes a look at the emerging 3D screen technologies.

Android and Me has a possible screenshot of the Nexus Prime, Google’s next flagship device which will run Ice Cream Sandwich.

## How to Embed Webgl into a WordPress Post

##### Embedding into WordPress

This information was originally part of WebGL Lesson One: Getting Started, but I thought it would be more useful if I also broke it out into a post on its own.

Embedding WebGL into WordPress can be a little tricky, but if you use the right tags and the HTML editor it can be done without too much trouble! You’ll need to insert a canvas in your post, script includes for any third-party libraries, and a script body for your main script (this can also be an include).

First, place a canvas tag where you would like the graphics to appear:

`<pre><canvas id="canvas" width="550" height="375">Your browser does not support the canvas tag. This is a static example of what would be seen.</canvas></pre>`

Then, include any necessary scripts:

`<pre><script type="text/javascript" src="http://www.learnopengles.com/wordpress/wp-content/uploads/2011/06/webgl-utils.js"></script></pre>`

Embed any additional scripts that you need, such as the main body for your WebGL script:

```
<pre><script type="text/javascript">
/**
 * Lesson_one.js
 */

...

</script></pre>
```

The `<pre>` tag is important; without it, WordPress will mangle your scripts and insert random paragraph tags and other markup inside them. Also, once you’ve inserted this code, you have to stick to using the HTML editor, as the visual editor will also mangle or delete your scripts.

If you’ve embedded your scripts correctly, then your WebGL canvas should appear, like below:

Hope this helps out some people out there!

```

uniform mat4 u_MVPMatrix;   // A constant representing the combined model/view/projection matrix.

attribute vec4 a_Position;  // Per-vertex position information we will pass in.
attribute vec4 a_Color;     // Per-vertex color information we will pass in.

varying vec4 v_Color;       // This will be passed into the fragment shader.

void main()                 // The entry point for our vertex shader.
{
v_Color = a_Color;      // Pass the color through to the fragment shader.
// It will be interpolated across the triangle.

// gl_Position is a special variable used to store the final position.
// Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
gl_Position = u_MVPMatrix * a_Position;
}

precision mediump float;       // Set the default precision to medium. We don't need as high of a
// precision in the fragment shader.
varying vec4 v_Color;          // This is the color from the vertex shader interpolated across the
// triangle per fragment.
void main()                    // The entry point for our fragment shader.
{
gl_FragColor = v_Color;    // Pass the color directly through the pipeline.
}

/**
* Lesson_one.js
*/

// We make use of the WebGL utility library, which was downloaded from here:
// https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/demos/common/webgl-utils.js
//
// It defines two functions which we use here:
//
// // Creates a WebGL context.
// WebGLUtils.setupWebGL(canvas);
//
// Requests an animation callback. See: https://developer.mozilla.org/en/DOM/window.requestAnimationFrame
// window.requestAnimFrame(callback, node);
//
// We also make use of the glMatrix file which can be downloaded from here:
//

/** Hold a reference to the WebGLContext */
var gl = null;

/** Hold a reference to the canvas DOM object. */
var canvas = null;

/**
* Store the model matrix. This matrix is used to move models from object space (where each model can be thought
* of being located at the center of the universe) to world space.
*/
var modelMatrix = mat4.create();

/**
* Store the view matrix. This can be thought of as our camera. This matrix transforms world space to eye space;
* it positions things relative to our eye.
*/
var viewMatrix = mat4.create();

/** Store the projection matrix. This is used to project the scene onto a 2D viewport. */
var projectionMatrix = mat4.create();

/** Allocate storage for the final combined matrix. This will be passed into the shader program. */
var mvpMatrix = mat4.create();

/** Store our model data in a Float32Array buffer. */
var trianglePositions;
var triangle1Colors;
var triangle2Colors;
var triangle3Colors;

/** Store references to the vertex buffer objects (VBOs) that will be created. */
var trianglePositionBufferObject;
var triangleColorBufferObject1;
var triangleColorBufferObject2;
var triangleColorBufferObject3;

/** This will be used to pass in the transformation matrix. */
var mvpMatrixHandle;

/** This will be used to pass in model position information. */
var positionHandle;

/** This will be used to pass in model color information. */
var colorHandle;

/** Size of the position data in elements. */
var positionDataSize = 3;

/** Size of the color data in elements. */
var colorDataSize = 4;

{
var error;

if (shaderHandle != 0)
{
// Read the embedded shader from the document.
var shaderSource = document.getElementById(sourceScriptId);

{
throw("Error: shader script '" + sourceScriptId + "' not found");
}

// Pass in the shader source.

// Compile the shader.

// Get the compilation status.

// If the compilation failed, delete the shader.
if (!compiled)
{
}
}

if (shaderHandle == 0)
{
throw("Error creating shader " + sourceScriptId + ": " + error);
}

}

// Helper function to link a program
{
// Create a program object and store the handle to it.
var programHandle = gl.createProgram();

if (programHandle != 0)
{
// Bind the vertex shader to the program.

// Bind the fragment shader to the program.

// Bind attributes
gl.bindAttribLocation(programHandle, 0, "a_Position");
gl.bindAttribLocation(programHandle, 1, "a_Color");

// Link the two shaders together into a program.

// Get the link status.

// If the link failed, delete the program.
{
gl.deleteProgram(programHandle);
programHandle = 0;
}
}

if (programHandle == 0)
{
throw("Error creating program.");
}

return programHandle;
}

//Called when we have the context
function startRendering()
{
/* Configure viewport */
// Set the OpenGL viewport to the same size as the canvas.
gl.viewport(0, 0, canvas.clientWidth, canvas.clientHeight);

// Create a new perspective projection matrix. The height will stay the same
// while the width will vary as per aspect ratio.
var ratio = canvas.clientWidth / canvas.clientHeight;
var left = -ratio;
var right = ratio;
var bottom = -1.0;
var top = 1.0;
var near = 1.0;
var far = 10.0;

mat4.frustum(left, right, bottom, top, near, far, projectionMatrix);

/* Configure general parameters */

// Set the background clear color to gray.
gl.clearColor(0.5, 0.5, 0.5, 1.0);

/* Configure camera */
// Position the eye behind the origin.
var eyeX = 0.0;
var eyeY = 0.0;
var eyeZ = 1.5;

// We are looking toward the distance
var lookX = 0.0;
var lookY = 0.0;
var lookZ = -5.0;

// Set our up vector. This is where our head would be pointing were we holding the camera.
var upX = 0.0;
var upY = 1.0;
var upZ = 0.0;

// Set the view matrix. This matrix can be said to represent the camera position.
var eye = vec3.create();
eye[0] = eyeX; eye[1] = eyeY; eye[2] = eyeZ;
var center = vec3.create();
center[0] = lookX; center[1] = lookY; center[2] = lookZ;
var up = vec3.create();
up[0] = upX; up[1] = upY; up[2] = upZ;
mat4.lookAt(eye, center, up, viewMatrix);

/* Configure shaders */

// Create a program object and store the handle to it. (The shader <script>
// ids passed to loadShader are assumed; adjust them to match your page.)
programHandle = linkProgram(loadShader("shader-vs", gl.VERTEX_SHADER),
loadShader("shader-fs", gl.FRAGMENT_SHADER));

// Set program handles. These will later be used to pass in values to the program.
mvpMatrixHandle = gl.getUniformLocation(programHandle, "u_MVPMatrix");
positionHandle = gl.getAttribLocation(programHandle, "a_Position");
colorHandle = gl.getAttribLocation(programHandle, "a_Color");

// Tell OpenGL to use this program when rendering.
gl.useProgram(programHandle);

// Create buffers in OpenGL's working memory.
trianglePositionBufferObject = gl.createBuffer();
//    checkError();
gl.bindBuffer(gl.ARRAY_BUFFER, trianglePositionBufferObject);
//    checkError();
gl.bufferData(gl.ARRAY_BUFFER, trianglePositions, gl.STATIC_DRAW);
//    checkError();

triangleColorBufferObject1 = gl.createBuffer();
//    checkError();
gl.bindBuffer(gl.ARRAY_BUFFER, triangleColorBufferObject1);
//    checkError();
gl.bufferData(gl.ARRAY_BUFFER, triangle1Colors, gl.STATIC_DRAW);
//    checkError();

triangleColorBufferObject2 = gl.createBuffer();
//    checkError();
gl.bindBuffer(gl.ARRAY_BUFFER, triangleColorBufferObject2);
//    checkError();
gl.bufferData(gl.ARRAY_BUFFER, triangle2Colors, gl.STATIC_DRAW);
//    checkError();

triangleColorBufferObject3 = gl.createBuffer();
//    checkError();
gl.bindBuffer(gl.ARRAY_BUFFER, triangleColorBufferObject3);
//    checkError();
gl.bufferData(gl.ARRAY_BUFFER, triangle3Colors, gl.STATIC_DRAW);
//    checkError();

// Tell the browser we want render() to be called whenever it's time to draw another frame.
window.requestAnimFrame(render, canvas);
}

// Callback called each time the browser wants us to draw another frame
function render(time)
{
// Clear the canvas
gl.clear(gl.COLOR_BUFFER_BIT);

// Do a complete rotation every 10 seconds.
var elapsedTime = Date.now() % 10000;
var angleInDegrees = (360.0 / 10000.0) * elapsedTime;
var angleInRadians = angleInDegrees / 57.2957795; // 57.2957795 = 180 / pi

var xyz = vec3.create();

// Draw the triangle facing straight on.
mat4.identity(modelMatrix);
xyz[0] = 0; xyz[1] = 0; xyz[2] = 1;
mat4.rotate(modelMatrix, angleInRadians, xyz);
drawTriangle(triangleColorBufferObject1);

// Draw one translated a bit down and rotated to be flat on the ground.
mat4.identity(modelMatrix);
xyz[0] = 0; xyz[1] = -1; xyz[2] = 0;
mat4.translate(modelMatrix, xyz);
mat4.rotateX(modelMatrix, 90 / 57.2957795);
xyz[0] = 0; xyz[1] = 0; xyz[2] = 1;
mat4.rotate(modelMatrix, angleInRadians, xyz);
drawTriangle(triangleColorBufferObject2);

// Draw one translated a bit to the right and rotated to be facing to the left.
mat4.identity(modelMatrix);
xyz[0] = 1; xyz[1] = 0; xyz[2] = 0;
mat4.translate(modelMatrix, xyz);
mat4.rotateY(modelMatrix, 90 / 57.2957795);
xyz[0] = 0; xyz[1] = 0; xyz[2] = 1;
mat4.rotate(modelMatrix, angleInRadians, xyz);
drawTriangle(triangleColorBufferObject3);

// Send the commands to WebGL
gl.flush();

// Request another frame
window.requestAnimFrame(render, canvas);
}

function checkError()
{
var error = gl.getError();

if (error)
{
throw("error: " + error);
}
}

// Draws a triangle from the given vertex data.
function drawTriangle(triangleColorBufferObject)
{
// Pass in the position information
//	console.log("positionHandle=" +  positionHandle);
//	console.log("colorHandle=" +  colorHandle);
gl.enableVertexAttribArray(positionHandle);
//    checkError();

gl.bindBuffer(gl.ARRAY_BUFFER, trianglePositionBufferObject);
gl.vertexAttribPointer(positionHandle, positionDataSize, gl.FLOAT, false,
0, 0);
//    checkError();

// Pass in the color information
gl.enableVertexAttribArray(colorHandle);
//    checkError();

gl.bindBuffer(gl.ARRAY_BUFFER, triangleColorBufferObject);
gl.vertexAttribPointer(colorHandle, colorDataSize, gl.FLOAT, false,
0, 0);
//    checkError();

// This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
// (which currently contains model * view).
mat4.multiply(viewMatrix, modelMatrix, mvpMatrix);
// Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);

// This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
// (which now contains model * view * projection).
mat4.multiply(projectionMatrix, mvpMatrix, mvpMatrix);

// Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);

gl.uniformMatrix4fv(mvpMatrixHandle, false, mvpMatrix);
//    checkError();
gl.drawArrays(gl.TRIANGLES, 0, 3);
//    checkError();
//    console.log("Made it past one frame");
}

// Main entry point
function main()
{
// Try to get a WebGL context
canvas = document.getElementById("canvas");

// We don't need a depth buffer. See https://www.khronos.org/registry/webgl/specs/1.0/ Section 5.2 for more info.
gl = WebGLUtils.setupWebGL(canvas, { depth: false });

if (gl != null)
{
// Init model data.

// Define points for equilateral triangles.
trianglePositions = new Float32Array([
// X, Y, Z,
-0.5, -0.25, 0.0,
0.5, -0.25, 0.0,
0.0, 0.559016994, 0.0]);

// This triangle is red, green, and blue.
triangle1Colors = new Float32Array([
// R, G, B, A
1.0, 0.0, 0.0, 1.0,
0.0, 0.0, 1.0, 1.0,
0.0, 1.0, 0.0, 1.0]);

// This triangle is yellow, cyan, and magenta.
triangle2Colors = new Float32Array([
// R, G, B, A
1.0, 1.0, 0.0, 1.0,
0.0, 1.0, 1.0, 1.0,
1.0, 0.0, 1.0, 1.0]);

// This triangle is white, gray, and black.
triangle3Colors = new Float32Array([
// R, G, B, A
1.0, 1.0, 1.0, 1.0,
0.5, 0.5, 0.5, 1.0,
0.0, 0.0, 0.0, 1.0]);

startRendering();
}
}

// Execute the main entry point
main();
```
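To tie this listing back to the accumulated-rotation technique described at the start of this article, here is a minimal, library-free JavaScript sketch (the function names and the 0.01 drag-to-radians scale factor are illustrative, not part of the lesson code): each frame, the pointer delta becomes a small rotation that is multiplied on the left of an accumulated matrix, so drags always stay relative to the viewer.

```javascript
// Column-major 4x4 matrices stored as flat arrays, matching glMatrix/OpenGL.

function identity() {
  return [1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1];
}

// Multiply two 4x4 column-major matrices: result = a * b.
function multiply(a, b) {
  var out = new Array(16);
  for (var col = 0; col < 4; col++) {
    for (var row = 0; row < 4; row++) {
      var sum = 0;
      for (var k = 0; k < 4; k++) {
        sum += a[k * 4 + row] * b[col * 4 + k];
      }
      out[col * 4 + row] = sum;
    }
  }
  return out;
}

// Rotation about the X axis by the given angle in radians.
function rotationX(rad) {
  var c = Math.cos(rad), s = Math.sin(rad);
  return [1,0,0,0, 0,c,s,0, 0,-s,c,0, 0,0,0,1];
}

// Rotation about the Y axis.
function rotationY(rad) {
  var c = Math.cos(rad), s = Math.sin(rad);
  return [c,0,-s,0, 0,1,0,0, s,0,c,0, 0,0,0,1];
}

var accumulatedRotation = identity();

// Called once per frame with the pointer movement since the last frame.
function onDrag(deltaX, deltaY) {
  // Build this frame's small rotation from the drag delta
  // (the 0.01 scale factor is arbitrary).
  var delta = multiply(rotationY(deltaX * 0.01), rotationX(deltaY * 0.01));
  // Multiply the delta on the LEFT so it applies in eye space, keeping
  // the rotation relative to the user's point of view, not the object's.
  accumulatedRotation = multiply(delta, accumulatedRotation);
}
```

In a renderer, `accumulatedRotation` would then be multiplied into the model matrix each frame, in place of separate rotateX/rotateY calls on the raw totals.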

## The Project Code Has Moved to Github!

Hi guys,

It’s been ages since I last posted an update, I know. I went away during the summer and neglected the site upon coming back, and now that I’m busy with school it’s been harder than ever to find the time to write an update. Excuses, excuses, I know. 😉

In any case, many of you have been asking for a tutorial and demo on texturing, and that’s what I’m going to cover next. There also seems to be a lot more interest in Android tutorials than in WebGL tutorials, so I will focus more of my time on Android. Let me know if you have other thoughts and suggestions.

The project source code is now moving to GitHub! The project page is located at http://learnopengles.github.com/Learn-OpenGLES-Tutorials/ and the repository is located at https://github.com/learnopengles/Learn-OpenGLES-Tutorials. The old repository at https://code.google.com/p/learn-opengles-tutorials/ will remain, but will no longer be updated going forward.

There was nothing wrong with the Google Code project site, and in fact I prefer the simplicity of Google’s interface, but I also prefer to develop using Git. Once you’ve gotten used to Git, it’s hard to go back to anything else. An advantage of GitHub is that it should be easier for others to fork and contribute to the project if they wish to.

As always, let me know your comments and thoughts. The code for Lesson 4 is already done, so I’ll start writing it up now and hopefully publish that soon! 