## Rotating An Object With Touch Events

Rotating an object in 3D is a neat way of letting your users interact with the scene, but the math can be tricky to get right. In this article, I’ll take a look at a simple way to rotate an object based on touch events, and at how to work around the main drawback of this method.

#### Simple rotations.

This is the easiest way to rotate an object based on touch movement. Here is example pseudocode:

```
Matrix.setIdentity(modelMatrix);

... do other translations here ...

Matrix.rotateX(totalYMovement);
Matrix.rotateY(totalXMovement);
```

This is done every frame.

To rotate an object up or down, we rotate it around the X-axis, and to rotate it left or right, we rotate it around the Y-axis. We could also rotate it around the Z-axis if we wanted it to spin in place.
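To see why these axis choices map to those movements, here is a small plain-Java sketch (written from scratch for illustration; it uses hand-rolled rotation helpers rather than any OpenGL classes) that rotates a point sitting on the front of an object:

```java
public class AxisRotationDemo {
    // Rotate a point about the X axis by the given angle in degrees.
    static double[] rotateX(double[] p, double degrees) {
        double r = Math.toRadians(degrees);
        return new double[] {
            p[0],
            p[1] * Math.cos(r) - p[2] * Math.sin(r),
            p[1] * Math.sin(r) + p[2] * Math.cos(r)
        };
    }

    // Rotate a point about the Y axis by the given angle in degrees.
    static double[] rotateY(double[] p, double degrees) {
        double r = Math.toRadians(degrees);
        return new double[] {
            p[0] * Math.cos(r) + p[2] * Math.sin(r),
            p[1],
            -p[0] * Math.sin(r) + p[2] * Math.cos(r)
        };
    }

    public static void main(String[] args) {
        double[] front = {0, 0, 1}; // a point on the front of the object

        // Rotating about X changes y: the point tilts up or down.
        double[] tilted = rotateX(front, 90);
        System.out.printf("rotateX(90): %.1f %.1f %.1f%n", tilted[0], tilted[1], tilted[2]);

        // Rotating about Y changes x: the point swings left or right.
        double[] swung = rotateY(front, 90);
        System.out.printf("rotateY(90): %.1f %.1f %.1f%n", swung[0], swung[1], swung[2]);
    }
}
```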

#### How to make the rotation appear relative to the user’s point of view.

The main problem with the simple way of rotating is that the object is being rotated in relation to itself, instead of in relation to the user’s point of view. If you rotate left and right from a point of zero rotation, the cube will rotate as you expect, but what if you then rotate it up or down 180 degrees? Trying to rotate the cube left or right will now rotate it in the opposite direction!

One easy way to work around this problem is to keep a second matrix around that will store all of the accumulated rotations.

Here’s what we need to do:

1. Every frame, calculate the delta between the last position of the pointer, and the current position of the pointer. This delta will be used to rotate our accumulated rotation matrix.
2. Use this matrix to rotate the cube.

What this means is that drags left, right, up, and down will always move the cube in the direction that we expect.
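Steps 1 and 2 can be sketched in plain Java (hand-rolled 3x3 matrices for illustration, not the lesson's actual code) to show why the new delta rotation has to be multiplied on the left of the accumulated matrix:

```java
public class RotationOrderDemo {
    // Multiply two 3x3 row-major matrices: result = a * b.
    static double[][] mul(double[][] a, double[][] b) {
        double[][] r = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    r[i][j] += a[i][k] * b[k][j];
        return r;
    }

    // Apply a matrix to a column vector.
    static double[] apply(double[][] m, double[] v) {
        return new double[] {
            m[0][0] * v[0] + m[0][1] * v[1] + m[0][2] * v[2],
            m[1][0] * v[0] + m[1][1] * v[1] + m[1][2] * v[2],
            m[2][0] * v[0] + m[2][1] * v[1] + m[2][2] * v[2]
        };
    }

    static double[][] rotX(double deg) {
        double r = Math.toRadians(deg);
        return new double[][] {{1, 0, 0}, {0, Math.cos(r), -Math.sin(r)}, {0, Math.sin(r), Math.cos(r)}};
    }

    static double[][] rotY(double deg) {
        double r = Math.toRadians(deg);
        return new double[][] {{Math.cos(r), 0, Math.sin(r)}, {0, 1, 0}, {-Math.sin(r), 0, Math.cos(r)}};
    }

    public static void main(String[] args) {
        double[][] accumulated = rotX(180); // the object has been flipped upside down
        double[] marker = {0, 0, 1};        // a point on the object's front face
        double[][] dragRight = rotY(90);    // a new left/right drag delta

        // Delta on the LEFT: the drag is applied in view space, so the marker
        // moves the same way a world-space Y rotation would move it.
        double[] viewRelative = apply(mul(dragRight, accumulated), marker);

        // Delta on the RIGHT: the drag is applied in the object's own (flipped)
        // space, so on screen it moves in the opposite direction.
        double[] objectRelative = apply(mul(accumulated, dragRight), marker);

        System.out.printf("view-relative x: %.1f, object-relative x: %.1f%n",
                viewRelative[0], objectRelative[0]);
    }
}
```

The two orders move the same tracked point to opposite sides of the screen once the object has been flipped, which is exactly the reversal described above.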

##### Android Code

The code examples here are written for Android, but can easily be adapted to any platform running OpenGL ES. The code is based on Android Lesson Six: An Introduction to Texture Filtering.

In LessonSixGLSurfaceView.java, we declare a few member variables:

```
private float mPreviousX;
private float mPreviousY;

private float mDensity;
```

We store the previous pointer position each time a touch event comes in, so that we can calculate the relative movement left, right, up, or down. We also store the screen density so that drags across the screen move the object a consistent amount across devices, regardless of the pixel density.
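The density correction is just a division; as a quick illustrative sketch (the numbers here are hypothetical):

```java
public class DensityDemo {
    // Convert a movement in raw pixels to density-independent pixels (dp).
    static float toDp(float pixels, float density) {
        return pixels / density;
    }

    public static void main(String[] args) {
        // A one-inch drag on a 160 dpi screen (density 1.0) covers 160 px;
        // the same physical drag on a 320 dpi screen (density 2.0) covers 320 px.
        // After dividing by density, both drags produce the same value.
        System.out.println(toDp(160f, 1.0f));
        System.out.println(toDp(320f, 2.0f));
    }
}
```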

Here’s how to get the pixel density:

```
final DisplayMetrics displayMetrics = new DisplayMetrics();
getWindowManager().getDefaultDisplay().getMetrics(displayMetrics);
density = displayMetrics.density;
```

Then we add our touch event handler to our custom GLSurfaceView:

```
public boolean onTouchEvent(MotionEvent event)
{
    if (event != null)
    {
        float x = event.getX();
        float y = event.getY();

        if (event.getAction() == MotionEvent.ACTION_MOVE)
        {
            if (mRenderer != null)
            {
                float deltaX = (x - mPreviousX) / mDensity / 2f;
                float deltaY = (y - mPreviousY) / mDensity / 2f;

                mRenderer.mDeltaX += deltaX;
                mRenderer.mDeltaY += deltaY;
            }
        }

        mPreviousX = x;
        mPreviousY = y;

        return true;
    }
    else
    {
        return super.onTouchEvent(event);
    }
}
```

Each time the pointer moves, we compare the current pointer position with the previous one and use the difference to calculate a delta offset. We then divide that delta by the pixel density and a slowing factor of 2.0f to get our final delta values, and add them to a couple of public variables on the renderer. These variables have been declared volatile so that they can be updated across threads.

Remember that on Android, the OpenGL renderer runs on a different thread than the UI event handler, and there is a slim chance that the renderer thread reads the variables in between the X and Y assignments (the += syntax also introduces additional read-modify-write races, since volatile does not make compound updates atomic). I have left the code like this to bring up this point; as an exercise for the reader, try adding synchronized blocks around the read and write pairs of these public variables instead of relying on volatile.
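One possible shape for that exercise (a sketch of my own, not the lesson's code) is to move the two deltas into a small holder object whose write and read-and-reset operations are synchronized, so X and Y are always handled as a consistent pair:

```java
public class TouchDeltaHolder {
    private float mDeltaX;
    private float mDeltaY;

    // Called from the UI thread.
    public synchronized void addDelta(float dx, float dy) {
        mDeltaX += dx;
        mDeltaY += dy;
    }

    // Called from the renderer thread; returns the accumulated pair
    // and resets it in one atomic step.
    public synchronized float[] getAndReset() {
        float[] result = { mDeltaX, mDeltaY };
        mDeltaX = 0f;
        mDeltaY = 0f;
        return result;
    }

    public static void main(String[] args) {
        TouchDeltaHolder holder = new TouchDeltaHolder();
        holder.addDelta(5f, -2f);
        holder.addDelta(1f, 1f);
        float[] d = holder.getAndReset();
        System.out.println(d[0] + ", " + d[1]); // the summed pair
        System.out.println(holder.getAndReset()[0]); // zero after the reset
    }
}
```

The renderer would then call getAndReset() once per frame and feed the pair into the rotation code.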

First, let’s add a couple of matrices and initialize them:

```
/** Store the accumulated rotation. */
private final float[] mAccumulatedRotation = new float[16];

/** Store the current rotation. */
private final float[] mCurrentRotation = new float[16];
```
```
@Override
public void onSurfaceCreated(GL10 glUnused, EGLConfig config)
{
    ...

    // Initialize the accumulated rotation matrix
    Matrix.setIdentityM(mAccumulatedRotation, 0);
}
```

Here’s what our matrix code looks like in the onDrawFrame method:

```
// Draw a cube.
// Translate the cube into the screen.
Matrix.setIdentityM(mModelMatrix, 0);
Matrix.translateM(mModelMatrix, 0, 0.0f, 0.8f, -3.5f);

// Set a matrix that contains the current rotation.
Matrix.setIdentityM(mCurrentRotation, 0);
Matrix.rotateM(mCurrentRotation, 0, mDeltaX, 0.0f, 1.0f, 0.0f);
Matrix.rotateM(mCurrentRotation, 0, mDeltaY, 1.0f, 0.0f, 0.0f);
mDeltaX = 0.0f;
mDeltaY = 0.0f;

// Multiply the current rotation by the accumulated rotation, and then set the accumulated
// rotation to the result.
Matrix.multiplyMM(mTemporaryMatrix, 0, mCurrentRotation, 0, mAccumulatedRotation, 0);
System.arraycopy(mTemporaryMatrix, 0, mAccumulatedRotation, 0, 16);

// Rotate the cube taking the overall rotation into account.
Matrix.multiplyMM(mTemporaryMatrix, 0, mModelMatrix, 0, mAccumulatedRotation, 0);
System.arraycopy(mTemporaryMatrix, 0, mModelMatrix, 0, 16);
```

1. First we translate the cube.
2. Then we build a matrix that will contain the current amount of rotation, between this frame and the preceding frame.
3. We then multiply this matrix with the accumulated rotation, and assign the accumulated rotation to the result. The accumulated rotation contains the result of all of our rotations since the beginning.
4. Now that we’ve updated the accumulated rotation matrix with the most recent rotation, we finally rotate the cube by multiplying the model matrix with our rotation matrix, and then we set the model matrix to the result.

The above code might look a bit confusing due to the placement of the variables, so remember the method definitions:

```
public static void multiplyMM(float[] result, int resultOffset, float[] lhs, int lhsOffset, float[] rhs, int rhsOffset)

public static void arraycopy(Object src, int srcPos, Object dest, int destPos, int length)
```

Note the position of source and destination for each method call.
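To see concretely why the order matters, here is a plain-Java sketch of a column-major 4x4 multiply using the same result = lhs * rhs convention as Matrix.multiplyMM (written from scratch for illustration; the matrix values are hand-built):

```java
public class MatrixOrderDemo {
    // Column-major 4x4 multiply, result = lhs * rhs, matching the
    // result/lhs/rhs convention of Android's Matrix.multiplyMM.
    static float[] multiplyMM(float[] lhs, float[] rhs) {
        float[] result = new float[16];
        for (int col = 0; col < 4; col++)
            for (int row = 0; row < 4; row++)
                for (int k = 0; k < 4; k++)
                    result[col * 4 + row] += lhs[k * 4 + row] * rhs[col * 4 + k];
        return result;
    }

    // Apply a column-major matrix to a point (x, y, z, 1).
    static float[] transform(float[] m, float[] p) {
        float[] r = new float[4];
        for (int row = 0; row < 4; row++)
            for (int col = 0; col < 4; col++)
                r[row] += m[col * 4 + row] * p[col];
        return r;
    }

    static float[] identity() {
        float[] m = new float[16];
        m[0] = m[5] = m[10] = m[15] = 1f;
        return m;
    }

    public static void main(String[] args) {
        float[] translate = identity();
        translate[14] = -5f;            // translate by (0, 0, -5)

        float[] rotateY90 = identity(); // rotate 90 degrees about Y
        rotateY90[0] = 0f; rotateY90[8] = 1f;
        rotateY90[2] = -1f; rotateY90[10] = 0f;

        float[] p = {1f, 0f, 0f, 1f};

        // lhs applies LAST: rotate first, then translate.
        float[] a = transform(multiplyMM(translate, rotateY90), p);
        // Reversed order: translate first, then rotate.
        float[] b = transform(multiplyMM(rotateY90, translate), p);

        System.out.printf("translate * rotate moves the point to (%.0f, %.0f, %.0f)%n", a[0], a[1], a[2]);
        System.out.printf("rotate * translate moves the point to (%.0f, %.0f, %.0f)%n", b[0], b[1], b[2]);
    }
}
```

Note also that this sketch writes into a freshly allocated result array, which sidesteps the overlap problem that an in-place call can run into.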

##### Trouble spots and pitfalls

• The accumulated matrix should be set to identity once when initialized, and should not be reset to identity each frame.
• Previous pointer positions must also be set on pointer down events, not only on pointer move events.
• Watch the order of parameters, and watch out for corrupting your matrices: the documentation for Android’s Matrix.multiplyMM states that “the result element values are undefined if the result elements overlap either the lhs or rhs elements.” Use a temporary matrix to avoid this problem.

##### WebGL examples

The example on the left uses the simplest method of rotating, while the example on the right uses the accumulated rotations matrix.

##### Further exercises

What are the drawbacks of using a matrix to hold accumulated rotations and updating it every frame based on the movement delta for that frame? What other ways of rotation are there? Try experimenting, and see what else you can come up with!


## Listening to Android Touch Events, and Acting on Them

We started listening to touch events in Android Lesson Five: An Introduction to Blending, and in that lesson, we listened to touch events and used them to change our OpenGL state.

To listen to touch events, you first need to subclass GLSurfaceView and create your own custom view. In that view, you create a default constructor that calls the superclass, create a new method to take in a specific renderer (LessonFiveRenderer in this case) instead of the generic interface, and override onTouchEvent(). We pass in a concrete renderer class, because we will be calling specific methods on that class in the onTouchEvent() method.

On Android, the OpenGL rendering is done in a separate thread, so we’ll also look at how we can safely dispatch these calls from the main UI thread that is listening to the touch events, over to the separate renderer thread.

```
public class LessonFiveGLSurfaceView extends GLSurfaceView
{
    private LessonFiveRenderer mRenderer;

    public LessonFiveGLSurfaceView(Context context)
    {
        super(context);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event)
    {
        if (event != null)
        {
            if (event.getAction() == MotionEvent.ACTION_DOWN)
            {
                if (mRenderer != null)
                {
                    // Ensure we call switchMode() on the OpenGL thread.
                    // queueEvent() is a method of GLSurfaceView that will do this for us.
                    queueEvent(new Runnable()
                    {
                        @Override
                        public void run()
                        {
                            mRenderer.switchMode();
                        }
                    });

                    return true;
                }
            }
        }

        return super.onTouchEvent(event);
    }

    // Hides the superclass method.
    public void setRenderer(LessonFiveRenderer renderer)
    {
        mRenderer = renderer;
        super.setRenderer(renderer);
    }
}
```

And the implementation of switchMode() in LessonFiveRenderer:

```
public void switchMode()
{
    mBlending = !mBlending;

    if (mBlending)
    {
        // No culling of back faces
        GLES20.glDisable(GLES20.GL_CULL_FACE);

        // No depth testing
        GLES20.glDisable(GLES20.GL_DEPTH_TEST);

        // Enable blending
        GLES20.glEnable(GLES20.GL_BLEND);
        GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE);
    }
    else
    {
        // Cull back faces
        GLES20.glEnable(GLES20.GL_CULL_FACE);

        // Enable depth testing
        GLES20.glEnable(GLES20.GL_DEPTH_TEST);

        // Disable blending
        GLES20.glDisable(GLES20.GL_BLEND);
    }
}
```

Let’s look a little more closely at LessonFiveGLSurfaceView.onTouchEvent(). An important point to remember is that touch events run on the UI thread, while GLSurfaceView creates the OpenGL ES context in a separate thread, which means our renderer’s callbacks also run in that separate thread. We can’t call OpenGL from another thread and just expect things to work.

Thankfully, the authors of GLSurfaceView also thought of this and provided a queueEvent() method that runs code on the OpenGL thread. So, when we want to turn blending on and off by tapping the screen, we use queueEvent() from the UI thread to make sure the OpenGL calls happen on the right thread.

##### Further exercises

How would you listen to keyboard events, or other system events, and show an update in the OpenGL context?

## Android Lesson Six: An Introduction to Texture Filtering

In this lesson, we will introduce the different types of basic texture filtering modes and how to use them, including nearest-neighbour filtering, bilinear filtering, and trilinear filtering using mipmaps.

You’ll learn how to make your textures appear more smooth, as well as the drawbacks that come from smoothing. There are also different ways of rotating an object, one of which is used in this lesson.

#### Assumptions and prerequisites

It’s highly recommended to understand the basics of texture mapping in OpenGL ES, covered in the lesson Android Lesson Four: Introducing Basic Texturing.

#### What is texture filtering?

Textures in OpenGL are made up of arrays of elements known as texels, which contain colour and alpha values. This corresponds with the display, which is made up of an array of pixels that each show a different colour. In OpenGL, textures are applied to triangles and drawn on the screen, so they can appear at various sizes and orientations. OpenGL’s texture filtering options tell it how to map texels onto the pixels of the device in each case.

There are three cases:

• Each texel maps onto more than one pixel. This is known as magnification.
• Each texel maps exactly onto one pixel. Filtering doesn’t apply in this case.
• Each texel maps onto less than one pixel. This is known as minification.
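One way to picture the three cases is to compare a texture's size with its on-screen footprint. This illustrative sketch (the sizes are hypothetical) computes the texel-to-pixel ratio along one axis:

```java
public class FilterCaseDemo {
    // Returns how many texels map onto each pixel along one axis.
    static double texelsPerPixel(int textureSize, int screenSize) {
        return (double) textureSize / screenSize;
    }

    public static void main(String[] args) {
        // A 64-texel-wide texture stretched across 256 pixels:
        // 0.25 texels per pixel, i.e. each texel covers 4 pixels -> magnification.
        System.out.println(texelsPerPixel(64, 256));

        // A 512-texel-wide texture squeezed into 128 pixels:
        // 4 texels per pixel -> minification.
        System.out.println(texelsPerPixel(512, 128));
    }
}
```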

OpenGL lets us assign a filter for both magnification and minification, and lets us use nearest-neighbour, bilinear filtering, or trilinear filtering. I will explain what these mean further below.

##### Magnification and minification

Here is a visualization of both magnification and minification with nearest-neighbour rendering, using the cute Android that shows up when you have your USB connected to your Android device.

Magnification: as you can see, the texels of the image are easily visible, as they now cover many of the pixels on your display.

Minification: many of the details are lost, as many of the texels cannot be rendered onto the limited pixels available.

##### Texture filtering modes

###### Bilinear interpolation

The texels of a texture are clearly visible as large squares in the magnification example, because no interpolation between the texel values is done: in nearest-neighbour mode, each pixel is simply assigned the value of the nearest texel.

The rendering quality can be dramatically improved by switching to bilinear interpolation. Instead of assigning a group of pixels the value of the same nearby texel, each pixel's value is linearly interpolated from the four neighbouring texels, and the resulting image looks much smoother.

Some blockiness is still apparent, but the image looks much smoother than before. People who played 3D games back in the days before 3D-accelerated cards came out will remember that this was the defining feature separating a software-rendered game from a hardware-accelerated one: software-rendered games simply did not have the processing budget to do smoothing, so everything appeared blocky and jagged. Things suddenly got smooth once people started using graphics accelerators.
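The interpolation itself is straightforward; here is a sketch of the math for a single colour channel (plain Java for illustration, not tied to any OpenGL API):

```java
public class BilinearDemo {
    // Bilinearly interpolate between the four texels surrounding a sample
    // point, where fx and fy are the fractional position (0..1) between them.
    static double bilerp(double topLeft, double topRight,
                         double bottomLeft, double bottomRight,
                         double fx, double fy) {
        double top = topLeft + (topRight - topLeft) * fx;
        double bottom = bottomLeft + (bottomRight - bottomLeft) * fx;
        return top + (bottom - top) * fy;
    }

    public static void main(String[] args) {
        // Sampling exactly on a texel returns that texel's value...
        System.out.println(bilerp(0.0, 1.0, 0.0, 1.0, 0.0, 0.0));
        // ...while sampling between texels blends them smoothly.
        System.out.println(bilerp(0.0, 1.0, 0.0, 1.0, 0.5, 0.0));
        System.out.println(bilerp(0.0, 1.0, 0.5, 1.0, 0.5, 1.0));
    }
}
```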

Bilinear interpolation is mostly useful for magnification. It can also be used for minification, but beyond a certain point we run into the same problem: we are trying to cram far too many texels onto the same pixel. OpenGL will only use at most four texels to render a pixel, so a lot of information is still being lost.

If we look at a detailed texture with bilinear interpolation being applied, it will look very noisy when we see it moving in the distance, since a different set of texels will be selected each frame.

###### Mipmapping

How can we minify textures without introducing noise and use all of the texels? This can be done by generating a set of optimized textures at different sizes which we can then use at runtime. Since these textures are pre-generated, they can be filtered using more expensive techniques that use all of the texels, and at runtime OpenGL will select the most appropriate level based on the final size of the texture on the screen. The resulting image can have more detail, less noise, and look better overall. Although a bit more memory will be used, rendering can also be faster, as the smaller levels can be more easily kept in the GPU’s texture cache. Let’s take a closer look at the resulting image at 1/8th of its original size, using bilinear filtering without and with mipmaps; the image has been expanded for clarity:

Bilinear filtering without mipmaps / bilinear filtering with mipmaps:

The version using mipmaps has vastly more detail. Because the image is pre-processed into separate levels, all of the texels end up getting used in the final image.
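The extra memory needed for the mipmap chain is modest; a quick sketch that sums a full chain for a square texture shows the overhead settling at about one third of the base level:

```java
public class MipmapMemoryDemo {
    // Total texel count for a full mipmap chain of a square texture,
    // halving the size at each level down to 1x1.
    static long chainTexels(int baseSize) {
        long total = 0;
        for (int size = baseSize; size >= 1; size /= 2) {
            total += (long) size * size;
        }
        return total;
    }

    public static void main(String[] args) {
        long base = 256L * 256L;       // texels in the base level
        long total = chainTexels(256); // texels for the whole chain
        System.out.printf("overhead: %.1f%%%n", 100.0 * (total - base) / base);
    }
}
```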

###### Trilinear filtering

When using mipmaps with bilinear filtering, sometimes a noticeable jump or line can be seen in the rendered scene where OpenGL switches between different mipmap levels of the texture. This will be pointed out a bit further below when comparing the different OpenGL texture filtering modes.

Trilinear filtering solves this problem by also interpolating between the different mipmap levels, so that a total of 8 texels will be used to interpolate the final pixel value, resulting in a smoother image.

#### OpenGL texture filtering modes

OpenGL has two parameters that can be set:

• GL_TEXTURE_MIN_FILTER
• GL_TEXTURE_MAG_FILTER

These correspond to the minification and magnification described earlier. GL_TEXTURE_MIN_FILTER accepts the following options:

• GL_NEAREST
• GL_LINEAR
• GL_NEAREST_MIPMAP_NEAREST
• GL_NEAREST_MIPMAP_LINEAR
• GL_LINEAR_MIPMAP_NEAREST
• GL_LINEAR_MIPMAP_LINEAR

GL_TEXTURE_MAG_FILTER accepts the following options:

• GL_NEAREST
• GL_LINEAR

GL_NEAREST corresponds to nearest-neighbour rendering, GL_LINEAR corresponds to bilinear filtering, GL_LINEAR_MIPMAP_NEAREST corresponds to bilinear filtering with mipmaps, and GL_LINEAR_MIPMAP_LINEAR corresponds to trilinear filtering. Graphical examples and further explanation of the most common options are visible further down in this lesson.

##### How to set a texture filtering mode

We first need to bind the texture, then we can set the appropriate filter parameter on that texture:

```
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureHandle);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, filter);
```

##### How to generate mipmaps

This is really easy. After loading the texture into OpenGL (See Android Lesson Four: Introducing Basic Texturing for more information on how to do this), while the texture is still bound, we can simply call:

`GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);`

This will generate all of the mipmap levels for us, and these levels will get automatically used depending on the texture filter set.

#### How does it look?

Here are some screenshots of the most common combinations available. The effects are more dramatic in motion, so I recommend downloading the app and giving it a shot.

##### Nearest-neighbour rendering

This mode is reminiscent of older software-rendered 3D games.

```
GL_TEXTURE_MIN_FILTER = GL_NEAREST
GL_TEXTURE_MAG_FILTER = GL_NEAREST
```

##### Bilinear filtering, with mipmaps

This mode was used by many of the first games that supported 3D acceleration and is an efficient way of smoothing textures on Android phones today.

```
GL_TEXTURE_MIN_FILTER = GL_LINEAR_MIPMAP_NEAREST
GL_TEXTURE_MAG_FILTER = GL_LINEAR
```

It’s hard to see in a static image, but when things are in motion, you might notice horizontal bands where the rendered pixels switch between mipmap levels.

##### Trilinear filtering

This mode improves on the render quality of bilinear filtering with mipmaps, by interpolating between the mipmap levels.

```
GL_TEXTURE_MIN_FILTER = GL_LINEAR_MIPMAP_LINEAR
GL_TEXTURE_MAG_FILTER = GL_LINEAR
```

The pixels are completely smoothed between near and far distances; in fact, the textures may now appear too smooth at oblique angles. Anisotropic filtering is a more advanced technique, supported by some mobile GPUs, that can improve the final results beyond what trilinear filtering can deliver.

##### Further exercises

What sort of effects can you achieve with the other modes? For example, when would you use something like GL_NEAREST_MIPMAP_LINEAR?

#### Wrapping up

The full source code for this lesson can be downloaded from the project site on GitHub.

A compiled version of the lesson can also be downloaded directly from the Android Market.

I hope you enjoyed this lesson, and thanks for stopping by!