Category Archives: Android

Android Lesson Seven: An Introduction to Vertex Buffer Objects (VBOs)

In this lesson, we’ll introduce vertex buffer objects (VBOs), how to define them, and how to use them. Here is what we are going to cover:

  • How to define and render from vertex buffer objects.
  • The difference between using a single buffer with all the data packed in, and using multiple buffers.
  • Problems and pitfalls, and what to do about them.

What are vertex buffer objects, and why use them?

Up until now, all of our lessons have been storing our object data in client-side memory, only transferring it into the GPU at render time. This is fine when there is not a lot of data to transfer, but as our scenes get more complex with more objects and triangles, this can impose an extra cost on the CPU and memory usage. What can we do about this? We can use vertex buffer objects. Instead of transferring vertex information from client memory every frame, the information will be transferred once and rendering will then be done from this graphics memory cache.

Assumptions and prerequisites

Please read Android Lesson One: Getting Started for an intro on how to upload the vertices from client-side memory. This understanding of how OpenGL ES works with the vertex arrays will be crucial to understanding this lesson.

Understanding client-side buffers in more detail

Once you understand how to render using client-side memory, it’s actually not too hard to switch to using VBOs. The main difference is that there is an additional step to upload the data into graphics memory, and an additional call to bind to this buffer when rendering.

This lesson has been set up to use four different modes:

  • Client side, separate buffers.
  • Client side, packed buffer.
  • Vertex buffer object, separate buffers.
  • Vertex buffer object, packed buffer.

Whether we are using vertex buffer objects or not, we need to first store our data in a client-side direct buffer. Recall from lesson one that OpenGL ES is a native system library, whereas Dalvik Java runs in a virtual machine. To bridge the gap, we need to use a set of special buffer classes to allocate memory on the native heap and make it accessible to OpenGL:

// Java array.
float[] cubePositions;
...
// Floating-point buffer
final FloatBuffer cubePositionsBuffer;
...

// Allocate a direct block of memory on the native heap,
// size in bytes is equal to cubePositions.length * BYTES_PER_FLOAT.
// BYTES_PER_FLOAT is equal to 4, since a float is 32-bits, or 4 bytes.
cubePositionsBuffer = ByteBuffer.allocateDirect(cubePositions.length * BYTES_PER_FLOAT)

// Floats can be in big-endian or little-endian order.
// We want the same as the native platform.
.order(ByteOrder.nativeOrder())

// Give us a floating-point view on this byte buffer.
.asFloatBuffer();

Transferring data from the Java heap to the native heap is then a matter of a couple of calls:

// Copy data from the Java heap to the native heap.
cubePositionsBuffer.put(cubePositions)

// Reset the buffer position to the beginning of the buffer.
.position(0);

What is the purpose of the buffer position? Normally, Java does not give us a way to specify arbitrary locations in memory using pointer arithmetic. However, setting the position of the buffer is functionally equivalent to changing the value of a pointer to a block of memory. By changing the position, we can pass arbitrary memory locations within our buffer to OpenGL calls. This will come in handy when we work with packed buffers.

Once the data is on the native heap, we no longer need to keep the float[] array around, and we can let the garbage collector clean it up.

Rendering with client-side buffers is straightforward to set up. We just need to enable using vertex arrays on that attribute, and pass a pointer to our data:

// Pass in the position information
GLES20.glEnableVertexAttribArray(mPositionHandle);
GLES20.glVertexAttribPointer(mPositionHandle, POSITION_DATA_SIZE,
	GLES20.GL_FLOAT, false, 0, mCubePositions);

Explanation of the parameters to glVertexAttribPointer:

  • mPositionHandle: The OpenGL index of the position attribute of our shader program.
  • POSITION_DATA_SIZE: How many elements (floats) define this attribute.
  • GL_FLOAT: The type of each element.
  • false: Should fixed-point data be normalized? Not applicable since we are using floating-point data.
  • 0: The stride. Set to 0 to mean that the positions should be read sequentially.
  • mCubePositions: The pointer to our buffer, containing all of the positional data.

Working with packed buffers

Working with packed buffers is very similar, except that instead of using separate buffers for positions, normals, and so on, one buffer will contain all of this data. The difference looks like this:

Using separate buffers

positions = X,Y,Z, X, Y, Z, X, Y, Z, …
colors = R, G, B, A, R, G, B, A, …
textureCoordinates = S, T, S, T, S, T, …

Using a packed buffer

buffer = X, Y, Z, R, G, B, A, S, T, …

The advantage to using packed buffers is that it should be more efficient for the GPU to render, since all of the information needed to render a triangle is located within the same block of memory. The disadvantage is that it may be more difficult and slower to update, if you are using dynamic data.
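
As a rough sketch of how such a packed buffer could be built (the positions, normals, and textureCoordinates arrays here are hypothetical, while the size constants are the same ones used elsewhere in this lesson), we can interleave the data one vertex at a time:

// A sketch, not the lesson's actual code: interleave per-vertex data into
// one packed buffer, one vertex at a time.
final int vertexCount = positions.length / POSITION_DATA_SIZE;
final int elementsPerVertex = POSITION_DATA_SIZE + NORMAL_DATA_SIZE
	+ TEXTURE_COORDINATE_DATA_SIZE;

final FloatBuffer packedBuffer = ByteBuffer
	.allocateDirect(vertexCount * elementsPerVertex * BYTES_PER_FLOAT)
	.order(ByteOrder.nativeOrder()).asFloatBuffer();

for (int i = 0; i < vertexCount; i++)
{
	packedBuffer.put(positions, i * POSITION_DATA_SIZE, POSITION_DATA_SIZE);
	packedBuffer.put(normals, i * NORMAL_DATA_SIZE, NORMAL_DATA_SIZE);
	packedBuffer.put(textureCoordinates, i * TEXTURE_COORDINATE_DATA_SIZE,
		TEXTURE_COORDINATE_DATA_SIZE);
}

packedBuffer.position(0);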

When we use packed buffers, we need to change our rendering calls in a couple of ways. First, we need to tell OpenGL the stride, or how many bytes define a vertex.

final int stride = (POSITION_DATA_SIZE + NORMAL_DATA_SIZE + TEXTURE_COORDINATE_DATA_SIZE)
	* BYTES_PER_FLOAT;

// Pass in the position information
mCubeBuffer.position(0);
GLES20.glEnableVertexAttribArray(mPositionHandle);
GLES20.glVertexAttribPointer(mPositionHandle, POSITION_DATA_SIZE,
	GLES20.GL_FLOAT, false, stride, mCubeBuffer);

// Pass in the normal information
mCubeBuffer.position(POSITION_DATA_SIZE);
GLES20.glEnableVertexAttribArray(mNormalHandle);
GLES20.glVertexAttribPointer(mNormalHandle, NORMAL_DATA_SIZE,
	GLES20.GL_FLOAT, false, stride, mCubeBuffer);
...

The stride tells OpenGL ES how far it needs to go to find the same attribute for the next vertex. For example, if element 0 is the beginning of the position for the first vertex, and there are 8 elements per vertex, then the stride will be equal to 8 elements, or 32 bytes. The position for the next vertex will be found at element 8, and the next vertex after that at element 16, and so on.

Keep in mind that the value of the stride passed to glVertexAttribPointer should be in bytes, not elements, so remember to do that conversion.

Notice that we also change the start position of the buffer when we switch from specifying the positions to the normals. This is the pointer arithmetic I was referring to before, and this is how we can do it in Java when working with OpenGL ES. We’re still working with the same buffer, mCubeBuffer, but we tell OpenGL to start reading in the normals at the first element after the position. Again, we pass in the stride to tell OpenGL that the next normal will be found 8 elements or 32 bytes later.

Dalvik and memory on the native heap

If you allocate a lot of memory on the native heap and release it, you will probably run into the beloved OutOfMemoryError, sooner or later. There are a couple of reasons behind that:

  1. You might think that you’ve released the memory by letting the reference go out of scope, but native memory seems to take a few extra GC cycles to be completely cleaned up, and Dalvik will throw an exception if there is not enough free memory available and the native memory has not yet been released.
  2. The native heap can become fragmented. Calls to allocateDirect() will inexplicably fail, even though there appears to be plenty of memory available. Sometimes it helps to make a smaller allocation, free it, and then try the larger allocation again.

What can you do about these problems? Not much, other than hoping that Google improves the behaviour of Dalvik in future versions (they’ve added a largeHeap parameter in 3.0+), or managing the heap yourself by doing your allocations in native code, or by allocating a huge block upfront and spinning off buffers based on it.
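
For illustration only, here is a minimal sketch of the “allocate a huge block upfront” idea; the sizes are made up, and a real implementation would need to track which regions of the block are in use:

// Pay the allocation cost on the native heap once, up front.
final ByteBuffer bigBlock = ByteBuffer.allocateDirect(8 * 1024 * 1024);

// Carve a 64 KB sub-buffer out of the block for one mesh. Note that slice()
// does not preserve the byte order, so we set the native order afterwards.
bigBlock.position(0);
bigBlock.limit(64 * 1024);
final FloatBuffer meshBuffer = bigBlock.slice()
	.order(ByteOrder.nativeOrder()).asFloatBuffer();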

Moving to vertex buffer objects

Now that we’ve reviewed working with client-side buffers, let’s move on to vertex buffer objects! First, we need to review a few very important points:

1. Buffers must be created within a valid OpenGL context.

This might seem like an obvious point, but it’s just a reminder that you have to wait until onSurfaceCreated(), and you have to take care that the OpenGL ES calls are done on the GL thread. See this document: OpenGL ES Programming Guide for iOS. It might be written for iOS, but the behaviour of OpenGL ES is similar on Android.
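
For example, if you need to create buffers in response to something that happens on the UI thread, GLSurfaceView’s queueEvent() can be used to run the code on the GL thread (the view name here is hypothetical):

// A sketch: mGLSurfaceView.queueEvent() runs the runnable on the GL thread,
// where we have a valid OpenGL context.
mGLSurfaceView.queueEvent(new Runnable()
{
	@Override
	public void run()
	{
		final int buffers[] = new int[1];
		GLES20.glGenBuffers(1, buffers, 0);
		// ... bind the buffer and upload the data here ...
	}
});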

2. Improper use of vertex buffer objects will crash the graphics driver.

You need to be careful with the data you pass around when you use vertex buffer objects. Improper values will cause a native crash in the OpenGL ES system library or in the graphics driver library. On my Nexus S, some games freeze up my phone completely or cause it to reboot, because the graphics driver is crashing on their commands. Not all crashes will lock up your device, but at a minimum you will not see the “This application has stopped working” dialogue. Your activity will restart without warning, and the only info you’ll get might be a native debug trace in the logs.

3. The OpenGL ES bindings are broken on Froyo (2.2), and incomplete/unavailable in earlier versions.

This is the most unfortunate and most important point to consider. For some reason, Google really dropped the ball when it comes to OpenGL ES 2 support on Froyo. The mappings are incomplete, and several crucial functions needed to use vertex buffer objects are unavailable and cannot be used from Java code, at least with the standard SDK.

I don’t know if it’s because they didn’t run their unit tests, or if the developer was sloppy with their code generation tools, or if everyone was on 8 cups of coffee and burning the midnight oil to get things out the door. I don’t know why the API is broken, but the fact is that it’s broken.

There are three solutions to this problem:

  1. Target Gingerbread (2.3) and higher.
  2. Don’t use vertex buffer objects.
  3. Use your own Java Native Interface (JNI) library to interface with the native OpenGL ES system libraries.

I find option 1 to be unacceptable, since a full quarter of devices out there still run on Froyo as of the time of this writing. Option 2 works, but is kind of silly.

The option I recommend, and that I have decided to go with, is to use your own JNI bindings. For this lesson I have decided to go with the bindings generously provided by the guys who created libgdx, a cross-platform game development library licensed under the Apache License 2.0. You need to use the following files to make it work:

  • /libs/armeabi/libandroidgl20.so
  • /libs/armeabi-v7a/libandroidgl20.so
  • src/com/badlogic/gdx/backends/android/AndroidGL20.java
  • src/com/badlogic/gdx/graphics/GL20.java
  • src/com/badlogic/gdx/graphics/GLCommon.java

You might notice that this excludes Android platforms that do not run on ARM, and you’d be right. It would probably be possible to compile your own bindings for those platforms if you want to have VBO support on Froyo, though that is out of the scope of this lesson.

Using the bindings is as simple as these lines of code:

AndroidGL20 mGlEs20 = new AndroidGL20();
...
mGlEs20.glVertexAttribPointer(mPositionHandle, POSITION_DATA_SIZE, GLES20.GL_FLOAT, false, 0, 0);
...

You only need to call the custom binding where the SDK-provided binding is incomplete. I use the custom bindings to fill in the holes where the official one is missing functions.

Uploading vertex data to the GPU

To upload data to the GPU, we need to follow the same steps in creating a client-side buffer as before:

...
cubePositionsBuffer = ByteBuffer.allocateDirect(cubePositions.length * BYTES_PER_FLOAT)
.order(ByteOrder.nativeOrder()).asFloatBuffer();
cubePositionsBuffer.put(cubePositions).position(0);
...

Once we have the client-side buffer, we can create a vertex buffer object and upload data from client memory to the GPU with the following commands:

// First, generate as many buffers as we need.
// This will give us the OpenGL handles for these buffers.
final int buffers[] = new int[3];
GLES20.glGenBuffers(3, buffers, 0);

// Bind to the buffer. Future commands will affect this buffer specifically.
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[0]);

// Transfer data from client memory to the buffer.
// We can release the client memory after this call.
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, cubePositionsBuffer.capacity() * BYTES_PER_FLOAT,
	cubePositionsBuffer, GLES20.GL_STATIC_DRAW);

// IMPORTANT: Unbind from the buffer when we're done with it.
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);

Once data has been uploaded to OpenGL ES, we can release the client-side memory as we no longer need to keep it around. Here is an explanation of glBufferData:

  • GL_ARRAY_BUFFER: This buffer contains an array of vertex data.
  • cubePositionsBuffer.capacity() * BYTES_PER_FLOAT: The number of bytes this buffer should contain.
  • cubePositionsBuffer: The source that will be copied to this vertex buffer object.
  • GL_STATIC_DRAW: The buffer will not be updated dynamically.

Our call to glVertexAttribPointer looks a little bit different, as the last parameter is now an offset rather than a pointer to our client-side memory:

// Pass in the position information
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mCubePositionsBufferIdx);
GLES20.glEnableVertexAttribArray(mPositionHandle);
mGlEs20.glVertexAttribPointer(mPositionHandle, POSITION_DATA_SIZE, GLES20.GL_FLOAT, false, 0, 0);
...

Like before, we bind to the buffer, then enable the vertex array. Since the buffer is already bound, we only need to tell OpenGL the offset to start at when reading from the buffer. Since we are using separate buffers, we pass in an offset of 0. Notice also that we are using our custom binding to call glVertexAttribPointer, since the official SDK is missing this specific function call.

Once we are done drawing with our buffer, we should unbind from it:

GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);

When we no longer want to keep our buffers around, we can free the memory:

final int[] buffersToDelete = new int[] { mCubePositionsBufferIdx, mCubeNormalsBufferIdx,
	mCubeTexCoordsBufferIdx };
GLES20.glDeleteBuffers(buffersToDelete.length, buffersToDelete, 0);

Packed vertex buffer objects

We can also use a single, packed vertex buffer object to hold all of our vertex data. The creation of a packed buffer is the same as above, with the only difference being that we start from a packed client-side buffer. Rendering from the packed buffer is also the same, except we need to pass in a stride and an offset, like when using packed buffers in client-side memory:

final int stride = (POSITION_DATA_SIZE + NORMAL_DATA_SIZE + TEXTURE_COORDINATE_DATA_SIZE)
	* BYTES_PER_FLOAT;

// Pass in the position information
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mCubeBufferIdx);
GLES20.glEnableVertexAttribArray(mPositionHandle);
mGlEs20.glVertexAttribPointer(mPositionHandle, POSITION_DATA_SIZE, 
	GLES20.GL_FLOAT, false, stride, 0);

// Pass in the normal information
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mCubeBufferIdx);
GLES20.glEnableVertexAttribArray(mNormalHandle);
mGlEs20.glVertexAttribPointer(mNormalHandle, NORMAL_DATA_SIZE, 
	GLES20.GL_FLOAT, false, stride, POSITION_DATA_SIZE * BYTES_PER_FLOAT);
...

Notice that the offset needs to be specified in bytes. The same considerations of unbinding and deleting the buffer apply, as before.

Putting it all together

This lesson is set up so that it builds a cube of cubes, with the same number of cubes in each dimension; it can build anything from a 1x1x1 arrangement up to a 16x16x16 arrangement. Since each cube shares the same normal and texture data, this data will be copied repeatedly when we initialize our client-side buffer. All of the cubes will end up inside the same buffer objects.
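
As a sketch of what that repeated copying could look like (the names are illustrative; cubeNormalData would hold the normals for a single cube):

final int cubeCount = cubeFactor * cubeFactor * cubeFactor;
final float[] normals = new float[cubeCount * cubeNormalData.length];

for (int i = 0; i < cubeCount; i++)
{
	// Every cube shares the same normals, so copy them once per cube.
	System.arraycopy(cubeNormalData, 0, normals,
		i * cubeNormalData.length, cubeNormalData.length);
}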

You can view the code for the lesson and view an example of rendering with and without VBOs, and with and without packed buffers. Check the code to see how some of the following was handled:

  • Posting events from the OpenGL thread back to the main UI thread, via runOnUiThread.
  • Generating the vertex data asynchronously.
  • Handling out of memory errors.
  • We removed the call to glEnable(GL_TEXTURE_2D), since that is actually an invalid enum on OpenGL ES 2. This is a holdover from the fixed-pipeline days; in OpenGL ES 2 this is handled by shaders, so there is no need for a glEnable/glDisable.
  • How to render using different paths, without adding too many if statements and conditions.

Further exercises

When would you use vertex buffers and when is it better to stream data from client memory? What are some of the drawbacks of using vertex buffer objects? How would you improve the asynchronous loading code?

Wrapping up

The full source code for this lesson can be downloaded from the project site on GitHub. A compiled version of the lesson can also be downloaded directly from the Android Market:

QR code for link to the app on the Android Market.

Thanks for stopping by, and please feel free to check out the code and share your comments below. A special thanks goes out again to the guys at libgdx for generously providing the source code and libraries for their OpenGL ES 2 bindings for Android 2.2!


Rotating An Object With Touch Events

Rotating an object in 3D is a neat way of letting your users interact with the scene, but the math can be tricky to get right. In this article, I’ll take a look at a simple way to rotate an object based on the touch events, and how to work around the main drawback of this method.

Simple rotations.

This is the easiest way to rotate an object based on touch movement. Here is example pseudocode:

Matrix.setIdentity(modelMatrix);

... do other translations here ...

Matrix.rotateX(totalYMovement);
Matrix.rotateY(totalXMovement);

This is done every frame.

To rotate an object up or down, we rotate it around the X-axis, and to rotate an object left or right, we rotate it around the Y-axis. We could also rotate an object around the Z-axis if we wanted it to spin around.

How to make the rotation appear relative to the user’s point of view.

The main problem with the simple way of rotating is that the object is being rotated in relation to itself, instead of in relation to the user’s point of view. If you rotate left and right from a point of zero rotation, the cube will rotate as you expect, but what if you then rotate it up or down 180 degrees? Trying to rotate the cube left or right will now rotate it in the opposite direction!

One easy way to work around this problem is to keep a second matrix around that will store all of the accumulated rotations.

Here’s what we need to do:

  1. Every frame, calculate the delta between the last position of the pointer, and the current position of the pointer. This delta will be used to rotate our accumulated rotation matrix.
  2. Use this matrix to rotate the cube.

What this means is that drags left, right, up, and down will always move the cube in the direction that we expect.

Android Code

The code examples here are written for Android, but can easily be adapted to any platform running OpenGL ES. The code is based on Android Lesson Six: An Introduction to Texture Filtering.

In LessonSixGLSurfaceView.java, we declare a few member variables:

private float mPreviousX;
private float mPreviousY;

private float mDensity;

We will store the previous pointer position each frame, so that we can calculate the relative movement left, right, up, or down. We also store the screen density so that drags across the screen can move the object a consistent amount across devices, regardless of the pixel density.

Here’s how to get the pixel density:

final DisplayMetrics displayMetrics = new DisplayMetrics();
getWindowManager().getDefaultDisplay().getMetrics(displayMetrics);
density = displayMetrics.density;

Then we add our touch event handler to our custom GLSurfaceView:

public boolean onTouchEvent(MotionEvent event)
{
	if (event != null)
	{
		float x = event.getX();
		float y = event.getY();

		if (event.getAction() == MotionEvent.ACTION_MOVE)
		{
			if (mRenderer != null)
			{
				float deltaX = (x - mPreviousX) / mDensity / 2f;
				float deltaY = (y - mPreviousY) / mDensity / 2f;

				mRenderer.mDeltaX += deltaX;
				mRenderer.mDeltaY += deltaY;
			}
		}

		mPreviousX = x;
		mPreviousY = y;

		return true;
	}
	else
	{
		return super.onTouchEvent(event);
	}
}

Every frame, we compare the current pointer position with the previous one, and use that to calculate the delta offset. We then divide that delta offset by the pixel density and a slowing factor of 2.0f to get our final delta values. We apply those directly to a couple of public variables on the renderer, which we have declared as volatile so that they can be updated between threads.

Remember, on Android, the OpenGL renderer runs in a different thread than the UI event handler thread, and there is a slim chance that the other thread fires in-between the X and Y variable assignments (there are also additional points of contention with the += syntax). I have left the code like this to bring up this point; as an exercise for the reader I leave it to you to add synchronized statements around the public variable read and write pairs instead of using volatile variables.
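
For reference, here is a sketch of what the synchronized variant of that exercise could look like, using a hypothetical lock object shared between the view and the renderer:

// UI thread: update both deltas under a single lock, so the GL thread can
// never observe an X update without its matching Y update. mLock is a
// hypothetical shared lock object.
synchronized (mRenderer.mLock)
{
	mRenderer.mDeltaX += deltaX;
	mRenderer.mDeltaY += deltaY;
}

// GL thread, in onDrawFrame(): read and reset the pair under the same lock.
final float deltaX;
final float deltaY;

synchronized (mLock)
{
	deltaX = mDeltaX;
	deltaY = mDeltaY;
	mDeltaX = 0.0f;
	mDeltaY = 0.0f;
}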

First, let’s add a couple of matrices and initialize them:

/** Store the accumulated rotation. */
private final float[] mAccumulatedRotation = new float[16];

/** Store the current rotation. */
private final float[] mCurrentRotation = new float[16];

@Override
public void onSurfaceCreated(GL10 glUnused, EGLConfig config)
{

    ...

    // Initialize the accumulated rotation matrix
    Matrix.setIdentityM(mAccumulatedRotation, 0);
}

Here’s what our matrix code looks like in the onDrawFrame method:

// Draw a cube.
// Translate the cube into the screen.
Matrix.setIdentityM(mModelMatrix, 0);
Matrix.translateM(mModelMatrix, 0, 0.0f, 0.8f, -3.5f);

// Set a matrix that contains the current rotation.
Matrix.setIdentityM(mCurrentRotation, 0);
Matrix.rotateM(mCurrentRotation, 0, mDeltaX, 0.0f, 1.0f, 0.0f);
Matrix.rotateM(mCurrentRotation, 0, mDeltaY, 1.0f, 0.0f, 0.0f);
mDeltaX = 0.0f;
mDeltaY = 0.0f;

// Multiply the current rotation by the accumulated rotation, and then set the accumulated
// rotation to the result.
Matrix.multiplyMM(mTemporaryMatrix, 0, mCurrentRotation, 0, mAccumulatedRotation, 0);
System.arraycopy(mTemporaryMatrix, 0, mAccumulatedRotation, 0, 16);

// Rotate the cube taking the overall rotation into account.
Matrix.multiplyMM(mTemporaryMatrix, 0, mModelMatrix, 0, mAccumulatedRotation, 0);
System.arraycopy(mTemporaryMatrix, 0, mModelMatrix, 0, 16);

  1. First we translate the cube.
  2. Then we build a matrix that will contain the current amount of rotation, between this frame and the preceding frame.
  3. We then multiply this matrix with the accumulated rotation, and assign the accumulated rotation to the result. The accumulated rotation contains the result of all of our rotations since the beginning.
  4. Now that we’ve updated the accumulated rotation matrix with the most recent rotation, we finally rotate the cube by multiplying the model matrix with our rotation matrix, and then we set the model matrix to the result.

The above code might look a bit confusing due to the placement of the variables, so remember the definitions:

public static void multiplyMM (float[] result, int resultOffset, float[] lhs, int lhsOffset, float[] rhs, int rhsOffset)

public static void arraycopy (Object src, int srcPos, Object dst, int dstPos, int length)

Note the position of source and destination for each method call.

Trouble spots and pitfalls

  • The accumulated matrix should be set to identity once when initialized, and should not be reset to identity each frame.
  • Previous pointer positions must also be set on pointer down events, not only on pointer move events (see the sketch after this list).
  • Watch the order of parameters, and also watch out for corrupting your matrices. Android’s Matrix.multiplyMM states that “the result element values are undefined if the result elements overlap either the lhs or rhs elements.” Use temporary matrices to avoid this problem.
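
For the second pitfall above, a sketch of the pointer-down handling inside onTouchEvent() might look like this:

if (event.getAction() == MotionEvent.ACTION_DOWN)
{
	// Record the starting position, so that the first ACTION_MOVE
	// does not produce one huge, bogus delta.
	mPreviousX = event.getX();
	mPreviousY = event.getY();
}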

WebGL examples

The example on the left uses the simplest method of rotating, while the example on the right uses the accumulated rotations matrix.

Further exercises

What are the drawbacks of using a matrix to hold accumulated rotations and updating it every frame based on the movement delta for that frame? What other ways of rotation are there? Try experimenting, and see what else you can come up with!

  

Listening to Android Touch Events, and Acting on Them

Basic blending (additive blending of RGB cubes).

We started listening to touch events in Android Lesson Five: An Introduction to Blending, where we used them to change our OpenGL state.

To listen to touch events, you first need to subclass GLSurfaceView and create your own custom view. In that view, you create a default constructor that calls the superclass, create a new method to take in a specific renderer (LessonFiveRenderer in this case) instead of the generic interface, and override onTouchEvent(). We pass in a concrete renderer class, because we will be calling specific methods on that class in the onTouchEvent() method.

On Android, the OpenGL rendering is done in a separate thread, so we’ll also look at how we can safely dispatch these calls from the main UI thread that is listening to the touch events, over to the separate renderer thread.

public class LessonFiveGLSurfaceView extends GLSurfaceView
{
	private LessonFiveRenderer mRenderer;

	public LessonFiveGLSurfaceView(Context context)
	{
		super(context);
	}

	@Override
	public boolean onTouchEvent(MotionEvent event)
	{
		if (event != null)
		{
			if (event.getAction() == MotionEvent.ACTION_DOWN)
			{
				if (mRenderer != null)
				{
					// Ensure we call switchMode() on the OpenGL thread.
					// queueEvent() is a method of GLSurfaceView that will do this for us.
					queueEvent(new Runnable()
					{
						@Override
						public void run()
						{
							mRenderer.switchMode();
						}
					});

					return true;
				}
			}
		}

		return super.onTouchEvent(event);
	}

	// Hides superclass method.
	public void setRenderer(LessonFiveRenderer renderer)
	{
		mRenderer = renderer;
		super.setRenderer(renderer);
	}
}

And the implementation of switchMode() in LessonFiveRenderer:

public void switchMode()
{
	mBlending = !mBlending;

	if (mBlending)
	{
		// No culling of back faces
		GLES20.glDisable(GLES20.GL_CULL_FACE);

		// No depth testing
		GLES20.glDisable(GLES20.GL_DEPTH_TEST);

		// Enable blending
		GLES20.glEnable(GLES20.GL_BLEND);
		GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE);
	}
	else
	{
		// Cull back faces
		GLES20.glEnable(GLES20.GL_CULL_FACE);

		// Enable depth testing
		GLES20.glEnable(GLES20.GL_DEPTH_TEST);

		// Disable blending
		GLES20.glDisable(GLES20.GL_BLEND);
	}
}

Let’s look a little bit more closely at LessonFiveGLSurfaceView::onTouchEvent(). Something important to remember is that touch events run on the UI thread, while GLSurfaceView creates the OpenGL ES context in a separate thread, which means that our renderer’s callbacks also run in that separate thread. This matters because we can’t call OpenGL from another thread and just expect things to work.

Thankfully, the guys that wrote GLSurfaceView also thought of this, and provided a queueEvent() method that you can use to call stuff on the OpenGL thread. So, when we want to turn blending on and off by tapping the screen, we make sure that we’re calling the OpenGL stuff on the right thread by using queueEvent() in the UI thread.

Further exercises

How would you listen to keyboard events, or other system events, and show an update in the OpenGL context?


Android Lesson Six: An Introduction to Texture Filtering

In this lesson, we will introduce the different types of basic texture filtering modes and how to use them, including nearest-neighbour filtering, bilinear filtering, and trilinear filtering using mipmaps.

You’ll learn how to make your textures appear more smooth, as well as the drawbacks that come from smoothing. There are also different ways of rotating an object, one of which is used in this lesson.

Assumptions and prerequisites

It’s highly recommended to understand the basics of texture mapping in OpenGL ES, covered in the lesson Android Lesson Four: Introducing Basic Texturing.

What is texture filtering?

Textures in OpenGL are made up of arrays of elements known as texels, which contain colour and alpha values. This corresponds with the display, which is made up of a bunch of pixels and displays a different colour at each point. In OpenGL, textures are applied to triangles and drawn on the screen, so these textures can be drawn in various sizes and orientations. The texture filtering options in OpenGL tell it how to filter the texels onto the pixels of the device, depending on the case.

There are three cases:

  • Each texel maps onto more than one pixel. This is known as magnification.
  • Each texel maps exactly onto one pixel. Filtering doesn’t apply in this case.
  • Each texel maps onto less than one pixel. This is known as minification.

OpenGL lets us assign a filter for both magnification and minification, and lets us use nearest-neighbour, bilinear filtering, or trilinear filtering. I will explain what these mean further below.

Magnification and minification

Here is a visualization of both magnification and minification with nearest-neighbour rendering, using the cute Android that shows up when you have your USB connected to your Android device:

Magnification

As you can see, the texels of the image are easily visible, as they now cover many of the pixels on your display.

Minification

With minification, many of the details are lost as many of the texels cannot be rendered onto the limited pixels available.

Texture filtering modes

Bilinear interpolation

The texels of a texture are clearly visible as large squares in the magnification example when no interpolation between the texel values is done. When rendering is done in nearest-neighbour mode, each pixel is assigned the value of the nearest texel.

The rendering quality can be dramatically improved by switching to bilinear interpolation. Instead of assigning a group of pixels the value of the same nearby texel, these values will be linearly interpolated between the four neighbouring texels. Each pixel will be smoothed out and the resulting image will look much smoother:

Some blockiness is still apparent, but the image looks much smoother than before. People who played 3D games back in the days before 3D accelerated cards came out will remember that this was the defining feature between a software-rendered game and a hardware-accelerated game: software-rendered games simply did not have the processing budget to do smoothing, so everything appeared blocky and jagged. Things suddenly got smooth once people started using graphics accelerators.

Bilinear interpolation, upscaling of ... (image via Wikipedia)

Bilinear interpolation is mostly useful for magnification. It can also be used for minification, but beyond a certain point we run into the same problem: we are trying to cram far too many texels onto the same pixel. OpenGL will only use at most four texels to render a pixel, so a lot of information is still being lost.

If we look at a detailed texture with bilinear interpolation being applied, it will look very noisy when we see it moving in the distance, since a different set of texels will be selected each frame.

Mipmapping

How can we minify textures without introducing noise and use all of the texels? This can be done by generating a set of optimized textures at different sizes which we can then use at runtime. Since these textures are pre-generated, they can be filtered using more expensive techniques that use all of the texels, and at runtime OpenGL will select the most appropriate level based on the final size of the texture on the screen.

The resulting image can have more detail, less noise, and look better overall. Although a bit more memory will be used, rendering can also be faster, as the smaller levels can be more easily kept in the GPU’s texture cache. Let’s take a closer look at the resulting image at 1/8th of its original size, using bilinear filtering without and with mipmaps; the image has been expanded for clarity:

Bilinear filtering without mipmaps

Bilinear filtering with mipmaps

The version using mipmaps has vastly more detail. Because of the pre-processing of the image into separate levels, all of the texels end up getting used in the final image.

Trilinear filtering

When using mipmaps with bilinear filtering, sometimes a noticeable jump or line can be seen in the rendered scene where OpenGL switches between different mipmap levels of the texture. This will be pointed out a bit further below when comparing the different OpenGL texture filtering modes.

Trilinear filtering solves this problem by also interpolating between the different mipmap levels, so that a total of 8 texels will be used to interpolate the final pixel value, resulting in a smoother image.

OpenGL texture filtering modes

OpenGL has two parameters that can be set:

  • GL_TEXTURE_MIN_FILTER
  • GL_TEXTURE_MAG_FILTER

These correspond to the minification and magnification described earlier. GL_TEXTURE_MIN_FILTER accepts the following options:

  • GL_NEAREST
  • GL_LINEAR
  • GL_NEAREST_MIPMAP_NEAREST
  • GL_NEAREST_MIPMAP_LINEAR
  • GL_LINEAR_MIPMAP_NEAREST
  • GL_LINEAR_MIPMAP_LINEAR

GL_TEXTURE_MAG_FILTER accepts the following options:

  • GL_NEAREST
  • GL_LINEAR

GL_NEAREST corresponds to nearest-neighbour rendering, GL_LINEAR corresponds to bilinear filtering, GL_LINEAR_MIPMAP_NEAREST corresponds to bilinear filtering with mipmaps, and GL_LINEAR_MIPMAP_LINEAR corresponds to trilinear filtering. Graphical examples and further explanation of the most common options are visible further down in this lesson.

How to set a texture filtering mode

We first need to bind the texture, then we can set the appropriate filter parameter on that texture:

GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureHandle);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, filter);

How to generate mipmaps

This is really easy. After loading the texture into OpenGL (See Android Lesson Four: Introducing Basic Texturing for more information on how to do this), while the texture is still bound, we can simply call:

GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);

This will generate all of the mipmap levels for us, and these levels will get automatically used depending on the texture filter set.
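
For example, a complete trilinear filtering setup could look like this, assuming a texture has already been loaded and is referred to by mTextureHandle:

GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureHandle);

// Interpolate within and between mipmap levels when minifying,
// and use bilinear filtering when magnifying.
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER,
	GLES20.GL_LINEAR_MIPMAP_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER,
	GLES20.GL_LINEAR);

// Generate the mipmap chain for the currently bound texture.
GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);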

How does it look?

Here are some screenshots of the most common combinations available. The effects are more dramatic when you see them in motion, so I recommend downloading the app and giving it a shot.

Nearest-neighbour rendering

This mode is reminiscent of older software-rendered 3D games.

GL_TEXTURE_MIN_FILTER = GL_NEAREST
GL_TEXTURE_MAG_FILTER = GL_NEAREST

Bilinear filtering, with mipmaps

This mode was used by many of the first games that supported 3D acceleration and is an efficient way of smoothing textures on Android phones today.

GL_TEXTURE_MIN_FILTER = GL_LINEAR_MIPMAP_NEAREST
GL_TEXTURE_MAG_FILTER = GL_LINEAR

It’s hard to see on this static image, but when things are in motion, you might notice horizontal bands where the rendered pixels switch between mipmap levels.

Trilinear filtering

This mode improves on the render quality of bilinear filtering with mipmaps, by interpolating between the mipmap levels.

GL_TEXTURE_MIN_FILTER = GL_LINEAR_MIPMAP_LINEAR
GL_TEXTURE_MAG_FILTER = GL_LINEAR

The pixels are completely smoothed between near and far distances; in fact, the textures may now appear too smooth at oblique angles. Anisotropic filtering is a more advanced technique that is supported by some mobile GPUs and can be used to improve the final results beyond what trilinear filtering can deliver.

Further exercises

What sort of effects can you achieve with the other modes? For example, when would you use something like GL_NEAREST_MIPMAP_LINEAR?

Wrapping up

The full source code for this lesson can be downloaded from the project site on GitHub.

A compiled version of the lesson can also be downloaded directly from the Android Market:

QR code for link to the app on the Android Market.

I hope you enjoyed this lesson, and thanks for stopping by!


OpenGL ES Roundup, February 12, 2012

A single vanilla ice cream sandwich (image via Wikipedia).

If you haven’t checked it out yet, I recommend taking a look at the new Android Design website. There are a lot of resources and interesting information on developing attractive apps for Ice Cream Sandwich, Android’s newest platform. With Ice Cream Sandwich comes new changes, such as the deprecation of the menu bar.

Roundup

Question for you, dear reader: What do you all think about the current site design? I’d love to hear your thoughts and feedback, both positive and negative.


Google Celebrates 10 Billion Downloads on the Market: Several Great Apps Available for Only 10 Cents Each!

Android Market (image via Wikipedia).

Android is booming, and its market share has gone from nothing to dominant within only a couple of years. I still remember when, just a bit more than a year ago, competitors still thought that Android was only going to fill a niche market, and that it was “clunky” and “slow”. Well, its open nature has led to a rapid explosion of Android devices, and rapidly improving speeds are getting rid of the Android “lag”.

From the Android Developer’s Blog:

The Android Market has recently crossed 10 billion downloads, and Google is celebrating:
“One billion is a pretty big number by any measurement. However, when it’s describing the speed at which something is growing, it’s simply amazing. This past weekend, thanks to Android users around the world, Android Market exceeded 10 billion app downloads—with a growth rate of one billion app downloads per month. We can’t wait to see where this accelerating growth takes us in 2012.”

“To celebrate this milestone, we partnered with some of the Android developers who contributed to this milestone to make a bunch of great Android apps available at an amazing price. Starting today for the next 10 days, we’ll have a new set of awesome apps available each day for only 10 cents each. Today, we are starting with Asphalt 6 HD, Color & Draw for Kids, Endomondo Sports Tracker Pro, Fieldrunners HD, Great Little War Game, Minecraft, Paper Camera, Sketchbook Mobile, Soundhound Infinity and Swiftkey X.”

For 10 cents, why not try out a couple of apps? It’s amazing to see just how the quality of apps has improved over the past year, especially games. Unfortunately my Nexus S did reboot while playing one of the games, so the stability still isn’t all there, but I definitely do see an improvement in quality as compared to last year.

Check out the post for more details. It sounds like there will be a new set of apps at 10 cents each day!

Note: If you don’t see the apps at 10 cents on your phone, try going to the Android Market from your PC, instead. I had the same issue, where the app was listed at $4.99 on my phone and at 10 cents on the web. If you have a Google account linked to your phone, then you can just purchase from the web and Google will automatically send the app to your phone.


OpenGL ES Roundup, October 4, 2011

Diney from db-in.com has a great introduction to shaders up at his site. He annotates his post with useful diagrams and also has a clear and neat introduction to tangent space.

Learning WebGL has a cool roundup of WebGL around the web.

The NeHe OpenGL site is alive! They are starting to post updates on a regular basis again.

Is 3D the future of Android? This article by The Droid Demos takes a look at the emerging 3D screen technologies.

Android and Me has a possible screenshot of the Nexus Prime, Google’s next flagship device which will run Ice Cream Sandwich.


Android Lesson Five: An Introduction to Blending

Basic blending (additive blending of RGB cubes).

In this lesson we’ll take a look at the basics of blending in OpenGL. We’ll look at how to turn blending on and off, how to set different blending modes, and how different blending modes mimic real-life effects. In a later lesson, we’ll also look at how to use the alpha channel, how to use the depth buffer to render both translucent and opaque objects in the same scene, as well as when we need to sort objects by depth, and why.

We’ll also look at how to listen to touch events, and then change our rendering state based on that.

Assumptions and prerequisites

Each lesson in this series builds on the one before it. However, for this lesson it will be enough if you understand Android Lesson One: Getting Started. Although the code is based on the preceding lesson, the lighting and texturing portion has been removed for this lesson so we can focus on the blending.

Blending

Blending is the act of combining one color with a second in order to get a third color. We see blending all of the time in the real world: when light passes through glass, when it bounces off of a surface, and when a light source itself is superimposed on the background, such as the flare we see around a lit streetlight at night.

OpenGL has different blending modes we can use to reproduce this effect. In OpenGL, blending occurs in a late stage of the rendering process: it happens once the fragment shader has calculated the final output color of a fragment and it’s about to be written to the frame buffer. Normally that fragment just overwrites whatever was there before, but if blending is turned on, then that fragment is blended with what was there before.

By default, here’s what the OpenGL blending equation looks like when glBlendEquation() is set to the default, GL_FUNC_ADD:

output = (source factor * source fragment) + (destination factor * destination fragment)

There are also two other modes available in OpenGL ES 2, GL_FUNC_SUBTRACT and GL_FUNC_REVERSE_SUBTRACT. These may be covered in a future tutorial; however, I get an UnsupportedOperationException on the Nexus S when I try to call this function, so it’s possible that this is not actually supported on the Android implementation. This isn’t the end of the world, since there is plenty you can do already with GL_FUNC_ADD.
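
If you want to experiment with the other equations anyway, one defensive approach (a sketch, not something the lesson itself does) is to fall back to GL_FUNC_ADD when the call is rejected:

try
{
	GLES20.glBlendEquation(GLES20.GL_FUNC_SUBTRACT);
}
catch (UnsupportedOperationException e)
{
	// Not available on this implementation; fall back to additive blending.
	GLES20.glBlendEquation(GLES20.GL_FUNC_ADD);
}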

The source factor and destination factor are set using the function glBlendFunc(). An overview of a few common blend factors will be given below; more information, as well as an enumeration of the different possible factors, is available at the Khronos online manual:

The documentation appears better in Firefox or if you have a MathML extension installed.

Clamping

OpenGL expects the input to be clamped in the range [0, 1], and the output will also be clamped to the range [0, 1]. What this means in practice is that colors can shift in hue when you are doing blending. If you keep adding red (RGB = 1, 0, 0) to the frame buffer, the final color will stay red. However, if you add in just a little bit of green so that you are adding (RGB = 1, 0.1, 0) to the frame buffer, you will end up with yellow even though you started with a reddish hue! You can see this effect in the demo for this lesson when blending is turned on: the colors become oversaturated where different colors overlap.
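
For example, if we additively blend that same reddish fragment (RGB = 1, 0.1, 0) onto the frame buffer several times, the red channel saturates immediately while the green channel keeps growing:

after one pass: (1, 0.1, 0)
after two passes: (1, 0.2, 0) (red is clamped at 1)
after three passes: (1, 0.3, 0) (the hue keeps drifting toward yellow)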

Different types of blending and how they relate to different effects

The RGB additive color model. Source: http://commons.wikimedia.org/wiki/File:AdditiveColor.svg

Additive blending

Additive blending is the type of blending we do when we add different colors together and display the result. This is the way that our vision works together with light, and this is how we can perceive millions of different colors on our monitors — they are really just blending three different primary colors together.

This type of blending has many uses in 3D rendering, such as in particle effects which appear to give off light or overlays such as the corona around a light, or a glow effect around a light saber.

Additive blending can be specified by calling glBlendFunc(GL_ONE, GL_ONE). This results in the blending equation output = (1 * source fragment) + (1 * destination fragment), which collapses into output = source fragment + destination fragment.

An example of lightmapping: a lightmap applied to the first texture from http://pdtextures.blogspot.com/2008/03/first-set.html

Multiplicative blending

Multiplicative blending (also known as modulation) is another useful blending mode that represents the way that light behaves when it passes through a color filter, or bounces off of a lit object and enters our eyes. A red object appears red to us because when white light strikes the object, blue and green light is absorbed. Only the red light is reflected back toward our eyes. In the example to the left, we can see a surface that reflects some red and some green, but very little blue.

When multi-texturing is not available, multiplicative blending is used to implement lightmaps in games. The texture is multiplied by the lightmap in order to fill in the lit and shadowed areas.

Multiplicative blending can be specified by calling glBlendFunc(GL_DST_COLOR, GL_ZERO). This results in the blending equation output = (destination fragment * source fragment) + (0 * destination fragment), which collapses into output = source fragment * destination fragment.

An example of two textures interpolated together. Textures from http://pdtextures.blogspot.com/2008/03/first-set.html

Interpolative blending

Interpolative blending combines multiplication and addition to give an interpolative effect. Unlike addition and modulation by themselves, this blending mode can also be draw-order dependent, so in some cases the results will only be correct if you draw the furthest translucent objects first, and then the closer ones afterwards. Even sorting wouldn’t be perfect, since it’s possible for triangles to overlap and intersect, but the resulting artifacts may be acceptable.

Interpolation is often useful to blend adjacent surfaces together, as well as do effects like tinted glass, or fade-in/fade-out. The image on the left shows two textures (textures from public domain textures) blended together using interpolation.

Interpolation is specified by calling glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). This results in the blending equation output = (source alpha * source fragment) + ((1 – source alpha) * destination fragment). Here’s an example:

Imagine that we’re drawing a green (0r, 1g, 0b) object that is only 25% opaque. The object currently on the screen is red (1r, 0g, 0b).

output = (source factor * source fragment) + (destination factor * destination fragment)
output = (source alpha * source fragment) + ((1 – source alpha) * destination fragment)
output = (0.25 * (0r, 1g, 0b)) + (0.75 * (1r, 0g, 0b))
output = (0r, 0.25g, 0b) + (0.75r, 0g, 0b)
output = (0.75r, 0.25g, 0b)

Notice that we don’t make any reference to the destination alpha, so the frame buffer itself doesn’t need an alpha channel, which gives us more bits for the color channels.

Using blending

For our lesson, our demo will show the cubes as if they were emitters of light, using additive blending. Something that emits light doesn’t need to be lit by other light sources, so there are no lights in this demo. I’ve also removed the texture, although it could have been neat to use one. The shader program for this lesson will be simple; we just need a shader that will pass out the color given to it.

Vertex shader

uniform mat4 u_MVPMatrix;		// A constant representing the combined model/view/projection matrix.

attribute vec4 a_Position;		// Per-vertex position information we will pass in.
attribute vec4 a_Color;			// Per-vertex color information we will pass in.

varying vec4 v_Color;			// This will be passed into the fragment shader.

// The entry point for our vertex shader.
void main()
{
	// Pass through the color.
	v_Color = a_Color;

	// gl_Position is a special variable used to store the final position.
	// Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
	gl_Position = u_MVPMatrix * a_Position;
}

Fragment shader

precision mediump float;       	// Set the default precision to medium. We don't need as high of a
								// precision in the fragment shader.
varying vec4 v_Color;          	// This is the color from the vertex shader interpolated across the
  								// triangle per fragment.

// The entry point for our fragment shader.
void main()
{
	// Pass through the color
    gl_FragColor = v_Color;
}

Turning blending on

Turning blending on is as simple as making these function calls:

// No culling of back faces
GLES20.glDisable(GLES20.GL_CULL_FACE);

// No depth testing
GLES20.glDisable(GLES20.GL_DEPTH_TEST);

// Enable blending
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE);

We turn off the culling of back faces, because if a cube is translucent, then we can now see the back sides of the cube. We should draw them otherwise it might look quite strange. We turn off depth testing for the same reason.

Listening to touch events, and acting on them

You’ll notice when you run the demo that it’s possible to turn blending on and off by tapping on the screen. See the article “Listening to Android Touch Events, and Acting on Them” for more information.

Further exercises

The demo only uses additive blending at the moment. Try changing it to interpolative blending and re-adding the lights and textures. Does the draw order matter if you’re only drawing two translucent textures on a black background? When would it matter?

Wrapping up

The full source code for this lesson can be downloaded from the project site on GitHub.

A compiled version of the lesson can also be downloaded directly from the Android Market:

QR code for link to the app on the Android Market.

As always, please don’t hesitate to leave feedback or comments, and thanks for stopping by!


Android Lesson Four: Introducing Basic Texturing

Basic texturing.

This is the fourth tutorial in our Android series. In this lesson, we’re going to add to what we learned in lesson three and learn how to add texturing. We’ll look at how to read an image from the application resources, load this image into OpenGL ES, and display it on the screen.

Follow along with me and you’ll understand basic texturing in no time flat!

Assumptions and prerequisites

Each lesson in this series builds on the lesson before it. This lesson is an extension of lesson three, so please be sure to review that lesson before continuing on.

The basics of texturing

The art of texture mapping (along with lighting) is one of the most important parts of building up a realistic-looking 3D world. Without texture mapping, everything is smoothly shaded and looks quite artificial, like an old console game from the 90s.

The first games to start heavily using textures, such as Doom and Duke Nukem 3D, were able to greatly enhance the realism of the gameplay through the added visual impact — these were games that could start to truly scare us if played at night in the dark.

Here’s a look at a scene, without and with texturing:

Left: per fragment lighting; centered between four vertices of a square.

Right: per fragment lighting with texturing; centered between four vertices of a square.

In the image on the left, the scene is lit with per-pixel lighting and colored. Otherwise the scene appears very smooth; there are not many places in real life where we would walk into a room full of smooth-shaded objects like this cube. In the image on the right, the same scene has now also been textured. The ambient lighting has also been increased, because the use of textures darkens the overall scene, so that you can also see the effects of texturing on the side cubes. The cubes have the same number of polygons as before, but they appear a lot more detailed with the new texture.

For those who are curious, the texture source is from public domain textures.

Texture coordinates

In OpenGL, texture coordinates are sometimes referred to as (s, t) coordinates instead of (x, y). (s, t) represents a texel on the texture, which is then mapped to the polygon. Another thing to note is that these texture coordinates are like other OpenGL coordinates: the t (or y) axis points upwards, so that values get higher the higher you go.

In most computer images, the y axis points downwards. This means that the top-left corner of the image is (0, 0), and the y values increase the lower you go. In other words, the y axis is flipped between OpenGL’s coordinate system and most computer images, and this is something you need to take into account.

OpenGL's texture coordinate system.
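
One common way of dealing with the flipped axis (a sketch, not part of the lesson code) is to flip the bitmap vertically at load time, so that image-style coordinates can be used directly:

// Flip the bitmap vertically before uploading it, so that the top of the
// image lines up with OpenGL's expectations.
final android.graphics.Matrix flip = new android.graphics.Matrix();
flip.postScale(1f, -1f);

final Bitmap flippedBitmap = Bitmap.createBitmap(bitmap, 0, 0,
	bitmap.getWidth(), bitmap.getHeight(), flip, true);

Alternatively, you can leave the bitmap alone and flip the t coordinate in your texture coordinate data instead.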

The basics of texture mapping

In this lesson, we will look at regular 2D textures (GL_TEXTURE_2D) with red, green, and blue color information (GL_RGB). OpenGL ES also offers other texture modes that let you do different and more specialized effects. We’ll look at point sampling using GL_NEAREST. GL_LINEAR and mip-mapping will be covered in a future lesson.

Let’s start getting into the code and see how to start using basic texturing in Android!

Vertex shader

We’re going to take our per-pixel lighting shader from the previous lesson, and add texturing support. Here are the new changes:

attribute vec2 a_TexCoordinate; // Per-vertex texture coordinate information we will pass in.

...

varying vec2 v_TexCoordinate;   // This will be passed into the fragment shader.

...
// Pass through the texture coordinate.
v_TexCoordinate = a_TexCoordinate;

In the vertex shader, we add a new attribute of type vec2 (an array with two components) that will take in texture coordinate information as input. This will be per-vertex, like the position, color, and normal data. We also add a new varying that will pass this data through to the fragment shader via linear interpolation across the surface of the triangle.

Fragment shader

uniform sampler2D u_Texture;    // The input texture.

...

varying vec2 v_TexCoordinate; // Interpolated texture coordinate per fragment.

...

// Add attenuation.
 diffuse = diffuse * (1.0 / (1.0 + (0.10 * distance)));

...

// Add ambient lighting
 diffuse = diffuse + 0.3;

...

// Multiply the color by the diffuse illumination level and texture value to get final output color.

gl_FragColor = (v_Color * diffuse * texture2D(u_Texture, v_TexCoordinate));

We add a new uniform of type sampler2D to represent the actual texture data (as opposed to texture coordinates). The varying passes in the interpolated texture coordinates from the vertex shader, and we call texture2D(texture, textureCoordinate) to read in the value of the texture at the current coordinate. We then take this value and multiply it with the other terms to get the final output color.

Adding in a texture this way darkens the overall scene somewhat, so we also boost up the ambient lighting a bit and reduce the lighting attenuation.

Loading in the texture from an image file

	public static int loadTexture(final Context context, final int resourceId)
	{
		final int[] textureHandle = new int[1];

		GLES20.glGenTextures(1, textureHandle, 0);

		if (textureHandle[0] != 0)
		{
			final BitmapFactory.Options options = new BitmapFactory.Options();
			options.inScaled = false;	// No pre-scaling

			// Read in the resource
			final Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resourceId, options);

			// Bind to the texture in OpenGL
			GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);

			// Set filtering
			GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
			GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);

			// Load the bitmap into the bound texture.
			GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);

			// Recycle the bitmap, since its data has been loaded into OpenGL.
			bitmap.recycle();
		}

		if (textureHandle[0] == 0)
		{
			throw new RuntimeException("Error loading texture.");
		}

		return textureHandle[0];
	}

This bit of code will read in a graphics file from your Android res folder and load it into OpenGL. I’ll explain what each part does.

We first need to ask OpenGL to create a new handle for us. This handle serves as a unique identifier, and we use it whenever we want to refer to the same texture in OpenGL.

final int[] textureHandle = new int[1];
GLES20.glGenTextures(1, textureHandle, 0);

glGenTextures() can generate multiple handles in a single call; here we only need one.
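For example, if we needed three textures, a single call could produce all three handles. A minimal sketch (not part of the lesson’s code):

final int[] textureHandles = new int[3];

// Generate three texture handles in one call; the results are written
// into the array starting at offset 0.
GLES20.glGenTextures(3, textureHandles, 0);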

Once we have a texture handle, we use it to load the texture. First, we need to get the texture in a format that OpenGL will understand. We can’t just feed it raw data from a PNG or JPG, because it won’t understand that. The first step that we need to do is to decode the image file into an Android Bitmap object:

final BitmapFactory.Options options = new BitmapFactory.Options();
options.inScaled = false;	// No pre-scaling

// Read in the resource
final Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resourceId, options);

By default, Android applies pre-scaling to bitmaps depending on the resolution of your device and which resource folder you placed the image in. We don’t want Android to scale our bitmap at all, so to be sure, we set inScaled to false.
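Note that decodeResource() returns null if the resource can’t be decoded. The lesson’s code doesn’t guard against this, but a defensive check could look like this:

// Hypothetical guard, not in the lesson's code: fail fast if decoding failed.
if (bitmap == null)
{
	throw new RuntimeException("Error decoding resource " + resourceId + ".");
}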

// Bind to the texture in OpenGL
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);

// Set filtering
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);

We then bind to the texture and set a couple of parameters. Binding tells OpenGL that subsequent texture calls should affect this texture. We set both filters to GL_NEAREST, which is the quickest but also the roughest form of filtering: it simply picks the nearest texel at each point on the screen, which can lead to graphical artifacts and aliasing. (A smoother alternative, GL_LINEAR, is sketched after the list below.)

  • GL_TEXTURE_MIN_FILTER — the type of filtering OpenGL applies when the texture is drawn smaller than its original size in pixels (minification).
  • GL_TEXTURE_MAG_FILTER — the type of filtering OpenGL applies when the texture is drawn larger than its original size in pixels (magnification).
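
Although GL_LINEAR is covered in a future lesson, switching to it is just a matter of changing the parameter. A minimal sketch:

// GL_LINEAR averages the nearest texels, trading a little speed for
// smoother results than GL_NEAREST.
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
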
// Load the bitmap into the bound texture.
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);

// Recycle the bitmap, since its data has been loaded into OpenGL.
bitmap.recycle();

Android has a very useful utility to load bitmaps directly into OpenGL. Once you’ve read in a resource into a Bitmap object, GLUtils.texImage2D() will take care of the rest. Here’s the method signature:

public static void texImage2D (int target, int level, Bitmap bitmap, int border)

We want a regular 2D texture, so we pass in GL_TEXTURE_2D as the first parameter. The second parameter is the mip-map level, which lets you specify the image to use at each level of detail; we’re not using mip-mapping here, so we pass in 0 for the base level. We then pass in the bitmap, and finally 0 for the border, which must always be 0 in OpenGL ES.

We then call recycle() on the original bitmap, which is an important step to free up memory. The texture has been loaded into OpenGL, so we don’t need to keep a copy of it lying around. Yes, Android apps run under a Dalvik VM that performs garbage collection, but Bitmap objects hold their pixel data in native memory, outside the normal Java heap, and the garbage collector does not free it promptly. This means that you could actually crash with an out-of-memory error if you forget to recycle, even if you no longer hold any references to the bitmap.
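If you want this cleanup to happen even when the upload fails, one option (a variation, not what the lesson’s code does) is a try/finally block:

try
{
	// Load the bitmap into the bound texture.
	GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
}
finally
{
	// Free the bitmap's native memory whether or not the upload succeeded.
	bitmap.recycle();
}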

Applying the texture to our scene

First, we need to add various members to the class to hold stuff we need for our texture:

/** Store our model data in a float buffer. */
private final FloatBuffer mCubeTextureCoordinates;

/** This will be used to pass in the texture. */
private int mTextureUniformHandle;

/** This will be used to pass in model texture coordinate information. */
private int mTextureCoordinateHandle;

/** Size of the texture coordinate data in elements. */
private final int mTextureCoordinateDataSize = 2;

/** This is a handle to our texture data. */
private int mTextureDataHandle;

We basically need to add new members to track what we added to the shaders, as well as hold a reference to our texture.

Defining the texture coordinates

We define our texture coordinates in the constructor:

// S, T (or X, Y)
// Texture coordinate data.
// Because images have a Y axis pointing downward (values increase as you move down the image) while
// OpenGL has a Y axis pointing upward, we adjust for that here by flipping the Y axis.
// Note also that the texture coordinates are the same for every face.
final float[] cubeTextureCoordinateData =
{
		// Front face
		0.0f, 0.0f,
		0.0f, 1.0f,
		1.0f, 0.0f,
		0.0f, 1.0f,
		1.0f, 1.0f,
		1.0f, 0.0f,

...

The coordinate data might look a little confusing here. If you go back and look at how the position points are defined in Lesson 3, you’ll see that we define two triangles per face of the cube. The points are defined like this:

(Triangle 1)
Upper-left,
Lower-left,
Upper-right,

(Triangle 2)
Lower-left,
Lower-right,
Upper-right

The texture coordinates are pretty much the position coordinates for the front face, but with the Y axis flipped to compensate for the fact that in graphics images, the Y axis points in the opposite direction of OpenGL’s Y axis.

Setting up the texture

We load the texture in the onSurfaceCreated() method.

@Override
public void onSurfaceCreated(GL10 glUnused, EGLConfig config)
{

	...

	// The below glEnable() call is a holdover from OpenGL ES 1, and is not needed in OpenGL ES 2.
	// Enable texture mapping
	// GLES20.glEnable(GLES20.GL_TEXTURE_2D);

	...

	mProgramHandle = ShaderHelper.createAndLinkProgram(vertexShaderHandle, fragmentShaderHandle,
			new String[] {"a_Position",  "a_Color", "a_Normal", "a_TexCoordinate"});

	...

	// Load the texture
	mTextureDataHandle = TextureHelper.loadTexture(mActivityContext, R.drawable.bumpy_bricks_public_domain);

We pass in “a_TexCoordinate” as a new attribute to bind to in our shader program, and we load in our texture using the loadTexture() method we created above.

Using the texture

We also add some code to the onDrawFrame(GL10 glUnused) method.

@Override
public void onDrawFrame(GL10 glUnused)
{

	...

	mTextureUniformHandle = GLES20.glGetUniformLocation(mProgramHandle, "u_Texture");
	mTextureCoordinateHandle = GLES20.glGetAttribLocation(mProgramHandle, "a_TexCoordinate");

	// Set the active texture unit to texture unit 0.
	GLES20.glActiveTexture(GLES20.GL_TEXTURE0);

	// Bind the texture to this unit.
	GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureDataHandle);

	// Tell the texture uniform sampler to use this texture in the shader by binding to texture unit 0.
	GLES20.glUniform1i(mTextureUniformHandle, 0);

We get the shader locations for the texture data and texture coordinates. In OpenGL, textures need to be bound to texture units before they can be used in rendering. A texture unit is the part of the GPU that samples a bound texture and supplies the texel values to the shader. Different graphics chips support different numbers of texture units, so you’ll need to check how many are available before using the additional ones.
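One way to check (a sketch, not part of the lesson’s code) is to ask OpenGL for the limit:

// Query how many texture units the fragment shader can sample from.
// OpenGL ES 2.0 guarantees at least 8.
final int[] maxTextureUnits = new int[1];
GLES20.glGetIntegerv(GLES20.GL_MAX_TEXTURE_IMAGE_UNITS, maxTextureUnits, 0);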

First, we set the active texture unit to the first unit, texture unit 0. The subsequent call to glBindTexture() then binds our texture to that active unit. Finally, we set the sampler uniform via mTextureUniformHandle, which refers to “u_Texture” in the fragment shader, to 0, telling it to sample from texture unit 0.

In short:

  1. Set the active texture unit.
  2. Bind a texture to this unit.
  3. Assign this unit to a texture uniform in the fragment shader.

Repeat for as many textures as you need.
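The texture coordinates themselves are passed in like any other per-vertex attribute, using the members we defined earlier. A minimal sketch, following the same pattern the previous lessons use for the position, color, and normal data:

// Pass in the texture coordinate information.
mCubeTextureCoordinates.position(0);
GLES20.glVertexAttribPointer(mTextureCoordinateHandle, mTextureCoordinateDataSize,
		GLES20.GL_FLOAT, false, 0, mCubeTextureCoordinates);
GLES20.glEnableVertexAttribArray(mTextureCoordinateHandle);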

Further exercises

Once you’ve made it this far, you’re done! Surely that wasn’t as bad as you expected… or was it? ;) As your next exercise, try to add multi-texturing by loading in another texture, binding it to another unit, and using it in the shader.
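As a starting point, a second texture would go through the same three steps on the next unit. A hypothetical sketch, assuming a second texture handle and a second sampler uniform (mTextureDataHandle2 and mTextureUniformHandle2 are illustrative names, not part of this lesson’s code):

// Bind a second texture to texture unit 1.
GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureDataHandle2);

// Tell the second sampler uniform to read from texture unit 1.
GLES20.glUniform1i(mTextureUniformHandle2, 1);

In the fragment shader, you could then combine the two samples, for example by multiplying or mixing them.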

Review

Here is a review of the full shader code, as well as a new helper function that we added to read in the shader code from the resource folder instead of storing it as a Java String:

Vertex shader
uniform mat4 u_MVPMatrix;		// A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix;		// A constant representing the combined model/view matrix.

attribute vec4 a_Position;		// Per-vertex position information we will pass in.
attribute vec4 a_Color;			// Per-vertex color information we will pass in.
attribute vec3 a_Normal;		// Per-vertex normal information we will pass in.
attribute vec2 a_TexCoordinate; // Per-vertex texture coordinate information we will pass in.

varying vec3 v_Position;		// This will be passed into the fragment shader.
varying vec4 v_Color;			// This will be passed into the fragment shader.
varying vec3 v_Normal;			// This will be passed into the fragment shader.
varying vec2 v_TexCoordinate;   // This will be passed into the fragment shader.

// The entry point for our vertex shader.
void main()
{
	// Transform the vertex into eye space.
	v_Position = vec3(u_MVMatrix * a_Position);

	// Pass through the color.
	v_Color = a_Color;

	// Pass through the texture coordinate.
	v_TexCoordinate = a_TexCoordinate;

	// Transform the normal's orientation into eye space.
    v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));

	// gl_Position is a special variable used to store the final position.
	// Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
	gl_Position = u_MVPMatrix * a_Position;
}
Fragment shader
precision mediump float;       	// Set the default precision to medium. We don't need as high of a
								// precision in the fragment shader.
uniform vec3 u_LightPos;       	// The position of the light in eye space.
uniform sampler2D u_Texture;    // The input texture.

varying vec3 v_Position;		// Interpolated position for this fragment.
varying vec4 v_Color;          	// This is the color from the vertex shader interpolated across the
  								// triangle per fragment.
varying vec3 v_Normal;         	// Interpolated normal for this fragment.
varying vec2 v_TexCoordinate;   // Interpolated texture coordinate per fragment.

// The entry point for our fragment shader.
void main()
{
	// Will be used for attenuation.
    float distance = length(u_LightPos - v_Position);

	// Get a lighting direction vector from the light to the vertex.
    vec3 lightVector = normalize(u_LightPos - v_Position);

	// Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
	// pointing in the same direction then it will get max illumination.
    float diffuse = max(dot(v_Normal, lightVector), 0.0);

	// Add attenuation.
    diffuse = diffuse * (1.0 / (1.0 + (0.10 * distance)));

    // Add ambient lighting
    diffuse = diffuse + 0.3;

	// Multiply the color by the diffuse illumination level and texture value to get final output color.
    gl_FragColor = (v_Color * diffuse * texture2D(u_Texture, v_TexCoordinate));
}
How to read in the shader from a text file in the raw resources folder
public static String readTextFileFromRawResource(final Context context,
			final int resourceId)
	{
		final InputStream inputStream = context.getResources().openRawResource(
				resourceId);
		final InputStreamReader inputStreamReader = new InputStreamReader(
				inputStream);
		final BufferedReader bufferedReader = new BufferedReader(
				inputStreamReader);

		String nextLine;
		final StringBuilder body = new StringBuilder();

		try
		{
			while ((nextLine = bufferedReader.readLine()) != null)
			{
				body.append(nextLine);
				body.append('\n');
			}
		}
		catch (IOException e)
		{
			return null;
		}

		return body.toString();
	}
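
Usage is then a matter of reading the file into a String and compiling it as before. A sketch, assuming the helper lives in a class named RawResourceReader and the shader is stored at res/raw/per_pixel_vertex_shader.glsl (both hypothetical names):

// Read the vertex shader source from the raw resources folder.
final String vertexShader = RawResourceReader.readTextFileFromRawResource(
		mActivityContext, R.raw.per_pixel_vertex_shader);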

Wrapping up

The full source code for this lesson can be downloaded from the project site on GitHub.

A compiled version of the lesson can also be downloaded directly from the Android Market:

QR code for link to the app on the Android Market.

As always, please don’t hesitate to leave feedback or comments, and thanks for stopping by!


The Project Code Has Moved to Github!

Hi guys,

It’s been ages since I last posted an update, I know. I went away during the summer and neglected the site upon coming back, and now that I’m busy with school it’s been harder than ever to find the time to write an update. Excuses, excuses, I know. ;)

In any case, many of you were asking for a tutorial and demo on texturing, and this is what I’m going to talk about next. There also seems to be a lot more interest in Android tutorials than in WebGL tutorials, so I will be focusing more of my time on Android. Let me know if you guys have other thoughts and suggestions.

The project source code is now moving to GitHub! The project page is located at http://learnopengles.github.com/Learn-OpenGLES-Tutorials/ and the repository is located at https://github.com/learnopengles/Learn-OpenGLES-Tutorials. The old repository at https://code.google.com/p/learn-opengles-tutorials/ will remain, but will no longer be updated going forward.

There was nothing wrong with the Google Code project site, and in fact I prefer the simplicity of Google’s interface, but I also prefer to develop using Git. Once you’ve gotten used to Git, it’s hard to go back to anything else. An advantage of GitHub is that it should be easier for others to fork and contribute to the project if they wish to.

As always, let me know your comments and thoughts. The code for Lesson 4 is already done, so I’ll start writing it up now and hopefully publish that soon!
