Android Lesson Seven: An Introduction to Vertex Buffer Objects (VBOs)

A screenshot of Lesson Seven, showing the grid of cubes.

In this lesson, we’ll introduce vertex buffer objects (VBOs), how to define them, and how to use them. Here is what we are going to cover:

  • How to define and render from vertex buffer objects.
  • The difference between using a single buffer with all the data packed in, or multiple buffers.
  • Problems and pitfalls, and what to do about them.

What are vertex buffer objects, and why use them?

Up until now, all of our lessons have been storing our object data in client-side memory, only transferring it into the GPU at render time. This is fine when there is not a lot of data to transfer, but as our scenes get more complex with more objects and triangles, this can impose an extra cost on the CPU and memory usage. What can we do about this? We can use vertex buffer objects. Instead of transferring vertex information from client memory every frame, the information will be transferred once and rendering will then be done from this graphics memory cache.

Assumptions and prerequisites

Please read Android Lesson One: Getting Started for an introduction to uploading vertices from client-side memory. Understanding how OpenGL ES works with client-side vertex arrays will be crucial to understanding this lesson.

Understanding client-side buffers in more detail

Once you understand how to render using client-side memory, it’s actually not too hard to switch to using VBOs. The main difference is that there is an additional step to upload the data into graphics memory, and an additional call to bind to this buffer when rendering.

This lesson has been set up to use four different modes:

  • Client side, separate buffers.
  • Client side, packed buffer.
  • Vertex buffer object, separate buffers.
  • Vertex buffer object, packed buffers.

Whether we are using vertex buffer objects or not, we need to first store our data in a client-side direct buffer. Recall from lesson one that OpenGL ES is a native system library, whereas Java on Android runs in a virtual machine. To bridge the gap, we need to use a set of special buffer classes to allocate memory on the native heap and make it accessible to OpenGL:

// Java array.
float[] cubePositions;
...
// Floating-point buffer
final FloatBuffer cubePositionsBuffer;
...

// Allocate a direct block of memory on the native heap,
// size in bytes is equal to cubePositions.length * BYTES_PER_FLOAT.
// BYTES_PER_FLOAT is equal to 4, since a float is 32-bits, or 4 bytes.
cubePositionsBuffer = ByteBuffer.allocateDirect(cubePositions.length * BYTES_PER_FLOAT)

// Floats can be in big-endian or little-endian order.
// We want the same as the native platform.
.order(ByteOrder.nativeOrder())

// Give us a floating-point view on this byte buffer.
.asFloatBuffer();

Transferring data from the Java heap to the native heap is then a matter of a couple of calls:

// Copy data from the Java heap to the native heap.
cubePositionsBuffer.put(cubePositions)

// Reset the buffer position to the beginning of the buffer.
.position(0);

What is the purpose of the buffer position? Normally, Java does not give us a way to specify arbitrary locations in memory using pointer arithmetic. However, setting the position of the buffer is functionally equivalent to changing the value of a pointer to a block of memory. By changing the position, we can pass arbitrary memory locations within our buffer to OpenGL calls. This will come in handy when we work with packed buffers.
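For example, here is a minimal sketch of how the position behaves (the buffer and the values here are purely illustrative, not taken from the lesson’s code):

// Allocate room for nine floats on the native heap.
final FloatBuffer buffer = ByteBuffer.allocateDirect(9 * BYTES_PER_FLOAT)
	.order(ByteOrder.nativeOrder()).asFloatBuffer();

// After put(), the position sits at the end of the data,
// so OpenGL would start reading past everything we just wrote.
buffer.put(new float[] { 0f, 0f, 0f,  1f, 0f, 0f,  0f, 1f, 0f });

// Rewind so reads start at element 0 ...
buffer.position(0);

// ... or skip ahead so reads start at element 3,
// just as if we had advanced a pointer by three floats.
buffer.position(3);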

Once the data is on the native heap, we no longer need to keep the float[] array around, and we can let the garbage collector clean it up.

Rendering with client-side buffers is straightforward to set up. We just need to enable the vertex attribute array for that attribute and pass in a pointer to our data:

// Pass in the position information
GLES20.glEnableVertexAttribArray(mPositionHandle);
GLES20.glVertexAttribPointer(mPositionHandle, POSITION_DATA_SIZE,
	GLES20.GL_FLOAT, false, 0, mCubePositions);

Explanation of the parameters to glVertexAttribPointer:

  • mPositionHandle: The OpenGL index of the position attribute of our shader program.
  • POSITION_DATA_SIZE: How many elements (floats) define this attribute.
  • GL_FLOAT: The type of each element.
  • false: Should fixed-point data be normalized? Not applicable since we are using floating-point data.
  • 0: The stride. Set to 0 to mean that the positions should be read sequentially.
  • mCubePositions: The pointer to our buffer, containing all of the positional data.
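To complete the picture, drawing from the client-side buffer then comes down to a single call (vertexCount here is an assumed variable holding the total number of vertices in the buffer):

// Draw all of the triangles described by the buffer.
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount);

// Optionally, disable the attribute array once we're done with it.
GLES20.glDisableVertexAttribArray(mPositionHandle);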

Working with packed buffers

Working with packed buffers is very similar, except that instead of using a buffer each for positions, normals, etc… one buffer will contain all of this data. The difference looks like this:

Using separate buffers

positions = X,Y,Z, X, Y, Z, X, Y, Z, …
colors = R, G, B, A, R, G, B, A, …
textureCoordinates = S, T, S, T, S, T, …

Using a packed buffer

buffer = X, Y, Z, R, G, B, A, S, T, …

The advantage of using packed buffers is that rendering should be more efficient for the GPU, since all of the information needed to render a triangle is located within the same block of memory. The disadvantage is that a packed buffer may be more difficult and slower to update if you are using dynamic data.
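As an illustration, here is one way such a packed client-side buffer could be built from the separate arrays. This is only a sketch; the array names (cubeNormals, cubeTextureCoordinates) and the exact layout are assumptions, and the lesson’s actual code may differ:

final int vertexCount = cubePositions.length / POSITION_DATA_SIZE;
final int elementsPerVertex = POSITION_DATA_SIZE + NORMAL_DATA_SIZE + TEXTURE_COORDINATE_DATA_SIZE;

final FloatBuffer cubeBuffer = ByteBuffer
	.allocateDirect(vertexCount * elementsPerVertex * BYTES_PER_FLOAT)
	.order(ByteOrder.nativeOrder()).asFloatBuffer();

// Interleave the data: X, Y, Z, nX, nY, nZ, S, T for each vertex in turn.
for (int v = 0; v < vertexCount; v++) {
	cubeBuffer.put(cubePositions, v * POSITION_DATA_SIZE, POSITION_DATA_SIZE);
	cubeBuffer.put(cubeNormals, v * NORMAL_DATA_SIZE, NORMAL_DATA_SIZE);
	cubeBuffer.put(cubeTextureCoordinates, v * TEXTURE_COORDINATE_DATA_SIZE, TEXTURE_COORDINATE_DATA_SIZE);
}

cubeBuffer.position(0);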

When we use packed buffers, we need to change our rendering calls in a couple of ways. First, we need to tell OpenGL the stride, or how many bytes define a vertex.

final int stride = (POSITION_DATA_SIZE + NORMAL_DATA_SIZE + TEXTURE_COORDINATE_DATA_SIZE)
	* BYTES_PER_FLOAT;

// Pass in the position information
mCubeBuffer.position(0);
GLES20.glEnableVertexAttribArray(mPositionHandle);
GLES20.glVertexAttribPointer(mPositionHandle, POSITION_DATA_SIZE,
	GLES20.GL_FLOAT, false, stride, mCubeBuffer);

// Pass in the normal information
mCubeBuffer.position(POSITION_DATA_SIZE);
GLES20.glEnableVertexAttribArray(mNormalHandle);
GLES20.glVertexAttribPointer(mNormalHandle, NORMAL_DATA_SIZE,
	GLES20.GL_FLOAT, false, stride, mCubeBuffer);
...

The stride tells OpenGL ES how far it needs to go to find the same attribute for the next vertex. For example, if element 0 is the beginning of the position for the first vertex, and there are 8 elements per vertex, then the stride will be equal to 8 elements, or 32 bytes. The position for the next vertex will be found at element 8, and the next vertex after that at element 16, and so on.

Keep in mind that the value of the stride passed to glVertexAttribPointer should be in bytes, not elements, so remember to do that conversion.

Notice that we also change the start position of the buffer when we switch from specifying the positions to the normals. This is the pointer arithmetic I was referring to before, and this is how we can do it in Java when working with OpenGL ES. We’re still working with the same buffer, mCubeBuffer, but we tell OpenGL to start reading in the normals at the first element after the position. Again, we pass in the stride to tell OpenGL that the next normal will be found 8 elements or 32 bytes later.

Dalvik and memory on the native heap

If you allocate a lot of memory on the native heap and release it, you will probably run into the beloved OutOfMemoryError sooner or later. There are a couple of reasons behind that:

  1. You might think that you’ve released the memory by letting the reference go out of scope, but native memory seems to take a few extra GC cycles to be completely cleaned up, and Dalvik will throw an exception if there is not enough free memory available and the native memory has not yet been released.
  2. The native heap can become fragmented. Calls to allocateDirect() will inexplicably fail, even though there appears to be plenty of memory available. Sometimes it helps to make a smaller allocation, free it, and then try the larger allocation again (see the sketch below).
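As a rough illustration of that last idea, a retry helper might look something like this (purely a sketch, not from the lesson’s code, and there is no guarantee it will help on any given device):

private static FloatBuffer allocateFloatBufferWithRetry(int numFloats) {
	final int sizeInBytes = numFloats * BYTES_PER_FLOAT;

	try {
		return ByteBuffer.allocateDirect(sizeInBytes)
			.order(ByteOrder.nativeOrder()).asFloatBuffer();
	} catch (OutOfMemoryError e) {
		// Make a smaller throwaway allocation, release it, encourage a GC pass,
		// and then try the original allocation one more time.
		ByteBuffer scratch = ByteBuffer.allocateDirect(sizeInBytes / 8);
		scratch = null;
		System.gc();

		return ByteBuffer.allocateDirect(sizeInBytes)
			.order(ByteOrder.nativeOrder()).asFloatBuffer();
	}
}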

What can you do about these problems? Not much, other than hoping that Google improves the behaviour of Dalvik in future versions (they’ve added a largeHeap parameter in 3.0+), managing the heap yourself by doing your allocations in native code, or allocating a huge block upfront and spinning off buffers from it.

Note: this information was originally written in early 2012, and now Android uses a different runtime called ART which may not suffer from these problems to the same degree.

Moving to vertex buffer objects

Now that we’ve reviewed working with client-side buffers, let’s move on to vertex buffer objects! First, we need to review a few very important points:

1. Buffers must be created within a valid OpenGL context.

This might seem like an obvious point, but it’s just a reminder that you have to wait until onSurfaceCreated(), and you have to take care that the OpenGL ES calls are done on the GL thread. See this document: OpenGL ES Programming Guide for iOS. It might be written for iOS, but the behaviour of OpenGL ES is similar on Android.
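For example, if you need to create or update buffers from outside the renderer callbacks, one common approach is to post the work onto the GL thread with GLSurfaceView.queueEvent(). This is only a sketch; mGLSurfaceView is an assumed reference to your GLSurfaceView:

// Run the buffer creation on the GL thread, where the OpenGL context is current.
mGLSurfaceView.queueEvent(new Runnable() {
	@Override
	public void run() {
		final int[] buffers = new int[1];
		GLES20.glGenBuffers(1, buffers, 0);
		// ... bind the buffer and upload the data here ...
	}
});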

2. Improper use of vertex buffer objects will crash the graphics driver.

You need to be careful with the data you pass around when you use vertex buffer objects. Improper values will cause a native crash in the OpenGL ES system library or in the graphics driver library. On my Nexus S, some games freeze up my phone completely or cause it to reboot, because the graphics driver is crashing on their commands. Not all crashes will lock up your device, but at a minimum you will not see the “This application has stopped working” dialogue. Your activity will restart without warning, and the only info you’ll get might be a native debug trace in the logs.

3. The OpenGL ES bindings are broken on Froyo (2.2), and incomplete/unavailable in earlier versions.

This is the most unfortunate and most important point to consider. For some reason, Google really dropped the ball when it comes to OpenGL ES 2 support on Froyo. The mappings are incomplete, and several crucial functions needed to use vertex buffer objects are unavailable and cannot be used from Java code, at least with the standard SDK.

I don’t know if it’s because they didn’t run their unit tests, or if the developer was sloppy with their code generation tools, or if everyone was on 8 cups of coffee and burning the midnight oil to get things out the door. I don’t know why the API is broken, but the fact is that it’s broken.

There are three solutions to this problem:

  1. Target Gingerbread (2.3) and higher.
  2. Don’t use vertex buffer objects.
  3. Use your own Java Native Interface (JNI) library to interface with the native OpenGL ES system libraries.

I find option 1 to be unacceptable, since a full quarter of devices out there still run on Froyo as of the time of this writing. Option 2 works, but is kind of silly.

Note: This article was originally written in early 2012, when many devices were still on Froyo. As of 2017, this is no longer an issue and the most reasonable option is to target Gingerbread or later.

The option I recommend, and that I have decided to go with, is to use your own JNI bindings. For this lesson I have decided to go with the bindings generously provided by the guys who created libgdx, a cross-platform game development library licensed under the Apache License 2.0. You need to use the following files to make it work:

  • /libs/armeabi/libandroidgl20.so
  • /libs/armeabi-v7a/libandroidgl20.so
  • src/com/badlogic/gdx/backends/android/AndroidGL20.java
  • src/com/badlogic/gdx/graphics/GL20.java
  • src/com/badlogic/gdx/graphics/GLCommon.java

You might notice that this excludes Android platforms that do not run on ARM, and you’d be right. It would probably be possible to compile your own bindings for those platforms if you want to have VBO support on Froyo, though that is out of the scope of this lesson.

Using the bindings is as simple as these lines of code:

AndroidGL20 mGlEs20 = new AndroidGL20();
...
mGlEs20.glVertexAttribPointer(mPositionHandle, POSITION_DATA_SIZE, GLES20.GL_FLOAT, false, 0, 0);
...

You only need to call the custom bindings where the SDK-provided ones are incomplete; I use them to fill in the holes where the official bindings are missing functions.

Uploading vertex data to the GPU

To upload data to the GPU, we first need to create a client-side buffer, following the same steps as before:

...
cubePositionsBuffer = ByteBuffer.allocateDirect(cubePositions.length * BYTES_PER_FLOAT)
.order(ByteOrder.nativeOrder()).asFloatBuffer();
cubePositionsBuffer.put(cubePositions).position(0);
...

Once we have the client-side buffer, we can create a vertex buffer object and upload data from client memory to the GPU with the following commands:

// First, generate as many buffers as we need.
// This will give us the OpenGL handles for these buffers.
final int buffers[] = new int[3];
GLES20.glGenBuffers(3, buffers, 0);

// Bind to the buffer. Future commands will affect this buffer specifically.
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[0]);

// Transfer data from client memory to the buffer.
// We can release the client memory after this call.
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, cubePositionsBuffer.capacity() * BYTES_PER_FLOAT,
	cubePositionsBuffer, GLES20.GL_STATIC_DRAW);

// IMPORTANT: Unbind from the buffer when we're done with it.
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);

Once the data has been uploaded to OpenGL ES, we can release the client-side memory, as we no longer need to keep it around. Here is an explanation of the parameters to glBufferData:

  • GL_ARRAY_BUFFER: This buffer contains an array of vertex data.
  • cubePositionsBuffer.capacity() * BYTES_PER_FLOAT: The number of bytes this buffer should contain.
  • cubePositionsBuffer: The source that will be copied to this vertex buffer object.
  • GL_STATIC_DRAW: The buffer will not be updated dynamically.

Our call to glVertexAttribPointer looks a little bit different, as the last parameter is now an offset rather than a pointer to our client-side memory:

// Pass in the position information
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mCubePositionsBufferIdx);
GLES20.glEnableVertexAttribArray(mPositionHandle);
mGlEs20.glVertexAttribPointer(mPositionHandle, POSITION_DATA_SIZE, GLES20.GL_FLOAT, false, 0, 0);
...

Like before, we bind to the buffer, then enable the vertex array. Since the buffer is already bound, we only need to tell OpenGL the offset to start at when reading from the buffer. Since we are using separate buffers, we pass in an offset of 0. Notice also that we are using our custom binding to call glVertexAttribPointer, since the official SDK is missing this specific function call.

Once we are done drawing with our buffer, we should unbind from it:

GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
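Putting those pieces together, a single draw from separate vertex buffer objects might look roughly like this (vertexCount is again an assumed variable holding the total number of vertices to draw):

// Bind the position buffer and point the position attribute at it.
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mCubePositionsBufferIdx);
GLES20.glEnableVertexAttribArray(mPositionHandle);
mGlEs20.glVertexAttribPointer(mPositionHandle, POSITION_DATA_SIZE, GLES20.GL_FLOAT, false, 0, 0);

// (Set up the normal and texture coordinate buffers in the same way.)

// Draw the geometry.
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount);

// Unbind from the buffer when we're done with it.
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);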

When we no longer want to keep our buffers around, we can free the memory:

final int[] buffersToDelete = new int[] { mCubePositionsBufferIdx, mCubeNormalsBufferIdx,
	mCubeTexCoordsBufferIdx };
GLES20.glDeleteBuffers(buffersToDelete.length, buffersToDelete, 0);

Packed vertex buffer objects

We can also use a single, packed vertex buffer object to hold all of our vertex data. The creation of a packed buffer is the same as above, with the only difference being that we start from a packed client-side buffer. Rendering from the packed buffer is also the same, except we need to pass in a stride and an offset, like when using packed buffers in client-side memory:

final int stride = (POSITION_DATA_SIZE + NORMAL_DATA_SIZE + TEXTURE_COORDINATE_DATA_SIZE)
	* BYTES_PER_FLOAT;

// Pass in the position information
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mCubeBufferIdx);
GLES20.glEnableVertexAttribArray(mPositionHandle);
mGlEs20.glVertexAttribPointer(mPositionHandle, POSITION_DATA_SIZE, 
	GLES20.GL_FLOAT, false, stride, 0);

// Pass in the normal information
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mCubeBufferIdx);
GLES20.glEnableVertexAttribArray(mNormalHandle);
mGlEs20.glVertexAttribPointer(mNormalHandle, NORMAL_DATA_SIZE, 
	GLES20.GL_FLOAT, false, stride, POSITION_DATA_SIZE * BYTES_PER_FLOAT);
...

Notice that the offset needs to be specified in bytes. The same considerations of unbinding and deleting the buffer apply as before.
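For instance, the texture coordinate attribute (assuming a handle named mTextureCoordinateHandle, which is not shown above) would be set up with an offset that skips past both the position and normal data:

// Pass in the texture coordinate information.
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mCubeBufferIdx);
GLES20.glEnableVertexAttribArray(mTextureCoordinateHandle);
mGlEs20.glVertexAttribPointer(mTextureCoordinateHandle, TEXTURE_COORDINATE_DATA_SIZE,
	GLES20.GL_FLOAT, false, stride, (POSITION_DATA_SIZE + NORMAL_DATA_SIZE) * BYTES_PER_FLOAT);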

Putting it all together

This lesson is set up so that it builds a cube of cubes, with the same number of cubes in each dimension, ranging from 1x1x1 up to 16x16x16. Since each cube shares the same normal and texture data, this data will be copied repeatedly when we initialize our client-side buffer. All of the cubes will end up inside the same buffer objects.
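As a rough sketch of how such a grid could be generated (the name cubePositionTemplate and the spacing of 2.0 units are assumptions; the lesson’s actual generation code may differ), the position data for every cube can be appended to one big array:

final int cubesPerSide = 8; // For example, an 8x8x8 grid of cubes.
final int floatsPerCube = cubePositionTemplate.length; // One cube's worth of X, Y, Z data.
final float[] allPositions = new float[cubesPerSide * cubesPerSide * cubesPerSide * floatsPerCube];

int offset = 0;
for (int x = 0; x < cubesPerSide; x++) {
	for (int y = 0; y < cubesPerSide; y++) {
		for (int z = 0; z < cubesPerSide; z++) {
			// Copy the template cube, translated to this position in the grid.
			for (int i = 0; i < floatsPerCube; i += 3) {
				allPositions[offset++] = cubePositionTemplate[i] + x * 2.0f;
				allPositions[offset++] = cubePositionTemplate[i + 1] + y * 2.0f;
				allPositions[offset++] = cubePositionTemplate[i + 2] + z * 2.0f;
			}
		}
	}
}

The normal and texture coordinate data would then be repeated once per cube in the same way.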

You can view the code for the lesson to see an example of rendering with and without VBOs, and with and without packed buffers. Check the code to see how some of the following was handled:

  • Posting events from the OpenGL thread back to the main UI thread, via runOnUiThread.
  • Generating the vertex data asynchronously.
  • Handling out of memory errors.
  • Removing the call to glEnable(GL_TEXTURE_2D), since that is actually an invalid enum in OpenGL ES 2. It is a holdover from the fixed-pipeline days; in OpenGL ES 2, texturing is handled by shaders, so there is no need for a glEnable/glDisable.
  • How to render using different paths, without adding too many if statements and conditions.

Further exercises

When would you use vertex buffers and when is it better to stream data from client memory? What are some of the drawbacks of using vertex buffer objects? How would you improve the asynchronous loading code?

Wrapping up

The full source code for this lesson can be downloaded from the project site on GitHub.

Thanks for stopping by, and please feel free to check out the code and share your comments below. A special thanks goes out again to the guys at libgdx for generously providing the source code and libraries for their OpenGL ES 2 bindings for Android 2.2!


Google Celebrates 10 Billion Downloads on the Market: Several Great Apps Available for Only 10 Cents Each!


Android is booming, and its market share has gone from nothing to dominating within only a couple of years. I still remember when, just a bit more than a year ago, competitors still thought that Android was only going to fill a niche market, and that it was “clunky” and “slow”. Well, its open nature has led to a rapid explosion of Android devices, and rapidly improving performance is getting rid of the Android “lag”.

From the Android Developer’s Blog:

The Android Market has recently crossed 10 billion downloads, and Google is celebrating:
“One billion is a pretty big number by any measurement. However, when it’s describing the speed at which something is growing, it’s simply amazing. This past weekend, thanks to Android users around the world, Android Market exceeded 10 billion app downloads—with a growth rate of one billion app downloads per month. We can’t wait to see where this accelerating growth takes us in 2012.”

“To celebrate this milestone, we partnered with some of the Android developers who contributed to this milestone to make a bunch of great Android apps available at an amazing price. Starting today for the next 10 days, we’ll have a new set of awesome apps available each day for only 10 cents each. Today, we are starting with Asphalt 6 HD, Color & Draw for Kids, Endomondo Sports Tracker Pro, Fieldrunners HD, Great Little War Game, Minecraft, Paper Camera, Sketchbook Mobile, Soundhound Infinity and Swiftkey X.”

For 10 cents, why not try out a couple of apps? It’s amazing to see just how the quality of apps has improved over the past year, especially games. Unfortunately my Nexus S did reboot while playing one of the games, so the stability still isn’t all there, but I definitely do see an improvement in quality as compared to last year.

Check out the post for more details. It sounds like there will be a new set of apps at 10 cents each day!

Note: If you don’t see the apps at 10 cents on your phone, try going to the Android Market from your PC, instead. I had the same issue, where the app was listed at $4.99 on my phone and at 10 cents on the web. If you have a Google account linked to your phone, then you can just purchase from the web and Google will automatically send the app to your phone.


Android Lesson Five: An Introduction to Blending

Basic blending (additive blending of RGB cubes).

In this lesson we’ll take a look at the basics of blending in OpenGL. We’ll look at how to turn blending on and off, how to set different blending modes, and how different blending modes mimic real-life effects. In a later lesson, we’ll also look at how to use the alpha channel, how to use the depth buffer to render both translucent and opaque objects in the same scene, as well as when we need to sort objects by depth, and why.

We’ll also look at how to listen to touch events, and then change our rendering state based on that.

Assumptions and prerequisites

Each lesson in this series builds on the one before it. However, for this lesson it will be enough if you understand Android Lesson One: Getting Started. Although the code is based on the preceding lesson, the lighting and texturing portion has been removed for this lesson so we can focus on the blending.

Blending

Blending is the act of combining one color with a second in order to get a third color. We see blending all of the time in the real world: when light passes through glass, when it bounces off of a surface, and when a light source itself is superimposed on the background, such as the flare we see around a lit streetlight at night.

OpenGL has different blending modes we can use to reproduce this effect. In OpenGL, blending occurs in a late stage of the rendering process: it happens once the fragment shader has calculated the final output color of a fragment and it’s about to be written to the frame buffer. Normally that fragment just overwrites whatever was there before, but if blending is turned on, then that fragment is blended with what was there before.

By default, here’s what the OpenGL blending equation looks like when glBlendEquation() is set to the default, GL_FUNC_ADD:

output = (source factor * source fragment) + (destination factor * destination fragment)

There are also two other modes available in OpenGL ES 2, GL_FUNC_SUBTRACT and GL_FUNC_REVERSE_SUBTRACT. These may be covered in a future tutorial; however, I get an UnsupportedOperationException on the Nexus S when I try to call glBlendEquation(), so it’s possible that this is not actually supported on the Android implementation. This isn’t the end of the world, since there is plenty you can do already with GL_FUNC_ADD.

The source factor and destination factor are set using the function glBlendFunc(). An overview of a few common blend factors is given below; more information, as well as an enumeration of the different possible factors, is available at the Khronos online manual. (The documentation appears better in Firefox or if you have a MathML extension installed.)

Clamping

OpenGL expects the input to be clamped to the range [0, 1], and the output will also be clamped to the range [0, 1]. What this means in practice is that colors can shift in hue when you are doing blending. If you keep adding red (RGB = 1, 0, 0) to the frame buffer, the final color stays red. However, if you add in just a little bit of green, so that you are adding (RGB = 1, 0.1, 0) each time, you will end up with yellow even though you started with a reddish hue: the red channel clamps at 1 right away, while the green channel keeps accumulating until it also reaches 1. You can see this effect in the demo for this lesson when blending is turned on: the colors become oversaturated where different colors overlap.

Different types of blending and how they relate to different effects

The RGB additive color model. Source: http://commons.wikimedia.org/wiki/File:AdditiveColor.svg
Additive blending

Additive blending is the type of blending we do when we add different colors together and use the sum as the result. This is the way that our vision works together with light, and it is how we can perceive millions of different colors on our monitors: they are really just blending three different primary colors together.

This type of blending has many uses in 3D rendering, such as in particle effects which appear to give off light or overlays such as the corona around a light, or a glow effect around a light saber.

Additive blending can be specified by calling glBlendFunc(GL_ONE, GL_ONE). This results in the blending equation output = (1 * source fragment) + (1 * destination fragment), which collapses into output = source fragment + destination fragment.

An example of lightmapping (a lightmap applied to the first texture from http://pdtextures.blogspot.com/2008/03/first-set.html).
Multiplicative blending

Multiplicative blending (also known as modulation) is another useful blending mode that represents the way that light behaves when it passes through a color filter, or bounces off of a lit object and enters our eyes. A red object appears red to us because when white light strikes the object, blue and green light is absorbed. Only the red light is reflected back toward our eyes. In the example to the left, we can see a surface that reflects some red and some green, but very little blue.

When multi-texturing is not available, multiplicative blending is used to implement lightmaps in games. The texture is multiplied by the lightmap in order to fill in the lit and shadowed areas.

Multiplicative blending can be specified by calling glBlendFunc(GL_DST_COLOR, GL_ZERO). This results in the blending equation output = (destination fragment * source fragment) + (0 * destination fragment), which collapses into output = source fragment * destination fragment.

An example of two textures interpolated together (textures from http://pdtextures.blogspot.com/2008/03/first-set.html).
Interpolative blending

Interpolative blending combines multiplication and addition to give an interpolative effect. Unlike addition and modulation by themselves, this blending mode can also be draw-order dependent, so in some cases the results will only be correct if you draw the furthest translucent objects first, and then the closer ones afterwards. Even sorting wouldn’t be perfect, since it’s possible for triangles to overlap and intersect, but the resulting artifacts may be acceptable.

Interpolation is often useful for blending adjacent surfaces together, as well as for effects like tinted glass or fade-in/fade-out. The image on the left shows two textures (from the same public domain texture set) blended together using interpolation.

Interpolation is specified by calling glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). This results in the blending equation output = (source alpha * source fragment) + ((1 – source alpha) * destination fragment). Here’s an example:

Imagine that we’re drawing a green (0r, 1g, 0b) object that is only 25% opaque. The object currently on the screen is red (1r, 0g, 0b).

output = (source factor * source fragment) + (destination factor * destination fragment)
output = (source alpha * source fragment) + ((1 – source alpha) * destination fragment)
output = (0.25 * (0r, 1g, 0b)) + (0.75 * (1r, 0g, 0b))
output = (0r, 0.25g, 0b) + (0.75r, 0g, 0b)
output = (0.75r, 0.25g, 0b)

Notice that we don’t make any reference to the destination alpha, so the frame buffer itself doesn’t need an alpha channel, which gives us more bits for the color channels.

Using blending

For our lesson, our demo will show the cubes as if they were emitters of light, using additive blending. Something that emits light doesn’t need to be lit by other light sources, so there are no lights in this demo. I’ve also removed the texture, although it could have been neat to use one. The shader program for this lesson will be simple; we just need a shader that passes through the color given to it.

Vertex shader
uniform mat4 u_MVPMatrix;		// A constant representing the combined model/view/projection matrix.

attribute vec4 a_Position;		// Per-vertex position information we will pass in.
attribute vec4 a_Color;			// Per-vertex color information we will pass in.

varying vec4 v_Color;			// This will be passed into the fragment shader.

// The entry point for our vertex shader.
void main()
{
	// Pass through the color.
	v_Color = a_Color;

	// gl_Position is a special variable used to store the final position.
	// Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
	gl_Position = u_MVPMatrix * a_Position;
}

Fragment shader
precision mediump float;       	// Set the default precision to medium. We don't need as high of a
								// precision in the fragment shader.
varying vec4 v_Color;          	// This is the color from the vertex shader interpolated across the
  								// triangle per fragment.

// The entry point for our fragment shader.
void main()
{
	// Pass through the color
    gl_FragColor = v_Color;
}

Turning blending on

Turning blending on is as simple as making these function calls:

// No culling of back faces
GLES20.glDisable(GLES20.GL_CULL_FACE);

// No depth testing
GLES20.glDisable(GLES20.GL_DEPTH_TEST);

// Enable blending
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE);

We turn off the culling of back faces, because if a cube is translucent then we can see its back sides, and we should draw them or it might look quite strange. We turn off depth testing for the same reason.

Listening to touch events, and acting on them

You’ll notice when you run the demo that it’s possible to turn blending on and off by tapping on the screen. See the article “Listening to Android Touch Events, and Acting on Them” for more information.
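As a rough sketch of the idea (the renderer method toggleBlending() here is an assumed name, not necessarily what the lesson uses), a GLSurfaceView subclass can forward taps to the renderer on the GL thread:

@Override
public boolean onTouchEvent(MotionEvent event) {
	if (event.getAction() == MotionEvent.ACTION_DOWN) {
		// Change rendering state on the GL thread, where it's safe to call OpenGL.
		queueEvent(new Runnable() {
			@Override
			public void run() {
				mRenderer.toggleBlending();
			}
		});
		return true;
	}

	return super.onTouchEvent(event);
}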

Further exercises

The demo only uses additive blending at the moment. Try changing it to interpolative blending and re-adding the lights and textures. Does the draw order matter if you’re only drawing two translucent textures on a black background? When would it matter?

Wrapping up

The full source code for this lesson can be downloaded from the project site on GitHub.

As always, please don’t hesitate to leave feedback or comments, and thanks for stopping by!
