## Finishing Up Our Native Air Hockey Project With Touch Events and Basic Collision Detection

In this post in the air hockey series, we’re going to wrap up our air hockey project and add touch event handling and basic collision detection with support for Android, iOS, and emscripten.

### Prerequisites

This lesson continues the air hockey project series, building upon the code from GitHub for ‘article-3-matrices-and-objects’.

### Updating our game code for touch interaction

The first thing we’ll do is update the core to add touch interaction to the game. We’ll first need to add some helper functions to a new core file called geometry.h.

#### geometry.h

Let’s start off with the following code:

```c
#include "linmath.h"
#include <string.h>

typedef struct {
vec3 point;
vec3 vector;
} Ray;

typedef struct {
vec3 point;
vec3 normal;
} Plane;

typedef struct {
vec3 center;
float radius;
} Sphere;
```

These are a few `typedef`s that build upon linmath.h to add a few basic types that we’ll use in our code. Let’s wrap up geometry.h:

```c
static inline int sphere_intersects_ray(Sphere sphere, Ray ray);
static inline float distance_between(vec3 point, Ray ray);
static inline void ray_intersection_point(vec3 result, Ray ray, Plane plane);

static inline int sphere_intersects_ray(Sphere sphere, Ray ray) {
// The ray intersects the sphere if the distance from the sphere's
// center to the ray is less than the sphere's radius.
return distance_between(sphere.center, ray) < sphere.radius;
}

static inline float distance_between(vec3 point, Ray ray) {
vec3 p1_to_point;
vec3_sub(p1_to_point, point, ray.point);
vec3 p2_to_point;
vec3 translated_ray_point;
vec3_add(translated_ray_point, ray.point, ray.vector);
vec3_sub(p2_to_point, point, translated_ray_point);

// The length of the cross product gives the area of an imaginary
// parallelogram having the two vectors as sides. A parallelogram can be
// thought of as consisting of two triangles, so this is the same as
// twice the area of the triangle defined by the two vectors.
// http://en.wikipedia.org/wiki/Cross_product#Geometric_meaning
vec3 cross_product;
vec3_mul_cross(cross_product, p1_to_point, p2_to_point);
float area_of_triangle_times_two = vec3_len(cross_product);
float length_of_base = vec3_len(ray.vector);

// The area of a triangle is also equal to (base * height) / 2. In
// other words, the height is equal to (area * 2) / base. The height
// of this triangle is the distance from the point to the ray.
float distance_from_point_to_ray = area_of_triangle_times_two / length_of_base;
return distance_from_point_to_ray;
}

// http://en.wikipedia.org/wiki/Line-plane_intersection
// This also treats rays as if they were infinite. It will return a
// point full of NaNs if there is no intersection point.
static inline void ray_intersection_point(vec3 result, Ray ray, Plane plane) {
vec3 ray_to_plane_vector;
vec3_sub(ray_to_plane_vector, plane.point, ray.point);

float scale_factor = vec3_mul_inner(ray_to_plane_vector, plane.normal)
/ vec3_mul_inner(ray.vector, plane.normal);

vec3 intersection_point;
vec3 scaled_ray_vector;
vec3_scale(scaled_ray_vector, ray.vector, scale_factor);
vec3_add(intersection_point, ray.point, scaled_ray_vector);
memcpy(result, intersection_point, sizeof(intersection_point));
}
```

We’ll do a line-sphere intersection test to see if we’ve touched the mallet using our fingers or a mouse. Once we’ve grabbed the mallet, we’ll do a line-plane intersection test to determine where to place the mallet on the board.

#### game.h

We’ll need two new function prototypes in game.h:

```c
void on_touch_press(float normalized_x, float normalized_y);
void on_touch_drag(float normalized_x, float normalized_y);
```

#### game.c

Now we can begin the implementation in game.c. Add the following in the appropriate places to the top of the file:

```c
#include "geometry.h"
// ...
static const float puck_radius = 0.06f;
static const float mallet_radius = 0.08f;

static const float left_bound = -0.5f;
static const float right_bound = 0.5f;
static const float far_bound = -0.8f;
static const float near_bound = 0.8f;
// ...
static mat4x4 inverted_view_projection_matrix;

static int mallet_pressed;
static vec3 blue_mallet_position;
static vec3 previous_blue_mallet_position;
static vec3 puck_position;
static vec3 puck_vector;

static Ray convert_normalized_2D_point_to_ray(float normalized_x, float normalized_y);
static void divide_by_w(vec4 vector);
static float clamp(float value, float min, float max);
```

We’ll now begin with the code for handling a touch press:

```c
void on_touch_press(float normalized_x, float normalized_y) {
Ray ray = convert_normalized_2D_point_to_ray(normalized_x, normalized_y);

// Now test if this ray intersects with the mallet by creating a
// bounding sphere that wraps the mallet.
Sphere mallet_bounding_sphere = (Sphere) {
{blue_mallet_position[0],
blue_mallet_position[1],
blue_mallet_position[2]},
mallet_height / 2.0f};

// If the ray intersects (if the user touched a part of the screen that
// intersects the mallet's bounding sphere), then set malletPressed =
// true.
mallet_pressed = sphere_intersects_ray(mallet_bounding_sphere, ray);
}

static Ray convert_normalized_2D_point_to_ray(float normalized_x, float normalized_y) {
// We'll convert these normalized device coordinates into world-space
// coordinates. We'll pick a point on the near and far planes, and draw a
// line between them. To do this transform, we need to first multiply by
// the inverse matrix, and then we need to undo the perspective divide.
vec4 near_point_ndc = {normalized_x, normalized_y, -1, 1};
vec4 far_point_ndc = {normalized_x, normalized_y,  1, 1};

vec4 near_point_world, far_point_world;
mat4x4_mul_vec4(near_point_world, inverted_view_projection_matrix, near_point_ndc);
mat4x4_mul_vec4(far_point_world, inverted_view_projection_matrix, far_point_ndc);

// Why are we dividing by W? We multiplied our vector by an inverse
// matrix, so the W value that we end up with is actually the *inverse* of
// what the projection matrix would create. By dividing all 3 components
// by W, we effectively undo the hardware perspective divide.
divide_by_w(near_point_world);
divide_by_w(far_point_world);

// We don't care about the W value anymore, because our points are now
// in world coordinates.
vec3 near_point_ray = {near_point_world[0], near_point_world[1], near_point_world[2]};
vec3 far_point_ray = {far_point_world[0], far_point_world[1], far_point_world[2]};
vec3 vector_between;
vec3_sub(vector_between, far_point_ray, near_point_ray);
return (Ray) {
{near_point_ray[0], near_point_ray[1], near_point_ray[2]},
{vector_between[0], vector_between[1], vector_between[2]}};
}

static void divide_by_w(vec4 vector) {
vector[0] /= vector[3];
vector[1] /= vector[3];
vector[2] /= vector[3];
}
```

This code first takes normalized touch coordinates which it receives from the Android, iOS or emscripten front ends, and then turns those touch coordinates into a 3D ray in world space. It then intersects the 3D ray with a bounding sphere for the mallet to see if we’ve touched the mallet.

Let’s continue with the code for handling a touch drag:

```c
void on_touch_drag(float normalized_x, float normalized_y) {
if (mallet_pressed == 0)
return;

Ray ray = convert_normalized_2D_point_to_ray(normalized_x, normalized_y);
// Define a plane representing our air hockey table.
Plane plane = (Plane) {{0, 0, 0}, {0, 1, 0}};

// Find out where the touched point intersects the plane
// representing our table. We'll move the mallet along this plane.
vec3 touched_point;
ray_intersection_point(touched_point, ray, plane);

memcpy(previous_blue_mallet_position, blue_mallet_position,
sizeof(blue_mallet_position));

// Clamp to bounds
blue_mallet_position[0] = clamp(touched_point[0],
left_bound + mallet_radius, right_bound - mallet_radius);
blue_mallet_position[1] = mallet_height / 2.0f;
blue_mallet_position[2] = clamp(touched_point[2],
0.0f + mallet_radius, near_bound - mallet_radius);

// Now test if mallet has struck the puck.
vec3 mallet_to_puck;
vec3_sub(mallet_to_puck, puck_position, blue_mallet_position);
float distance = vec3_len(mallet_to_puck);

if (distance < (puck_radius + mallet_radius)) {
// The mallet has struck the puck. Now send the puck flying
// based on the mallet velocity.
vec3_sub(puck_vector, blue_mallet_position, previous_blue_mallet_position);
}
}

static float clamp(float value, float min, float max) {
return fmin(max, fmax(value, min));
}
```

Once we’ve grabbed the mallet, we move it across the air hockey table by intersecting the new touch point with the table to determine the new position on the table. We then move the mallet to that new position. We also check if the mallet has struck the puck, and if so, we use the movement distance to calculate the puck’s new velocity.

We next need to update the lines that initialize our objects inside `on_surface_created()` as follows:

```c
puck = create_puck(puck_radius, puck_height, 32, puck_color);
red_mallet = create_mallet(mallet_radius, mallet_height, 32, red);
blue_mallet = create_mallet(mallet_radius, mallet_height, 32, blue);

blue_mallet_position[0] = 0;
blue_mallet_position[1] = mallet_height / 2.0f;
blue_mallet_position[2] = 0.4f;
puck_position[0] = 0;
puck_position[1] = puck_height / 2.0f;
puck_position[2] = 0;
puck_vector[0] = 0;
puck_vector[1] = 0;
puck_vector[2] = 0;
```

The new linmath.h has merged in the custom code we added to our matrix_helper.h, so we no longer need that file. As part of those changes, our perspective method call in `on_surface_changed()` now needs the angle entered in radians, so let’s update that method call as follows:

```c
mat4x4_perspective(projection_matrix, deg_to_radf(45),
(float) width / (float) height, 1.0f, 10.0f);
```

We can then update `on_draw_frame()` to add the new movement code. Let’s first add the following to the top, right after the call to `glClear()`:

```c
// Translate the puck by its vector.
vec3_add(puck_position, puck_position, puck_vector);

// If the puck struck a side, reflect it off that side.
if (puck_position[0] < left_bound + puck_radius
|| puck_position[0] > right_bound - puck_radius) {
puck_vector[0] = -puck_vector[0];
vec3_scale(puck_vector, puck_vector, 0.9f);
}
if (puck_position[2] < far_bound + puck_radius
|| puck_position[2] > near_bound - puck_radius) {
puck_vector[2] = -puck_vector[2];
vec3_scale(puck_vector, puck_vector, 0.9f);
}

// Clamp the puck position.
puck_position[0] = clamp(puck_position[0],
left_bound + puck_radius, right_bound - puck_radius);
puck_position[2] = clamp(puck_position[2],
far_bound + puck_radius, near_bound - puck_radius);

// Friction factor
vec3_scale(puck_vector, puck_vector, 0.99f);
```

This code will update the puck’s position and cause it to go bouncing around the table. We’ll also need to add the following after the call to `mat4x4_mul(view_projection_matrix, projection_matrix, view_matrix);`:

`mat4x4_invert(inverted_view_projection_matrix, view_projection_matrix);`

This sets up the inverted view projection matrix, which we need for turning the normalized touch coordinates back into world space coordinates.

Let’s finish up the changes to game.c by updating the following calls to `position_object_in_scene()`:

```c
position_object_in_scene(blue_mallet_position[0], blue_mallet_position[1],
blue_mallet_position[2]);
// ...
position_object_in_scene(puck_position[0], puck_position[1], puck_position[2]);
```

### Adding touch events to Android

With these changes in place, we now need to link in the touch events from each platform. We’ll start off with Android:

#### MainActivity.java

In MainActivity.java, we first need to update the way that we create the renderer in `onCreate()`:

```java
final RendererWrapper rendererWrapper = new RendererWrapper(this);
// ...
glSurfaceView.setRenderer(rendererWrapper);
```

We’ll then attach a touch listener to the `GLSurfaceView`:

```java
glSurfaceView.setOnTouchListener(new OnTouchListener() {
@Override
public boolean onTouch(View v, MotionEvent event) {
if (event != null) {
// Convert touch coordinates into normalized device
// coordinates, keeping in mind that Android's Y
// coordinates are inverted.
final float normalizedX = (event.getX() / (float) v.getWidth()) * 2 - 1;
final float normalizedY = -((event.getY() / (float) v.getHeight()) * 2 - 1);

if (event.getAction() == MotionEvent.ACTION_DOWN) {
glSurfaceView.queueEvent(new Runnable() {
@Override
public void run() {
rendererWrapper.handleTouchPress(normalizedX, normalizedY);
}});
} else if (event.getAction() == MotionEvent.ACTION_MOVE) {
glSurfaceView.queueEvent(new Runnable() {
@Override
public void run() {
rendererWrapper.handleTouchDrag(normalizedX, normalizedY);
}});
}

return true;
} else {
return false;
}
}});
```

This touch listener takes the incoming touch events from the user, converts them into normalized coordinates in OpenGL’s normalized device coordinate space, and then calls the renderer wrapper which will pass the event on into our native code.

#### RendererWrapper.java

We’ll need to add the following to RendererWrapper.java:

```java
public void handleTouchPress(float normalizedX, float normalizedY) {
on_touch_press(normalizedX, normalizedY);
}

public void handleTouchDrag(float normalizedX, float normalizedY) {
on_touch_drag(normalizedX, normalizedY);
}

private static native void on_touch_press(float normalized_x, float normalized_y);

private static native void on_touch_drag(float normalized_x, float normalized_y);
```

#### renderer_wrapper.c

We’ll also need to add the following to renderer_wrapper.c in our jni folder:

```c
JNIEXPORT void JNICALL Java_com_learnopengles_airhockey_RendererWrapper_on_1touch_1press(
JNIEnv* env, jclass cls, jfloat normalized_x, jfloat normalized_y) {
UNUSED(env);
UNUSED(cls);
on_touch_press(normalized_x, normalized_y);
}

JNIEXPORT void JNICALL Java_com_learnopengles_airhockey_RendererWrapper_on_1touch_1drag(
JNIEnv* env, jclass cls, jfloat normalized_x, jfloat normalized_y) {
UNUSED(env);
UNUSED(cls);
on_touch_drag(normalized_x, normalized_y);
}
```

We now have everything in place for Android, and if we run the app, we should be able to grab the mallet with a touch and drag it around the table.

### Adding touch events to iOS

#### ViewController.m

To add support for iOS, we need to update ViewController.m to handle touch events. To do that and update the frame rate at the same time, let’s add the following to `viewDidLoad:` before the call to `[self setupGL]`:

```objc
view.userInteractionEnabled = YES;
self.preferredFramesPerSecond = 60;
```

To listen to the touch events, we need to override a few methods. Let’s add the following methods before `- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect`:

```objc
static CGPoint getNormalizedPoint(UIView* view, CGPoint locationInView)
{
const float normalizedX = (locationInView.x / view.bounds.size.width) * 2.f - 1.f;
const float normalizedY = -((locationInView.y / view.bounds.size.height) * 2.f - 1.f);
return CGPointMake(normalizedX, normalizedY);
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
[super touchesBegan:touches withEvent:event];
UITouch* touchEvent = [touches anyObject];
CGPoint locationInView = [touchEvent locationInView:self.view];
CGPoint normalizedPoint = getNormalizedPoint(self.view, locationInView);
on_touch_press(normalizedPoint.x, normalizedPoint.y);
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
[super touchesMoved:touches withEvent:event];
UITouch* touchEvent = [touches anyObject];
CGPoint locationInView = [touchEvent locationInView:self.view];
CGPoint normalizedPoint = getNormalizedPoint(self.view, locationInView);
on_touch_drag(normalizedPoint.x, normalizedPoint.y);
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
[super touchesEnded:touches withEvent:event];
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
[super touchesCancelled:touches withEvent:event];
}
```

This is similar to the Android code in that it takes the input touch event, converts it to OpenGL’s normalized device coordinate space, and then sends it on to our game code.

Our iOS app should look similar to the following image:

### Adding touch events to emscripten

#### main.c

Adding support for emscripten is just as easy. Let’s first add the following to the top of main.c:

```c
static void handle_input();
// ...
int is_dragging;
```

At the beginning of `do_frame()`, add a call to `handle_input();`:

```c
static void do_frame()
{
handle_input();
// ...
```

Add the following for `handle_input`:

```c
static void handle_input()
{
glfwPollEvents();
const int left_mouse_button_state = glfwGetMouseButton(GLFW_MOUSE_BUTTON_1);
if (left_mouse_button_state == GLFW_PRESS) {
int x_pos, y_pos;
glfwGetMousePos(&x_pos, &y_pos);
const float normalized_x = ((float)x_pos / (float) width) * 2.f - 1.f;
const float normalized_y = -(((float)y_pos / (float) height) * 2.f - 1.f);

if (is_dragging == 0) {
is_dragging = 1;
on_touch_press(normalized_x, normalized_y);
} else {
on_touch_drag(normalized_x, normalized_y);
}
} else {
is_dragging = 0;
}
}
```

This code sets `is_dragging` depending on whether we just pressed the primary mouse button or are currently dragging the mouse, and calls either `on_touch_press` or `on_touch_drag` accordingly. The code to normalize the coordinates is the same as on Android and iOS; indeed, a case could be made for abstracting it out into the common game code and passing in the raw coordinates relative to the view size.
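Since all three front ends repeat the same conversion, such a shared helper might look like the following sketch (the names `NormalizedPoint` and `normalize_touch` are illustrative, not part of the project):

```c
#include <math.h>

typedef struct { float x; float y; } NormalizedPoint;

/* Convert window coordinates (origin at top-left, Y growing downward)
 * into OpenGL normalized device coordinates in [-1, 1] (origin at the
 * center, Y growing upward). */
static NormalizedPoint normalize_touch(float x, float y,
                                       float width, float height) {
    NormalizedPoint p;
    p.x = (x / width) * 2.0f - 1.0f;
    p.y = -((y / height) * 2.0f - 1.0f);  /* flip Y: window Y points down */
    return p;
}
```

For an 800×600 view, the top-left corner (0, 0) maps to (-1, 1) and the center (400, 300) maps to (0, 0).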

After compiling with emcc make, we should get output similar to the below:

### Exploring further

That concludes our air hockey project! The full source code for this lesson can be found at the GitHub project. You can find a more in-depth look at the concepts behind the project from the perspective of Java Android in OpenGL ES 2 for Android: A Quick-Start Guide. For exploring further, there are many things you could add, like improved graphics, support for sound, a simple AI, multiplayer (on the same device), scoring, or a menu system.

Whether you end up using a commercial cross-platform solution like Unity or Corona, or whether you decide to go the independent route, I hope this series was helpful to you and most importantly, that you enjoy your future projects ahead and have a lot of fun with them. 🙂

## Adding a 3D Perspective and Object Rendering to Our Air Hockey Project in Native C Code

For this post in the air hockey series, we’ll learn how to render our scene from a 3D perspective, as well as how to add a puck and two mallets to the scene. We’ll also see how easy it is to bring these changes to Android, iOS, and emscripten.

### Prerequisites

This lesson continues the air hockey project series, building upon the code from GitHub for ‘article-2-loading-png-file’.

### Adding support for a matrix library

The first thing we’ll do is add support for a matrix library so we can use the same matrix math on all three platforms, and then we’ll introduce the changes to our code from the top down. There are a lot of libraries out there, so I decided to use linmath.h by Wolfgang Draxinger for its simplicity and compactness. Since it’s on GitHub, we can easily add it to our project by running the following git command from the root airhockey/ folder:

`git submodule add https://github.com/datenwolf/linmath.h.git src/3rdparty/linmath`

### Updating our game code

We’ll introduce all of the changes from the top down, so let’s begin by replacing everything inside game.c as follows:

```c
#include "game.h"
#include "game_objects.h"
#include "asset_utils.h"
#include "buffer.h"
#include "image.h"
#include "linmath.h"
#include "math_helper.h"
#include "matrix.h"
#include "platform_gl.h"
#include "platform_asset_utils.h"
#include "program.h"
#include "texture.h"

static const float puck_height = 0.02f;
static const float mallet_height = 0.15f;

static Table table;
static Puck puck;
static Mallet red_mallet;
static Mallet blue_mallet;

static TextureProgram texture_program;
static ColorProgram color_program;

static mat4x4 projection_matrix;
static mat4x4 model_matrix;
static mat4x4 view_matrix;

static mat4x4 view_projection_matrix;
static mat4x4 model_view_projection_matrix;

static void position_table_in_scene();
static void position_object_in_scene(float x, float y, float z);
```

We’ve added all of the new includes, constants, variables, and function declarations that we’ll need for our new game code. We’ll use `Table`, `Puck`, and `Mallet` to represent our drawable objects, `TextureProgram` and `ColorProgram` to represent our shader programs, and the `mat4x4` (a datatype from linmath.h) matrices for our OpenGL matrices. In our draw loop, we’ll call position_table_in_scene() to position the table, and position_object_in_scene() to position our other objects.

For those of you who have also followed the Java tutorials from OpenGL ES 2 for Android: A Quick-Start Guide, you’ll recognize that this has a lot in common with the air hockey project from the first part of the book. The code for that project can be freely downloaded from The Pragmatic Bookshelf.

#### `on_surface_created()`

```c
void on_surface_created() {
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glEnable(GL_DEPTH_TEST);

table = create_table(load_png_asset_into_texture(
"textures/air_hockey_surface.png"));

vec4 puck_color = {0.8f, 0.8f, 1.0f, 1.0f};
vec4 red = {1.0f, 0.0f, 0.0f, 1.0f};
vec4 blue = {0.0f, 0.0f, 1.0f, 1.0f};

puck = create_puck(0.06f, puck_height, 32, puck_color);
red_mallet = create_mallet(0.08f, mallet_height, 32, red);
blue_mallet = create_mallet(0.08f, mallet_height, 32, blue);

texture_program = get_texture_program(build_program_from_assets(
"shaders/texture_shader.vsh", "shaders/texture_shader.fsh"));
color_program = get_color_program(build_program_from_assets(
"shaders/color_shader.vsh", "shaders/color_shader.fsh"));
}
```

Our new `on_surface_created()` enables depth-testing, initializes the table, puck, and mallets, and loads in the shader programs.

#### `on_surface_changed(int width, int height)`

```c
void on_surface_changed(int width, int height) {
glViewport(0, 0, width, height);
mat4x4_perspective(projection_matrix, 45, (float) width / (float) height, 1, 10);
mat4x4_look_at(view_matrix, 0.0f, 1.2f, 2.2f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f);
}
```

Our new `on_surface_changed(int width, int height)` now takes in two parameters for the width and the height, and it sets up a projection matrix, and then sets up the view matrix to be slightly above and behind the origin, with an eye position of (0, 1.2, 2.2).

#### `on_draw_frame()`

```c
void on_draw_frame() {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
mat4x4_mul(view_projection_matrix, projection_matrix, view_matrix);

position_table_in_scene();
draw_table(&table, &texture_program, model_view_projection_matrix);

position_object_in_scene(0.0f, mallet_height / 2.0f, -0.4f);
draw_mallet(&red_mallet, &color_program, model_view_projection_matrix);

position_object_in_scene(0.0f, mallet_height / 2.0f, 0.4f);
draw_mallet(&blue_mallet, &color_program, model_view_projection_matrix);

// Draw the puck.
position_object_in_scene(0.0f, puck_height / 2.0f, 0.0f);
draw_puck(&puck, &color_program, model_view_projection_matrix);
}
```

Our new `on_draw_frame()` positions and draws the table, mallets, and the puck.

Because we changed the definition of `on_surface_changed()`, we also have to change the declaration in game.h. Change `void on_surface_changed();` to `void on_surface_changed(int width, int height);`.

```c
static void position_table_in_scene() {
// The table is defined in terms of X & Y coordinates, so we rotate it
// 90 degrees to lie flat on the XZ plane.
mat4x4 rotated_model_matrix;
mat4x4_identity(model_matrix);
mat4x4_rotate_X(rotated_model_matrix, model_matrix, deg_to_radf(-90.0f));
mat4x4_mul(
model_view_projection_matrix, view_projection_matrix, rotated_model_matrix);
}

static void position_object_in_scene(float x, float y, float z) {
mat4x4_identity(model_matrix);
mat4x4_translate_in_place(model_matrix, x, y, z);
mat4x4_mul(model_view_projection_matrix, view_projection_matrix, model_matrix);
}
```

These functions update the matrices to let us position the table, puck, and mallets in the scene. We’ll define all of the extra functions that we need soon.

Now we’ll start drilling down into each part of the program and make the changes necessary for our game code to work. Let’s begin by updating our shaders. First, let’s rename our vertex shader shader.vsh to texture_shader.vsh and update it as follows:

```glsl
uniform mat4 u_MvpMatrix;

attribute vec4 a_Position;
attribute vec2 a_TextureCoordinates;

varying vec2 v_TextureCoordinates;

void main()
{
v_TextureCoordinates = a_TextureCoordinates;
gl_Position = u_MvpMatrix * a_Position;
}
```

We’ll also need a new set of shaders to render our puck and mallets. Let’s add the following new vertex and fragment shaders (color_shader.vsh and color_shader.fsh):

```glsl
uniform mat4 u_MvpMatrix;
attribute vec4 a_Position;
void main()
{
gl_Position = u_MvpMatrix * a_Position;
}
```

```glsl
precision mediump float;
uniform vec4 u_Color;
void main()
{
gl_FragColor = u_Color;
}
```

### Creating our game objects

Now we’ll add support for generating and drawing our game objects. Let’s begin with game_objects.h:

```c
#include "platform_gl.h"
#include "program.h"
#include "linmath.h"

typedef struct {
GLuint texture;
GLuint buffer;
} Table;

typedef struct {
vec4 color;
GLuint buffer;
int num_points;
} Puck;

typedef struct {
vec4 color;
GLuint buffer;
int num_points;
} Mallet;

Table create_table(GLuint texture);
void draw_table(const Table* table, const TextureProgram* texture_program, mat4x4 m);

Puck create_puck(float radius, float height, int num_points, vec4 color);
void draw_puck(const Puck* puck, const ColorProgram* color_program, mat4x4 m);

Mallet create_mallet(float radius, float height, int num_points, vec4 color);
void draw_mallet(const Mallet* mallet, const ColorProgram* color_program, mat4x4 m);
```

We’ve defined three C structs to hold the data for our table, puck, and mallets, and we’ve declared functions to create and draw these objects.

#### Drawing a table

Let’s continue with game_objects.c:

```c
#include "game_objects.h"
#include "buffer.h"
#include "platform_gl.h"
#include "program.h"
#include "linmath.h"
#include <math.h>

// Triangle fan
// position X, Y, texture S, T
static const float table_data[] = { 0.0f,  0.0f, 0.5f, 0.5f,
-0.5f, -0.8f, 0.0f, 0.9f,
0.5f, -0.8f, 1.0f, 0.9f,
0.5f,  0.8f, 1.0f, 0.1f,
-0.5f,  0.8f, 0.0f, 0.1f,
-0.5f, -0.8f, 0.0f, 0.9f};

Table create_table(GLuint texture) {
return (Table) {texture,
create_vbo(sizeof(table_data), table_data, GL_STATIC_DRAW)};
}

void draw_table(const Table* table, const TextureProgram* texture_program, mat4x4 m)
{
glUseProgram(texture_program->program);

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, table->texture);
glUniformMatrix4fv(texture_program->u_mvp_matrix_location, 1,
GL_FALSE, (GLfloat*)m);
glUniform1i(texture_program->u_texture_unit_location, 0);

glBindBuffer(GL_ARRAY_BUFFER, table->buffer);
glVertexAttribPointer(texture_program->a_position_location, 2, GL_FLOAT,
GL_FALSE, 4 * sizeof(GL_FLOAT), BUFFER_OFFSET(0));
glVertexAttribPointer(texture_program->a_texture_coordinates_location, 2, GL_FLOAT,
GL_FALSE, 4 * sizeof(GL_FLOAT), BUFFER_OFFSET(2 * sizeof(GL_FLOAT)));
glEnableVertexAttribArray(texture_program->a_position_location);
glEnableVertexAttribArray(texture_program->a_texture_coordinates_location);
glDrawArrays(GL_TRIANGLE_FAN, 0, 6);

glBindBuffer(GL_ARRAY_BUFFER, 0);
}
```

After the includes, this is the code to create and draw the table data. This is essentially the same as what we had before, with the coordinates adjusted a bit to change the table into a rectangle.

#### Generating circles and cylinders

Before we can draw a puck or a mallet, we’ll need to add some helper functions to draw a circle or a cylinder. Let’s define those now:

```c
static inline int size_of_circle_in_vertices(int num_points) {
return 1 + (num_points + 1);
}

static inline int size_of_open_cylinder_in_vertices(int num_points) {
return (num_points + 1) * 2;
}
```

We first need two helper functions to calculate the size of a circle or a cylinder in terms of vertices. A circle drawn as a triangle fan has one vertex for the center, `num_points` vertices around the circle, and one more vertex to close the circle. An open-ended cylinder drawn as a triangle strip doesn’t have a center point, but it does have two vertices for each point around the circle, and two more vertices to close off the circle.
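To make the counting concrete, here is the same arithmetic as a standalone sketch, checked against the `num_points` of 32 that our game code uses (`puck_float_count` is an illustrative helper, not project code):

```c
/* A triangle-fan circle: one center vertex, one vertex per point around
 * the rim, plus a repeat of the first rim vertex to close the fan. */
static int size_of_circle_in_vertices(int num_points) {
    return 1 + (num_points + 1);
}

/* A triangle-strip cylinder: a top/bottom pair of vertices per rim
 * position, with the first pair repeated to close the loop. */
static int size_of_open_cylinder_in_vertices(int num_points) {
    return (num_points + 1) * 2;
}

/* Total floats in a puck's vertex buffer: one circle plus one open
 * cylinder, three floats (x, y, z) per vertex. */
static int puck_float_count(int num_points) {
    return (size_of_circle_in_vertices(num_points)
            + size_of_open_cylinder_in_vertices(num_points)) * 3;
}
```

With 32 points, the circle takes 34 vertices, the cylinder 66, and the puck buffer 300 floats in total.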

```c
static inline int gen_circle(float* out, int offset,
float center_x, float center_y, float center_z,
float radius, int num_points)
{
out[offset++] = center_x;
out[offset++] = center_y;
out[offset++] = center_z;

int i;
for (i = 0; i <= num_points; ++i) {
float angle_in_radians = ((float) i / (float) num_points)
* ((float) M_PI * 2.0f);
out[offset++] = center_x + radius * cosf(angle_in_radians);
out[offset++] = center_y;
out[offset++] = center_z + radius * sinf(angle_in_radians);
}

return offset;
}
```

This code will generate a circle, given a center point, a radius, and the number of points around the circle.

```c
static inline int gen_cylinder(float* out, int offset,
float center_x, float center_y, float center_z,
float height, float radius, int num_points)
{
const float y_start = center_y - (height / 2.0f);
const float y_end = center_y + (height / 2.0f);

int i;
for (i = 0; i <= num_points; i++) {
float angle_in_radians = ((float) i / (float) num_points)
* ((float) M_PI * 2.0f);
float x_position = center_x + radius * cosf(angle_in_radians);
float z_position = center_z + radius * sinf(angle_in_radians);

out[offset++] = x_position;
out[offset++] = y_start;
out[offset++] = z_position;

out[offset++] = x_position;
out[offset++] = y_end;
out[offset++] = z_position;
}

return offset;
}
```

This code will generate the vertices for an open-ended cylinder. Note that for both the circle and the cylinder, the loop goes from 0 to `num_points`, so the first and last points around the circle are duplicated in order to close the loop around the circle.

#### Drawing a puck

Let’s add the code to generate and draw the puck:

```c
Puck create_puck(float radius, float height, int num_points, vec4 color)
{
float data[(size_of_circle_in_vertices(num_points)
+ size_of_open_cylinder_in_vertices(num_points)) * 3];

int offset = gen_circle(data, 0, 0.0f, height / 2.0f, 0.0f, radius, num_points);
gen_cylinder(data, offset, 0.0f, 0.0f, 0.0f, height, radius, num_points);

return (Puck) {{color[0], color[1], color[2], color[3]},
create_vbo(sizeof(data), data, GL_STATIC_DRAW),
num_points};
}
```

A puck contains one open-ended cylinder, and a circle to top off that cylinder.

```c
void draw_puck(const Puck* puck, const ColorProgram* color_program, mat4x4 m)
{
glUseProgram(color_program->program);

glUniformMatrix4fv(color_program->u_mvp_matrix_location, 1, GL_FALSE, (GLfloat*)m);
glUniform4fv(color_program->u_color_location, 1, puck->color);

glBindBuffer(GL_ARRAY_BUFFER, puck->buffer);
glVertexAttribPointer(color_program->a_position_location, 3, GL_FLOAT,
GL_FALSE, 0, BUFFER_OFFSET(0));
glEnableVertexAttribArray(color_program->a_position_location);

int circle_vertex_count = size_of_circle_in_vertices(puck->num_points);
int cylinder_vertex_count = size_of_open_cylinder_in_vertices(puck->num_points);

glDrawArrays(GL_TRIANGLE_FAN, 0, circle_vertex_count);
glDrawArrays(GL_TRIANGLE_STRIP, circle_vertex_count, cylinder_vertex_count);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
```

To draw the puck, we pass in the uniforms and attributes, and then we draw the circle as a triangle fan, and the cylinder as a triangle strip.

#### Drawing a mallet

Let’s continue with the code to create and draw a mallet:

```c
Mallet create_mallet(float radius, float height, int num_points, vec4 color)
{
float data[(size_of_circle_in_vertices(num_points) * 2
+ size_of_open_cylinder_in_vertices(num_points) * 2) * 3];

float base_height = height * 0.25f;
float handle_height = height * 0.75f;
float handle_radius = radius / 3.0f;

int offset = gen_circle(data, 0, 0.0f, -base_height, 0.0f, radius, num_points);
offset = gen_circle(data, offset,
0.0f, height * 0.5f, 0.0f,
handle_radius, num_points);
offset = gen_cylinder(data, offset,
0.0f, -base_height - base_height / 2.0f, 0.0f,
base_height, radius, num_points);
gen_cylinder(data, offset,
0.0f, height * 0.5f - handle_height / 2.0f, 0.0f,
handle_height, handle_radius, num_points);

return (Mallet) {{color[0], color[1], color[2], color[3]},
create_vbo(sizeof(data), data, GL_STATIC_DRAW),
num_points};
}
```

A mallet contains two circles and two open-ended cylinders, positioned and sized so that the mallet’s base is wider and shorter than the mallet’s handle.

```c
void draw_mallet(const Mallet* mallet, const ColorProgram* color_program, mat4x4 m)
{
glUseProgram(color_program->program);

glUniformMatrix4fv(color_program->u_mvp_matrix_location, 1, GL_FALSE, (GLfloat*)m);
glUniform4fv(color_program->u_color_location, 1, mallet->color);

glBindBuffer(GL_ARRAY_BUFFER, mallet->buffer);
glVertexAttribPointer(color_program->a_position_location, 3, GL_FLOAT,
GL_FALSE, 0, BUFFER_OFFSET(0));
glEnableVertexAttribArray(color_program->a_position_location);

int circle_vertex_count = size_of_circle_in_vertices(mallet->num_points);
int cylinder_vertex_count = size_of_open_cylinder_in_vertices(mallet->num_points);
int start_vertex = 0;

glDrawArrays(GL_TRIANGLE_FAN, start_vertex, circle_vertex_count);
start_vertex += circle_vertex_count;
glDrawArrays(GL_TRIANGLE_FAN, start_vertex, circle_vertex_count);
start_vertex += circle_vertex_count;
glDrawArrays(GL_TRIANGLE_STRIP, start_vertex, cylinder_vertex_count);
start_vertex += cylinder_vertex_count;
glDrawArrays(GL_TRIANGLE_STRIP, start_vertex, cylinder_vertex_count);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}```

Drawing the mallet is similar to drawing the puck, except that now we draw two circles and two cylinders.

game.c uses a helper function that we haven’t defined yet; create a new header file called math_helper.h, and add the following code:

```#include <math.h>

static inline float deg_to_radf(float deg) {
return deg * (float)M_PI / 180.0f;
}```

Since C’s trigonometric functions expect passed-in values to be in radians, we’ll use this function to convert degrees into radians, where needed.

While linmath.h contains a lot of useful functions, there are a few missing that we need for our game code. Create a new header file called matrix.h, and begin by adding the following code, adapted from Android’s OpenGL `Matrix` class:

```#include "linmath.h"
#include <math.h>
#include <string.h>

/* Adapted from Android's OpenGL Matrix.java. */

static inline void mat4x4_perspective(mat4x4 m, float y_fov_in_degrees,
float aspect, float n, float f)
{
const float angle_in_radians = (float) (y_fov_in_degrees * M_PI / 180.0);
const float a = (float) (1.0 / tan(angle_in_radians / 2.0));

m[0][0] = a / aspect;
m[0][1] = 0.0f;
m[0][2] = 0.0f;
m[0][3] = 0.0f;

m[1][0] = 0.0f;
m[1][1] = a;
m[1][2] = 0.0f;
m[1][3] = 0.0f;

m[2][0] = 0.0f;
m[2][1] = 0.0f;
m[2][2] = -((f + n) / (f - n));
m[2][3] = -1.0f;

m[3][0] = 0.0f;
m[3][1] = 0.0f;
m[3][2] = -((2.0f * f * n) / (f - n));
m[3][3] = 0.0f;
}```

We’ll use `mat4x4_perspective()` to set up a perspective projection matrix.

```static inline void mat4x4_translate_in_place(mat4x4 m, float x, float y, float z)
{
int i;
for (i = 0; i < 4; ++i) {
m[3][i] += m[0][i] * x
+  m[1][i] * y
+  m[2][i] * z;
}
}```

This helper function lets us translate a matrix in place.
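To see what it does: translating an identity matrix by (x, y, z) simply writes the translation into the fourth column, while translating an already-rotated matrix moves along the matrix's local axes. A self-contained sketch (with `mat4x4` inlined as the plain 4x4 float array that linmath.h defines, and functions renamed with a `_demo` suffix so they don't collide with the real ones):

```c
typedef float mat4x4[4][4];

static void mat4x4_identity_demo(mat4x4 m) {
    int i, j;
    for (i = 0; i < 4; ++i)
        for (j = 0; j < 4; ++j)
            m[i][j] = (i == j) ? 1.0f : 0.0f;
}

/* Same logic as mat4x4_translate_in_place() above. */
static void mat4x4_translate_in_place_demo(mat4x4 m, float x, float y, float z) {
    int i;
    for (i = 0; i < 4; ++i) {
        m[3][i] += m[0][i] * x + m[1][i] * y + m[2][i] * z;
    }
}

/* Translating identity by (2, 3, 4) yields a fourth column of (2, 3, 4, 1);
 * this helper returns component i of that column for checking. */
static float translated_component(int i) {
    mat4x4 m;
    mat4x4_identity_demo(m);
    mat4x4_translate_in_place_demo(m, 2.0f, 3.0f, 4.0f);
    return m[3][i];
}
```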

```static inline void mat4x4_look_at(mat4x4 m,
float eyeX, float eyeY, float eyeZ,
float centerX, float centerY, float centerZ,
float upX, float upY, float upZ)
{
// See the OpenGL GLUT documentation for gluLookAt for a description
// of the algorithm. We implement it in a straightforward way:

float fx = centerX - eyeX;
float fy = centerY - eyeY;
float fz = centerZ - eyeZ;

// Normalize f
vec3 f_vec = {fx, fy, fz};
float rlf = 1.0f / vec3_len(f_vec);
fx *= rlf;
fy *= rlf;
fz *= rlf;

// compute s = f x up (x means "cross product")
float sx = fy * upZ - fz * upY;
float sy = fz * upX - fx * upZ;
float sz = fx * upY - fy * upX;

// and normalize s
vec3 s_vec = {sx, sy, sz};
float rls = 1.0f / vec3_len(s_vec);
sx *= rls;
sy *= rls;
sz *= rls;

// compute u = s x f
float ux = sy * fz - sz * fy;
float uy = sz * fx - sx * fz;
float uz = sx * fy - sy * fx;

m[0][0] = sx;
m[0][1] = ux;
m[0][2] = -fx;
m[0][3] = 0.0f;

m[1][0] = sy;
m[1][1] = uy;
m[1][2] = -fy;
m[1][3] = 0.0f;

m[2][0] = sz;
m[2][1] = uz;
m[2][2] = -fz;
m[2][3] = 0.0f;

m[3][0] = 0.0f;
m[3][1] = 0.0f;
m[3][2] = 0.0f;
m[3][3] = 1.0f;

mat4x4_translate_in_place(m, -eyeX, -eyeY, -eyeZ);
}```

We can use `mat4x4_look_at()` like a camera: given an eye position, a point to look at, and an up vector, it builds a view matrix that positions and orients the scene as seen from that eye point.

We’re almost done with the changes to our core code. Let’s wrap up those changes by adding the following code:

#### program.h

```#pragma once
#include "platform_gl.h"

typedef struct {
GLuint program;

GLint a_position_location;
GLint a_texture_coordinates_location;
GLint u_mvp_matrix_location;
GLint u_texture_unit_location;
} TextureProgram;

typedef struct {
GLuint program;

GLint a_position_location;
GLint u_mvp_matrix_location;
GLint u_color_location;
} ColorProgram;

TextureProgram get_texture_program(GLuint program);
ColorProgram get_color_program(GLuint program);```

#### program.c

```#include "program.h"
#include "platform_gl.h"

TextureProgram get_texture_program(GLuint program)
{
return (TextureProgram) {
program,
glGetAttribLocation(program, "a_Position"),
glGetAttribLocation(program, "a_TextureCoordinates"),
glGetUniformLocation(program, "u_MvpMatrix"),
glGetUniformLocation(program, "u_TextureUnit")};
}

ColorProgram get_color_program(GLuint program)
{
return (ColorProgram) {
program,
glGetAttribLocation(program, "a_Position"),
glGetUniformLocation(program, "u_MvpMatrix"),
glGetUniformLocation(program, "u_Color")};
}```

To get this new code building on Android, we first need to update Android.mk and add the following to `LOCAL_SRC_FILES`:

```				   $(CORE_RELATIVE_PATH)/game_objects.c \
$(CORE_RELATIVE_PATH)/program.c \```

We also need to add a new `LOCAL_C_INCLUDES`:

`LOCAL_C_INCLUDES += $(PROJECT_ROOT_PATH)/3rdparty/linmath/`

We then need to update renderer_wrapper.c and change the call to `on_surface_changed();` to ` on_surface_changed(width, height);`. Once we’ve done that, we should be able to run the app on our Android device, and it should look similar to the following image:

For iOS, we just need to open up the Xcode project and add the necessary references to linmath.h and our new core files to the appropriate folder groups, and then we need to update ViewController.m and change `on_surface_changed();` to the following:

`on_surface_changed([[self view] bounds].size.width, [[self view] bounds].size.height);`

Once we run the app, it should look similar to the following image:

For emscripten, we need to update the Makefile and add the following lines to `SOURCES`:

```		  ../../core/game_objects.c \
../../core/program.c \```

We’ll also need to add the following lines to `OBJECTS`:

```		  ../../core/game_objects.o \
../../core/program.o \```

We then just need to update main.c, move the constants `width` and `height` from inside `init_gl()` to outside the function near the top of the file, and update the call to `on_surface_changed();` to `on_surface_changed(width, height);`. We can then build by calling `emmake make`, which should produce output that looks similar to the following image:

See how easy that was? Now that we have a minimal cross-platform framework in place, it’s very easy for us to bring changes to the core code across to each platform.

### Exploring further

The full source code for this lesson can be found at the GitHub project. In the next post, we’ll take a look at user input so we can move our mallet around the screen.

## Loading a PNG into Memory and Displaying It as a Texture with OpenGL ES 2, Using (Almost) the Same Code on iOS, Android, and Emscripten

In the last post in this series, we setup a system to render OpenGL to Android, iOS and the web via WebGL and emscripten. In this post, we’ll expand on that work and add support for PNG loading, shaders, and VBOs.

### TL;DR

We can put most of our common code into a core folder, and call into that core from a main loop in our platform-specific code. By taking advantage of open source libraries like libpng and zlib, most of our code can remain platform independent. In this post, we cover the new core code and the new Android platform-specific code.

### Prerequisites

Before we begin, you may want to check out the previous posts in this series so that you can get the right tools installed and configured on your local development machine:

You can setup a local git repository with all of the code by cloning ‘article-1-clearing-the-screen’ or by downloading it as a ZIP from GitHub: https://github.com/learnopengles/airhockey/tree/article-1-clearing-the-screen.

For a “friendlier” introduction to OpenGL ES 2 using Java as the development language of choice, you can also check out Android Lesson One: Getting Started or OpenGL ES 2 for Android: A Quick-Start Guide.

### Updating the platform-independent code

In this section, we’ll cover all of the new changes to the platform-independent core code that we’ll be making to support the new features. The first thing that we’ll do is move things around, so that they follow this new structure:

/src/common => rename to /src/core

/src/android => rename to /src/platform/android

/src/ios => rename to /src/platform/ios

/src/emscripten => rename to /src/platform/emscripten

We’ll also rename glwrapper.h to platform_gl.h for all platforms. This will help to keep our source code more organized as we add more features and source files.

To start off, let’s cover all of the source files that go into /src/core.

Let’s begin with buffer.h:

```#include "platform_gl.h"

#define BUFFER_OFFSET(i) ((void*)(i))

GLuint create_vbo(const GLsizeiptr size, const GLvoid* data, const GLenum usage);```

We’ll use `create_vbo` to upload data into a vertex buffer object. `BUFFER_OFFSET()` is a helper macro that we’ll use to pass the right offsets to `glVertexAttribPointer()`.
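`BUFFER_OFFSET()` exists because when a VBO is bound, the last parameter of `glVertexAttribPointer()` is interpreted as a byte offset into the buffer, yet is still typed as a pointer. A standalone sketch of what the macro produces (the helper function here is just for illustration):

```c
#include <stddef.h>

#define BUFFER_OFFSET(i) ((void*)(i))

/* With interleaved (x, y, s, t) floats, positions start at byte 0 and the
 * texture coordinates start 2 floats (8 bytes) into each vertex, so the
 * macro just turns that integer byte offset into a pointer value. */
static size_t texture_coordinate_offset(void) {
    return (size_t)BUFFER_OFFSET(2 * sizeof(float));
}
```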

Let’s follow up with the implementation in buffer.c:

```#include "buffer.h"
#include "platform_gl.h"
#include <assert.h>
#include <stdlib.h>

GLuint create_vbo(const GLsizeiptr size, const GLvoid* data, const GLenum usage) {
assert(data != NULL);
GLuint vbo_object;
glGenBuffers(1, &vbo_object);
assert(vbo_object != 0);

glBindBuffer(GL_ARRAY_BUFFER, vbo_object);
glBufferData(GL_ARRAY_BUFFER, size, data, usage);
glBindBuffer(GL_ARRAY_BUFFER, 0);

return vbo_object;
}```

First, we generate a new OpenGL vertex buffer object, and then we bind to it and upload the data from `data` into the VBO. We also assert that the data is not null and that we successfully created a new vertex buffer object. Why do we assert instead of returning an error code? There are a couple of reasons for that:

1. In the context of a game, there isn’t really a reasonable course of action that we can take in the event that creating a new VBO fails. Something is going to fail to display properly, so our game experience isn’t going to be as intended. We would also never expect this to fail, unless we’re abusing the platform and trying to do too much for the target hardware.
2. Returning an error means that we now have to expand our code by handling the error and checking for the error at the other end, perhaps cascading that across several function calls. This adds a lot of maintenance burden with little gain.

My thinking on error handling here has been greatly influenced by an excellent series of posts over at the Bitsquid blog.

`assert()` is only compiled into the program in debug mode by default, so in release mode, the application will just continue to run and might end up crashing on bad data. To avoid this, when going into production, you may want to create a special `assert()` that works in release mode and does a little bit more, perhaps showing a dialog box to the user before crashing and writing out a log to a file, so that it can be sent off to the developers.
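As a sketch of what such a release-mode assert could look like (the `GAME_ASSERT` and `report_failure` names are made up for illustration and are not part of the project):

```c
#include <stdio.h>
#include <stdlib.h>

/* A sketch of an assert that stays active even when NDEBUG is defined. */
static void report_failure(const char* expression, const char* file, int line) {
    /* A real game might also write a log file or show a dialog here. */
    fprintf(stderr, "Assertion failed: %s (%s:%d)\n", expression, file, line);
    abort();
}

#define GAME_ASSERT(e) \
    do { if (!(e)) report_failure(#e, __FILE__, __LINE__); } while (0)
```

Unlike the standard `assert()`, `GAME_ASSERT(vbo != 0)` would still log and halt in a release build.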

Next, let’s create shader.h:

```#include "platform_gl.h"

GLuint compile_shader(const GLenum type, const GLchar* source, const GLint length);
GLuint link_program(const GLuint vertex_shader, const GLuint fragment_shader);
GLuint build_program(
const GLchar* vertex_shader_source, const GLint vertex_shader_source_length,
const GLchar* fragment_shader_source, const GLint fragment_shader_source_length);

/* Should be called just before using a program to draw, if validation is needed. */
GLint validate_program(const GLuint program);```

Here, we have methods to compile a shader and to link two shaders into an OpenGL shader program. We also have a helper method here for validating a program, if we want to do that for debugging reasons.

Let’s begin the implementation for shader.c:

```#include "shader.h"
#include "platform_gl.h"
#include "platform_log.h"
#include <assert.h>
#include <stdlib.h>
#include <string.h>

static const char* TAG = "shaders";

static void log_v_fixed_length(const GLchar* source, const GLint length) {
if (LOGGING_ON) {
char log_buffer[length + 1];
memcpy(log_buffer, source, length);
log_buffer[length] = '\0';

DEBUG_LOG_WRITE_V(TAG, log_buffer);
}
}

static void log_shader_info_log(GLuint shader_object_id) {
if (LOGGING_ON) {
GLint log_length;
glGetShaderiv(shader_object_id, GL_INFO_LOG_LENGTH, &log_length);
GLchar log_buffer[log_length];
glGetShaderInfoLog(shader_object_id, log_length, NULL, log_buffer);

DEBUG_LOG_WRITE_V(TAG, log_buffer);
}
}

static void log_program_info_log(GLuint program_object_id) {
if (LOGGING_ON) {
GLint log_length;
glGetProgramiv(program_object_id, GL_INFO_LOG_LENGTH, &log_length);
GLchar log_buffer[log_length];
glGetProgramInfoLog(program_object_id, log_length, NULL, log_buffer);

DEBUG_LOG_WRITE_V(TAG, log_buffer);
}
}```

We’ve added some helper functions to help us log the shader and program info logs when logging is enabled. We’ll define `LOGGING_ON` and the other logging functions in other include files, soon. Let’s continue:

```GLuint compile_shader(const GLenum type, const GLchar* source, const GLint length) {
assert(source != NULL);
GLuint shader_object_id = glCreateShader(type);
GLint compile_status;

assert(shader_object_id != 0);

glShaderSource(shader_object_id, 1, (const GLchar**)&source, &length);
glCompileShader(shader_object_id);
glGetShaderiv(shader_object_id, GL_COMPILE_STATUS, &compile_status);

if (LOGGING_ON) {
DEBUG_LOG_WRITE_D(TAG, "Results of compiling shader source:");
log_v_fixed_length(source, length);
log_shader_info_log(shader_object_id);
}

assert(compile_status != 0);

return shader_object_id;
}```

We create a new shader object, pass in the source, compile it, and if everything was successful, we then return the shader ID. Now we need a method for linking two shaders together into an OpenGL program:

```GLuint link_program(const GLuint vertex_shader, const GLuint fragment_shader) {
GLuint program_object_id = glCreateProgram();
GLint link_status;

assert(program_object_id != 0);

glAttachShader(program_object_id, vertex_shader);
glAttachShader(program_object_id, fragment_shader);
glLinkProgram(program_object_id);
glGetProgramiv(program_object_id, GL_LINK_STATUS, &link_status);

if (LOGGING_ON) {
DEBUG_LOG_WRITE_D(TAG, "Results of linking program:");
log_program_info_log(program_object_id);
}

assert(link_status != 0);

return program_object_id;
}```

To link the program, we pass in two OpenGL shader objects, one for the vertex shader and one for the fragment shader, and then we link them together. If all was successful, then we return the program object ID.

```GLuint build_program(
const GLchar* vertex_shader_source, const GLint vertex_shader_source_length,
const GLchar* fragment_shader_source, const GLint fragment_shader_source_length) {
assert(vertex_shader_source != NULL);
assert(fragment_shader_source != NULL);

GLuint vertex_shader = compile_shader(
GL_VERTEX_SHADER, vertex_shader_source, vertex_shader_source_length);
GLuint fragment_shader = compile_shader(
GL_FRAGMENT_SHADER, fragment_shader_source, fragment_shader_source_length);
return link_program(vertex_shader, fragment_shader);
}```

This helper method takes in the source for a vertex shader and a fragment shader, and returns the linked program object. Let’s add the second helper method:

```GLint validate_program(const GLuint program) {
if (LOGGING_ON) {
int validate_status;

glValidateProgram(program);
glGetProgramiv(program, GL_VALIDATE_STATUS, &validate_status);
DEBUG_LOG_PRINT_D(TAG, "Results of validating program: %d", validate_status);
log_program_info_log(program);
return validate_status;
}

return 0;
}```

We can use `validate_program()` for debugging purposes, if we want some extra info about a program during a specific moment in our rendering code.

Now we need some code to load in raw data into a texture. Let’s add the following into a new file called texture.h:

```#include "platform_gl.h"

GLuint load_texture(
const GLsizei width, const GLsizei height,
const GLenum type, const GLvoid* pixels);```

Let’s follow that up with the implementation in texture.c:

```#include "texture.h"
#include "platform_gl.h"
#include <assert.h>

GLuint load_texture(
const GLsizei width, const GLsizei height,
const GLenum type, const GLvoid* pixels) {
GLuint texture_object_id;
glGenTextures(1, &texture_object_id);
assert(texture_object_id != 0);

glBindTexture(GL_TEXTURE_2D, texture_object_id);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(
GL_TEXTURE_2D, 0, type, width, height, 0, type, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);

glBindTexture(GL_TEXTURE_2D, 0);
return texture_object_id;
}```

This is pretty straightforward and not currently customized for special cases: it just loads in the raw data in `pixels` into the texture, assuming that each component is 8-bit. It then sets up the texture for trilinear mipmapping.
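As a side note on `glGenerateMipmap()`: it builds the full chain of progressively halved images down to 1x1, so a texture gets `floor(log2(max(width, height))) + 1` levels. A quick standalone check of that count (this is standard OpenGL behavior, not project code):

```c
/* Number of mipmap levels generated for a width x height texture:
 * each level halves the larger dimension (rounding down) until 1x1. */
static int mipmap_level_count(int width, int height) {
    int levels = 1;
    int max_dimension = width > height ? width : height;
    while (max_dimension > 1) {
        max_dimension /= 2;
        levels++;
    }
    return levels;
}

/* A 512x512 texture gets 10 levels: 512, 256, 128, ..., 2, 1. */
```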

For this post, we’ll package our texture asset as a PNG file, and use libpng to decode the file into raw data. For that we’ll need to add some wrapper code around libpng so that we can decode a PNG file into raw data suitable for upload into an OpenGL texture.

Let’s create a new file called image.h, with the following contents:

```#include "platform_gl.h"

typedef struct {
const int width;
const int height;
const int size;
const GLenum gl_color_format;
const void* data;
} RawImageData;

/* Returns the decoded image data, or aborts if there's an error during decoding. */
RawImageData get_raw_image_data_from_png(const void* png_data, const int png_data_size);
void release_raw_image_data(const RawImageData* data);```

We’ll use `get_raw_image_data_from_png()` to read in the PNG data from `png_data` and return the raw data in a struct. When we no longer need to keep that raw data around, we can call `release_raw_image_data()` to release the associated resources.

Let’s start writing the implementation in image.c:

```#include "image.h"
#include "platform_log.h"
#include <assert.h>
#include <png.h>
#include <string.h>
#include <stdlib.h>

typedef struct {
const png_byte* data;
const png_size_t size;
} DataHandle;

typedef struct {
const DataHandle data;
png_size_t offset;
} ReadDataHandle;

typedef struct {
const png_uint_32 width;
const png_uint_32 height;
const int color_type;
} PngInfo;```

We’ve started off with the includes and a few structs that we’ll be using locally. Let’s continue with a few function prototypes:

```static void read_png_data_callback(
png_structp png_ptr, png_byte* png_data, png_size_t read_length);
static PngInfo read_and_update_info(const png_structp png_ptr, const png_infop info_ptr);
static DataHandle read_entire_png_image(
const png_structp png_ptr, const png_infop info_ptr, const png_uint_32 height);
static GLenum get_gl_color_format(const int png_color_format);```

We’ll be using these as local helper functions. Now we can add the implementation for `get_raw_image_data_from_png()`:

```RawImageData get_raw_image_data_from_png(const void* png_data, const int png_data_size) {
assert(png_data != NULL && png_data_size > 8);
assert(png_check_sig((void*)png_data, 8));

png_structp png_ptr = png_create_read_struct(
PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
assert(png_ptr != NULL);
png_infop info_ptr = png_create_info_struct(png_ptr);
assert(info_ptr != NULL);

ReadDataHandle png_data_handle = (ReadDataHandle) {{png_data, png_data_size}, 0};
png_set_read_fn(png_ptr, &png_data_handle, read_png_data_callback);

if (setjmp(png_jmpbuf(png_ptr))) {
CRASH("Error reading PNG file!");
}

const PngInfo png_info = read_and_update_info(png_ptr, info_ptr);
const DataHandle raw_image = read_entire_png_image(
png_ptr, info_ptr, png_info.height);

png_read_end(png_ptr, info_ptr);
png_destroy_read_struct(&png_ptr, &info_ptr, NULL);

return (RawImageData) {
png_info.width,
png_info.height,
raw_image.size,
get_gl_color_format(png_info.color_type),
raw_image.data};
}```

There’s a lot going on here, so let’s explain each part in turn:

```	assert(png_data != NULL && png_data_size > 8);
assert(png_check_sig((void*)png_data, 8));```

This checks that the PNG data is present and has a valid header.

```	png_structp png_ptr = png_create_read_struct(
PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
assert(png_ptr != NULL);
png_infop info_ptr = png_create_info_struct(png_ptr);
assert(info_ptr != NULL);```

This initializes the PNG structures that we’ll use to read in the rest of the data.

```	ReadDataHandle png_data_handle = (ReadDataHandle) {{png_data, png_data_size}, 0};
png_set_read_fn(png_ptr, &png_data_handle, read_png_data_callback);```

As the PNG data is parsed, libpng will call `read_png_data_callback()` for each part of the PNG file. Since we’re reading in the PNG file from memory, we’ll use `ReadDataHandle` to wrap this memory buffer so that we can read from it as if it were a file.

```	if (setjmp(png_jmpbuf(png_ptr))) {
CRASH("Error reading PNG file!");
}```

This is how libpng does its error handling. If something goes wrong, then `setjmp` will return true and we’ll enter the body of the if statement. We want to handle this like an assert, so we just crash the program. We’ll define the `CRASH` macro later on.
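If the setjmp/longjmp control flow is unfamiliar, here is the pattern in isolation, with no libpng involved: `setjmp` returns 0 when first called, and returns the value passed to `longjmp` when the error path fires later:

```c
#include <setjmp.h>

static jmp_buf error_jump;

/* Analogous to libpng hitting a decoding error deep inside its own code. */
static void failing_operation(void) {
    longjmp(error_jump, 1);
}

/* Returns 1 if the error path ran, 0 otherwise. */
static int run_with_error_handler(void) {
    if (setjmp(error_jump)) {
        /* This is where our code would call CRASH(). */
        return 1;
    }
    failing_operation();
    return 0; /* Never reached: longjmp jumps back to the setjmp above. */
}
```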

`	const PngInfo png_info = read_and_update_info(png_ptr, info_ptr);`

We’ll use one of our helper functions here to parse the PNG information, such as the color format, and convert the PNG into a format that we want.

```	const DataHandle raw_image = read_entire_png_image(
png_ptr, info_ptr, png_info.height);```

We’ll use another helper function here to read in and decode the PNG image data.

```	png_read_end(png_ptr, info_ptr);
png_destroy_read_struct(&png_ptr, &info_ptr, NULL);

return (RawImageData) {
png_info.width,
png_info.height,
raw_image.size,
get_gl_color_format(png_info.color_type),
raw_image.data};```

Once reading is complete, we clean up the PNG structures and then we return the data inside of a `RawImageData` struct.

Let’s define our helper methods now:

```static void read_png_data_callback(
png_structp png_ptr, png_byte* raw_data, png_size_t read_length) {
ReadDataHandle* handle = png_get_io_ptr(png_ptr);
const png_byte* png_src = handle->data.data + handle->offset;

memcpy(raw_data, png_src, read_length);
handle->offset += read_length;
}```

`read_png_data_callback()` will be called by libpng to read from the memory buffer. To read from the right place in the memory buffer, we store an offset and we increase that offset every time that `read_png_data_callback()` is called.
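The same adapter pattern works for any "treat a memory buffer like a file" reader. Here is the callback's logic in isolation, using plain C types instead of libpng's:

```c
#include <string.h>

typedef struct {
    const unsigned char* data;
    size_t size;
    size_t offset;
} MemoryReader;

/* Copies read_length bytes into out and advances the offset, just like
 * read_png_data_callback() does with its ReadDataHandle. */
static void read_from_memory(MemoryReader* reader, unsigned char* out,
                             size_t read_length) {
    memcpy(out, reader->data + reader->offset, read_length);
    reader->offset += read_length;
}

/* Reads "ABCDEF" in two 2-byte chunks; returns the first byte of the second
 * chunk, which is 'C' if the offset advanced correctly after the first read. */
static unsigned char second_chunk_first_byte(void) {
    const unsigned char data[] = "ABCDEF";
    MemoryReader reader = {data, sizeof(data), 0};
    unsigned char chunk[2];
    read_from_memory(&reader, chunk, 2);
    read_from_memory(&reader, chunk, 2);
    return chunk[0];
}
```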

```static PngInfo read_and_update_info(const png_structp png_ptr, const png_infop info_ptr)
{
png_uint_32 width, height;
int bit_depth, color_type;

png_read_info(png_ptr, info_ptr);
png_get_IHDR(
png_ptr, info_ptr, &width, &height, &bit_depth, &color_type, NULL, NULL, NULL);

// Convert transparency to full alpha
if (png_get_valid(png_ptr, info_ptr, PNG_INFO_tRNS))
png_set_tRNS_to_alpha(png_ptr);

// Convert grayscale, if needed.
if (color_type == PNG_COLOR_TYPE_GRAY && bit_depth < 8)
png_set_expand_gray_1_2_4_to_8(png_ptr);

// Convert paletted images, if needed.
if (color_type == PNG_COLOR_TYPE_PALETTE)
png_set_palette_to_rgb(png_ptr);

// Add alpha channel, if there is none.
// Rationale: GL_RGBA is faster than GL_RGB on many GPUs.
if (color_type == PNG_COLOR_TYPE_PALETTE || color_type == PNG_COLOR_TYPE_RGB)
png_set_add_alpha(png_ptr, 0xFF, PNG_FILLER_AFTER);

// Ensure 8-bit packing
if (bit_depth < 8)
png_set_packing(png_ptr);
else if (bit_depth == 16)
png_set_scale_16(png_ptr);

png_read_update_info(png_ptr, info_ptr);

// Read the new color type after the updates have been applied.
color_type = png_get_color_type(png_ptr, info_ptr);

return (PngInfo) {width, height, color_type};
}```

This helper function reads in the PNG data, and then it asks libpng to perform several transformations based on the PNG type:

• Transparency information is converted into a full alpha channel.
• Grayscale images are converted to 8-bit.
• Paletted images are converted to full RGB.
• RGB images get an alpha channel added, if none is present.
• Color channels are converted to 8-bit: expanded if they are smaller than 8-bit, and scaled down if they are 16-bit.

The PNG is then updated with the new transformations and the new color type is stored into `color_type`.

For the next step, we’ll add a helper function to decode the PNG image data into raw image data:

```static DataHandle read_entire_png_image(
const png_structp png_ptr,
const png_infop info_ptr,
const png_uint_32 height)
{
const png_size_t row_size = png_get_rowbytes(png_ptr, info_ptr);
const int data_length = row_size * height;
assert(row_size > 0);

png_byte* raw_image = malloc(data_length);
assert(raw_image != NULL);

png_byte* row_ptrs[height];

png_uint_32 i;
for (i = 0; i < height; i++) {
row_ptrs[i] = raw_image + i * row_size;
}

png_read_image(png_ptr, &row_ptrs[0]);

return (DataHandle) {raw_image, data_length};
}```

First, we allocate a block of memory large enough to hold the decoded image data. Since libpng wants to decode things line by line, we also need to setup an array on the stack that contains a set of pointers into this image data, one pointer for each line. We can then call `png_read_image()` to decode all of the PNG data and then we return that as a `DataHandle`.
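To see why the row pointers work, here is the same setup in isolation: every "row" is just a pointer into one flat allocation, so filling the rows through the pointers fills the contiguous image:

```c
#include <stdlib.h>

/* Writes row * 100 + col into a flat buffer through per-row pointers
 * (mimicking the row_ptrs setup), then returns the byte at (row, col)
 * read back directly from the flat buffer. */
static int flat_buffer_at(int height, int row_size, int row, int col) {
    unsigned char* image = malloc((size_t)(height * row_size));
    unsigned char* row_ptrs[16]; /* plenty for this sketch */
    int i, j;

    for (i = 0; i < height; i++)
        row_ptrs[i] = image + i * row_size;

    /* Writing through the row pointers fills the single flat allocation. */
    for (i = 0; i < height; i++)
        for (j = 0; j < row_size; j++)
            row_ptrs[i][j] = (unsigned char)(i * 100 + j);

    int result = image[row * row_size + col];
    free(image);
    return result;
}
```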

Let’s add the last helper method:

```static GLenum get_gl_color_format(const int png_color_format) {
assert(png_color_format == PNG_COLOR_TYPE_GRAY
|| png_color_format == PNG_COLOR_TYPE_RGB_ALPHA
|| png_color_format == PNG_COLOR_TYPE_GRAY_ALPHA);

switch (png_color_format) {
case PNG_COLOR_TYPE_GRAY:
return GL_LUMINANCE;
case PNG_COLOR_TYPE_RGB_ALPHA:
return GL_RGBA;
case PNG_COLOR_TYPE_GRAY_ALPHA:
return GL_LUMINANCE_ALPHA;
}

return 0;
}```

This function will read in the PNG color format and return the matching OpenGL color format. We expect that after the transformations that we did, the PNG color format will be either `PNG_COLOR_TYPE_GRAY`, `PNG_COLOR_TYPE_GRAY_ALPHA`, or `PNG_COLOR_TYPE_RGB_ALPHA`, so we assert against those types.

```void release_raw_image_data(const RawImageData* data) {
assert(data != NULL);
free((void*)data->data);
}```

We’ll call this when we’re done with the raw data and can return the associated memory to the heap.

### The benefits of using libpng versus platform-specific code

At this point, you might be asking why we didn’t simply use what each platform offers us, such as `BitmapFactory.decode???` on Android, where `???` is one of the decode methods. Using platform-specific code means that we would have to duplicate the code for each platform, so on Android we would wrap some code around `BitmapFactory`, and on the other platforms we would do something else. This might be a good idea if the platform-specific code was better at the job; however, in personal testing on the Nexus 7, using `BitmapFactory` actually seems to be a lot slower than just using libpng directly.

Here were the timings I observed for loading a single PNG file from the assets folder and uploading it into an OpenGL texture:

```iPhone 5, libpng:       ~28ms
Nexus 7, libpng:        ~35ms
Nexus 7, BitmapFactory: ~93ms
```

To reduce possible sources of slowdown, I avoided JNI and had the Java code upload the data directly into a texture, and return the texture object ID to C. I also used `inScaled = false` and placed the image in the assets folder to avoid extra scaling; if someone has extra insight into this issue, I would definitely love to hear it! I can only surmise that there must be a lot of extra stuff going on behind the scenes, or that the overhead of doing this from Java using the Dalvik VM is just so great that it results in that much of a slowdown. The Nexus 7 is a powerful Android device, so these timings are going to be much worse on slower Android devices. Since libpng is faster than the platform-specific alternative, at least on Android, and since maintaining one set of code is easier than maintaining separate code for each platform, I’ve decided to just use libpng on all platforms for PNG image decoding.

Just for fun, here are the emscripten numbers on a MacBook Air with a 1.7 GHz Intel Core i5 and 4GB 1333 MHz DDR3 RAM, loading an uncompressed HTML file with embedded resources from the local filesystem:

```Chrome 28, first time: ~318ms
Firefox 22:             ~27ms
```

Interestingly enough, the code ran faster when it was compiled without the closure compiler and LLVM LTO.

#### Wrapping up the rest of the changes to the core folder

Let’s wrap up the rest of the changes to the core folder by adding the following files:

config.h:

`#define LOGGING_ON 1`

We’ll use this to control whether logging should be turned on or off.

macros.h:

`#define UNUSED(x) (void)(x)`

This will help us suppress compiler warnings related to unused parameters, which is useful for JNI methods which get called by Java.
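For instance, a callback that has to match a fixed signature but doesn’t need all of its parameters can silence unused-parameter warnings like this (the function here is made up for illustration):

```c
#define UNUSED(x) (void)(x)

/* A typical JNI-style callback that ignores its env and obj parameters;
 * UNUSED() evaluates them to void, which suppresses -Wunused-parameter
 * without changing behavior. */
static int on_event(void* env, void* obj, int value) {
    UNUSED(env);
    UNUSED(obj);
    return value * 2;
}
```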

asset_utils.h

```#include "platform_gl.h"

GLuint load_png_asset_into_texture(const char* relative_path);
GLuint build_program_from_assets(
const char* vertex_shader_path, const char* fragment_shader_path);```

We’ll use these helper methods in game.c to make it easier to load in the texture and shaders.

asset_utils.c

```#include "asset_utils.h"
#include "image.h"
#include "platform_asset_utils.h"
#include "texture.h"
#include <assert.h>
#include <stdlib.h>

GLuint load_png_asset_into_texture(const char* relative_path) {
assert(relative_path != NULL);

const FileData png_file = get_asset_data(relative_path);
const RawImageData raw_image_data =
get_raw_image_data_from_png(png_file.data, png_file.data_length);
const GLuint texture_object_id = load_texture(
raw_image_data.width, raw_image_data.height,
raw_image_data.gl_color_format, raw_image_data.data);

release_raw_image_data(&raw_image_data);
release_asset_data(&png_file);

return texture_object_id;
}

GLuint build_program_from_assets(
const char* vertex_shader_path, const char* fragment_shader_path) {
assert(vertex_shader_path != NULL);
assert(fragment_shader_path != NULL);

const FileData vertex_shader_source = get_asset_data(vertex_shader_path);
const FileData fragment_shader_source = get_asset_data(fragment_shader_path);
const GLuint program_object_id = build_program(
vertex_shader_source.data, vertex_shader_source.data_length,
fragment_shader_source.data, fragment_shader_source.data_length);

release_asset_data(&vertex_shader_source);
release_asset_data(&fragment_shader_source);

return program_object_id;
}```

This is the implementation for asset_utils.h. We’ll use `load_png_asset_into_texture()` to load a PNG file from the assets folder into an OpenGL texture, and we’ll use `build_program_from_assets()` to load in two shaders from the assets folder and compile and link them into an OpenGL shader program.

#### Updating game.c

We’ll need to update game.c to use all of the new code that we’ve added. Delete everything that’s there and replace it with the following start to our new code:

```#include "game.h"
#include "asset_utils.h"
#include "buffer.h"
#include "image.h"
#include "platform_gl.h"
#include "platform_asset_utils.h"
#include "texture.h"

static GLuint texture;
static GLuint buffer;
static GLuint program;

static GLint a_position_location;
static GLint a_texture_coordinates_location;
static GLint u_texture_unit_location;

// position X, Y, texture S, T
static const float rect[] = {-1.0f, -1.0f, 0.0f, 0.0f,
-1.0f,  1.0f, 0.0f, 1.0f,
1.0f, -1.0f, 1.0f, 0.0f,
1.0f,  1.0f, 1.0f, 1.0f};```

We’ve added our includes, a few local variables to hold the OpenGL objects and shader attribute and uniform locations, and an array of floats which contains a set of positions and texture coordinates for a rectangle that will completely fill the screen. We’ll use that to draw our texture onto the screen.

Let’s continue the code:

```void on_surface_created() {
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
}

void on_surface_changed() {
texture = load_png_asset_into_texture("textures/air_hockey_surface.png");
buffer = create_vbo(sizeof(rect), rect, GL_STATIC_DRAW);
program = build_program_from_assets("shaders/shader.vsh", "shaders/shader.fsh");

a_position_location = glGetAttribLocation(program, "a_Position");
a_texture_coordinates_location =
glGetAttribLocation(program, "a_TextureCoordinates");
u_texture_unit_location = glGetUniformLocation(program, "u_TextureUnit");
}```

In `on_surface_created()`, we set the clear color just as before. In `on_surface_changed()`, we load in a texture from textures/air_hockey_surface.png, we create a VBO from the data stored in `rect`, and then we build an OpenGL shader program from the shaders located at shaders/shader.vsh and shaders/shader.fsh. Once we have the program loaded, we use it to grab the attribute and uniform locations out of the shader.

We haven’t yet defined the code to load in the actual assets from the file system, since a good part of that is platform-specific. When we do, we’ll take care to set things up so that these relative paths “just work”.

Let’s complete game.c:

```void on_draw_frame() {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glUseProgram(program);

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(u_texture_unit_location, 0);

glBindBuffer(GL_ARRAY_BUFFER, buffer);
glVertexAttribPointer(a_position_location, 2, GL_FLOAT, GL_FALSE,
4 * sizeof(GLfloat), BUFFER_OFFSET(0));
glVertexAttribPointer(a_texture_coordinates_location, 2, GL_FLOAT, GL_FALSE,
4 * sizeof(GLfloat), BUFFER_OFFSET(2 * sizeof(GLfloat)));
glEnableVertexAttribArray(a_position_location);
glEnableVertexAttribArray(a_texture_coordinates_location);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

glBindBuffer(GL_ARRAY_BUFFER, 0);
}```

In the draw loop, we clear the screen, set the shader program, bind the texture and VBO, setup the attributes using `glVertexAttribPointer()`, and then draw to the screen with `glDrawArrays()`. If you’ve looked at the Java tutorials before, one thing you’ll notice is that it’s a bit easier to use `glVertexAttribPointer()` from C than it is from Java. For one, if we were using client-side arrays, we could just pass the array without worrying about any `ByteBuffer`s, and for two, we can use the `sizeof` operator to get the size of a datatype in bytes, so no need to hardcode that.
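To make the stride and offset arithmetic concrete: each vertex holds four floats (x, y, s, t), so the stride is 16 bytes, positions begin at byte 0, and texture coordinates begin two floats in. A standalone check of those numbers (names here are just for illustration):

```c
#include <stddef.h>

/* Mirrors the interleaved layout used in game.c: x, y, s, t per vertex. */
enum { POSITION_COMPONENT_COUNT = 2,
       TEXTURE_COMPONENT_COUNT = 2,
       TOTAL_COMPONENT_COUNT =
           POSITION_COMPONENT_COUNT + TEXTURE_COMPONENT_COUNT };

/* The stride passed to both glVertexAttribPointer() calls: 16 bytes. */
static size_t stride_in_bytes(void) {
    return TOTAL_COMPONENT_COUNT * sizeof(float);
}

/* The byte offset of the texture coordinates within each vertex: 8 bytes. */
static size_t texture_offset_in_bytes(void) {
    return POSITION_COMPONENT_COUNT * sizeof(float);
}
```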

This wraps up everything for the core folder, so in the next few steps, we’re going to add in the necessary platform wrappers to get this working on Android.

### Adding the common platform code

These new files should go in /airhockey/src/platform/common:

platform_file_utils.h

```#pragma once
typedef struct {
const long data_length;
const void* data;
const void* file_handle;
} FileData;

FileData get_file_data(const char* path);
void release_file_data(const FileData* file_data);```

We’ll use this to read data from the file system on iOS and emscripten. We’ll also use `FileData` for our Android asset reading code. We won’t define the implementation of the functions for now since we won’t need them for Android.

platform_asset_utils.h

```#include "platform_file_utils.h"

FileData get_asset_data(const char* relative_path);
void release_asset_data(const FileData* file_data);```

We’ll use this to read in assets. For Android this will be specialized code since it will use the `AssetManager` class to read files straight from the APK file.

platform_log.h

```#include "platform_macros.h"
#include "config.h"

void _debug_log_v(const char* tag, const char* text, ...) PRINTF_ATTRIBUTE(2, 3);
void _debug_log_d(const char* tag, const char* text, ...) PRINTF_ATTRIBUTE(2, 3);
void _debug_log_w(const char* tag, const char* text, ...) PRINTF_ATTRIBUTE(2, 3);
void _debug_log_e(const char* tag, const char* text, ...) PRINTF_ATTRIBUTE(2, 3);

#define DEBUG_LOG_PRINT_V(tag, fmt, ...) do { if (LOGGING_ON) _debug_log_v(tag, "%s:%d:%s(): " fmt, __FILE__, __LINE__, __func__, __VA_ARGS__); } while (0)
#define DEBUG_LOG_PRINT_D(tag, fmt, ...) do { if (LOGGING_ON) _debug_log_d(tag, "%s:%d:%s(): " fmt, __FILE__, __LINE__, __func__, __VA_ARGS__); } while (0)
#define DEBUG_LOG_PRINT_W(tag, fmt, ...) do { if (LOGGING_ON) _debug_log_w(tag, "%s:%d:%s(): " fmt, __FILE__, __LINE__, __func__, __VA_ARGS__); } while (0)
#define DEBUG_LOG_PRINT_E(tag, fmt, ...) do { if (LOGGING_ON) _debug_log_e(tag, "%s:%d:%s(): " fmt, __FILE__, __LINE__, __func__, __VA_ARGS__); } while (0)

#define DEBUG_LOG_WRITE_V(tag, text) DEBUG_LOG_PRINT_V(tag, "%s", text)
#define DEBUG_LOG_WRITE_D(tag, text) DEBUG_LOG_PRINT_D(tag, "%s", text)
#define DEBUG_LOG_WRITE_W(tag, text) DEBUG_LOG_PRINT_W(tag, "%s", text)
#define DEBUG_LOG_WRITE_E(tag, text) DEBUG_LOG_PRINT_E(tag, "%s", text)

#define CRASH(e) DEBUG_LOG_WRITE_E("Assert", #e); __builtin_trap()```

This contains a bunch of macros to help us do logging from our core game code. `CRASH()` is a special macro that will log the message passed to it, then call `__builtin_trap()` to stop execution. We used this macro above when we were loading in the PNG file.
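To see how the pieces fit together, here’s a desktop sketch of the same pattern. The Android logger is replaced by `vsnprintf()` into a buffer (a stand-in so the result is easy to inspect; the real `_debug_log_d()` forwards to `__android_log_vprint()` instead):

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

#define LOGGING_ON 1

/* Captures the last formatted line so this example is verifiable. */
static char last_log_line[512];

static void _debug_log_d(const char* tag, const char* fmt, ...) {
	char body[256];
	va_list args;
	va_start(args, fmt);
	vsnprintf(body, sizeof(body), fmt, args);
	va_end(args);
	snprintf(last_log_line, sizeof(last_log_line), "D/%s: %s", tag, body);
}

/* Same shape as the macro in platform_log.h: prepends file, line, and
   function, and compiles away entirely when LOGGING_ON is 0. */
#define DEBUG_LOG_PRINT_D(tag, fmt, ...) \
	do { if (LOGGING_ON) _debug_log_d(tag, "%s:%d:%s(): " fmt, \
		__FILE__, __LINE__, __func__, __VA_ARGS__); } while (0)
```

The `do { … } while (0)` wrapper makes the macro behave like a single statement, so it stays safe inside an unbraced `if`/`else`.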

platform_macros.h

```#if defined(__GNUC__)
#define PRINTF_ATTRIBUTE(format_pos, arg_pos) __attribute__((format(printf, format_pos, arg_pos)))
#else
#define PRINTF_ATTRIBUTE(format_pos, arg_pos)
#endif```

This macro enables compiler format-string checking, so that the compiler can validate the arguments we pass to our log functions just as it does for `printf()`.

### Updating the Android code

For the Android target, we have a bit of cleanup to do first. Let’s open up the Android project in Eclipse, get rid of GameLibJNIWrapper.java and update RendererWrapper.java as follows:

```package com.learnopengles.airhockey;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import android.content.Context;
import android.opengl.GLSurfaceView.Renderer;

import com.learnopengles.airhockey.platform.PlatformFileUtils;

public class RendererWrapper implements Renderer {
static {
System.loadLibrary("game");
}

private final Context context;

public RendererWrapper(Context context) {
this.context = context;
}

@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
PlatformFileUtils.init_asset_manager(context.getAssets());
on_surface_created();
}

@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
on_surface_changed(width, height);
}

@Override
public void onDrawFrame(GL10 gl) {
on_draw_frame();
}

private static native void on_surface_created();

private static native void on_surface_changed(int width, int height);

private static native void on_draw_frame();
}```

We’ve moved the native methods into `RendererWrapper` itself. The new `RendererWrapper` takes a `Context` in its constructor, so give it one by updating the constructor call in MainActivity.java as follows:

`glSurfaceView.setRenderer(new RendererWrapper(this));`

For Android, we’ll be using the `AssetManager` to read in assets that are compiled directly into the APK file. We’ll need a way to pass a reference to the `AssetManager` to our C code, so let’s create a new class in a new package called `com.learnopengles.airhockey.platform` called `PlatformFileUtils`, and add the following code:

```package com.learnopengles.airhockey.platform;

import android.content.res.AssetManager;

public class PlatformFileUtils {
public static native void init_asset_manager(AssetManager assetManager);
}```

We are calling `init_asset_manager()` from `RendererWrapper.onSurfaceCreated()`, which you can see just a few lines above.

#### Updating the JNI code

We’ll also need to add platform-specific JNI code to the jni folder in the android folder. Let’s start off with platform_asset_utils.c:

```#include "platform_asset_utils.h"
#include "macros.h"
#include "platform_log.h"
#include <android/asset_manager_jni.h>
#include <assert.h>

static AAssetManager* asset_manager;

JNIEXPORT void JNICALL Java_com_learnopengles_airhockey_platform_PlatformFileUtils_init_1asset_1manager(
JNIEnv * env, jclass jclazz, jobject java_asset_manager) {
UNUSED(jclazz);
asset_manager = AAssetManager_fromJava(env, java_asset_manager);
}

FileData get_asset_data(const char* relative_path) {
assert(relative_path != NULL);
AAsset* asset =
AAssetManager_open(asset_manager, relative_path, AASSET_MODE_STREAMING);
assert(asset != NULL);

return (FileData) { AAsset_getLength(asset), AAsset_getBuffer(asset), asset };
}

void release_asset_data(const FileData* file_data) {
assert(file_data != NULL);
assert(file_data->file_handle != NULL);
AAsset_close((AAsset*)file_data->file_handle);
}```

We use `get_asset_data()` to wrap Android’s native asset manager and return the data to the calling code, and we release the data when `release_asset_data()` is called. The advantage of doing things like this is that the asset manager can choose to optimize data loading by mapping the file into memory, and we can return that mapped data directly to the caller.

platform_log.c

```#include "platform_log.h"
#include <android/log.h>
#include <stdio.h>
#include <stdlib.h>

#define ANDROID_LOG_VPRINT(priority)	\
va_list arg_ptr; \
va_start(arg_ptr, fmt); \
__android_log_vprint(priority, tag, fmt, arg_ptr); \
va_end(arg_ptr);

void _debug_log_v(const char *tag, const char *fmt, ...) {
ANDROID_LOG_VPRINT(ANDROID_LOG_VERBOSE);
}

void _debug_log_d(const char *tag, const char *fmt, ...) {
ANDROID_LOG_VPRINT(ANDROID_LOG_DEBUG);
}

void _debug_log_w(const char *tag, const char *fmt, ...) {
ANDROID_LOG_VPRINT(ANDROID_LOG_WARN);
}

void _debug_log_e(const char *tag, const char *fmt, ...) {
ANDROID_LOG_VPRINT(ANDROID_LOG_ERROR);
}```

This code wraps Android’s native logging facilities.

Finally, let’s rename jni.c to renderer_wrapper.c and update it to the following:

```#include "game.h"
#include "macros.h"
#include <jni.h>

/* These functions are called from Java. */

JNIEXPORT void JNICALL Java_com_learnopengles_airhockey_RendererWrapper_on_1surface_1created(
JNIEnv * env, jclass cls) {
UNUSED(env);
UNUSED(cls);
on_surface_created();
}

JNIEXPORT void JNICALL Java_com_learnopengles_airhockey_RendererWrapper_on_1surface_1changed(
JNIEnv * env, jclass cls, jint width, jint height) {
UNUSED(env);
UNUSED(cls);
UNUSED(width);
UNUSED(height);
on_surface_changed();
}

JNIEXPORT void JNICALL Java_com_learnopengles_airhockey_RendererWrapper_on_1draw_1frame(
JNIEnv* env, jclass cls) {
UNUSED(env);
UNUSED(cls);
on_draw_frame();
}```

Nothing has really changed here; we just use the `UNUSED()` macro (defined earlier in macros.h in the core folder) to suppress some unnecessary compiler warnings.

### Updating the NDK build files

We’re almost ready to build & test, just a few things left to be done. Download libpng 1.6.2 from http://www.libpng.org/pub/png/libpng.html and place it in /src/3rdparty/libpng. To configure libpng, copy pnglibconf.h.prebuilt from libpng/scripts/ to libpng/ and remove the .prebuilt extension.

To compile libpng with the NDK, let’s add a build script called Android.mk to the libpng folder, as follows:

```LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

LOCAL_MODULE := libpng
LOCAL_SRC_FILES = png.c \
pngerror.c \
pngget.c \
pngmem.c \
pngrio.c \
pngrtran.c \
pngrutil.c \
pngset.c \
pngtrans.c \
pngwio.c \
pngwrite.c \
pngwtran.c \
pngwutil.c
LOCAL_EXPORT_C_INCLUDES := $(LOCAL_PATH)
LOCAL_EXPORT_LDLIBS := -lz

include $(BUILD_STATIC_LIBRARY)
```

This build script will tell the NDK tools to build a static library called libpng that is linked against zlib, which is built into Android. It also sets up the right variables so that we can easily import this library into our own projects, and we won’t even have to do anything special because the right includes and libs are already exported.

Let’s also update the Android.mk file in our jni folder:

```LOCAL_PATH := $(call my-dir)
PROJECT_ROOT_PATH := $(LOCAL_PATH)/../../../
CORE_RELATIVE_PATH := ../../../core/

include $(CLEAR_VARS)

LOCAL_MODULE    := game
LOCAL_CFLAGS    := -Wall -Wextra
LOCAL_SRC_FILES := platform_asset_utils.c \
platform_log.c \
renderer_wrapper.c \
$(CORE_RELATIVE_PATH)/asset_utils.c \
$(CORE_RELATIVE_PATH)/buffer.c \
$(CORE_RELATIVE_PATH)/game.c \
$(CORE_RELATIVE_PATH)/image.c \
$(CORE_RELATIVE_PATH)/texture.c

LOCAL_C_INCLUDES := $(PROJECT_ROOT_PATH)/platform/common/
LOCAL_C_INCLUDES += $(PROJECT_ROOT_PATH)/core/
LOCAL_STATIC_LIBRARIES := libpng
LOCAL_LDLIBS := -lGLESv2 -llog -landroid

include $(BUILD_SHARED_LIBRARY)

$(call import-module,libpng)```

Our new build script links in the new files that we’ve created in core, and it also imports libpng from the 3rdparty folder and builds it as a static library that is then linked into our Android application.

Finally, we need to link the shared assets folder into the Eclipse project so that our texture assets get packaged into the APK:

1. Delete the existing assets folder from the project.
2. Right-click the project and select Properties. In the window that appears, select Resource->Linked Resources and click New….
3. Enter ‘ASSETS_LOC’ as the name, and ‘${PROJECT_LOC}/../../../assets’ as the location. Once that’s done, click OK until the Properties window is closed.
4. Right-click the project again and select New->Folder, enter ‘assets’ as the name, select Advanced, select Link to alternate location (Linked Folder), select Variables…, select ASSETS_LOC, and select OK, then Finish.

You should now have a new assets folder that is linked to the assets folder that we created in the airhockey root. More information can be found on Stack Overflow: How to link assets/www folder in Eclipse / Phonegap / Android project?

### Running the app

We should be able to check out the new code now. If you run the app on your Android emulator or device, it should look similar to the following image:

The texture looks a bit stretched/squashed, because we are currently asking OpenGL to fill the screen with that texture. With a basic framework in place, we can start adding some more detail in future lessons and start turning this into an actual game.

### Debugging NDK code

While developing this project, I had to hook up a debugger as something was going bad in the PNG loading code, and I just wasn’t sure what. It turns out that I had confused a `png_bytep*` with a `png_byte*` — the ‘p’ in the first one means that it’s already a pointer, so I didn’t need to add another star. I had some issues using the debugger at first, so here are some tips that might help you out if you want to hook up the debugger:

1. Your project absolutely cannot have any spaces in its path. Otherwise, the debugger will inexplicably fail to connect.
2. The native code needs to be built with NDK_DEBUG=1; see “Debugging native applications” on this page: Using the NDK plugin.
3. Android will not wait for gdb to connect before executing the code. Add SystemClock.sleep(10000); to RendererWrapper’s onSurfaceCreated() method to add a sufficient delay to hit your breakpoints.

Once that’s done, you can start debugging from Eclipse by right-clicking the project and selecting Debug As->Android Native Application.

### Exploring further

The full source code for this lesson can be found at the GitHub project. For a “friendlier” introduction to OpenGL ES 2 that is focused on Java and Android, see Android Lesson One: Getting Started or OpenGL ES 2 for Android: A Quick-Start Guide.

What could we do to further streamline the code? If we were using C++, we could take advantage of destructors to create, for example, a FileData that cleans itself up when it goes out of scope. I’d also like to make the structs private somehow, as their internals don’t really need to be exposed to clients. What else would you do?

In the next two posts, we’ll look at adding support for iOS and emscripten. Now that we’ve built up this base, it actually won’t take too much work!

## Calling OpenGL from C on Android, Using the NDK

For this first post in the Developing a Simple Game of Air Hockey Using C++ and OpenGL ES 2 for Android, iOS, and the Web series, we’ll create a simple Android program that initializes OpenGL, then renders simple frames from native code.

### Prerequisites

• The Android SDK & NDK installed, along with a suitable IDE.
• An emulator or a device supporting OpenGL ES 2.0.

We’ll be using Eclipse in this lesson.

To prepare and test the code for this article, I used revision 22.0.1 of the ADT plugin and SDK tools, and revision 17 of the platform and build tools, along with revision 8e of the NDK and Eclipse Juno Service Pack 2.

### Getting started

The first thing to do is create a new Android project in Eclipse, with support for the NDK. You can follow along all of the code at the GitHub project.

Before creating the new project, create a new folder called airhockey, and then create a new Git repository in that folder. Git is a source version control system that will help you keep track of changes to the source and to roll back changes if anything goes wrong. To learn more about how to use Git, see the Git documentation.

To create a new project, select File->New->Android Application Project, and then create a new project called ‘AirHockey’, with the application name set to ‘Air Hockey’ and the package name set to ‘com.learnopengles.airhockey’. Leaving the rest as defaults or filling out as you prefer, save this new project in a new folder called android, inside of the airhockey folder that we created in the previous step.

Once the project has been created, right-click on the project in the Package Explorer, select Android Tools from the drop-down menu, then select Add Native Support…. When asked for the Library Name, enter ‘game’ and hit Finish, so that the library will be called libgame.so. This will create a new folder called jni in the project tree.

### Initializing OpenGL

With our project created, we can now edit the default activity and configure it to initialize OpenGL. We’ll first add two member variables to the top of our activity class:

```	private GLSurfaceView glSurfaceView;
private boolean rendererSet;
```

Now we can set the body of `onCreate()` as follows:

```	@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);

ActivityManager activityManager
= (ActivityManager) getSystemService(Context.ACTIVITY_SERVICE);
ConfigurationInfo configurationInfo = activityManager.getDeviceConfigurationInfo();

final boolean supportsEs2 =
configurationInfo.reqGlEsVersion >= 0x20000 || isProbablyEmulator();

if (supportsEs2) {
glSurfaceView = new GLSurfaceView(this);

if (isProbablyEmulator()) {
// Avoids crashes on startup with some emulator images.
glSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
}

glSurfaceView.setEGLContextClientVersion(2);
glSurfaceView.setRenderer(new RendererWrapper());
rendererSet = true;
setContentView(glSurfaceView);
} else {
// Should never be seen in production, since the manifest filters
// unsupported devices.
Toast.makeText(this, "This device does not support OpenGL ES 2.0.",
Toast.LENGTH_LONG).show();
return;
}
}
```

First we check if the device supports OpenGL ES 2.0, and then if it does, we initialize a new `GLSurfaceView` and configure it to use OpenGL ES 2.0.

The check for `configurationInfo.reqGlEsVersion >= 0x20000` doesn’t work on the emulator, so we also call `isProbablyEmulator()` to see if we’re running on an emulator. Let’s define that method as follows:

```	private boolean isProbablyEmulator() {
return Build.VERSION.SDK_INT >= Build.VERSION_CODES.ICE_CREAM_SANDWICH_MR1
&& (Build.FINGERPRINT.startsWith("generic")
|| Build.FINGERPRINT.startsWith("unknown")
|| Build.MODEL.contains("Emulator")
|| Build.MODEL.contains("Android SDK built for x86"));
}
```

OpenGL ES 2.0 will only work in the emulator if it’s been configured to use the host GPU. For more info, read Android Emulator Now Supports Native OpenGL ES2.0!

Let’s complete the activity by adding the following methods:

```	@Override
protected void onPause() {
super.onPause();

if (rendererSet) {
glSurfaceView.onPause();
}
}

@Override
protected void onResume() {
super.onResume();

if (rendererSet) {
glSurfaceView.onResume();
}
}
```

We need to handle the Android lifecycle, so we also pause & resume the `GLSurfaceView` as needed. We only do this if we’ve also called `glSurfaceView.setRenderer()`; otherwise, calling these methods will cause the application to crash.

For a more detailed introduction to OpenGL ES 2, see Android Lesson One: Getting Started or OpenGL ES 2 for Android: A Quick-Start Guide.

Create a new class called `RendererWrapper`, and add the following code:

```public class RendererWrapper implements Renderer {
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
glClearColor(0.0f, 0.0f, 1.0f, 0.0f);
}

@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
// No-op
}

@Override
public void onDrawFrame(GL10 gl) {
glClear(GL_COLOR_BUFFER_BIT);
}
}
```

This simple renderer will set the clear color to blue and clear the screen on every frame. Later on, we’ll change these methods to call into native code. To call methods like `glClearColor()` without prefixing them with `GLES20`, add `import static android.opengl.GLES20.*;` to the top of the class file, then select Source->Organize Imports.

If you have any issues in getting the code to compile, ensure that you’ve organized all imports, and that you’ve included the following imports in `RendererWrapper`:

`import javax.microedition.khronos.egl.EGLConfig;`
`import javax.microedition.khronos.opengles.GL10;`

`import android.opengl.GLSurfaceView.Renderer;`

### Updating the manifest to exclude unsupported devices

We should also update the manifest to make sure that we exclude devices that don’t support OpenGL ES 2.0. Add the following somewhere inside AndroidManifest.xml:

```    <uses-feature
android:glEsVersion="0x00020000"
android:required="true" />
```

Since OpenGL ES 2.0 is only fully supported from Android Gingerbread 2.3.3 (API 10), replace any existing `<uses-sdk />` tag with the following:

```    <uses-sdk
android:minSdkVersion="10"
android:targetSdkVersion="17" />
```

If we run the app now, we should see a blue screen as follows:

We’ve verified that things work from Java, but what we really want to do is use OpenGL from native code! In the next few steps, we’ll move the OpenGL code into a set of C files and set up an NDK build for these files.

We’ll be sharing this native code with our future projects for iOS and the web, so let’s create a folder called common located one level above the Android project. What this means is that in your airhockey folder, you should have one folder called android, containing the Android project, and a second folder called common which will contain the common code.

Linking a relative folder that lies outside of the project’s base folder is unfortunately not the easiest thing to do in Eclipse. To accomplish this, we’ll have to follow these steps:

1. Right-click the project and select Properties. In the window that appears, select Resource->Linked Resources and click New….
2. Enter ‘COMMON_SRC_LOC’ as the name, and ‘${PROJECT_LOC}\..\common’ as the location. Once that’s done, click OK until the Properties window is closed.
3. Right-click the project again and select Build Path->Link Source…, select Variables…, select COMMON_SRC_LOC, and select OK. Enter ‘common’ as the folder name and select Finish, then close the Properties window.

You should now see a new folder in your project called common, linked to the folder that we created.

Let’s create two new files in the common folder, game.c and game.h. You can create these files by right-clicking on the folder and selecting New->File. Add the following to game.h:

```void on_surface_created();
void on_surface_changed();
void on_draw_frame();
```

In C, a .h file is known as a header file and can be considered an interface for a given .c source file. This header file declares three functions that we’ll be calling from Java.

Let’s add the following implementation to game.c:

```#include "game.h"
#include "glwrapper.h"

void on_surface_created() {
glClearColor(1.0f, 0.0f, 0.0f, 0.0f);
}

void on_surface_changed() {
// No-op
}

void on_draw_frame() {
glClear(GL_COLOR_BUFFER_BIT);
}
```

This code will set the clear color to red, and will clear the screen every time `on_draw_frame()` is called. We’ll use a special header file called glwrapper.h to wrap the platform-specific OpenGL libraries, as they are often located at a different place for each platform.

### Adding platform-specific code and JNI code

To use this code, we still need to add two things: a definition for glwrapper.h, and some JNI glue code so that we can call our C code from Java. JNI stands for Java Native Interface, and it’s how C and Java can talk to each other on Android.

Inside your project, create a new file called glwrapper.h in the jni folder, with the following contents:

```#include <GLES2/gl2.h>
```

That wraps Android’s OpenGL headers. To create the JNI glue, we’ll first need to create a Java class that exposes the native interface that we want. To do this, let’s create a new class called `GameLibJNIWrapper`, with the following code:

```public class GameLibJNIWrapper {
static {
System.loadLibrary("game");
}

public static native void on_surface_created();

public static native void on_surface_changed(int width, int height);

public static native void on_draw_frame();
}
```

This class will load the native library called libgame.so, which is what we’ll be calling our native library later on when we create the build scripts for it. To create the matching C file for this class, build the project, open up a command prompt, change to the bin/classes folder of your project, and run the following command:

`javah -o ../../jni/jni.c com.learnopengles.airhockey.GameLibJNIWrapper`

The javah command should be located in your JDK’s bin directory. This command will create a jni.c file that will look very messy, with a bunch of stuff that we don’t need. Let’s simplify the file by replacing it with the following contents:

```#include "../../common/game.h"
#include <jni.h>

JNIEXPORT void JNICALL Java_com_learnopengles_airhockey_GameLibJNIWrapper_on_1surface_1created
(JNIEnv * env, jclass cls) {
on_surface_created();
}

JNIEXPORT void JNICALL Java_com_learnopengles_airhockey_GameLibJNIWrapper_on_1surface_1changed
(JNIEnv * env, jclass cls, jint width, jint height) {
on_surface_changed();
}

JNIEXPORT void JNICALL Java_com_learnopengles_airhockey_GameLibJNIWrapper_on_1draw_1frame
(JNIEnv * env, jclass cls) {
on_draw_frame();
}
```

We’ve simplified the file greatly, and we’ve also added a reference to game.h so that we can call our game methods. Here’s how it works:

1. `GameLibJNIWrapper` defines the native C functions that we want to be able to call from Java.
2. To be able to call these C functions from Java, they have to be named in a very specific way, and each function also has to have at least two parameters, with a pointer to a `JNIEnv` as the first parameter, and a `jclass` as the second parameter. To make life easier, we can use javah to create the appropriate function signatures for us in a file called jni.c.
3. From jni.c, we call the functions that we declared in game.h and defined in game.c. That completes the connections and allows us to call our native functions from Java.
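The mangled names above follow a mechanical rule: the prefix `Java_`, then the fully qualified class name with dots turned into underscores, then an underscore and the method name, with any literal underscore in a name escaped as `_1`. A small helper makes the rule concrete (purely illustrative — this function is not part of the project; javah does this for us):

```c
#include <assert.h>
#include <string.h>

/* Appends name to *out, applying the JNI escapes: '.' -> '_', '_' -> "_1". */
static void append_mangled(char** out, const char* name) {
	for (const char* c = name; *c; c++) {
		if (*c == '_') { *(*out)++ = '_'; *(*out)++ = '1'; }
		else if (*c == '.') { *(*out)++ = '_'; }
		else { *(*out)++ = *c; }
	}
}

/* Builds the exported C function name for a Java native method. */
static void jni_export_name(char* out, const char* class_name,
                            const char* method_name) {
	const char* prefix = "Java_";
	for (const char* c = prefix; *c; c++) *out++ = *c;
	append_mangled(&out, class_name);
	*out++ = '_';
	append_mangled(&out, method_name);
	*out = '\0';
}
```

For example, `com.learnopengles.airhockey.GameLibJNIWrapper.on_surface_created` maps to `Java_com_learnopengles_airhockey_GameLibJNIWrapper_on_1surface_1created`, matching the signature in jni.c above.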

### Compiling the native code

To compile and run the native code, we need to describe our native sources to the NDK build system. We’ll do this with two files that should go in the jni folder: Android.mk and Application.mk. When we added native support to our project, a file called game.cpp was automatically created in the jni folder. We won’t be needing this file, so you can go ahead and delete it.

Let’s set Android.mk to the following contents:

```LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

LOCAL_MODULE    := game
LOCAL_CFLAGS    := -Wall -Wextra
LOCAL_SRC_FILES := ../../common/game.c jni.c
LOCAL_LDLIBS := -lGLESv2

include $(BUILD_SHARED_LIBRARY)
```

This file describes our sources, and tells the NDK that it should compile game.c and jni.c and build them into a shared library called libgame.so. This shared library will be dynamically linked with libGLESv2.so at runtime.

When specifying this file, be careful not to leave any trailing spaces after any of the commands, as this may cause the build to fail.

The next file, Application.mk, should have the following contents:

```APP_PLATFORM := android-10
APP_ABI := armeabi-v7a
```

This tells the NDK build system to build for Android API 10, so that it doesn’t complain about us using unsupported features not present in earlier versions of Android, and it also tells the build system to generate a library for the ARMv7-A architecture, which supports hardware floating point and which most newer Android devices use.

### Updating `RendererWrapper`

Before we can see our new changes, we have to update `RendererWrapper` to call into our native code. We can do that by updating it as follows:

```	@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
GameLibJNIWrapper.on_surface_created();
}

@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
GameLibJNIWrapper.on_surface_changed(width, height);
}

@Override
public void onDrawFrame(GL10 gl) {
GameLibJNIWrapper.on_draw_frame();
}
```

The renderer now calls our `GameLibJNIWrapper` class, which calls the native functions in jni.c, which calls our game functions defined in game.c.

### Building and running the application

You should now be able to build and run the application. When you build the application, a new shared library called libgame.so should be created in your project’s /libs/armeabi-v7a/ folder. When you run the application, it should look as follows:

Since the color has changed from blue to red, we know that our native code is being called.

#### Exploring further

The full source code for this lesson can be found at the GitHub project. For a more detailed introduction to OpenGL ES 2, see Android Lesson One: Getting Started or OpenGL ES 2 for Android: A Quick-Start Guide.

In the next part of this series, we’ll create an iOS project and we’ll see how easy it is to reuse our code from the common folder and wrap it up in Objective-C. Please let me know if you have any questions or feedback!

## Android Lesson Eight: An Introduction to Index Buffer Objects (IBOs)

In our last lesson, we learned how to use vertex buffer objects on Android. We learned about the difference between client-side memory and GPU-dedicated memory, and the difference between storing texture, position and normal data in separate buffers, or altogether in one buffer. We also learned how to work around Froyo’s broken OpenGL ES 2.0 bindings.

In this lesson, we’ll learn about index buffer objects, and go over a practical example of how to use them. Here’s what we’re going to cover:

• The difference between using vertex buffer objects only, and using vertex buffer objects together with index buffer objects.
• How to join together triangle strips using degenerate triangles, and render an entire height map in a single rendering call.

Let’s get started with the fundamental difference between vertex buffer objects and index buffer objects:

#### Vertex buffer objects and index buffer objects

In the previous lesson, we learned that a vertex buffer object is simply an array of vertex data which is directly rendered by OpenGL. We can use separate buffers for each attribute, such as positions and colors, or we can use a single buffer and interleave all of the data together. Contemporary articles suggest interleaving the data and making sure it’s aligned to 4-byte boundaries for better performance.

The downside to vertex buffers comes when we use many of the same vertices over and over again. For example, a heightmap can be broken down into a series of triangle strips. Since neighboring strips share one row of vertices, we will end up repeating a lot of vertices with a vertex buffer.

You can see a vertex buffer object containing the vertices for two triangle strip rows. The order of the vertices is shown, and the triangles defined by these vertices are also shown, when drawn using glDrawArrays(GL_TRIANGLE_STRIP, …). In this example, we’re assuming that each row of triangles gets sent as a separate call to glDrawArrays(). Each vertex contains data as follows:

```vertexBuffer = {
// Position - Vertex 1
0, 0, 0,
// Color
1, 1, 1,
// Normal
0, 0, 1,
// Position - Vertex 2
1, 0, 0,
...
}```

As you can see, the middle row of vertices needs to be sent twice, and this will also happen for each additional row of the height map. As our height map gets larger, our vertex buffer could end up having to repeat a lot of position, color, and normal data and consume a lot of additional memory.
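To put rough numbers on this, consider a grid of w × h vertices drawn as one triangle strip per row (a small sketch based on the layout described above; the helper names are ours, not from any library):

```c
#include <assert.h>

/* Without indexing, each of the (h - 1) row strips sends 2 * w vertices,
   so every interior row of the grid gets uploaded twice. */
static int vertices_sent_without_indexing(int w, int h) {
	return (h - 1) * 2 * w;
}

/* With an index buffer, each vertex is stored exactly once. */
static int vertices_stored_with_indexing(int w, int h) {
	return w * h;
}
```

For the 5 × 3 grid in the figures, that’s 20 vertices sent without indexing versus 15 stored once with indexing, and since each vertex carries position, color, and normal data, the gap grows quickly with the size of the height map.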

How can we improve on this state of affairs? We can use an index buffer object. Instead of repeating vertices in the vertex buffer, we’ll define each vertex once, and only once. We’ll refer to these vertices using offsets into this vertex buffer, and when we need to reuse a vertex, we’ll repeat the offset instead of repeating the entire vertex. Here’s a visual illustration of the new vertex buffer:

Notice that our vertices are no longer linked together into triangles. We’ll no longer pass our vertex buffer object directly; instead, we’ll use an index buffer to tie the vertices together. The index buffer will contain only the offsets to our vertex buffer object. If we wanted to draw a triangle strip using the above buffer, our index buffer would contain data as follows:

```indexBuffer = {
1, 6, 2, 7, 3, 8, ...
}```

That’s how we’ll link everything together. When we repeat the middle row of vertices, we only repeat the number, instead of repeating the entire block of data. We’ll draw the index buffer with a call to glDrawElements(GL_TRIANGLE_STRIP, …).

#### Linking together triangle strips with degenerate triangles

The examples above assumed that we would render each row of the heightmap with a separate call to glDrawArrays() or glDrawElements(). How can we link up each row to the next? After all, the end of the first row is all the way on the right, and the beginning of the second row on the left. How do we link the two?

We don’t need to get fancy and start drawing right to left or anything like that. We can instead use what’s known as a degenerate triangle. A degenerate triangle is a triangle that has no area, and when the GPU encounters such triangles, it will simply skip over them.
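A quick way to see why such triangles can be skipped: a triangle’s area follows from the cross product of two of its edge vectors, and as soon as two of the three indices name the same vertex, that area collapses to zero. A small 2D illustration (our own helper, for demonstration only):

```c
#include <assert.h>

/* Area of the triangle (a, b, c) in 2D: half the magnitude of the cross
   product of edge vectors (b - a) and (c - a). When two vertices coincide,
   the edge vectors are parallel (or zero) and the area is exactly zero. */
static float triangle_area(float ax, float ay, float bx, float by,
                           float cx, float cy) {
	float cross = (bx - ax) * (cy - ay) - (cx - ax) * (by - ay);
	return 0.5f * (cross < 0.0f ? -cross : cross);
}
```

A degenerate triangle such as (5, 10, 10) from the index buffer below reuses a vertex, so its two edge vectors coincide and the GPU has nothing to rasterize.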

Let’s look at our index buffer again:

```indexBuffer = {
1, 6, 2, 7, 3, 8, 4, 9, ...
}```

When drawing with GL_TRIANGLE_STRIP, OpenGL will build triangles by taking each set of three vertices, advancing by one vertex for each triangle. Every subsequent triangle shares two vertices with the previous triangle. For example, here are the sets of vertices that would be grouped into triangles:

• Triangle 1 = 1, 6, 2
• Triangle 2 = 6, 2, 7
• Triangle 3 = 2, 7, 3
• Triangle 4 = 7, 3, 8
• Triangle 5 = 3, 8, 4
• Triangle 6 = 8, 4, 9

OpenGL also maintains a specific order, or winding, when building the triangles. The order of the first three vertices determines the order for the rest. If the first triangle is counter-clockwise, the rest will be counter-clockwise, too. OpenGL preserves this by swapping the first two vertices of every even triangle, as seen below:

• Triangle 1 = 1, 6, 2
• Triangle 2 = 2, 6, 7
• Triangle 3 = 2, 7, 3
• Triangle 4 = 3, 7, 8
• Triangle 5 = 3, 8, 4
• Triangle 6 = 4, 8, 9

Let’s show the entire index buffer, including the missing link between each row of the height map:

```indexBuffer = {
1, 6, 2, 7, 3, 8, 4, 9, 5, 10, ..., 6, 11, 7, 12, 8, 13, 9, 14, 10, 15
}```

What do we need to put in between, in order to link up the triangles? We’ll need an even number of new triangles in order to preserve the winding. We can do this by repeating the last vertex of the first row and the first vertex of the second row. Here’s the new index buffer, with the duplicated indices (10 and 6) in the middle:

```indexBuffer = {
1, 6, 2, 7, 3, 8, 4, 9, 5, 10, 10, 6, 6, 11, 7, 12, 8, 13, 9, 14, 10, 15
}```

Here’s what the new sequence of triangles looks like:

• Triangle 8 = 5, 9, 10
• Triangle 9 (degenerate) = 5, 10, 10
• Triangle 10 (degenerate) = 10, 10, 6
• Triangle 11 (degenerate) = 10, 6, 6
• Triangle 12 (degenerate) = 6, 6, 11
• Triangle 13 = 6, 11, 7

By repeating the last vertex and the first vertex, we created four degenerate triangles that will be skipped, and linked the first row of the height map with the second. We could link an arbitrary number of rows this way and draw the entire heightmap with one call to glDrawElements(). Let’s take a look at this visually:

The degenerate triangles link each row with the next row.
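A triangle produced by a strip is degenerate exactly when two of its three indices are equal. As a sanity check, a small helper (hypothetical, not part of the lesson's code) can count how many triangles in a strip the GPU would skip:

```java
class DegenerateCounter {
    // Counts strip triangles that have a repeated index and would be
    // skipped by the GPU as zero-area (degenerate) triangles.
    static int countDegenerate(int[] strip) {
        int count = 0;
        for (int i = 0; i + 2 < strip.length; i++) {
            int a = strip[i], b = strip[i + 1], c = strip[i + 2];
            if (a == b || b == c || a == c) {
                count++;
            }
        }
        return count;
    }
}
```

Running this on the stitched index buffer above confirms that exactly four degenerate triangles were created by the two repeated vertices.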

##### Degenerate triangles done the wrong way

We need to repeat both the last vertex of the first row and the first of the second row. What would happen if we didn’t? Let’s say we only repeated one vertex:

```indexBuffer = {
1, 6, 2, 7, 3, 8, 4, 9, 5, 10, 10, 6, 11, 7, 12, 8, 13, 9, 14, 10, 15
}```

Here’s what the sequence of triangles would look like:

• Triangle 8 = 5, 9, 10
• Triangle 9 (degenerate) = 5, 10, 10
• Triangle 10 (degenerate) = 10, 10, 6
• Triangle 11 = 10, 6, 11
• Triangle 12 = 11, 6, 7

Triangle 11 starts at the right and cuts all the way across to the left, which isn’t what we wanted to happen. The winding is now also incorrect for the next row of triangles, since 3 new triangles were inserted, swapping even and odd.

#### A practical example

Let’s walk through the code to make it happen. I highly recommend heading over and reading Android Lesson Seven: An Introduction to Vertex Buffer Objects (VBOs) before continuing.

Do you remember those graphing calculators that could draw parabolas and other functions on the screen? In this example, we’re going to draw a 3D parabola using a height map. We’ll walk through all of the code to build and draw the height map. First, let’s get started with the definitions:

```class HeightMap {
static final int SIZE_PER_SIDE = 32;
static final float MIN_POSITION = -5f;
static final float POSITION_RANGE = 10f;

final int[] vbo = new int[1];
final int[] ibo = new int[1];

int indexCount;```
• We’ll set the height map to 32 units per side, for a total of 1,024 vertices and 1,922 triangles, not including degenerate triangles (the total number of triangles in a height map is equal to 2 × (units_per_side − 1)²). The height map positions will range from -5 to +5.
• The OpenGL reference to our vertex buffer object and index buffer object will go in vbo and ibo, respectively.
• indexCount will hold the total number of generated indices.
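As a quick check on those numbers, here's a tiny sketch of the count formulas (the class name is just for illustration):

```java
class HeightMapSize {
    // Vertex and triangle counts for a square height map,
    // not counting degenerate triangles.
    static int vertexCount(int unitsPerSide) {
        return unitsPerSide * unitsPerSide;
    }

    // Each of the (unitsPerSide - 1)^2 grid cells is split
    // into two triangles.
    static int triangleCount(int unitsPerSide) {
        return 2 * (unitsPerSide - 1) * (unitsPerSide - 1);
    }
}
```

For 32 units per side this gives 1,024 vertices and 1,922 triangles, matching the figures above.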
##### Building the vertex data

Let’s look at the code that will build the vertex buffer object. Remember that we still need a place to hold all of our vertices, and that each vertex will be defined once, and only once.

```HeightMap() {
try {
final int floatsPerVertex = POSITION_DATA_SIZE_IN_ELEMENTS + NORMAL_DATA_SIZE_IN_ELEMENTS
+ COLOR_DATA_SIZE_IN_ELEMENTS;
final int xLength = SIZE_PER_SIDE;
final int yLength = SIZE_PER_SIDE;

final float[] heightMapVertexData = new float[xLength * yLength * floatsPerVertex];

int offset = 0;

// First, build the data for the vertex buffer
for (int y = 0; y < yLength; y++) {
for (int x = 0; x < xLength; x++) {
final float xRatio = x / (float) (xLength - 1);

// Build our heightmap from the top down, so that our triangles are
// counter-clockwise.
final float yRatio = 1f - (y / (float) (yLength - 1));

final float xPosition = MIN_POSITION + (xRatio * POSITION_RANGE);
final float yPosition = MIN_POSITION + (yRatio * POSITION_RANGE);

...
}
}```

This bit of code sets up the loop to generate the vertices. Before we can send data to OpenGL, we need to build it in Java’s memory, so we create a floating point array to hold the height map vertices. Each vertex will hold enough floats to contain all of the position, normal, and color information.

Inside the loop, we calculate a ratio between 0 and 1 for x and y. This xRatio and yRatio will then be used to calculate the current position for the next vertex.

Let’s take a look at the actual calculations within the loop:

```// Position
heightMapVertexData[offset++] = xPosition;
heightMapVertexData[offset++] = yPosition;
heightMapVertexData[offset++] = ((xPosition * xPosition) + (yPosition * yPosition)) / 10f;```

First up is the position. Since this is a 3D parabola, we’ll calculate the Z as X² + Y². We divide the result by 10 so the resulting parabola isn’t so steep.

```// Cheap normal using a derivative of the function.
// The slope for X will be 2X, for Y will be 2Y.
// Divide by 10 since the position's Z is also divided by 10.
final float xSlope = (2 * xPosition) / 10f;
final float ySlope = (2 * yPosition) / 10f;

// Calculate the normal using the cross product of the slopes.
final float[] planeVectorX = {1f, 0f, xSlope};
final float[] planeVectorY = {0f, 1f, ySlope};
final float[] normalVector = {
(planeVectorX[1] * planeVectorY[2]) - (planeVectorX[2] * planeVectorY[1]),
(planeVectorX[2] * planeVectorY[0]) - (planeVectorX[0] * planeVectorY[2]),
(planeVectorX[0] * planeVectorY[1]) - (planeVectorX[1] * planeVectorY[0])};

// Normalize the normal
final float length = Matrix.length(normalVector[0], normalVector[1], normalVector[2]);

heightMapVertexData[offset++] = normalVector[0] / length;
heightMapVertexData[offset++] = normalVector[1] / length;
heightMapVertexData[offset++] = normalVector[2] / length;```

Next up is the normal calculation. As you’ll remember from our lesson on lighting, the normal will be used to calculate lighting. The normal of a surface is defined as a vector perpendicular to the tangent plane at that particular point. In other words, the normal should be an arrow pointing straight away from the surface. Here’s a visual example for a parabola:

The first thing we need is the tangent of the surface. Using a bit of calculus (don’t worry, I didn’t remember it either and went and searched for an online calculator ;)) we know that we can get the tangent, or the slope, from the derivative of the function. Since our function is X² + Y², our slopes will therefore be 2X and 2Y. We scale the slopes down by 10, since for the position we had also scaled the result of the function down by 10.

To calculate the normal, we create two vectors for each slope to define a plane, and we calculate the cross product to get the normal, which is the perpendicular vector.

We then normalize the normal by calculating its length, and dividing each component by the length. This ensures that the overall length will be equal to 1.
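Pulling the slope, cross product, and normalization steps together, the normal calculation can be sketched as a standalone method (the class name is hypothetical; the math mirrors the loop body above):

```java
class ParaboloidNormal {
    // Computes the unit surface normal of z = (x^2 + y^2) / 10 at (x, y),
    // using the cross product of the two tangent vectors
    // (1, 0, dz/dx) and (0, 1, dz/dy).
    static float[] normalAt(float x, float y) {
        final float xSlope = (2 * x) / 10f;
        final float ySlope = (2 * y) / 10f;

        final float[] planeVectorX = {1f, 0f, xSlope};
        final float[] planeVectorY = {0f, 1f, ySlope};

        // Cross product gives a vector perpendicular to both tangents.
        final float[] n = {
            planeVectorX[1] * planeVectorY[2] - planeVectorX[2] * planeVectorY[1],
            planeVectorX[2] * planeVectorY[0] - planeVectorX[0] * planeVectorY[2],
            planeVectorX[0] * planeVectorY[1] - planeVectorX[1] * planeVectorY[0]};

        // Normalize so the overall length is 1.
        final float length = (float) Math.sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
        return new float[]{n[0] / length, n[1] / length, n[2] / length};
    }
}
```

At the bottom of the bowl, (0, 0), the surface is flat and the normal points straight up along Z; everywhere else it tilts away from the center while keeping unit length.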

```// Add some fancy colors.
heightMapVertexData[offset++] = xRatio;
heightMapVertexData[offset++] = yRatio;
heightMapVertexData[offset++] = 0.5f;
heightMapVertexData[offset++] = 1f;```

Finally, we’ll set some fancy colors. Red will scale from 0 to 1 across the X axis, and green will scale from 0 to 1 across the Y axis. We add a bit of blue to brighten things up, and assign 1 to alpha.

##### Building the index data

The next step is to link all of these vertices together, using the index buffer.

```// Now build the index data
final int numStripsRequired = yLength - 1;
final int numDegensRequired = 2 * (numStripsRequired - 1);
final int verticesPerStrip = 2 * xLength;

final short[] heightMapIndexData = new short[(verticesPerStrip * numStripsRequired)
+ numDegensRequired];

offset = 0;

for (int y = 0; y < yLength - 1; y++) {
if (y > 0) {
// Degenerate begin: repeat first vertex
heightMapIndexData[offset++] = (short) (y * yLength);
}

for (int x = 0; x &lt; xLength; x++) {
// One part of the strip
heightMapIndexData[offset++] = (short) ((y * yLength) + x);
heightMapIndexData[offset++] = (short) (((y + 1) * yLength) + x);
}

if (y < yLength - 2) {
// Degenerate end: repeat last vertex
heightMapIndexData[offset++] = (short) (((y + 1) * yLength) + (xLength - 1));
}
}

indexCount = heightMapIndexData.length;```

In OpenGL ES 2, an index buffer needs to be an array of unsigned bytes or shorts, so we use shorts here. We’ll read from two rows of vertices at a time, and build a triangle strip using those vertices. If we’re on the second or subsequent rows, we’ll duplicate the first vertex of that row, and if we’re on any row but the last, we’ll also duplicate the last vertex of that row. This is so we can link the rows together using degenerate triangles as described earlier.

We’re just assigning offsets here. Imagine we had the height map as shown earlier:

With the index buffer, we want to end up with something like this:

Our buffer will contain data like this:

`heightMapIndexData = {1, 6, 2, 7, 3, 8, 4, 9, 5, 10, 10, 6, 6, 11, 7, 12, 8, 13, 9, 14, 10, 15}`

Just keep in mind that although our examples start with 1 and go on to 2, 3, and so on, in the actual code our arrays are 0-based and start with 0.
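To make the index-building loop easier to test in isolation, here's the same logic extracted into a standalone method (the class name `StripIndexBuilder` is just for illustration), using 0-based indices on a square grid:

```java
class StripIndexBuilder {
    // Builds 0-based GL_TRIANGLE_STRIP indices for a square grid of
    // sizePerSide x sizePerSide vertices, inserting degenerate
    // vertices to link each row of strips to the next.
    static short[] build(int sizePerSide) {
        final int numStrips = sizePerSide - 1;
        final int numDegens = 2 * (numStrips - 1);
        final int verticesPerStrip = 2 * sizePerSide;

        final short[] indices = new short[verticesPerStrip * numStrips + numDegens];
        int offset = 0;

        for (int y = 0; y < sizePerSide - 1; y++) {
            if (y > 0) {
                // Degenerate begin: repeat the first vertex of this strip.
                indices[offset++] = (short) (y * sizePerSide);
            }
            for (int x = 0; x < sizePerSide; x++) {
                indices[offset++] = (short) (y * sizePerSide + x);
                indices[offset++] = (short) ((y + 1) * sizePerSide + x);
            }
            if (y < sizePerSide - 2) {
                // Degenerate end: repeat the last vertex of this strip.
                indices[offset++] = (short) ((y + 1) * sizePerSide + (sizePerSide - 1));
            }
        }
        return indices;
    }
}
```

For a 5x5 grid this produces 46 indices: four strips of 10 vertices each, plus 6 degenerate link vertices, starting with the 0-based version of the example buffer shown earlier.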

The next step is to copy the data from Dalvik’s heap to a direct buffer on the native heap:

```final FloatBuffer heightMapVertexDataBuffer = ByteBuffer
.allocateDirect(heightMapVertexData.length * BYTES_PER_FLOAT).order(ByteOrder.nativeOrder())
.asFloatBuffer();
heightMapVertexDataBuffer.put(heightMapVertexData).position(0);

final ShortBuffer heightMapIndexDataBuffer = ByteBuffer
.allocateDirect(heightMapIndexData.length * BYTES_PER_SHORT).order(ByteOrder.nativeOrder())
.asShortBuffer();
heightMapIndexDataBuffer.put(heightMapIndexData).position(0);```

Remember, the index data needs to go in a short buffer or a byte buffer. Now we can create OpenGL buffers, and upload our data into the buffers:

```GLES20.glGenBuffers(1, vbo, 0);
GLES20.glGenBuffers(1, ibo, 0);

if (vbo[0] > 0 && ibo[0] > 0) {
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[0]);
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, heightMapVertexDataBuffer.capacity()
* BYTES_PER_FLOAT, heightMapVertexDataBuffer, GLES20.GL_STATIC_DRAW);

GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, ibo[0]);
GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, heightMapIndexDataBuffer.capacity()
* BYTES_PER_SHORT, heightMapIndexDataBuffer, GLES20.GL_STATIC_DRAW);

GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);
} else {
errorHandler.handleError(ErrorType.BUFFER_CREATION_ERROR, "glGenBuffers");
}```

We use GL_ARRAY_BUFFER to specify our vertex data, and GL_ELEMENT_ARRAY_BUFFER to specify our index data.

##### Drawing the height map

Much of the code to draw the height map will be similar to that of previous lessons. I won’t cover the matrix setup code here; instead, we’ll just look at the calls to bind the array data and draw using the index buffer:

```GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[0]);

// Bind Attributes
glEs20.glVertexAttribPointer(positionAttribute, POSITION_DATA_SIZE_IN_ELEMENTS, GLES20.GL_FLOAT,
false, STRIDE, 0);
GLES20.glEnableVertexAttribArray(positionAttribute);

glEs20.glVertexAttribPointer(normalAttribute, NORMAL_DATA_SIZE_IN_ELEMENTS, GLES20.GL_FLOAT,
false, STRIDE, POSITION_DATA_SIZE_IN_ELEMENTS * BYTES_PER_FLOAT);
GLES20.glEnableVertexAttribArray(normalAttribute);

glEs20.glVertexAttribPointer(colorAttribute, COLOR_DATA_SIZE_IN_ELEMENTS, GLES20.GL_FLOAT,
false, STRIDE, (POSITION_DATA_SIZE_IN_ELEMENTS + NORMAL_DATA_SIZE_IN_ELEMENTS)
* BYTES_PER_FLOAT);
GLES20.glEnableVertexAttribArray(colorAttribute);

// Draw
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, ibo[0]);
glEs20.glDrawElements(GLES20.GL_TRIANGLE_STRIP, indexCount, GLES20.GL_UNSIGNED_SHORT, 0);

GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);```

In the previous lesson, we had to introduce a custom OpenGL binding to properly use VBOs, as they’re broken in Froyo. We’ll also need to use this binding for IBOs as well. Just like in the previous lesson, we bind our position, normal, and color data in our vertex buffer to the matching attribute in the shader, taking care to pass in the proper stride and start offset of each attribute. Both the stride and the start offset are defined in terms of bytes.

The main difference is when it’s time to draw. We call glDrawElements() instead of glDrawArrays(), and we pass in the index count and data type. OpenGL ES 2.0 only accepts GL_UNSIGNED_SHORT and GL_UNSIGNED_BYTE, so we have to make sure that we define our index data using shorts or bytes.

##### Rendering and lighting two-sided triangles

For this lesson, we don’t enable GL_CULL_FACE so that both the front and the back sides of triangles are visible. We need to make a slight change to our fragment shader code so that lighting works properly for our two-sided triangles:

```float diffuse;

if (gl_FrontFacing) {
diffuse = max(dot(v_Normal, lightVector), 0.0);
} else {
diffuse = max(dot(-v_Normal, lightVector), 0.0);
}```

We use the special variable gl_FrontFacing to find out if the current fragment is part of a front-facing or back-facing triangle. If it’s front-facing, then no need to do anything special: the code is the same as before. If it’s back-facing, then we simply invert the surface normal (since the back side of a triangle faces the opposite direction) and proceed with the calculations as before.

##### Further exercises

When would it be a downside to use index buffers? Remember, when we use index buffers the GPU has to do an additional fetch into the vertex buffer object, so we’re introducing an additional step.

#### Wrapping up

The full source code for this lesson can be downloaded from the project site on GitHub. A compiled version of the lesson can also be downloaded directly from the Android Market:

Thanks for stopping by, and please feel free to check out the code and share your comments below.

## Android Lesson Seven: An Introduction to Vertex Buffer Objects (VBOs)

In this lesson, we’ll introduce vertex buffer objects (VBOs), how to define them, and how to use them. Here is what we are going to cover:

• How to define and render from vertex buffer objects.
• The difference between using a single buffer with all the data packed in, or multiple buffers.
• Problems and pitfalls, and what to do about them.

#### What are vertex buffer objects, and why use them?

Up until now, all of our lessons have been storing our object data in client-side memory, only transferring it into the GPU at render time. This is fine when there is not a lot of data to transfer, but as our scenes get more complex with more objects and triangles, this can impose an extra cost on the CPU and memory usage. What can we do about this? We can use vertex buffer objects. Instead of transferring vertex information from client memory every frame, the information will be transferred once and rendering will then be done from this graphics memory cache.

#### Assumptions and prerequisites

Please read Android Lesson One: Getting Started for an intro on how to upload the vertices from client-side memory. This understanding of how OpenGL ES works with the vertex arrays will be crucial to understanding this lesson.

#### Understanding client-side buffers in more detail

Once you understand how to render using client-side memory, it’s actually not too hard to switch to using VBOs. The main difference is that there is an additional step to upload the data into graphics memory, and an additional call to bind to this buffer when rendering.

This lesson has been set up to use four different modes:

• Client side, separate buffers.
• Client side, packed buffer.
• Vertex buffer object, separate buffers.
• Vertex buffer object, packed buffers.

Whether we are using vertex buffer objects or not, we need to first store our data in a client-side direct buffer. Recall from lesson one that OpenGL ES is a native system library, whereas Java on Android runs in a virtual machine. To bridge the gap, we need to use a set of special buffer classes to allocate memory on the native heap and make it accessible to OpenGL:

```// Java array.
float[] cubePositions;
...
// Floating-point buffer
final FloatBuffer cubePositionsBuffer;
...

// Allocate a direct block of memory on the native heap,
// size in bytes is equal to cubePositions.length * BYTES_PER_FLOAT.
// BYTES_PER_FLOAT is equal to 4, since a float is 32-bits, or 4 bytes.
cubePositionsBuffer = ByteBuffer.allocateDirect(cubePositions.length * BYTES_PER_FLOAT)

// Floats can be in big-endian or little-endian order.
// We want the same as the native platform.
.order(ByteOrder.nativeOrder())

// Give us a floating-point view on this byte buffer.
.asFloatBuffer();```

Transferring data from the Java heap to the native heap is then a matter of a couple calls:

```// Copy data from the Java heap to the native heap.
cubePositionsBuffer.put(cubePositions)

// Reset the buffer position to the beginning of the buffer.
.position(0);```

What is the purpose of the buffer position? Normally, Java does not give us a way to specify arbitrary locations in memory using pointer arithmetic. However, setting the position of the buffer is functionally equivalent to changing the value of a pointer to a block of memory. By changing the position, we can pass arbitrary memory locations within our buffer to OpenGL calls. This will come in handy when we work with packed buffers.
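Here's a minimal sketch of that idea (the class and method names are just for illustration): moving the buffer's position changes where reads, and OpenGL calls that take a Buffer, start within the same block of native memory.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

class BufferPositionDemo {
    // Copies a float array into a direct buffer on the native heap,
    // then reads back the float at an arbitrary position. Setting the
    // position is the Java equivalent of pointer arithmetic.
    static float readAt(float[] data, int position) {
        final FloatBuffer buffer = ByteBuffer
                .allocateDirect(data.length * 4)      // 4 bytes per float
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        buffer.put(data).position(position);
        return buffer.get(); // reads the float at 'position'
    }
}
```

Passing the same buffer to OpenGL with different positions is exactly how we'll point it at different attributes inside one packed buffer later on.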

Once the data is on the native heap, we no longer need to keep the float[] array around, and we can let the garbage collector clean it up.

Rendering with client-side buffers is straightforward to set up. We just need to enable using vertex arrays on that attribute, and pass a pointer to our data:

```// Pass in the position information
GLES20.glEnableVertexAttribArray(mPositionHandle);
GLES20.glVertexAttribPointer(mPositionHandle, POSITION_DATA_SIZE,
GLES20.GL_FLOAT, false, 0, mCubePositions);```

Explanation of the parameters to glVertexAttribPointer:

• mPositionHandle: The OpenGL index of the position attribute of our shader program.
• POSITION_DATA_SIZE: How many elements (floats) define this attribute.
• GL_FLOAT: The type of each element.
• false: Should fixed-point data be normalized? Not applicable since we are using floating-point data.
• 0: The stride. Set to 0 to mean that the positions should be read sequentially.
• mCubePositions: The pointer to our buffer, containing all of the positional data.
##### Working with packed buffers

Working with packed buffers is very similar, except that instead of using a buffer each for positions, normals, etc… one buffer will contain all of this data. The difference looks like this:

Using separate buffers

positions = X,Y,Z, X, Y, Z, X, Y, Z, …
colors = R, G, B, A, R, G, B, A, …
textureCoordinates = S, T, S, T, S, T, …

Using a packed buffer

buffer = X, Y, Z, R, G, B, A, S, T, …

The advantage to using packed buffers is that it should be more efficient for the GPU to render, since all of the information needed to render a triangle is located within the same block of memory. The disadvantage is that it may be more difficult and slower to update, if you are using dynamic data.
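Building a packed buffer from separate arrays is just an interleaving loop. Here's a sketch assuming 3-float positions and 4-float colors (a simpler layout than the lesson's full position/normal/texture-coordinate data; the class name is hypothetical):

```java
class BufferPacker {
    // Interleaves per-vertex position (3 floats) and color (4 floats)
    // arrays into one packed array: X, Y, Z, R, G, B, A per vertex.
    static float[] pack(float[] positions, float[] colors, int vertexCount) {
        final int posSize = 3;
        final int colorSize = 4;
        final float[] packed = new float[vertexCount * (posSize + colorSize)];
        int offset = 0;
        for (int v = 0; v < vertexCount; v++) {
            for (int i = 0; i < posSize; i++) {
                packed[offset++] = positions[v * posSize + i];
            }
            for (int i = 0; i < colorSize; i++) {
                packed[offset++] = colors[v * colorSize + i];
            }
        }
        return packed;
    }
}
```

With this layout the stride is 7 floats (28 bytes), and the color attribute for each vertex begins 3 floats after its position.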

When we use packed buffers, we need to change our rendering calls in a couple of ways. First, we need to tell OpenGL the stride, or how many bytes define a vertex.

```final int stride = (POSITION_DATA_SIZE + NORMAL_DATA_SIZE + TEXTURE_COORDINATE_DATA_SIZE)
* BYTES_PER_FLOAT;

// Pass in the position information
mCubeBuffer.position(0);
GLES20.glEnableVertexAttribArray(mPositionHandle);
GLES20.glVertexAttribPointer(mPositionHandle, POSITION_DATA_SIZE,
GLES20.GL_FLOAT, false, stride, mCubeBuffer);

// Pass in the normal information
mCubeBuffer.position(POSITION_DATA_SIZE);
GLES20.glEnableVertexAttribArray(mNormalHandle);
GLES20.glVertexAttribPointer(mNormalHandle, NORMAL_DATA_SIZE,
GLES20.GL_FLOAT, false, stride, mCubeBuffer);
...```

The stride tells OpenGL ES how far it needs to go to find the same attribute for the next vertex. For example, if element 0 is the beginning of the position for the first vertex, and there are 8 elements per vertex, then the stride will be equal to 8 elements, or 32 bytes. The position for the next vertex will be found at element 8, and the next vertex after that at element 16, and so on.

Keep in mind that the value of the stride passed to glVertexAttribPointer should be in bytes, not elements, so remember to do that conversion.

Notice that we also change the start position of the buffer when we switch from specifying the positions to the normals. This is the pointer arithmetic I was referring to before, and this is how we can do it in Java when working with OpenGL ES. We’re still working with the same buffer, mCubeBuffer, but we tell OpenGL to start reading in the normals at the first element after the position. Again, we pass in the stride to tell OpenGL that the next normal will be found 8 elements or 32 bytes later.

##### Dalvik and memory on the native heap

If you allocate and release a lot of memory on the native heap, you will probably run into the beloved OutOfMemoryError sooner or later. There are a couple of reasons behind that:

1. You might think that you’ve released the memory by letting the reference go out of scope, but native memory seems to take a few extra GC cycles to be completely cleaned up, and Dalvik will throw an exception if there is not enough free memory available and the native memory has not yet been released.
2. The native heap can become fragmented. Calls to allocateDirect() will inexplicably fail, even though there appears to be plenty of memory available. Sometimes it helps to make a smaller allocation, free it, and then try the larger allocation again.

What can you do about these problems? Not much, other than hoping that Google improves the behaviour of Dalvik in future editions (they’ve added a largeHeap parameter to 3.0+), or managing the heap yourself by doing your allocations in native code, or by allocating a huge block upfront and spinning off buffers based off of that.

Note: this information was originally written in early 2012, and now Android uses a different runtime called ART which may not suffer from these problems to the same degree.

##### Moving to vertex buffer objects

Now that we’ve reviewed working with client-side buffers, let’s move on to vertex buffer objects! First, we need to review a few very important points:

1. Buffers must be created within a valid OpenGL context.

This might seem like an obvious point, but it’s just a reminder that you have to wait until onSurfaceCreated(), and you have to take care that the OpenGL ES calls are done on the GL thread. See this document: OpenGL ES Programming Guide for iOS. It might be written for iOS, but the behaviour of OpenGL ES is similar on Android.

2. Improper use of vertex buffer objects will crash the graphics driver.

You need to be careful with the data you pass around when you use vertex buffer objects. Improper values will cause a native crash in the OpenGL ES system library or in the graphics driver library. On my Nexus S, some games freeze up my phone completely or cause it to reboot, because the graphics driver is crashing on their commands. Not all crashes will lock up your device, but at a minimum you will not see the “This application has stopped working” dialogue. Your activity will restart without warning, and the only info you’ll get might be a native debug trace in the logs.

3. The OpenGL ES bindings are broken on Froyo (2.2), and incomplete/unavailable in earlier versions.

This is the most unfortunate and most important point to consider. For some reason, Google really dropped the ball when it comes to OpenGL ES 2 support on Froyo. The mappings are incomplete, and several crucial functions needed to use vertex buffer objects are unavailable and cannot be used from Java code, at least with the standard SDK.

I don’t know if it’s because they didn’t run their unit tests, or if the developer was sloppy with their code generation tools, or if everyone was on 8 cups of coffee and burning the midnight oil to get things out the door. I don’t know why the API is broken, but the fact is that it’s broken.

There are three solutions to this problem:

1. Target Gingerbread (2.3) and higher.
2. Don’t use vertex buffer objects.
3. Use your own Java Native Interface (JNI) library to interface with the native OpenGL ES system libraries.

I find option 1 to be unacceptable, since a full quarter of devices out there still run on Froyo as of the time of this writing. Option 2 works, but is kind of silly.

Note: This article was originally written in early 2012, when many devices were still on Froyo. As of 2017, this is no longer an issue and the most reasonable option is to target Gingerbread or later.

The option I recommend, and that I have decided to go with, is to use your own JNI bindings. For this lesson I have decided to go with the bindings generously provided by the guys who created libgdx, a cross-platform game development library licensed under the Apache License 2.0. You need to use the following files to make it work:

• /libs/armeabi/libandroidgl20.so
• /libs/armeabi-v7a/libandroidgl20.so

You might notice that this excludes Android platforms that do not run on ARM, and you’d be right. It would probably be possible to compile your own bindings for those platforms if you want to have VBO support on Froyo, though that is out of the scope of this lesson.

Using the bindings is as simple as these lines of code:

```AndroidGL20 mGlEs20 = new AndroidGL20();
...
mGlEs20.glVertexAttribPointer(mPositionHandle, POSITION_DATA_SIZE, GLES20.GL_FLOAT, false, 0, 0);
...```

You only need to call the custom binding where the SDK-provided binding is incomplete. I use the custom bindings to fill in the holes where the official one is missing functions.

To upload data to the GPU, we need to follow the same steps in creating a client-side buffer as before:

```...
cubePositionsBuffer = ByteBuffer.allocateDirect(cubePositions.length * BYTES_PER_FLOAT)
.order(ByteOrder.nativeOrder()).asFloatBuffer();
cubePositionsBuffer.put(cubePositions).position(0);
...```

Once we have the client-side buffer, we can create a vertex buffer object and upload data from client memory to the GPU with the following commands:

```// First, generate as many buffers as we need.
// This will give us the OpenGL handles for these buffers.
final int buffers[] = new int[3];
GLES20.glGenBuffers(3, buffers, 0);

// Bind to the buffer. Future commands will affect this buffer specifically.
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[0]);

// Transfer data from client memory to the buffer.
// We can release the client memory after this call.
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, cubePositionsBuffer.capacity() * BYTES_PER_FLOAT,
cubePositionsBuffer, GLES20.GL_STATIC_DRAW);

// IMPORTANT: Unbind from the buffer when we're done with it.
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);```

Once data has been uploaded to OpenGL ES, we can release the client-side memory as we no longer need to keep it around. Here is an explanation of glBufferData:

• GL_ARRAY_BUFFER: This buffer contains an array of vertex data.
• cubePositionsBuffer.capacity() * BYTES_PER_FLOAT: The number of bytes this buffer should contain.
• cubePositionsBuffer: The source that will be copied to this vertex buffer object.
• GL_STATIC_DRAW: The buffer will not be updated dynamically.

Our call to glVertexAttribPointer looks a little bit different, as the last parameter is now an offset rather than a pointer to our client-side memory:

```// Pass in the position information
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mCubePositionsBufferIdx);
GLES20.glEnableVertexAttribArray(mPositionHandle);
mGlEs20.glVertexAttribPointer(mPositionHandle, POSITION_DATA_SIZE, GLES20.GL_FLOAT, false, 0, 0);
...```

Like before, we bind to the buffer, then enable the vertex array. Since the buffer is already bound, we only need to tell OpenGL the offset to start at when reading from the buffer. Since we are using separate buffers, we pass in an offset of 0. Notice also that we are using our custom binding to call glVertexAttribPointer, since the official SDK is missing this specific function call.

Once we are done drawing with our buffer, we should unbind from it:

`GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);`

When we no longer want to keep our buffers around, we can free the memory:

```final int[] buffersToDelete = new int[] { mCubePositionsBufferIdx, mCubeNormalsBufferIdx,
mCubeTexCoordsBufferIdx };
GLES20.glDeleteBuffers(buffersToDelete.length, buffersToDelete, 0);```
##### Packed vertex buffer objects

We can also use a single, packed vertex buffer object to hold all of our vertex data. The creation of a packed buffer is the same as above, with the only difference being that we start from a packed client-side buffer. Rendering from the packed buffer is also the same, except we need to pass in a stride and an offset, like when using packed buffers in client-side memory:

```final int stride = (POSITION_DATA_SIZE + NORMAL_DATA_SIZE + TEXTURE_COORDINATE_DATA_SIZE)
* BYTES_PER_FLOAT;

// Pass in the position information
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mCubeBufferIdx);
GLES20.glEnableVertexAttribArray(mPositionHandle);
mGlEs20.glVertexAttribPointer(mPositionHandle, POSITION_DATA_SIZE,
GLES20.GL_FLOAT, false, stride, 0);

// Pass in the normal information
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mCubeBufferIdx);
GLES20.glEnableVertexAttribArray(mNormalHandle);
mGlEs20.glVertexAttribPointer(mNormalHandle, NORMAL_DATA_SIZE,
GLES20.GL_FLOAT, false, stride, POSITION_DATA_SIZE * BYTES_PER_FLOAT);
...```

Notice that the offset needs to be specified in bytes. The same considerations of unbinding and deleting the buffer apply, as before.

##### Putting it all together

This lesson is set up so that it builds a cube of cubes, with the same number of cubes in each dimension, ranging from 1x1x1 up to 16x16x16. Since each cube shares the same normal and texture data, this data will be copied repeatedly when we initialize our client-side buffer. All of the cubes will end up inside the same buffer objects.

You can view the code for the lesson and view an example of rendering with and without VBOs, and with and without packed buffers. Check the code to see how some of the following was handled:

• Generating the vertex data asynchronously.
• Handling out of memory errors.
• We removed the call to glEnable(GL_TEXTURE_2D), since that is actually an invalid enum in OpenGL ES 2. This is a holdover from the fixed-pipeline days; in OpenGL ES 2 this is handled by shaders, so there’s no need for a glEnable/glDisable.
• How to render using different paths, without adding too many if statements and conditions.
##### Further exercises

When would you use vertex buffers and when is it better to stream data from client memory? What are some of the drawbacks of using vertex buffer objects? How would you improve the asynchronous loading code?

#### Wrapping up

The full source code for this lesson can be downloaded from the project site on GitHub. A compiled version of the lesson can also be downloaded directly from the Android Market:

Thanks for stopping by, and please feel free to check out the code and share your comments below. A special thanks goes out again to the guys at libgdx for generously providing the source code and libraries for their OpenGL ES 2 bindings for Android 2.2!

## Android Lesson Six: An Introduction to Texture Filtering

In this lesson, we will introduce the different types of basic texture filtering modes and how to use them, including nearest-neighbour filtering, bilinear filtering, and trilinear filtering using mipmaps.

You’ll learn how to make your textures appear more smooth, as well as the drawbacks that come from smoothing. There are also different ways of rotating an object, one of which is used in this lesson.

#### Assumptions and prerequisites

It’s highly recommended to understand the basics of texture mapping in OpenGL ES, covered in the lesson Android Lesson Four: Introducing Basic Texturing.

#### What is texture filtering?

Textures in OpenGL are made up of arrays of elements known as texels, which contain colour and alpha values. This corresponds with the display, which is made up of a bunch of pixels, each displaying a different colour at each point. In OpenGL, textures are applied to triangles and drawn on the screen, so these textures can be drawn in various sizes and orientations. The texture filtering options in OpenGL tell it how to map the texels onto the pixels of the device, depending on the case.

There are three cases:

• Each texel maps onto more than one pixel. This is known as magnification.
• Each texel maps exactly onto one pixel. Filtering doesn’t apply in this case.
• Each texel maps onto less than one pixel. This is known as minification.

OpenGL lets us assign a filter for both magnification and minification, and lets us use nearest-neighbour, bilinear filtering, or trilinear filtering. I will explain what these mean further below.
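The three cases above can be sketched as a tiny decision helper. This is purely illustrative and not part of the lesson's code; the method name and the idea of comparing on-screen pixel coverage to texel count are my own shorthand for the ratio that determines which filter applies.

```java
// Illustrative sketch: given how many texels a region of the texture has and how
// many screen pixels it covers, report which filtering case applies.
class FilterCase {
    // More pixels than texels: each texel covers more than one pixel (magnification).
    // Fewer pixels than texels: each texel covers less than one pixel (minification).
    static String of(int texels, int pixels) {
        if (pixels > texels) return "magnification";
        if (pixels < texels) return "minification";
        return "none"; // exact 1:1 mapping, no filtering needed
    }
}
```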

##### Magnification and minification

Here is a visualization of both magnification and minification with nearest-neighbour rendering, using the cute Android that shows up when you have your USB connected to your Android device:

Magnification

As you can see, the texels of the image are easily visible, as they now cover many of the pixels on your display.

Minification

With minification, many of the details are lost as many of the texels cannot be rendered onto the limited pixels available.

##### Texture filtering modes

Bilinear interpolation

The texels of a texture are clearly visible as large squares in the magnification example when no interpolation between the texel values is done. When rendering is done in nearest-neighbour mode, each pixel is assigned the value of the nearest texel.

The rendering quality can be dramatically improved by switching to bilinear interpolation. Instead of assigning a group of pixels the value of the same nearby texel, each pixel's value is linearly interpolated between the four neighbouring texels, and the resulting image looks much smoother:

Some blockiness is still apparent, but the image looks much smoother than before. People who played 3D games back in the days before 3D-accelerated cards came out will remember that this was the defining feature between a software-rendered game and a hardware-accelerated game: software-rendered games simply did not have the processing budget to do smoothing, so everything appeared blocky and jagged. Things suddenly got smooth once people started using graphics accelerators.
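The four-texel weighted average behind bilinear filtering can be written out directly. The following is a minimal sketch of the math, not the GPU's implementation; the names and parameters are illustrative.

```java
// Bilinear interpolation of one channel from the four surrounding texels.
// fx and fy are the fractional position of the pixel between the texels, in [0, 1].
class Bilinear {
    static float sample(float t00, float t10,   // top-left, top-right texels
                        float t01, float t11,   // bottom-left, bottom-right texels
                        float fx, float fy) {
        float top = t00 + (t10 - t00) * fx;     // interpolate along the top edge
        float bottom = t01 + (t11 - t01) * fx;  // interpolate along the bottom edge
        return top + (bottom - top) * fy;       // interpolate between the two rows
    }
}
```

At fx = fy = 0 the result is exactly the top-left texel, and at the midpoint it is the average of all four, which is what smooths out the blocky edges.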

Bilinear interpolation is mostly useful for magnification. It can also be used for minification, but beyond a certain point we run into the same problem: we are trying to cram far too many texels onto the same pixel. OpenGL will use at most 4 texels to render a pixel, so a lot of information is still being lost.

If we look at a detailed texture with bilinear interpolation being applied, it will look very noisy when we see it moving in the distance, since a different set of texels will be selected each frame.

Mipmapping

How can we minify textures without introducing noise and use all of the texels? This can be done by generating a set of optimized textures at different sizes which we can then use at runtime. Since these textures are pre-generated, they can be filtered using more expensive techniques that use all of the texels, and at runtime OpenGL will select the most appropriate level based on the final size of the texture on the screen.

The resulting image can have more detail, less noise, and look better overall. Although a bit more memory will be used, rendering can also be faster, as the smaller levels can be more easily kept in the GPU’s texture cache. Let’s take a closer look at the resulting image at 1/8th of its original size, using bilinear filtering without and with mipmaps; the image has been expanded for clarity:

Bilinear filtering without mipmaps

Bilinear filtering with mipmaps

The version using mipmaps has vastly more detail. Because of the pre-processing of the image into separate levels, all of the texels end up getting used in the final image.
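Each mipmap level is half the size of the one before it, down to a 1x1 level, so a full chain has floor(log2(max(width, height))) + 1 levels. Here's a small sketch of that count (illustrative only; glGenerateMipmap does this for you):

```java
// Count the mipmap levels in a full chain: halve the largest dimension
// repeatedly until it reaches 1, counting the base level as level 0.
class Mipmaps {
    static int levelCount(int width, int height) {
        int size = Math.max(width, height);
        int levels = 1;            // the base level itself
        while (size > 1) {
            size >>= 1;            // each level is half the previous size
            levels++;
        }
        return levels;
    }
}
```

A 256x256 texture, for example, has a chain of ten levels: 256, 128, 64, 32, 16, 8, 4, 2, and 1, plus the base level counted as level 0.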

Trilinear filtering

When using mipmaps with bilinear filtering, sometimes a noticeable jump or line can be seen in the rendered scene where OpenGL switches between different mipmap levels of the texture. This will be pointed out a bit further below when comparing the different OpenGL texture filtering modes.

Trilinear filtering solves this problem by also interpolating between the different mipmap levels, so that a total of 8 texels will be used to interpolate the final pixel value, resulting in a smoother image.
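The extra step trilinear filtering adds is just a linear blend between the bilinear results of the two nearest mipmap levels. A minimal sketch, with illustrative names:

```java
// Blend the bilinearly filtered values from two adjacent mipmap levels.
// 'frac' is how far the required level of detail sits between them
// (0 = entirely the nearer level, 1 = entirely the farther level).
class Trilinear {
    static float blend(float nearerLevel, float fartherLevel, float frac) {
        return nearerLevel + (fartherLevel - nearerLevel) * frac;
    }
}
```

Because the blend factor changes smoothly with distance, the visible jump between mipmap levels disappears.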

#### OpenGL texture filtering modes

OpenGL has two parameters that can be set:

• GL_TEXTURE_MIN_FILTER
• GL_TEXTURE_MAG_FILTER

These correspond to the minification and magnification described earlier. GL_TEXTURE_MIN_FILTER accepts the following options:

• GL_NEAREST
• GL_LINEAR
• GL_NEAREST_MIPMAP_NEAREST
• GL_NEAREST_MIPMAP_LINEAR
• GL_LINEAR_MIPMAP_NEAREST
• GL_LINEAR_MIPMAP_LINEAR

GL_TEXTURE_MAG_FILTER accepts the following options:

• GL_NEAREST
• GL_LINEAR

GL_NEAREST corresponds to nearest-neighbour rendering, GL_LINEAR corresponds to bilinear filtering, GL_LINEAR_MIPMAP_NEAREST corresponds to bilinear filtering with mipmaps, and GL_LINEAR_MIPMAP_LINEAR corresponds to trilinear filtering. Graphical examples and further explanation of the most common options are visible further down in this lesson.
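As a summary of that mapping, here's a sketch pairing each filtering style used in this lesson with its min/mag parameter values. In real code you'd use GLES20.GL_NEAREST and friends; the hex values below are the standard OpenGL ES 2 constants, written out so the sketch stands alone. The helper itself is illustrative, not part of the lesson's code.

```java
// Map the three filtering styles from this lesson to {minFilter, magFilter} pairs.
class FilterModes {
    static final int GL_NEAREST = 0x2600;
    static final int GL_LINEAR = 0x2601;
    static final int GL_LINEAR_MIPMAP_NEAREST = 0x2701;
    static final int GL_LINEAR_MIPMAP_LINEAR = 0x2703;

    static int[] forStyle(String style) {
        switch (style) {
            case "nearest":   return new int[] { GL_NEAREST, GL_NEAREST };
            case "bilinear":  return new int[] { GL_LINEAR_MIPMAP_NEAREST, GL_LINEAR };
            case "trilinear": return new int[] { GL_LINEAR_MIPMAP_LINEAR, GL_LINEAR };
            default: throw new IllegalArgumentException("Unknown style: " + style);
        }
    }
}
```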

How to set a texture filtering mode

We first need to bind the texture, then we can set the appropriate filter parameter on that texture:

```GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureHandle);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, filter);```

How to generate mipmaps

This is really easy. After loading the texture into OpenGL (See Android Lesson Four: Introducing Basic Texturing for more information on how to do this), while the texture is still bound, we can simply call:

`GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);`

This will generate all of the mipmap levels for us, and these levels will get automatically used depending on the texture filter set.

#### How does it look?

Here are some screenshots of the most common combinations available. The effects are more dramatic when you see it in motion, so I recommend downloading the app and giving it a shot.

Nearest-neighbour rendering

This mode is reminiscent of older software-rendered 3D games.

GL_TEXTURE_MIN_FILTER = GL_NEAREST
GL_TEXTURE_MAG_FILTER = GL_NEAREST

Bilinear filtering, with mipmaps

This mode was used by many of the first games that supported 3D acceleration and is an efficient way of smoothing textures on Android phones today.

GL_TEXTURE_MIN_FILTER = GL_LINEAR_MIPMAP_NEAREST
GL_TEXTURE_MAG_FILTER = GL_LINEAR

It’s hard to see on this static image, but when things are in motion, you might notice horizontal bands where the rendered pixels switch between mipmap levels.

Trilinear filtering

This mode improves on the render quality of bilinear filtering with mipmaps, by interpolating between the mipmap levels.

GL_TEXTURE_MIN_FILTER = GL_LINEAR_MIPMAP_LINEAR
GL_TEXTURE_MAG_FILTER = GL_LINEAR

The pixels are completely smoothed between near and far distances; in fact, the textures may now appear too smooth at oblique angles. Anisotropic filtering is a more advanced technique that is supported by some mobile GPUs and can be used to improve the final results beyond what trilinear filtering can deliver.

##### Further exercises

What sort of effects can you achieve with the other modes? For example, when would you use something like GL_NEAREST_MIPMAP_LINEAR?

#### Wrapping up

The full source code for this lesson can be downloaded from the project site on GitHub.

A compiled version of the lesson can also be downloaded directly from the Android Market:

I hope you enjoyed this lesson, and thanks for stopping by!

## Android Lesson Five: An Introduction to Blending

In this lesson we’ll take a look at the basics of blending in OpenGL. We’ll look at how to turn blending on and off, how to set different blending modes, and how different blending modes mimic real-life effects. In a later lesson, we’ll also look at how to use the alpha channel, how to use the depth buffer to render both translucent and opaque objects in the same scene, as well as when we need to sort objects by depth, and why.

We’ll also look at how to listen to touch events, and then change our rendering state based on that.

#### Assumptions and prerequisites

Each lesson in this series builds on the one before it. However, for this lesson it will be enough if you understand Android Lesson One: Getting Started. Although the code is based on the preceding lesson, the lighting and texturing portion has been removed for this lesson so we can focus on the blending.

#### Blending

Blending is the act of combining one color with a second in order to get a third color. We see blending all of the time in the real world: when light passes through glass, when it bounces off of a surface, and when a light source itself is superimposed on the background, such as the flare we see around a lit streetlight at night.

OpenGL has different blending modes we can use to reproduce this effect. In OpenGL, blending occurs in a late stage of the rendering process: it happens once the fragment shader has calculated the final output color of a fragment and it’s about to be written to the frame buffer. Normally that fragment just overwrites whatever was there before, but if blending is turned on, then that fragment is blended with what was there before.

By default, here’s what the OpenGL blending equation looks like when glBlendEquation() is set to the default, GL_FUNC_ADD:

output = (source factor * source fragment) + (destination factor * destination fragment)

There are also two other modes available in OpenGL ES 2, GL_FUNC_SUBTRACT and GL_FUNC_REVERSE_SUBTRACT. These may be covered in a future tutorial, however, I get an UnsupportedOperationException on the Nexus S when I try to call this function so it’s possible that this is not actually supported on the Android implementation. This isn’t the end of the world since there is plenty you can do already with GL_FUNC_ADD.

The source factor and destination factor are set using the function glBlendFunc(). An overview of a few common blend factors will be given below; more information, as well as an enumeration of the different possible factors, is available at the Khronos online manual:

The documentation appears better in Firefox or if you have a MathML extension installed.

##### Clamping

OpenGL expects the input to be clamped in the range [0, 1], and the output will also be clamped to the range [0, 1]. What this means in practice is that colors can shift in hue when you are doing blending. If you keep adding red (RGB = 1, 0, 0) to the frame buffer, the final color will stay red. However, if you add in just a little bit of green, so that you are adding (RGB = 1, 0.1, 0) to the frame buffer, you will end up with yellow even though you started with a reddish hue! You can see this effect in the demo for this lesson when blending is turned on: the colors become oversaturated where different colors overlap.
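The hue shift described above is easy to reproduce outside of OpenGL. This sketch (illustrative only) applies the clamped additive blend per channel; adding a slightly greenish red repeatedly saturates the red channel first, and the green channel catches up:

```java
// Additive blend with clamping to [0, 1], applied per channel.
class ClampedAdd {
    static float[] add(float[] dst, float[] src) {
        float[] out = new float[3];
        for (int i = 0; i < 3; i++) {
            out[i] = Math.min(1.0f, dst[i] + src[i]); // clamp like the frame buffer
        }
        return out;
    }
}
```

Adding (1, 0.1, 0) to a black frame buffer ten times ends at (1, 1, 0): pure yellow, even though each contribution was a reddish hue.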

#### Different types of blending and how they relate to different effects

Additive blending is the type of blending we do when we add different colors together and display the result. This is the way that our vision works together with light, and this is how we can perceive millions of different colors on our monitors — they are really just blending three different primary colors together.

This type of blending has many uses in 3D rendering, such as in particle effects which appear to give off light or overlays such as the corona around a light, or a glow effect around a light saber.

Additive blending can be specified by calling glBlendFunc(GL_ONE, GL_ONE). This results in the blending equation output = (1 * source fragment) + (1 * destination fragment), which collapses into output = source fragment + destination fragment.

##### Multiplicative blending

Multiplicative blending (also known as modulation) is another useful blending mode that represents the way that light behaves when it passes through a color filter, or bounces off of a lit object and enters our eyes. A red object appears red to us because when white light strikes the object, blue and green light is absorbed. Only the red light is reflected back toward our eyes. In the example to the left, we can see a surface that reflects some red and some green, but very little blue.

When multi-texturing is not available, multiplicative blending is used to implement lightmaps in games. The texture is multiplied by the lightmap in order to fill in the lit and shadowed areas.

Multiplicative blending can be specified by calling glBlendFunc(GL_DST_COLOR, GL_ZERO). This results in the blending equation output = (destination fragment * source fragment) + (0 * destination fragment), which collapses into output = source fragment * destination fragment.
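The collapsed equation is just a per-channel product, which this small sketch (illustrative only) makes concrete:

```java
// Multiplicative (modulation) blend: output = source * destination, per channel.
// This is the collapsed form of glBlendFunc(GL_DST_COLOR, GL_ZERO).
class Modulate {
    static float[] multiply(float[] src, float[] dst) {
        return new float[] {
            src[0] * dst[0],
            src[1] * dst[1],
            src[2] * dst[2]
        };
    }
}
```

Passing white light (1, 1, 1) through a red filter (1, 0, 0) yields (1, 0, 0): the green and blue components are absorbed, just like the physical example above.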

##### Interpolative blending

Interpolative blending combines multiplication and addition to give an interpolative effect. Unlike addition and modulation by themselves, this blending mode can also be draw-order dependent, so in some cases the results will only be correct if you draw the furthest translucent objects first, and then the closer ones afterwards. Even sorting wouldn’t be perfect, since it’s possible for triangles to overlap and intersect, but the resulting artifacts may be acceptable.

Interpolation is often useful to blend adjacent surfaces together, as well as do effects like tinted glass, or fade-in/fade-out. The image on the left shows two textures (textures from public domain textures) blended together using interpolation.

Interpolation is specified by calling glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). This results in the blending equation output = (source alpha * source fragment) + ((1 – source alpha) * destination fragment). Here’s an example:

Imagine that we’re drawing a green (0r, 1g, 0b) object that is only 25% opaque. The object currently on the screen is red (1r, 0g, 0b).

output = (source factor * source fragment) + (destination factor * destination fragment)
output = (source alpha * source fragment) + ((1 – source alpha) * destination fragment)
output = (0.25 * (0r, 1g, 0b)) + (0.75 * (1r, 0g, 0b))
output = (0r, 0.25g, 0b) + (0.75r, 0g, 0b)
output = (0.75r, 0.25g, 0b)

Notice that we don’t make any reference to the destination alpha, so the frame buffer itself doesn’t need an alpha channel, which gives us more bits for the color channels.
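The worked arithmetic above can be double-checked with a short sketch (illustrative only, not the lesson's code):

```java
// Interpolative blend: output = srcAlpha * src + (1 - srcAlpha) * dst, per channel.
// This is the collapsed form of glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
class Interpolate {
    static float[] blend(float[] src, float srcAlpha, float[] dst) {
        float[] out = new float[3];
        for (int i = 0; i < 3; i++) {
            out[i] = srcAlpha * src[i] + (1f - srcAlpha) * dst[i];
        }
        return out;
    }
}
```

Blending a 25% opaque green (0, 1, 0) over red (1, 0, 0) gives (0.75, 0.25, 0), matching the example worked out above.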

#### Using blending

For our lesson, our demo will show the cubes as if they were emitters of light, using additive blending. Something that emits light doesn’t need to be lit by other light sources, so there are no lights in this demo. I’ve also removed the texture, although it could have been neat to use one. The shader program for this lesson will be simple; we just need a shader that will pass out the color given to it.

```uniform mat4 u_MVPMatrix;		// A constant representing the combined model/view/projection matrix.

attribute vec4 a_Position;		// Per-vertex position information we will pass in.
attribute vec4 a_Color;			// Per-vertex color information we will pass in.

varying vec4 v_Color;			// This will be passed into the fragment shader.

// The entry point for our vertex shader.
void main()
{
// Pass through the color.
v_Color = a_Color;

// gl_Position is a special variable used to store the final position.
// Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
gl_Position = u_MVPMatrix * a_Position;
}```
```precision mediump float;       	// Set the default precision to medium. We don't need as high of a
// precision in the fragment shader.
varying vec4 v_Color;          	// This is the color from the vertex shader interpolated across the
// triangle per fragment.

// The entry point for our fragment shader.
void main()
{
// Pass through the color
gl_FragColor = v_Color;
}```
##### Turning blending on

Turning blending on is as simple as making these function calls:

```// No culling of back faces
GLES20.glDisable(GLES20.GL_CULL_FACE);

// No depth testing
GLES20.glDisable(GLES20.GL_DEPTH_TEST);

// Enable blending
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE);```

We turn off the culling of back faces because if a cube is translucent, we can now see its back sides. We should draw them; otherwise, the cube might look quite strange. We turn off depth testing for the same reason.

##### Listening to touch events, and acting on them

You’ll notice when you run the demo that it’s possible to turn blending on and off by tapping on the screen. See the article “Listening to Android Touch Events, and Acting on Them” for more information.

##### Further exercises

The demo only uses additive blending at the moment. Try changing it to interpolative blending and re-adding the lights and textures. Does the draw order matter if you’re only drawing two translucent textures on a black background? When would it matter?

#### Wrapping up

The full source code for this lesson can be downloaded from the project site on GitHub.

A compiled version of the lesson can also be downloaded directly from the Android Market:

As always, please don’t hesitate to leave feedbacks or comments, and thanks for stopping by!

## Android Lesson Four: Introducing Basic Texturing

This is the fourth tutorial in our Android series. In this lesson, we’re going to add to what we learned in lesson three and learn how to add texturing. We’ll look at how to read an image from the application resources, load this image into OpenGL ES, and display it on the screen.

Follow along with me and you’ll understand basic texturing in no time flat!

#### Assumptions and prerequisites

Each lesson in this series builds on the lesson before it. This lesson is an extension of lesson three, so please be sure to review that lesson before continuing on. Here are the previous lessons in the series:

#### The basics of texturing

The art of texture mapping (along with lighting) is one of the most important parts of building up a realistic-looking 3D world. Without texture mapping, everything is smoothly shaded and looks quite artificial, like an old console game from the 90s.

The first games to start heavily using textures, such as Doom and Duke Nukem 3D, were able to greatly enhance the realism of the gameplay through the added visual impact — these were games that could start to truly scare us if played at night in the dark.

Here’s a look at a scene, without and with texturing:

Per-fragment lighting only (left); the same scene with texturing added (right).

In the image on the left, the scene is lit with per-pixel lighting and colored; otherwise, the scene appears very smooth. There are not many places in real life where we would walk into a room full of smooth-shaded objects like this cube. In the image on the right, the same scene has now also been textured. The ambient lighting has also been increased, because the use of textures darkens the overall scene, so that you can also see the effects of texturing on the side cubes. The cubes have the same number of polygons as before, but they appear a lot more detailed with the new texture. For those who are curious, the texture source is from public domain textures.

#### Texture coordinates

In OpenGL, texture coordinates are sometimes referred to as (s, t) coordinates instead of (x, y). (s, t) represents a texel on the texture, which is then mapped to the polygon. Another thing to note is that these texture coordinates are like other OpenGL coordinates: the t (or y) axis points upwards, so values get higher the higher you go.

In most computer images, the y axis points downwards. This means that the top-left corner of the image is (0, 0), and the y values increase the lower you go. In other words, the y axis is flipped between OpenGL’s coordinate system and most computer images, and this is something you need to take into account.
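The adjustment itself is a one-line flip of the vertical axis. A minimal sketch, with an illustrative name, assuming coordinates normalized to [0, 1]:

```java
// Convert a normalized image-space y coordinate (y pointing down, top = 0)
// into an OpenGL texture t coordinate (t pointing up, bottom = 0).
class TexCoord {
    static float imageYToT(float imageY) {
        return 1.0f - imageY; // the top row of the image (y = 0) becomes t = 1
    }
}
```

This is exactly the adjustment applied later in this lesson when the cube's texture coordinates are defined.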

#### The basics of texture mapping

In this lesson, we will look at regular 2D textures (GL_TEXTURE_2D) with red, green, and blue color information (GL_RGB). OpenGL ES also offers other texture modes that let you do different and more specialized effects. We’ll look at point sampling using GL_NEAREST. GL_LINEAR and mip-mapping will be covered in a future lesson.

Let’s start getting into the code and see how to start using basic texturing in Android!

We’re going to take our per-pixel lighting shader from the previous lesson, and add texturing support. Here are the new changes:

```attribute vec2 a_TexCoordinate; // Per-vertex texture coordinate information we will pass in.

...

varying vec2 v_TexCoordinate;   // This will be passed into the fragment shader.

...
// Pass through the texture coordinate.
v_TexCoordinate = a_TexCoordinate;```

In the vertex shader, we add a new attribute of type vec2 (an array with two components) that will take in texture coordinate information as input. This will be per-vertex, like the position, color, and normal data. We also add a new varying that will pass this data through to the fragment shader via linear interpolation across the surface of the triangle.

```uniform sampler2D u_Texture;    // The input texture.

...

varying vec2 v_TexCoordinate; // Interpolated texture coordinate per fragment.

...

diffuse = diffuse * (1.0 / (1.0 + (0.10 * distance)));

...

diffuse = diffuse + 0.3;

...

// Multiply the color by the diffuse illumination level and texture value to get final output color.

gl_FragColor = (v_Color * diffuse * texture2D(u_Texture, v_TexCoordinate));```

We add a new uniform of type sampler2D to represent the actual texture data (as opposed to texture coordinates). The varying passes in the interpolated texture coordinates from the vertex shader, and we call texture2D(texture, textureCoordinate) to read in the value of the texture at the current coordinate. We then take this value and multiply it with the other terms to get the final output color.

Adding in a texture this way darkens the overall scene somewhat, so we also boost up the ambient lighting a bit and reduce the lighting attenuation.

```	public static int loadTexture(final Context context, final int resourceId)
{
final int[] textureHandle = new int[1];

GLES20.glGenTextures(1, textureHandle, 0);

if (textureHandle[0] != 0)
{
final BitmapFactory.Options options = new BitmapFactory.Options();
options.inScaled = false;	// No pre-scaling

final Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resourceId, options);

// Bind to the texture in OpenGL
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);

// Set filtering
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);

// Load the bitmap into the bound texture.
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);

// Recycle the bitmap, since its data has been loaded into OpenGL.
bitmap.recycle();
}

if (textureHandle[0] == 0)
{
throw new RuntimeException("Error loading texture.");
}

return textureHandle[0];
}```

This bit of code will read in a graphics file from your Android res folder and load it into OpenGL. I’ll explain what each part does.

We first need to ask OpenGL to create a new handle for us. This handle serves as a unique identifier, and we use it whenever we want to refer to the same texture in OpenGL.

```final int[] textureHandle = new int[1];
GLES20.glGenTextures(1, textureHandle, 0);```

glGenTextures() can be used to generate multiple handles at the same time; here we generate just one.

Once we have a texture handle, we use it to load the texture. First, we need to get the texture in a format that OpenGL will understand. We can’t just feed it raw data from a PNG or JPG, because it won’t understand that. The first step that we need to do is to decode the image file into an Android Bitmap object:

```final BitmapFactory.Options options = new BitmapFactory.Options();
options.inScaled = false;	// No pre-scaling

final Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resourceId, options);```

By default, Android applies pre-scaling to bitmaps depending on the resolution of your device and which resource folder you placed the image in. We don’t want Android to scale our bitmap at all, so to be sure, we set inScaled to false.

```// Bind to the texture in OpenGL
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);

// Set filtering
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);```

We then bind to the texture and set a couple of parameters. Binding to a texture tells OpenGL that subsequent OpenGL calls should affect this texture. We set the default filters to GL_NEAREST, which is the quickest and also the roughest form of filtering. All it does is pick the nearest texel at each point in the screen, which can lead to graphical artifacts and aliasing.

• GL_TEXTURE_MIN_FILTER — This tells OpenGL what type of filtering to apply when drawing the texture smaller than the original size in pixels.
• GL_TEXTURE_MAG_FILTER — This tells OpenGL what type of filtering to apply when magnifying the texture beyond its original size in pixels.
```// Load the bitmap into the bound texture.
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);

// Recycle the bitmap, since its data has been loaded into OpenGL.
bitmap.recycle();```

Android has a very useful utility to load bitmaps directly into OpenGL. Once you’ve read in a resource into a Bitmap object, GLUtils.texImage2D() will take care of the rest. Here’s the method signature:

public static void texImage2D (int target, int level, Bitmap bitmap, int border)

We want a regular 2D bitmap, so we pass in GL_TEXTURE_2D as the first parameter. The second parameter is for mip-mapping, and lets you specify the image to use at each level. We’re not specifying mipmap levels manually here, so we pass in 0 for the base level. We pass in the bitmap, and since we’re not using a border, we pass in 0 for that as well.

We then call recycle() on the original bitmap, which is an important step to free up memory. The texture has been loaded into OpenGL, so we don’t need to keep a copy of it lying around. Yes, Android apps run under a Dalvik VM that performs garbage collection, but Bitmap objects contain data that resides in native memory and they take a few cycles to be garbage collected if you don’t recycle them explicitly. This means that you could actually crash with an out of memory error if you forget to do this, even if you no longer hold any references to the bitmap.

##### Applying the texture to our scene

First, we need to add various members to the class to hold stuff we need for our texture:

```/** Store our model data in a float buffer. */
private final FloatBuffer mCubeTextureCoordinates;

/** This will be used to pass in the texture. */
private int mTextureUniformHandle;

/** This will be used to pass in model texture coordinate information. */
private int mTextureCoordinateHandle;

/** Size of the texture coordinate data in elements. */
private final int mTextureCoordinateDataSize = 2;

/** This is a handle to our texture data. */
private int mTextureDataHandle;```

We basically need to add new members to track what we added to the shaders, as well as hold a reference to our texture.

###### Defining the texture coordinates

We define our texture coordinates in the constructor:

```// S, T (or X, Y)
// Texture coordinate data.
// Because images have a Y axis pointing downward (values increase as you move down the image) while
// OpenGL has a Y axis pointing upward, we adjust for that here by flipping the Y axis.
// What's more is that the texture coordinates are the same for every face.
final float[] cubeTextureCoordinateData =
{
// Front face
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f,

...

```

The coordinate data might look a little confusing here. If you go back and look at how the position points are defined in Lesson 3, you’ll see that we define two triangles per face of the cube. The points are defined like this:

(Triangle 1)
Upper-left,
Lower-left,
Upper-right,

(Triangle 2)
Lower-left,
Lower-right,
Upper-right

The texture coordinates are pretty much the position coordinates for the front face, but with the Y axis flipped to compensate for the fact that in graphics images, the Y axis points in the opposite direction of OpenGL’s Y axis.

###### Setting up the texture

We load the texture in the onSurfaceCreated() method.

```@Override
public void onSurfaceCreated(GL10 glUnused, EGLConfig config)
{

...

// The below glEnable() call is a holdover from OpenGL ES 1, and is not needed in OpenGL ES 2.
// Enable texture mapping
// GLES20.glEnable(GLES20.GL_TEXTURE_2D);

...

new String[] {"a_Position",  "a_Color", "a_Normal", "a_TexCoordinate"});

...```

We pass in “a_TexCoordinate” as a new attribute to bind to in our shader program, and we load in our texture using the loadTexture() method we created above.

###### Using the texture

We also add some code to the onDrawFrame(GL10 glUnused) method.

```@Override
public void onDrawFrame(GL10 glUnused)
{

...

mTextureUniformHandle = GLES20.glGetUniformLocation(mProgramHandle, "u_Texture");
mTextureCoordinateHandle = GLES20.glGetAttribLocation(mProgramHandle, "a_TexCoordinate");

// Set the active texture unit to texture unit 0.
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);

// Bind the texture to this unit.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureDataHandle);

// Tell the texture uniform sampler to use this texture in the shader by binding to texture unit 0.
GLES20.glUniform1i(mTextureUniformHandle, 0);```

We get the shader locations for the texture data and texture coordinates. In OpenGL, textures need to be bound to texture units before they can be used in rendering. A texture unit is what reads in the texture and actually passes it through the shader so it can be displayed on the screen. Different graphics chips have a different number of texture units, so you’ll need to check if additional texture units exist before using them.

First, we tell OpenGL that we want to set the active texture unit to the first unit, texture unit 0. Our call to glBindTexture() will then automatically bind the texture to the first texture unit. Finally, we tell OpenGL that we want to bind the first texture unit to the mTextureUniformHandle, which refers to “u_Texture” in the fragment shader.

In short:

1. Set the active texture unit.
2. Bind a texture to this unit.
3. Assign this unit to a texture uniform in the fragment shader.

Repeat for as many textures as you need.

##### Further exercises

Once you’ve made it this far, you’re done! Surely that wasn’t as bad as you expected… or was it? 😉 As your next exercise, try to add multi-texturing by loading in another texture, binding it to another unit, and using it in the shader.

#### Review

Here is a review of the full shader code, as well as a new helper function that we added to read in the shader code from the resource folder instead of storing it as a Java String:

```uniform mat4 u_MVPMatrix;		// A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix;		// A constant representing the combined model/view matrix.

attribute vec4 a_Position;		// Per-vertex position information we will pass in.
attribute vec4 a_Color;			// Per-vertex color information we will pass in.
attribute vec3 a_Normal;		// Per-vertex normal information we will pass in.
attribute vec2 a_TexCoordinate; // Per-vertex texture coordinate information we will pass in.

varying vec3 v_Position;		// This will be passed into the fragment shader.
varying vec4 v_Color;			// This will be passed into the fragment shader.
varying vec3 v_Normal;			// This will be passed into the fragment shader.
varying vec2 v_TexCoordinate;   // This will be passed into the fragment shader.

// The entry point for our vertex shader.
void main()
{
// Transform the vertex into eye space.
v_Position = vec3(u_MVMatrix * a_Position);

// Pass through the color.
v_Color = a_Color;

// Pass through the texture coordinate.
v_TexCoordinate = a_TexCoordinate;

// Transform the normal's orientation into eye space.
v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));

// gl_Position is a special variable used to store the final position.
// Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
gl_Position = u_MVPMatrix * a_Position;
}```
```precision mediump float;       	// Set the default precision to medium. We don't need as high of a
// precision in the fragment shader.
uniform vec3 u_LightPos;       	// The position of the light in eye space.
uniform sampler2D u_Texture;    // The input texture.

varying vec3 v_Position;		// Interpolated position for this fragment.
varying vec4 v_Color;          	// This is the color from the vertex shader interpolated across the
// triangle per fragment.
varying vec3 v_Normal;         	// Interpolated normal for this fragment.
varying vec2 v_TexCoordinate;   // Interpolated texture coordinate per fragment.

// The entry point for our fragment shader.
void main()
{
// Will be used for attenuation.
float distance = length(u_LightPos - v_Position);

// Get a lighting direction vector from the light to the vertex.
vec3 lightVector = normalize(u_LightPos - v_Position);

// Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
// pointing in the same direction then it will get max illumination.
float diffuse = max(dot(v_Normal, lightVector), 0.0);

// Attenuate the light based on distance.
diffuse = diffuse * (1.0 / (1.0 + (0.10 * distance)));

// Add ambient lighting.
diffuse = diffuse + 0.3;

// Multiply the color by the diffuse illumination level and texture value to get final output color.
gl_FragColor = (v_Color * diffuse * texture2D(u_Texture, v_TexCoordinate));
}```
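To see what the fragment shader’s lighting math produces, here is a small plain-Java sketch that mirrors the diffuse calculation above on the CPU: a Lambertian term, distance-based linear attenuation, plus the constant 0.3 ambient contribution. The class and method names are ours, for illustration only; this is not part of the lesson code.

```java
// A plain-Java mirror of the fragment shader's lighting math above.
// Class and method names are ours, for illustration only.
public class LightingSketch {
    // Dot product of two 3-component vectors.
    static float dot(float[] a, float[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    static float length(float[] v) {
        return (float) Math.sqrt(dot(v, v));
    }

    static float[] sub(float[] a, float[] b) {
        return new float[] { a[0] - b[0], a[1] - b[1], a[2] - b[2] };
    }

    static float[] normalize(float[] v) {
        float len = length(v);
        return new float[] { v[0] / len, v[1] / len, v[2] / len };
    }

    // Mirrors the fragment shader: Lambertian term, linear attenuation,
    // plus a constant 0.3 ambient contribution.
    public static float illumination(float[] lightPos, float[] position, float[] normal) {
        float distance = length(sub(lightPos, position));
        float[] lightVector = normalize(sub(lightPos, position));
        float diffuse = Math.max(dot(normal, lightVector), 0.0f);
        diffuse = diffuse * (1.0f / (1.0f + (0.10f * distance)));
        return diffuse + 0.3f;
    }

    public static void main(String[] args) {
        // Fragment facing the light head-on from 2 units away:
        // diffuse = 1.0 * 1/(1 + 0.2), then + 0.3 ambient.
        System.out.println(illumination(
            new float[] { 0f, 0f, 2f },    // light position in eye space
            new float[] { 0f, 0f, 0f },    // fragment position
            new float[] { 0f, 0f, 1f })); // surface normal
    }
}
```

Note that the result can exceed 1.0 near the light; the GPU simply clamps the final color to the displayable range.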
##### How to read in the shader from a text file in the raw resources folder
```public static String readTextFileFromRawResource(final Context context,
final int resourceId)
{
final InputStream inputStream = context.getResources().openRawResource(
resourceId);
final InputStreamReader inputStreamReader = new InputStreamReader(
inputStream);
final BufferedReader bufferedReader = new BufferedReader(
inputStreamReader);

String nextLine;
final StringBuilder body = new StringBuilder();

try
{
while ((nextLine = bufferedReader.readLine()) != null)
{
body.append(nextLine);
body.append('\n');
}
}
catch (IOException e)
{
return null;
}

return body.toString();
}```
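The same read loop can be tried off-device by swapping the raw-resource stream for any Reader — here a StringReader. This harness (and its class name) is ours, not part of the lesson:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

// Off-device sketch of the same read loop, using a StringReader in place
// of the raw-resource InputStream. The class name is ours.
public class ShaderReaderSketch {
    public static String readAll(final String source) {
        final BufferedReader bufferedReader = new BufferedReader(new StringReader(source));

        String nextLine;
        final StringBuilder body = new StringBuilder();

        try {
            // Append each line, re-adding the newline that readLine() strips.
            while ((nextLine = bufferedReader.readLine()) != null) {
                body.append(nextLine);
                body.append('\n');
            }
        } catch (IOException e) {
            return null;
        }

        return body.toString();
    }

    public static void main(String[] args) {
        System.out.println(readAll("void main()\n{\n}"));
    }
}
```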

#### Wrapping up

The full source code for this lesson can be downloaded from the project site on GitHub.

A compiled version of the lesson can also be downloaded directly from the Android Market:

As always, please don’t hesitate to leave feedback or comments, and thanks for stopping by!

## Android Lesson Three: Moving to Per-Fragment Lighting

Welcome to the third tutorial for Android! In this lesson, we’re going to take everything we learned in lesson two and learn how to apply the same lighting technique on a per-pixel basis. We will be able to see the difference, even when using standard diffuse lighting with simple cubes.

#### Assumptions and prerequisites

Each lesson in this series builds on the lesson before it. This lesson is an extension of lesson two, so please be sure to review that lesson before continuing on. Here are the previous lessons in the series:

#### What is per-pixel lighting?

Per-pixel lighting became practical in games relatively recently, with the advent of shaders. Many famous older games such as the original Half-Life were developed before the era of shaders and featured mainly static lighting, with some tricks for simulating dynamic lighting using either per-vertex lights (otherwise known as Gouraud shading) or other techniques, such as dynamic lightmaps.

Lightmaps can give a very nice result, sometimes even better than shaders alone, since expensive lighting calculations can be precomputed. The downside is that they take up a lot of memory, and doing dynamic lighting with them is limited to simple calculations.

With shaders, a lot of these calculations can now be offloaded to the GPU, which allows for many more effects to be done in real-time.

#### Moving from per-vertex to per-fragment lighting

In this lesson, we’re going to look at the same lighting code for a per-vertex solution and a per-fragment solution. Although I have referred to this type of lighting as per-pixel, in OpenGL ES we actually work with fragments, and several fragments can contribute to the final value of a pixel.

Mobile GPUs are getting faster and faster, but performance is still a concern. For “soft” lighting such as terrain, per-vertex lighting may be good enough. Ensure you have a proper balance between quality and speed.

A significant difference between the two types of lighting can be seen in certain situations. Take a look at the following screen captures:

Per-vertex lighting; light centered between four vertices of a square. Per-fragment lighting; light centered between four vertices of a square.

In the per-vertex lighting in the left image, the front face of the cube appears as if flat-shaded, and there is no evidence of a light nearby. This is because each of the four points of the front face are more or less equidistant from the light, and the low light intensity at each of these four points is simply interpolated across the two triangles that make up the front face. The per-fragment version shows a nice highlight in comparison.

Per-vertex lighting; light at the corner of a square. Per-fragment lighting; light at the corner of a square.

The left image shows a Gouraud-shaded cube. As the light source moves near the corner of the front face of the cube, a triangle-like effect can be seen. This is because the front face is actually composed of two triangles, and as the values are interpolated in different directions across each triangle we can see the underlying geometry. The per-fragment version shows no such interpolation errors and shows a nice circular highlight near the edge.
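The flat-looking front face can be reproduced with a few lines of plain Java (a sketch with our own names, not lesson code): compute the Lambertian term at each corner of a quad facing a nearby light, then compare the interpolated center value with the value computed directly at the center, as a fragment shader would.

```java
public class InterpolationSketch {
    // Lambertian diffuse term at point (x, y) on the plane z = 0 with
    // normal (0, 0, 1), lit by a point light at lightPos.
    public static float diffuseAt(float x, float y, float[] lightPos) {
        float lx = lightPos[0] - x, ly = lightPos[1] - y, lz = lightPos[2];
        float len = (float) Math.sqrt(lx * lx + ly * ly + lz * lz);
        // dot(normal, normalize(lightVector)) with normal = (0, 0, 1)
        return Math.max(lz / len, 0.0f);
    }

    public static void main(String[] args) {
        float[] light = { 0f, 0f, 0.5f }; // light hovering over the face center

        // Per-vertex: evaluate at the four corners of a 2x2 quad, then
        // interpolate to the center. The corners are symmetric around the
        // light, so the interpolated value is just the corner value.
        float interpolatedCenter = diffuseAt(1f, 1f, light);

        // Per-fragment: evaluate directly at the center.
        float directCenter = diffuseAt(0f, 0f, light);

        System.out.println("interpolated: " + interpolatedCenter);
        System.out.println("direct:       " + directCenter);
    }
}
```

The interpolated value stays at roughly 0.33 while the true center value is 1.0 — exactly the flat front face seen in the per-vertex captures.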
##### An overview of per-vertex lighting

Let’s take a look at our shaders from lesson two; a more detailed explanation on what the shaders do can be found in that lesson.

```uniform mat4 u_MVPMatrix;     // A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix;      // A constant representing the combined model/view matrix.
uniform vec3 u_LightPos;      // The position of the light in eye space.

attribute vec4 a_Position;    // Per-vertex position information we will pass in.
attribute vec4 a_Color;       // Per-vertex color information we will pass in.
attribute vec3 a_Normal;      // Per-vertex normal information we will pass in.

varying vec4 v_Color;         // This will be passed into the fragment shader.

// The entry point for our vertex shader.
void main()
{
// Transform the vertex into eye space.
vec3 modelViewVertex = vec3(u_MVMatrix * a_Position);

// Transform the normal's orientation into eye space.
vec3 modelViewNormal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));

// Will be used for attenuation.
float distance = length(u_LightPos - modelViewVertex);

// Get a lighting direction vector from the light to the vertex.
vec3 lightVector = normalize(u_LightPos - modelViewVertex);

// Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
// pointing in the same direction then it will get max illumination.
float diffuse = max(dot(modelViewNormal, lightVector), 0.1);

// Attenuate the light based on distance.
diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance * distance)));

// Multiply the color by the illumination level. It will be interpolated across the triangle.
v_Color = a_Color * diffuse;

// gl_Position is a special variable used to store the final position.
// Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
gl_Position = u_MVPMatrix * a_Position;
}```
```precision mediump float;       // Set the default precision to medium. We don't need as high of a
// precision in the fragment shader.
varying vec4 v_Color;          // This is the color from the vertex shader interpolated across the
// triangle per fragment.

// The entry point for our fragment shader.
void main()
{
gl_FragColor = v_Color;    // Pass the color directly through the pipeline.
}```

As you can see, most of the work is being done in our vertex shader. Moving to per-fragment lighting means that our fragment shader is going to have more work to do.

##### Implementing per-fragment lighting

Here is what the code looks like after moving to per-fragment lighting.

```uniform mat4 u_MVPMatrix;      // A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix;       // A constant representing the combined model/view matrix.

attribute vec4 a_Position;     // Per-vertex position information we will pass in.
attribute vec4 a_Color;        // Per-vertex color information we will pass in.
attribute vec3 a_Normal;       // Per-vertex normal information we will pass in.

varying vec3 v_Position;       // This will be passed into the fragment shader.
varying vec4 v_Color;          // This will be passed into the fragment shader.
varying vec3 v_Normal;         // This will be passed into the fragment shader.

// The entry point for our vertex shader.
void main()
{
// Transform the vertex into eye space.
v_Position = vec3(u_MVMatrix * a_Position);

// Pass through the color.
v_Color = a_Color;

// Transform the normal's orientation into eye space.
v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));

// gl_Position is a special variable used to store the final position.
// Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
gl_Position = u_MVPMatrix * a_Position;
}```

The vertex shader is simpler than before. We have added two linearly-interpolated variables to be passed through to the fragment shader: the vertex position and the vertex normal. Both of these will be used when calculating lighting in the fragment shader.
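What “interpolated” means concretely: for each fragment, the GPU blends every varying using that fragment’s barycentric weights within the triangle. A minimal plain-Java sketch of that blend (names are ours, not lesson code):

```java
public class VaryingSketch {
    // Interpolate a per-vertex value (e.g. one component of a varying) at a
    // point inside a triangle, given the point's barycentric weights.
    // The weights are assumed to be non-negative and sum to 1.
    public static float interpolate(float v0, float v1, float v2,
                                    float w0, float w1, float w2) {
        return v0 * w0 + v1 * w1 + v2 * w2;
    }

    public static void main(String[] args) {
        // A fragment at the triangle's centroid weighs all vertices equally.
        System.out.println(interpolate(0.0f, 0.6f, 0.9f, 1f / 3f, 1f / 3f, 1f / 3f));
    }
}
```

One caveat this makes visible: interpolating three unit-length normals does not generally yield a unit-length normal, which is why fragment shaders often re-normalize before lighting.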

```precision mediump float;       // Set the default precision to medium. We don't need as high of a
// precision in the fragment shader.
uniform vec3 u_LightPos;       // The position of the light in eye space.

varying vec3 v_Position;	   // Interpolated position for this fragment.
varying vec4 v_Color;          // This is the color from the vertex shader interpolated across the
// triangle per fragment.
varying vec3 v_Normal;         // Interpolated normal for this fragment.

// The entry point for our fragment shader.
void main()
{
// Will be used for attenuation.
float distance = length(u_LightPos - v_Position);

// Get a lighting direction vector from the light to the vertex.
vec3 lightVector = normalize(u_LightPos - v_Position);

// Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
// pointing in the same direction then it will get max illumination.
float diffuse = max(dot(v_Normal, lightVector), 0.1);

// Attenuate the light based on distance.
diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance * distance)));

// Multiply the color by the diffuse illumination level to get final output color.
gl_FragColor = v_Color * diffuse;
}```

With per-fragment lighting, our fragment shader has a lot more work to do. We have basically moved the Lambertian calculation and attenuation to the per-pixel level, which gives us more realistic lighting without needing to add more vertices.
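The attenuation factor in the shader above, 1 / (1 + 0.25·d²), falls off much faster than the linear term used earlier. A quick plain-Java check (class name is ours):

```java
public class AttenuationSketch {
    // The fragment shader's attenuation factor: 1 / (1 + 0.25 * d^2).
    public static float attenuation(float distance) {
        return 1.0f / (1.0f + (0.25f * distance * distance));
    }

    public static void main(String[] args) {
        // At d = 0 the light is at full strength; at d = 2 it is halved;
        // by d = 6 only a tenth remains.
        System.out.println(attenuation(0f)); // 1.0
        System.out.println(attenuation(2f)); // 0.5
        System.out.println(attenuation(6f)); // 0.1
    }
}
```

The constant 0.25 is simply a tuning value for this scene; a larger coefficient makes the light dim more aggressively with distance.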

#### Further exercises

Could we calculate the distance in the vertex shader instead and simply pass that on to the pixel shader using linear interpolation via a varying?

#### Wrapping up

The full source code for this lesson can be downloaded from the project site on GitHub.

A compiled version of the lesson can also be downloaded directly from the Android Market:

As always, please don’t hesitate to leave feedback or comments, and thanks for stopping by!