## OpenGL Roundup, April 29, 2014: Milestones

Two big names in the game development community are celebrating their achievements as they reach important milestones and bring their work to the community:

libGDX 1.0 released

Zero to 95,688: How I wrote Game Programming Patterns

Congrats to you guys, and thanks for sharing your work with the world!

In other news, I’d like to thank El androide libre and Mobile Phone Development for linking to A Performance Comparison Between Java and C on the Nexus 5, which turned out to be more controversial than expected! A member of the Google team has kindly offered to help out with bringing the benchmark to RenderScript, so that will be interesting to see.

## Finishing Up Our Native Air Hockey Project With Touch Events and Basic Collision Detection

In this post in the air hockey series, we're going to wrap up our air hockey project by adding touch event handling and basic collision detection, with support for Android, iOS, and emscripten.

### Prerequisites

This lesson continues the air hockey project series, building upon the code from GitHub for ‘article-3-matrices-and-objects’. Here are the previous posts in this series:

### Updating our game code for touch interaction

The first thing we’ll do is update the core to add touch interaction to the game. We’ll first need to add some helper functions to a new core file called geometry.h.

#### geometry.h

Let’s start off with the following code:

```
#include "linmath.h"
#include <string.h>

typedef struct {
    vec3 point;
    vec3 vector;
} Ray;

typedef struct {
    vec3 point;
    vec3 normal;
} Plane;

typedef struct {
    vec3 center;
    float radius;
} Sphere;
```

These `typedef`s build upon linmath.h to define the basic geometric types that we’ll use in our code. Let’s wrap up geometry.h:

```
static inline int sphere_intersects_ray(Sphere sphere, Ray ray);
static inline float distance_between(vec3 point, Ray ray);
static inline void ray_intersection_point(vec3 result, Ray ray, Plane plane);

static inline int sphere_intersects_ray(Sphere sphere, Ray ray) {
    if (distance_between(sphere.center, ray) < sphere.radius)
        return 1;
    return 0;
}

static inline float distance_between(vec3 point, Ray ray) {
    vec3 p1_to_point;
    vec3_sub(p1_to_point, point, ray.point);
    vec3 p2_to_point;
    vec3 translated_ray_point;
    vec3_add(translated_ray_point, ray.point, ray.vector);
    vec3_sub(p2_to_point, point, translated_ray_point);

    // The length of the cross product gives the area of an imaginary
    // parallelogram having the two vectors as sides. A parallelogram can be
    // thought of as consisting of two triangles, so this is the same as
    // twice the area of the triangle defined by the two vectors.
    // http://en.wikipedia.org/wiki/Cross_product#Geometric_meaning
    vec3 cross_product;
    vec3_mul_cross(cross_product, p1_to_point, p2_to_point);
    float area_of_triangle_times_two = vec3_len(cross_product);
    float length_of_base = vec3_len(ray.vector);

    // The area of a triangle is also equal to (base * height) / 2. In
    // other words, the height is equal to (area * 2) / base. The height
    // of this triangle is the distance from the point to the ray.
    float distance_from_point_to_ray = area_of_triangle_times_two / length_of_base;
    return distance_from_point_to_ray;
}

// http://en.wikipedia.org/wiki/Line-plane_intersection
// This also treats rays as if they were infinite. It will return a
// point full of NaNs if there is no intersection point.
static inline void ray_intersection_point(vec3 result, Ray ray, Plane plane) {
    vec3 ray_to_plane_vector;
    vec3_sub(ray_to_plane_vector, plane.point, ray.point);

    float scale_factor = vec3_mul_inner(ray_to_plane_vector, plane.normal)
                       / vec3_mul_inner(ray.vector, plane.normal);

    vec3 intersection_point;
    vec3 scaled_ray_vector;
    vec3_scale(scaled_ray_vector, ray.vector, scale_factor);
    vec3_add(intersection_point, ray.point, scaled_ray_vector);
    memcpy(result, intersection_point, sizeof(intersection_point));
}
```

We’ll do a line-sphere intersection test to see if we’ve touched the mallet using our fingers or a mouse. Once we’ve grabbed the mallet, we’ll do a line-plane intersection test to determine where to place the mallet on the board.

#### game.h

We’ll need two new function prototypes in game.h:

```
void on_touch_press(float normalized_x, float normalized_y);
void on_touch_drag(float normalized_x, float normalized_y);
```

#### game.c

Now we can begin the implementation in game.c. Add the following in the appropriate places to the top of the file:

```
#include "geometry.h"
// ...
static const float puck_radius = 0.06f;
static const float mallet_radius = 0.08f;

static const float left_bound = -0.5f;
static const float right_bound = 0.5f;
static const float far_bound = -0.8f;
static const float near_bound = 0.8f;
// ...
static mat4x4 inverted_view_projection_matrix;

static int mallet_pressed;
static vec3 blue_mallet_position;
static vec3 previous_blue_mallet_position;
static vec3 puck_position;
static vec3 puck_vector;

static Ray convert_normalized_2D_point_to_ray(float normalized_x, float normalized_y);
static void divide_by_w(vec4 vector);
static float clamp(float value, float min, float max);
```

We’ll now begin with the code for handling a touch press:

```
void on_touch_press(float normalized_x, float normalized_y) {
    Ray ray = convert_normalized_2D_point_to_ray(normalized_x, normalized_y);

    // Now test if this ray intersects with the mallet by creating a
    // bounding sphere that wraps the mallet.
    Sphere mallet_bounding_sphere = (Sphere) {
       {blue_mallet_position[0],
        blue_mallet_position[1],
        blue_mallet_position[2]},
        mallet_height / 2.0f};

    // If the ray intersects (if the user touched a part of the screen that
    // intersects the mallet's bounding sphere), then set mallet_pressed =
    // true.
    mallet_pressed = sphere_intersects_ray(mallet_bounding_sphere, ray);
}

static Ray convert_normalized_2D_point_to_ray(float normalized_x, float normalized_y) {
    // We'll convert these normalized device coordinates into world-space
    // coordinates. We'll pick a point on the near and far planes, and draw a
    // line between them. To do this transform, we need to first multiply by
    // the inverse matrix, and then we need to undo the perspective divide.
    vec4 near_point_ndc = {normalized_x, normalized_y, -1, 1};
    vec4 far_point_ndc = {normalized_x, normalized_y,  1, 1};

    vec4 near_point_world, far_point_world;
    mat4x4_mul_vec4(near_point_world, inverted_view_projection_matrix, near_point_ndc);
    mat4x4_mul_vec4(far_point_world, inverted_view_projection_matrix, far_point_ndc);

    // Why are we dividing by W? We multiplied our vector by an inverse
    // matrix, so the W value that we end up with is actually the *inverse* of
    // what the projection matrix would create. By dividing all 3 components
    // by W, we effectively undo the hardware perspective divide.
    divide_by_w(near_point_world);
    divide_by_w(far_point_world);

    // We don't care about the W value anymore, because our points are now
    // in world coordinates.
    vec3 near_point_ray = {near_point_world[0], near_point_world[1], near_point_world[2]};
    vec3 far_point_ray = {far_point_world[0], far_point_world[1], far_point_world[2]};
    vec3 vector_between;
    vec3_sub(vector_between, far_point_ray, near_point_ray);
    return (Ray) {
        {near_point_ray[0], near_point_ray[1], near_point_ray[2]},
        {vector_between[0], vector_between[1], vector_between[2]}};
}

static void divide_by_w(vec4 vector) {
    vector[0] /= vector[3];
    vector[1] /= vector[3];
    vector[2] /= vector[3];
}
```

This code first takes normalized touch coordinates which it receives from the Android, iOS or emscripten front ends, and then turns those touch coordinates into a 3D ray in world space. It then intersects the 3D ray with a bounding sphere for the mallet to see if we’ve touched the mallet.

Let’s continue with the code for handling a touch drag:

```
void on_touch_drag(float normalized_x, float normalized_y) {
    if (mallet_pressed == 0)
        return;

    Ray ray = convert_normalized_2D_point_to_ray(normalized_x, normalized_y);
    // Define a plane representing our air hockey table.
    Plane plane = (Plane) {{0, 0, 0}, {0, 1, 0}};

    // Find out where the touched point intersects the plane
    // representing our table. We'll move the mallet along this plane.
    vec3 touched_point;
    ray_intersection_point(touched_point, ray, plane);

    memcpy(previous_blue_mallet_position, blue_mallet_position,
        sizeof(blue_mallet_position));

    // Clamp to bounds, keeping the blue mallet on its own half of the table.
    blue_mallet_position[0] =
        clamp(touched_point[0], left_bound + mallet_radius, right_bound - mallet_radius);
    blue_mallet_position[1] = mallet_height / 2.0f;
    blue_mallet_position[2] =
        clamp(touched_point[2], 0.0f + mallet_radius, near_bound - mallet_radius);

    // Now test if mallet has struck the puck.
    vec3 mallet_to_puck;
    vec3_sub(mallet_to_puck, puck_position, blue_mallet_position);
    float distance = vec3_len(mallet_to_puck);

    if (distance < (puck_radius + mallet_radius)) {
        // The mallet has struck the puck. Now send the puck flying
        // based on the mallet velocity.
        vec3_sub(puck_vector, blue_mallet_position, previous_blue_mallet_position);
    }
}

static float clamp(float value, float min, float max) {
    return fmin(max, fmax(value, min));
}
```

Once we’ve grabbed the mallet, we move it across the air hockey table by intersecting the new touch point with the table to determine the new position on the table. We then move the mallet to that new position. We also check if the mallet has struck the puck, and if so, we use the movement distance to calculate the puck’s new velocity.

We next need to update the lines that initialize our objects inside `on_surface_created()` as follows:

```
puck = create_puck(puck_radius, puck_height, 32, puck_color);
red_mallet = create_mallet(mallet_radius, mallet_height, 32, red);
blue_mallet = create_mallet(mallet_radius, mallet_height, 32, blue);

blue_mallet_position[0] = 0;
blue_mallet_position[1] = mallet_height / 2.0f;
blue_mallet_position[2] = 0.4f;
puck_position[0] = 0;
puck_position[1] = puck_height / 2.0f;
puck_position[2] = 0;
puck_vector[0] = 0;
puck_vector[1] = 0;
puck_vector[2] = 0;
```

The new linmath.h has merged in the custom code we added to our matrix_helper.h, so we no longer need that file. As part of those changes, our perspective method call in `on_surface_changed()` now needs the angle entered in radians, so let’s update that method call as follows:

```
mat4x4_perspective(projection_matrix, deg_to_radf(45),
    (float) width / (float) height, 1.0f, 10.0f);
```

We can then update `on_draw_frame()` to add the new movement code. Let’s first add the following to the top, right after the call to `glClear()`:

```
// Translate the puck by its vector.
vec3_add(puck_position, puck_position, puck_vector);

// If the puck struck a side, reflect it off that side.
if (puck_position[0] < left_bound + puck_radius
 || puck_position[0] > right_bound - puck_radius) {
    puck_vector[0] = -puck_vector[0];
    vec3_scale(puck_vector, puck_vector, 0.9f);
}
if (puck_position[2] < far_bound + puck_radius
 || puck_position[2] > near_bound - puck_radius) {
    puck_vector[2] = -puck_vector[2];
    vec3_scale(puck_vector, puck_vector, 0.9f);
}

// Clamp the puck position.
puck_position[0] =
    clamp(puck_position[0], left_bound + puck_radius, right_bound - puck_radius);
puck_position[2] =
    clamp(puck_position[2], far_bound + puck_radius, near_bound - puck_radius);

// Friction factor
vec3_scale(puck_vector, puck_vector, 0.99f);
```

This code will update the puck’s position and cause it to go bouncing around the table. We’ll also need to add the following after the call to `mat4x4_mul(view_projection_matrix, projection_matrix, view_matrix);`:

`mat4x4_invert(inverted_view_projection_matrix, view_projection_matrix);`

This sets up the inverted view projection matrix, which we need for turning the normalized touch coordinates back into world space coordinates.

Let’s finish up the changes to game.c by updating the following calls to `position_object_in_scene()`:

```
position_object_in_scene(blue_mallet_position[0], blue_mallet_position[1],
    blue_mallet_position[2]);
// ...
position_object_in_scene(puck_position[0], puck_position[1], puck_position[2]);
```

### Adding touch events to Android

With these changes in place, we now need to link in the touch events from each platform. We’ll start off with Android:

#### MainActivity.java

In MainActivity.java, we first need to update the way that we create the renderer in `onCreate()`:

```
final RendererWrapper rendererWrapper = new RendererWrapper(this);
// ...
glSurfaceView.setRenderer(rendererWrapper);
```

We'll then attach an `OnTouchListener` to the `GLSurfaceView` so that touch events get forwarded to the renderer:

```
glSurfaceView.setOnTouchListener(new OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        if (event != null) {
            // Convert touch coordinates into normalized device
            // coordinates, keeping in mind that Android's Y
            // coordinates are inverted.
            final float normalizedX = (event.getX() / (float) v.getWidth()) * 2 - 1;
            final float normalizedY = -((event.getY() / (float) v.getHeight()) * 2 - 1);

            if (event.getAction() == MotionEvent.ACTION_DOWN) {
                glSurfaceView.queueEvent(new Runnable() {
                    @Override
                    public void run() {
                        rendererWrapper.handleTouchPress(normalizedX, normalizedY);
                    }});
            } else if (event.getAction() == MotionEvent.ACTION_MOVE) {
                glSurfaceView.queueEvent(new Runnable() {
                    @Override
                    public void run() {
                        rendererWrapper.handleTouchDrag(normalizedX, normalizedY);
                    }});
            }

            return true;
        } else {
            return false;
        }
    }});
```

This touch listener takes the incoming touch events from the user, converts them into OpenGL’s normalized device coordinate space, and then calls the renderer wrapper, which passes the event on into our native code.

#### RendererWrapper.java

We’ll need to add the following to RendererWrapper.java:

```
public void handleTouchPress(float normalizedX, float normalizedY) {
    on_touch_press(normalizedX, normalizedY);
}

public void handleTouchDrag(float normalizedX, float normalizedY) {
    on_touch_drag(normalizedX, normalizedY);
}

private static native void on_touch_press(float normalized_x, float normalized_y);

private static native void on_touch_drag(float normalized_x, float normalized_y);
```

#### renderer_wrapper.c

We’ll also need to add the following to renderer_wrapper.c in our jni folder:

```
JNIEXPORT void JNICALL Java_com_learnopengles_airhockey_RendererWrapper_on_1touch_1press(
    JNIEnv* env, jclass cls, jfloat normalized_x, jfloat normalized_y) {
    UNUSED(env);
    UNUSED(cls);
    on_touch_press(normalized_x, normalized_y);
}

JNIEXPORT void JNICALL Java_com_learnopengles_airhockey_RendererWrapper_on_1touch_1drag(
    JNIEnv* env, jclass cls, jfloat normalized_x, jfloat normalized_y) {
    UNUSED(env);
    UNUSED(cls);
    on_touch_drag(normalized_x, normalized_y);
}
```

We now have everything in place for Android; if we run the app, it should look similar to the following:

### Adding touch events to iOS

To add support for iOS, we need to update ViewController.m to handle touch events. To do that and update the frame rate at the same time, let’s add the following to `viewDidLoad` before the call to `[self setupGL]`:

```
view.userInteractionEnabled = YES;
self.preferredFramesPerSecond = 60;
```

To listen to the touch events, we need to override a few methods. Let’s add the following methods before `- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect`:

```
static CGPoint getNormalizedPoint(UIView* view, CGPoint locationInView)
{
    const float normalizedX = (locationInView.x / view.bounds.size.width) * 2.f - 1.f;
    const float normalizedY = -((locationInView.y / view.bounds.size.height) * 2.f - 1.f);
    return CGPointMake(normalizedX, normalizedY);
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    [super touchesBegan:touches withEvent:event];
    UITouch* touchEvent = [touches anyObject];
    CGPoint locationInView = [touchEvent locationInView:self.view];
    CGPoint normalizedPoint = getNormalizedPoint(self.view, locationInView);
    on_touch_press(normalizedPoint.x, normalizedPoint.y);
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    [super touchesMoved:touches withEvent:event];
    UITouch* touchEvent = [touches anyObject];
    CGPoint locationInView = [touchEvent locationInView:self.view];
    CGPoint normalizedPoint = getNormalizedPoint(self.view, locationInView);
    on_touch_drag(normalizedPoint.x, normalizedPoint.y);
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    [super touchesEnded:touches withEvent:event];
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
    [super touchesCancelled:touches withEvent:event];
}
```

This is similar to the Android code in that it takes the input touch event, converts it to OpenGL’s normalized device coordinate space, and then sends it on to our game code.

Our iOS app should look similar to the following image:

### Adding touch events to emscripten

Adding support for emscripten is just as easy. Let’s first add the following to the top of main.c:

```
static void handle_input();
// ...
int is_dragging;
```

At the beginning of `do_frame()`, add a call to `handle_input();`:

```
static void do_frame()
{
    handle_input();
    // ...
```

Add the following for `handle_input`:

```
static void handle_input()
{
    glfwPollEvents();
    const int left_mouse_button_state = glfwGetMouseButton(GLFW_MOUSE_BUTTON_1);
    if (left_mouse_button_state == GLFW_PRESS) {
        int x_pos, y_pos;
        glfwGetMousePos(&x_pos, &y_pos);
        const float normalized_x = ((float)x_pos / (float) width) * 2.f - 1.f;
        const float normalized_y = -(((float)y_pos / (float) height) * 2.f - 1.f);

        if (is_dragging == 0) {
            is_dragging = 1;
            on_touch_press(normalized_x, normalized_y);
        } else {
            on_touch_drag(normalized_x, normalized_y);
        }
    } else {
        is_dragging = 0;
    }
}
```

This code sets `is_dragging` depending on whether we’ve just pressed the primary mouse button or are already dragging the mouse, and calls either `on_touch_press` or `on_touch_drag` accordingly. The code to normalize the coordinates is the same as on Android and iOS; indeed, a case could be made for abstracting it into the common game code and just passing in the raw coordinates relative to the view size.

After compiling with `emmake make`, we should get output similar to the following:

### Exploring further

That concludes our air hockey project! The full source code for this lesson can be found at the GitHub project. You can find a more in-depth look at the concepts behind the project from the perspective of Java Android in OpenGL ES 2 for Android: A Quick-Start Guide. For exploring further, there are many things you could add, like improved graphics, support for sound, a simple AI, multiplayer (on the same device), scoring, or a menu system.

Whether you end up using a commercial cross-platform solution like Unity or Corona, or whether you decide to go the independent route, I hope this series was helpful to you and, most importantly, that you enjoy the projects ahead and have a lot of fun with them. 🙂

## OpenGL Roundup, October 1, 2013: Hexscreen 3D Live Wallpaper and more…

A fellow developer and blogger, Hisham, has released his Hexscreen 3D Live Wallpaper to the market, and it looks quite cool. Check it out:

In other news…

Libgdx and Bullet Physics on iOS via RoboVM

## OpenGL Roundup, September 19, 2013

Here’s the beginning of a new series on OpenGL ES 2.0 for iOS, using Apple’s GLKit.

ROBOVM BACKEND IN LIBGDX NIGHTLIES AND FIRST PERFORMANCE FIGURES! – Libgdx is moving to a new backend for iOS that uses RoboVM, a Java to machine code compiler for iOS. Initial performance figures look good!

Zero to Sixty in One Second – the developer & designer behind acko.net has redesigned his header and website using WebGL, and I have to say that it looks very cool.

And now for something completely different…

## Adding a 3d Perspective and Object Rendering to Our Air Hockey Project in Native C Code

For this post in the air hockey series, we’ll learn how to render our scene from a 3D perspective, as well as how to add a puck and two mallets to the scene. We’ll also see how easy it is to bring these changes to Android, iOS, and emscripten.

### Prerequisites

This lesson continues the air hockey project series, building upon the code from GitHub for ‘article-2-loading-png-file’. Here are the previous posts in this series:

### Adding support for a matrix library

The first thing we’ll do is add support for a matrix library so we can use the same matrix math on all three platforms, and then we’ll introduce the changes to our code from the top down. There are a lot of libraries out there, so I decided to use linmath.h by Wolfgang Draxinger for its simplicity and compactness. Since it’s on GitHub, we can easily add it to our project by running the following git command from the root airhockey/ folder:

`git submodule add https://github.com/datenwolf/linmath.h.git src/3rdparty/linmath`

### Updating our game code

We’ll introduce all of the changes from the top down, so let’s begin by replacing everything inside game.c as follows:

```
#include "game.h"
#include "game_objects.h"
#include "asset_utils.h"
#include "buffer.h"
#include "image.h"
#include "linmath.h"
#include "math_helper.h"
#include "matrix.h"
#include "platform_gl.h"
#include "platform_asset_utils.h"
#include "program.h"
#include "texture.h"

static const float puck_height = 0.02f;
static const float mallet_height = 0.15f;

static Table table;
static Puck puck;
static Mallet red_mallet;
static Mallet blue_mallet;

static TextureProgram texture_program;
static ColorProgram color_program;

static mat4x4 projection_matrix;
static mat4x4 model_matrix;
static mat4x4 view_matrix;

static mat4x4 view_projection_matrix;
static mat4x4 model_view_projection_matrix;

static void position_table_in_scene();
static void position_object_in_scene(float x, float y, float z);
```

We’ve added all of the new includes, constants, variables, and function declarations that we’ll need for our new game code. We’ll use `Table`, `Puck`, and `Mallet` to represent our drawable objects, `TextureProgram` and `ColorProgram` to represent our shader programs, and the `mat4x4` (a datatype from linmath.h) matrices for our OpenGL matrices. In our draw loop, we’ll call `position_table_in_scene()` to position the table, and `position_object_in_scene()` to position our other objects.

For those of you who have also followed the Java tutorials from OpenGL ES 2 for Android: A Quick-Start Guide, you’ll recognize that this has a lot in common with the air hockey project from the first part of the book. The code for that project can be freely downloaded from The Pragmatic Bookshelf.

#### `on_surface_created()`

```
void on_surface_created() {
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glEnable(GL_DEPTH_TEST);

    table = create_table(
        load_png_asset_into_texture("textures/air_hockey_surface.png"));

    vec4 puck_color = {0.8f, 0.8f, 1.0f, 1.0f};
    vec4 red = {1.0f, 0.0f, 0.0f, 1.0f};
    vec4 blue = {0.0f, 0.0f, 1.0f, 1.0f};

    puck = create_puck(0.06f, puck_height, 32, puck_color);
    red_mallet = create_mallet(0.08f, mallet_height, 32, red);
    blue_mallet = create_mallet(0.08f, mallet_height, 32, blue);

    texture_program = get_texture_program(build_program_from_assets(
        "shaders/texture_shader.vsh", "shaders/texture_shader.fsh"));
    color_program = get_color_program(build_program_from_assets(
        "shaders/color_shader.vsh", "shaders/color_shader.fsh"));
}
```

Our new `on_surface_created()` enables depth-testing, initializes the table, puck, and mallets, and loads in the shader programs.

#### `on_surface_changed(int width, int height)`

```
void on_surface_changed(int width, int height) {
    glViewport(0, 0, width, height);
    mat4x4_perspective(projection_matrix, 45, (float) width / (float) height, 1, 10);
    mat4x4_look_at(view_matrix, 0.0f, 1.2f, 2.2f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f);
}
```

Our new `on_surface_changed(int width, int height)` now takes two parameters for the width and the height. It sets up a projection matrix, and then sets up the view matrix to be slightly above and behind the origin, with an eye position of (0, 1.2, 2.2).

#### `on_draw_frame()`

```
void on_draw_frame() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    mat4x4_mul(view_projection_matrix, projection_matrix, view_matrix);

    position_table_in_scene();
    draw_table(&table, &texture_program, model_view_projection_matrix);

    position_object_in_scene(0.0f, mallet_height / 2.0f, -0.4f);
    draw_mallet(&red_mallet, &color_program, model_view_projection_matrix);

    position_object_in_scene(0.0f, mallet_height / 2.0f, 0.4f);
    draw_mallet(&blue_mallet, &color_program, model_view_projection_matrix);

    // Draw the puck.
    position_object_in_scene(0.0f, puck_height / 2.0f, 0.0f);
    draw_puck(&puck, &color_program, model_view_projection_matrix);
}
```

Our new `on_draw_frame()` positions and draws the table, mallets, and the puck.

Because we changed the definition of `on_surface_changed()`, we also have to change the declaration in game.h. Change `void on_surface_changed();` to `void on_surface_changed(int width, int height);`.

```
static void position_table_in_scene() {
    // The table is defined in terms of X & Y coordinates, so we rotate it
    // 90 degrees to lie flat on the XZ plane.
    mat4x4 rotated_model_matrix;
    mat4x4_identity(model_matrix);
    mat4x4_rotate_X(rotated_model_matrix, model_matrix, -90.0f);
    mat4x4_mul(
        model_view_projection_matrix, view_projection_matrix, rotated_model_matrix);
}

static void position_object_in_scene(float x, float y, float z) {
    mat4x4_identity(model_matrix);
    mat4x4_translate_in_place(model_matrix, x, y, z);
    mat4x4_mul(model_view_projection_matrix, view_projection_matrix, model_matrix);
}
```

These functions update the matrices to let us position the table, puck, and mallets in the scene. We’ll define all of the extra functions that we need soon.

Now we’ll start drilling down into each part of the program and make the changes necessary for our game code to work. Let’s begin by updating our shaders. First, let’s rename our vertex shader shader.vsh to texture_shader.vsh and update it as follows:

```
uniform mat4 u_MvpMatrix;

attribute vec4 a_Position;
attribute vec2 a_TextureCoordinates;

varying vec2 v_TextureCoordinates;

void main()
{
    v_TextureCoordinates = a_TextureCoordinates;
    gl_Position = u_MvpMatrix * a_Position;
}
```

We’ll also need a new set of shaders to render our puck and mallets. Let’s add the following new vertex and fragment shaders for drawing with a single color uniform:

```
uniform mat4 u_MvpMatrix;
attribute vec4 a_Position;
void main()
{
    gl_Position = u_MvpMatrix * a_Position;
}
```

```
precision mediump float;
uniform vec4 u_Color;
void main()
{
    gl_FragColor = u_Color;
}
```

### Creating our game objects

Now we’ll add support for generating and drawing our game objects. Let’s begin with game_objects.h:

```
#include "platform_gl.h"
#include "program.h"
#include "linmath.h"

typedef struct {
    GLuint texture;
    GLuint buffer;
} Table;

typedef struct {
    vec4 color;
    GLuint buffer;
    int num_points;
} Puck;

typedef struct {
    vec4 color;
    GLuint buffer;
    int num_points;
} Mallet;

Table create_table(GLuint texture);
void draw_table(const Table* table, const TextureProgram* texture_program, mat4x4 m);

Puck create_puck(float radius, float height, int num_points, vec4 color);
void draw_puck(const Puck* puck, const ColorProgram* color_program, mat4x4 m);

Mallet create_mallet(float radius, float height, int num_points, vec4 color);
void draw_mallet(const Mallet* mallet, const ColorProgram* color_program, mat4x4 m);
```

We’ve defined three C structs to hold the data for our table, puck, and mallets, and we’ve declared functions to create and draw these objects.

#### Drawing a table

Let’s continue with game_objects.c:

```
#include "game_objects.h"
#include "buffer.h"
#include "platform_gl.h"
#include "program.h"
#include "linmath.h"
#include <math.h>

// Triangle fan
// position X, Y, texture S, T
static const float table_data[] = { 0.0f,  0.0f, 0.5f, 0.5f,
                                   -0.5f, -0.8f, 0.0f, 0.9f,
                                    0.5f, -0.8f, 1.0f, 0.9f,
                                    0.5f,  0.8f, 1.0f, 0.1f,
                                   -0.5f,  0.8f, 0.0f, 0.1f,
                                   -0.5f, -0.8f, 0.0f, 0.9f};

Table create_table(GLuint texture) {
    return (Table) {texture,
        create_vbo(sizeof(table_data), table_data, GL_STATIC_DRAW)};
}

void draw_table(const Table* table, const TextureProgram* texture_program, mat4x4 m)
{
    glUseProgram(texture_program->program);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, table->texture);
    glUniformMatrix4fv(texture_program->u_mvp_matrix_location, 1,
        GL_FALSE, (GLfloat*)m);
    glUniform1i(texture_program->u_texture_unit_location, 0);

    glBindBuffer(GL_ARRAY_BUFFER, table->buffer);
    glVertexAttribPointer(texture_program->a_position_location, 2, GL_FLOAT,
        GL_FALSE, 4 * sizeof(GL_FLOAT), BUFFER_OFFSET(0));
    glVertexAttribPointer(texture_program->a_texture_coordinates_location, 2, GL_FLOAT,
        GL_FALSE, 4 * sizeof(GL_FLOAT), BUFFER_OFFSET(2 * sizeof(GL_FLOAT)));
    glEnableVertexAttribArray(texture_program->a_position_location);
    glEnableVertexAttribArray(texture_program->a_texture_coordinates_location);
    glDrawArrays(GL_TRIANGLE_FAN, 0, 6);

    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
```

After the includes, this is the code to create and draw the table data. This is essentially the same as what we had before, with the coordinates adjusted a bit to change the table into a rectangle.

#### Generating circles and cylinders

Before we can draw a puck or a mallet, we’ll need to add some helper functions to draw a circle or a cylinder. Let’s define those now:

```
static inline int size_of_circle_in_vertices(int num_points) {
    return 1 + (num_points + 1);
}

static inline int size_of_open_cylinder_in_vertices(int num_points) {
    return (num_points + 1) * 2;
}
```

We first need two helper functions to calculate the size of a circle or a cylinder in terms of vertices. A circle drawn as a triangle fan has one vertex for the center, `num_points` vertices around the circle, and one more vertex to close the circle. An open-ended cylinder drawn as a triangle strip doesn’t have a center point, but it does have two vertices for each point around the circle, and two more vertices to close off the circle.

```
static inline int gen_circle(float* out, int offset,
    float center_x, float center_y, float center_z,
    float radius, int num_points)
{
    out[offset++] = center_x;
    out[offset++] = center_y;
    out[offset++] = center_z;

    int i;
    for (i = 0; i <= num_points; ++i) {
        float angle_in_radians = ((float) i / (float) num_points)
            * ((float) M_PI * 2.0f);
        out[offset++] = center_x + radius * cosf(angle_in_radians);
        out[offset++] = center_y;
        out[offset++] = center_z + radius * sinf(angle_in_radians);
    }

    return offset;
}
```

This code will generate a circle, given a center point, a radius, and the number of points around the circle.

```
static inline int gen_cylinder(float* out, int offset,
    float center_x, float center_y, float center_z,
    float height, float radius, int num_points)
{
    const float y_start = center_y - (height / 2.0f);
    const float y_end = center_y + (height / 2.0f);

    int i;
    for (i = 0; i <= num_points; i++) {
        float angle_in_radians = ((float) i / (float) num_points)
            * ((float) M_PI * 2.0f);

        float x_position = center_x + radius * cosf(angle_in_radians);
        float z_position = center_z + radius * sinf(angle_in_radians);

        out[offset++] = x_position;
        out[offset++] = y_start;
        out[offset++] = z_position;

        out[offset++] = x_position;
        out[offset++] = y_end;
        out[offset++] = z_position;
    }

    return offset;
}
```

This code will generate the vertices for an open-ended cylinder. Note that for both the circle and the cylinder, the loop goes from 0 to `num_points`, so the first and last points around the circle are duplicated in order to close the loop around the circle.

#### Drawing a puck

Let’s add the code to generate and draw the puck:

```
Puck create_puck(float radius, float height, int num_points, vec4 color)
{
    float data[(size_of_circle_in_vertices(num_points)
        + size_of_open_cylinder_in_vertices(num_points)) * 3];

    int offset = gen_circle(data, 0, 0.0f, height / 2.0f, 0.0f, radius, num_points);
    gen_cylinder(data, offset, 0.0f, 0.0f, 0.0f, height, radius, num_points);

    return (Puck) {{color[0], color[1], color[2], color[3]},
        create_vbo(sizeof(data), data, GL_STATIC_DRAW),
        num_points};
}
```

A puck contains one open-ended cylinder, and a circle to top off that cylinder.

```
void draw_puck(const Puck* puck, const ColorProgram* color_program, mat4x4 m)
{
    glUseProgram(color_program->program);

    glUniformMatrix4fv(color_program->u_mvp_matrix_location, 1, GL_FALSE, (GLfloat*)m);
    glUniform4fv(color_program->u_color_location, 1, puck->color);

    glBindBuffer(GL_ARRAY_BUFFER, puck->buffer);
    glVertexAttribPointer(color_program->a_position_location, 3, GL_FLOAT,
        GL_FALSE, 0, BUFFER_OFFSET(0));
    glEnableVertexAttribArray(color_program->a_position_location);

    int circle_vertex_count = size_of_circle_in_vertices(puck->num_points);
    int cylinder_vertex_count = size_of_open_cylinder_in_vertices(puck->num_points);

    glDrawArrays(GL_TRIANGLE_FAN, 0, circle_vertex_count);
    glDrawArrays(GL_TRIANGLE_STRIP, circle_vertex_count, cylinder_vertex_count);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
```

To draw the puck, we pass in the uniforms and attributes, and then we draw the circle as a triangle fan, and the cylinder as a triangle strip.

#### Drawing a mallet

Let’s continue with the code to create and draw a mallet:

```Mallet create_mallet(float radius, float height, int num_points, vec4 color)
{
float data[(size_of_circle_in_vertices(num_points) * 2
+ size_of_open_cylinder_in_vertices(num_points) * 2) * 3];

float base_height = height * 0.25f;
float handle_height = height * 0.75f;

int offset = gen_circle(data, 0, 0.0f, -base_height, 0.0f, radius, num_points);
offset = gen_circle(data, offset,
0.0f, height * 0.5f, 0.0f,
radius / 3.0f, num_points);
offset = gen_cylinder(data, offset,
0.0f, -base_height - base_height / 2.0f, 0.0f,
base_height, radius, num_points);
gen_cylinder(data, offset,
0.0f, height * 0.5f - handle_height / 2.0f, 0.0f,
handle_height, radius / 3.0f, num_points);

return (Mallet) {{color[0], color[1], color[2], color[3]},
create_vbo(sizeof(data), data, GL_STATIC_DRAW),
num_points};
}```

A mallet contains two circles and two open-ended cylinders, positioned and sized so that the mallet’s base is wider and shorter than the mallet’s handle.

```void draw_mallet(const Mallet* mallet, const ColorProgram* color_program, mat4x4 m)
{
glUseProgram(color_program->program);

glUniformMatrix4fv(color_program->u_mvp_matrix_location, 1, GL_FALSE, (GLfloat*)m);
glUniform4fv(color_program->u_color_location, 1, mallet->color);

glBindBuffer(GL_ARRAY_BUFFER, mallet->buffer);
glVertexAttribPointer(color_program->a_position_location, 3, GL_FLOAT,
GL_FALSE, 0, BUFFER_OFFSET(0));
glEnableVertexAttribArray(color_program->a_position_location);

int circle_vertex_count = size_of_circle_in_vertices(mallet->num_points);
int cylinder_vertex_count = size_of_open_cylinder_in_vertices(mallet->num_points);
int start_vertex = 0;

glDrawArrays(GL_TRIANGLE_FAN, start_vertex, circle_vertex_count);
start_vertex += circle_vertex_count;
glDrawArrays(GL_TRIANGLE_FAN, start_vertex, circle_vertex_count);
start_vertex += circle_vertex_count;
glDrawArrays(GL_TRIANGLE_STRIP, start_vertex, cylinder_vertex_count);
start_vertex += cylinder_vertex_count;
glDrawArrays(GL_TRIANGLE_STRIP, start_vertex, cylinder_vertex_count);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}```

Drawing the mallet is similar to drawing the puck, except that now we draw two circles and two cylinders.

We’ll need to add a helper function that we’re currently using in game.c; create a new header file called math_helper.h, and add the following code:

```#include <math.h>

static inline float deg_to_radf(float deg) {
return deg * (float)M_PI / 180.0f;
}```

Since C’s trigonometric functions expect passed-in values to be in radians, we’ll use this function to convert degrees into radians, where needed.
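As a quick sanity check, here's a standalone copy of the function with a fallback definition of `M_PI` (which is POSIX, not strict ISO C, so some compilers won't define it without a feature-test macro):

```c
#include <math.h>
#include <assert.h>

#ifndef M_PI  /* M_PI is POSIX, not ISO C, so provide a fallback. */
#define M_PI 3.14159265358979323846
#endif

// Convert degrees to radians, e.g. deg_to_radf(180.0f) is approximately pi.
static inline float deg_to_radf(float deg) {
	return deg * (float)M_PI / 180.0f;
}
```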

While linmath.h contains a lot of useful functions, there are a few missing that we need for our game code. Create a new header file called matrix.h, and begin by adding the following code, all adapted from Android’s OpenGL `Matrix` class:

```#include "linmath.h"
#include <math.h>
#include <string.h>

/* Adapted from Android's OpenGL Matrix.java. */

static inline void mat4x4_perspective(mat4x4 m, float y_fov_in_degrees,
float aspect, float n, float f)
{
const float angle_in_radians = (float) (y_fov_in_degrees * M_PI / 180.0);
const float a = (float) (1.0 / tan(angle_in_radians / 2.0));

m[0][0] = a / aspect;
m[0][1] = 0.0f;
m[0][2] = 0.0f;
m[0][3] = 0.0f;

m[1][0] = 0.0f;
m[1][1] = a;
m[1][2] = 0.0f;
m[1][3] = 0.0f;

m[2][0] = 0.0f;
m[2][1] = 0.0f;
m[2][2] = -((f + n) / (f - n));
m[2][3] = -1.0f;

m[3][0] = 0.0f;
m[3][1] = 0.0f;
m[3][2] = -((2.0f * f * n) / (f - n));
m[3][3] = 0.0f;
}```

We’ll use `mat4x4_perspective()` to set up a perspective projection matrix.

```static inline void mat4x4_translate_in_place(mat4x4 m, float x, float y, float z)
{
int i;
for (i = 0; i < 4; ++i) {
m[3][i] += m[0][i] * x
+  m[1][i] * y
+  m[2][i] * z;
}
}```

This helper function lets us translate a matrix in place.
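A quick way to see what the loop does: for an identity matrix, translating in place simply writes the offsets into the fourth column. Here's a standalone sketch using a column-major `float[4][4]`:

```c
#include <assert.h>

// Set m to the identity matrix (column-major m[column][row] storage).
static void identity(float m[4][4]) {
	for (int c = 0; c < 4; c++)
		for (int r = 0; r < 4; r++)
			m[c][r] = (c == r) ? 1.0f : 0.0f;
}

// Same loop as mat4x4_translate_in_place() above: post-multiplies m by a
// translation, accumulating into the fourth column.
static void translate_in_place(float m[4][4], float x, float y, float z) {
	for (int i = 0; i < 4; ++i) {
		m[3][i] += m[0][i] * x + m[1][i] * y + m[2][i] * z;
	}
}
```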

```static inline void mat4x4_look_at(mat4x4 m,
float eyeX, float eyeY, float eyeZ,
float centerX, float centerY, float centerZ,
float upX, float upY, float upZ)
{
// See the OpenGL GLUT documentation for gluLookAt for a description
// of the algorithm. We implement it in a straightforward way:

float fx = centerX - eyeX;
float fy = centerY - eyeY;
float fz = centerZ - eyeZ;

// Normalize f
vec3 f_vec = {fx, fy, fz};
float rlf = 1.0f / vec3_len(f_vec);
fx *= rlf;
fy *= rlf;
fz *= rlf;

// compute s = f x up (x means "cross product")
float sx = fy * upZ - fz * upY;
float sy = fz * upX - fx * upZ;
float sz = fx * upY - fy * upX;

// and normalize s
vec3 s_vec = {sx, sy, sz};
float rls = 1.0f / vec3_len(s_vec);
sx *= rls;
sy *= rls;
sz *= rls;

// compute u = s x f
float ux = sy * fz - sz * fy;
float uy = sz * fx - sx * fz;
float uz = sx * fy - sy * fx;

m[0][0] = sx;
m[0][1] = ux;
m[0][2] = -fx;
m[0][3] = 0.0f;

m[1][0] = sy;
m[1][1] = uy;
m[1][2] = -fy;
m[1][3] = 0.0f;

m[2][0] = sz;
m[2][1] = uz;
m[2][2] = -fz;
m[2][3] = 0.0f;

m[3][0] = 0.0f;
m[3][1] = 0.0f;
m[3][2] = 0.0f;
m[3][3] = 1.0f;

mat4x4_translate_in_place(m, -eyeX, -eyeY, -eyeZ);
}```

We can use `mat4x4_look_at()` like a camera: it positions and orients the scene as if it were being viewed from the given eye position.

We’re almost done with the changes to our core code. Let’s wrap them up by adding the following code:

#### program.h

```#pragma once
#include "platform_gl.h"

typedef struct {
GLuint program;

GLint a_position_location;
GLint a_texture_coordinates_location;
GLint u_mvp_matrix_location;
GLint u_texture_unit_location;
} TextureProgram;

typedef struct {
GLuint program;

GLint a_position_location;
GLint u_mvp_matrix_location;
GLint u_color_location;
} ColorProgram;

TextureProgram get_texture_program(GLuint program);
ColorProgram get_color_program(GLuint program);```

#### program.c

```#include "program.h"
#include "platform_gl.h"

TextureProgram get_texture_program(GLuint program)
{
return (TextureProgram) {
program,
glGetAttribLocation(program, "a_Position"),
glGetAttribLocation(program, "a_TextureCoordinates"),
glGetUniformLocation(program, "u_MvpMatrix"),
glGetUniformLocation(program, "u_TextureUnit")};
}

ColorProgram get_color_program(GLuint program)
{
return (ColorProgram) {
program,
glGetAttribLocation(program, "a_Position"),
glGetUniformLocation(program, "u_MvpMatrix"),
glGetUniformLocation(program, "u_Color")};
}```

We first need to update Android.mk and add the following to `LOCAL_SRC_FILES`:

```$(CORE_RELATIVE_PATH)/game_objects.c \
$(CORE_RELATIVE_PATH)/program.c \```

We also need to add a new `LOCAL_C_INCLUDES`:

`LOCAL_C_INCLUDES += $(PROJECT_ROOT_PATH)/3rdparty/linmath/`

We then need to update renderer_wrapper.c and change the call to `on_surface_changed();` to ` on_surface_changed(width, height);`. Once we’ve done that, we should be able to run the app on our Android device, and it should look similar to the following image:

For iOS, we just need to open up the Xcode project and add the necessary references to linmath.h and our new core files to the appropriate folder groups, and then we need to update ViewController.m and change `on_surface_changed();` to the following:

`on_surface_changed([[self view] bounds].size.width, [[self view] bounds].size.height);`

Once we run the app, it should look similar to the following image:

For emscripten, we need to update the Makefile and add the following lines to `SOURCES`:

```		  ../../core/game_objects.c \
../../core/program.c \```

We’ll also need to add the following lines to `OBJECTS`:

```		  ../../core/game_objects.o \
../../core/program.o \```

We then just need to update main.c, move the constants `width` and `height` from inside `init_gl()` to outside the function near the top of the file, and update the call to `on_surface_changed();` to `on_surface_changed(width, height);`. We can then build the file by calling `emmake make`, which should produce a file that looks as follows:

See how easy that was? Now that we have a minimal cross-platform framework in place, it’s very easy for us to bring changes to the core code across to each platform.

### Exploring further

The full source code for this lesson can be found at the GitHub project. In the next post, we’ll take a look at user input so we can move our mallet around the screen.

## Loading a PNG into Memory and Displaying It as a Texture with OpenGL ES 2: Adding Support for Emscripten

In the last two posts in this series, we added support for loading a PNG file into OpenGL as a texture, and then we displayed that texture on the screen:

In this post, we’ll add support for emscripten.

### Prerequisites

To complete this lesson, you’ll need to have completed Loading a PNG into Memory and Displaying It as a Texture with OpenGL ES 2: Adding Support for iOS. The previous emscripten post, Calling OpenGL from C on the Web by Using Emscripten, Sharing Common Code with Android and iOS, covers emscripten installation and setup.

You can also just download the completed project for this part of the series from GitHub and check out the code from there.

### Updating the emscripten code

There’s just one new file that we need to add to /airhockey/src/platform/emscripten/, which is platform_asset_utils.c:

```#include "platform_asset_utils.h"
#include "platform_file_utils.h"
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

FileData get_asset_data(const char* relative_path) {
assert(relative_path != NULL);
return get_file_data(relative_path);
}

void release_asset_data(const FileData* file_data) {
assert(file_data != NULL);
release_file_data(file_data);
}```

For emscripten, there is nothing special to do since it supports a virtual file system using pre-embedded resources. For loading assets, all we need to do here is just forward the calls on to the platform-independent file loading functions that we defined in the previous post.

Since emscripten doesn’t have zlib built into it like Android and iOS do, we’ll need to add that as a third-party dependency. Download zlib 1.2.8 from http://zlib.net/ and extract it to /airhockey/src/3rdparty/libzlib. We won’t need to do anything else to get it to compile.

#### Updating the Makefile

To get things to compile and run, let’s replace the Makefile in the emscripten directory with the following contents:

```CFLAGS = -O2 -I. -I../../core -I../common -I../../3rdparty/libpng -I../../3rdparty/libzlib -Wall -Wextra
LDFLAGS = --embed-file ../../../assets@/

OBJECTS = main.o \
platform_asset_utils.o \
../common/platform_log.o \
../common/platform_file_utils.o \
../../core/buffer.o \
../../core/asset_utils.o \
../../core/game.o \
../../core/image.o \
../../core/texture.o \
../../3rdparty/libpng/png.o \
../../3rdparty/libpng/pngerror.o \
../../3rdparty/libpng/pngget.o \
../../3rdparty/libpng/pngmem.o \
../../3rdparty/libpng/pngrio.o \
../../3rdparty/libpng/pngrtran.o \
../../3rdparty/libpng/pngrutil.o \
../../3rdparty/libpng/pngset.o \
../../3rdparty/libpng/pngtrans.o \
../../3rdparty/libpng/pngwio.o \
../../3rdparty/libpng/pngwrite.o \
../../3rdparty/libpng/pngwtran.o \
../../3rdparty/libpng/pngwutil.o \
../../3rdparty/libzlib/crc32.o \
../../3rdparty/libzlib/deflate.o \
../../3rdparty/libzlib/infback.o \
../../3rdparty/libzlib/inffast.o \
../../3rdparty/libzlib/inflate.o \
../../3rdparty/libzlib/inftrees.o \
../../3rdparty/libzlib/trees.o \
../../3rdparty/libzlib/zutil.o
TARGET = airhockey.html

all: $(TARGET)

$(TARGET): $(OBJECTS)
	$(CC) $(CFLAGS) -o $@ $(LDFLAGS) $(OBJECTS)

clean:
	$(RM) $(TARGET) $(OBJECTS)```

This Makefile specifies all of the required object files. When we run `make`, it will find the source files automatically and compile them into objects, using the `CFLAGS` that we’ve defined above. When we run this Makefile through `emmake`, `emcc` will use the `--embed-file ../../../assets@/` line to package all of the assets into our target HTML file; the `@/` syntax tells emscripten to place the assets at the root of the virtual file system. More information can be found at the emscripten wiki.

If your emscripten is configured and ready to go, then you can build the program by running `emmake` as follows, changing the path to `emmake` to reflect where you’ve installed emscripten.

`MacBook-Air:emscripten user$ /opt/emscripten/emmake make -j 8`

If all went well, airhockey.html should look similar to the following HTML:

For information on installing and configuring emscripten, see Calling OpenGL from C on the Web by Using Emscripten, Sharing Common Code with Android and iOS or the emscripten tutorial.

#### Exploring further

The full source code for this lesson can be found at the GitHub project. For the next few posts, we’re going to start doing more with our base so that we can start making this look more like an actual air hockey game!

## Loading a PNG into Memory and Displaying It as a Texture with OpenGL ES 2: Adding Support for iOS

In the previous post, we looked at loading in texture data from a PNG file and uploading it to an OpenGL texture, and then displaying that on the screen in Android. To do that, we used libpng and loaded in the data from our platform-independent C code.

In this post, we’ll add supporting files to our iOS project so we can do the same from there.

### Prerequisites

To complete this lesson, you’ll need to have completed Loading a PNG into Memory and Displaying It as a Texture with OpenGL ES 2, Using (Almost) the Same Code on iOS, Android, and Emscripten. The previous iOS post, Calling OpenGL from C on iOS, Sharing Common Code with Android, covers setup of the Xcode project and environment.

You can also just download the completed project for this part of the series from GitHub and check out the code from there.

### Adding the common platform code

The first thing we’ll do is add new supporting files to the common platform code, as we’ll need them for both iOS and emscripten. These new files should go in /airhockey/src/platform/common:

platform_file_utils.c

```#include "platform_file_utils.h"
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

FileData get_file_data(const char* path) {
assert(path != NULL);

FILE* stream = fopen(path, "r");
assert (stream != NULL);

fseek(stream, 0, SEEK_END);
long stream_size = ftell(stream);
fseek(stream, 0, SEEK_SET);

void* buffer = malloc(stream_size);
fread(buffer, stream_size, 1, stream);

assert(ferror(stream) == 0);
fclose(stream);

return (FileData) {stream_size, buffer, NULL};
}

void release_file_data(const FileData* file_data) {
assert(file_data != NULL);
assert(file_data->data != NULL);

free((void*)file_data->data);
}```

We’ll use these two functions to read data from a file and return it in a memory buffer, and release that buffer when we no longer need to keep it around. For iOS & emscripten, our asset loading code will wrap these file loading functions.
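To see the pattern in isolation, the `fseek`/`ftell`/`fread` sequence below (with a hypothetical helper name) reads an entire file into a `malloc`'d buffer, which is essentially what `get_file_data()` does:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Read a whole file into a newly allocated buffer; the caller frees it.
static void* read_entire_file(const char* path, long* out_size) {
	FILE* stream = fopen(path, "rb");
	assert(stream != NULL);

	// Seek to the end to learn the file size, then rewind.
	fseek(stream, 0, SEEK_END);
	long size = ftell(stream);
	fseek(stream, 0, SEEK_SET);

	void* buffer = malloc(size);
	fread(buffer, size, 1, stream);
	assert(ferror(stream) == 0);
	fclose(stream);

	*out_size = size;
	return buffer;
}
```

Note the `"rb"` mode: it matters on platforms that distinguish text and binary streams, since we'll be reading binary assets like PNGs.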

platform_log.c

```#include "platform_log.h"
#include <stdarg.h>
#include <stdlib.h>
#include <stdio.h>

#define LOG_VPRINTF(priority)	printf("(" priority ") %s: ", tag); \
va_list arg_ptr; \
va_start(arg_ptr, fmt); \
vprintf(fmt, arg_ptr); \
va_end(arg_ptr); \
printf("\n");

void _debug_log_v(const char *tag, const char *fmt, ...) {
LOG_VPRINTF("VERBOSE");
}

void _debug_log_d(const char *tag, const char *fmt, ...) {
LOG_VPRINTF("DEBUG");
}

void _debug_log_w(const char *tag, const char *fmt, ...) {
LOG_VPRINTF("WARN");
}

void _debug_log_e(const char *tag, const char *fmt, ...) {
LOG_VPRINTF("ERROR");
}```

For iOS and emscripten, our platform logging code just wraps around `printf`.
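The macro relies on the standard `va_list` pattern. A buffer-based variant (with hypothetical names, writing into a string instead of stdout) makes the same expansion easy to test:

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

// Format "(PRIORITY) tag: message" into out, mirroring the macro's printf sequence.
static int format_log(char* out, size_t cap,
                      const char* priority, const char* tag,
                      const char* fmt, ...) {
	int written = snprintf(out, cap, "(%s) %s: ", priority, tag);
	va_list arg_ptr;
	va_start(arg_ptr, fmt);
	written += vsnprintf(out + written, cap - written, fmt, arg_ptr);
	va_end(arg_ptr);
	return written;
}
```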

### Updating the iOS code

There’s just one new file that we need to add to the ios group in our Xcode project, platform_asset_utils.m:

```#include "platform_asset_utils.h"
#include "platform_file_utils.h"
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

FileData get_asset_data(const char* relative_path) {
assert(relative_path != NULL);

NSMutableString* adjusted_relative_path =
[[NSMutableString alloc] initWithString:@"/assets/"];
[adjusted_relative_path appendString:
[[NSString alloc] initWithCString:relative_path encoding:NSASCIIStringEncoding]];

return get_file_data(
[[[NSBundle mainBundle] pathForResource:adjusted_relative_path ofType:nil]
cStringUsingEncoding:NSASCIIStringEncoding]);
}

void release_asset_data(const FileData* file_data) {
assert(file_data != NULL);
release_file_data(file_data);
}```

To load in an asset that’s been bundled with the application, we first prefix the path with ‘/assets/’, and then we use the `mainBundle` of the application to get the path for the resource. Once we’ve done that, we can use the regular file reading code that we’ve defined in platform_file_utils.c.

iOS experts: When I was researching how to do this, I wasn’t sure if this was the best way or even the right way, but it does seem to work. I’d love to know if there’s another way to do this that is more appropriate, perhaps just by grabbing the path of the application and concatenating that with the relative path?

Aside from adding this new file, we just need to add some references to the project and then we’ll be able to compile & run the app.

Right-click the project and select Add Files to “Air Hockey”…. Add the following C files from the libpng folder, and add them as a new folder group:

png.c pngerror.c pngget.c pngmem.c pngpread.c pngread.c pngrio.c pngrtran.c pngrutil.c pngset.c pngtrans.c pngwio.c pngwrite.c pngwtran.c pngwutil.c

Remove the common folder group that may be left there from the last lesson, and then add all of the files from the core folder as a new folder group. Do the same for all of the files in /platform/common. Finally, add the assets folder as a folder reference, not as a folder group. That will link the assets folder directly into the project and package those files with the application.

We’ll also need to link to libz.dylib. To do this, click on the ‘airhockey’ target, select Build Phases, expand Link Binary With Libraries, and add a reference to ‘libz.dylib’.

The Xcode Project Navigator should look similar to the below:

It might make more sense to link in the libpng sources as a static library somehow, but I found that this compiled very fast even from a clean build. Once you run the application in the simulator, it should look similar to the following image:

Now the same code that we used in Android is running on iOS to load in a texture, with very little work required to customize it for iOS! One of the advantages of this approach is that we can also take advantage of the vastly superior debugging and profiling capabilities of Xcode (as compared to what you get in Eclipse with the NDK!), and Xcode can also build the project far faster than the Android tools can, leading to quicker iteration times.

### Exploring further

The full source code for this lesson can be found at the GitHub project. In the next post, we’ll cover an emscripten target, and we’ll see that it also won’t take much work to support. As always, let me know your feedback. 🙂

## Loading a PNG into Memory and Displaying It as a Texture with OpenGL ES 2, Using (Almost) the Same Code on iOS, Android, and Emscripten

In the last post in this series, we setup a system to render OpenGL to Android, iOS and the web via WebGL and emscripten. In this post, we’ll expand on that work and add support for PNG loading, shaders, and VBOs.

### TL;DR

We can put most of our common code into a core folder, and call into that core from a main loop in our platform-specific code. By taking advantage of open source libraries like libpng and zlib, most of our code can remain platform independent. In this post, we cover the new core code and the new Android platform-specific code.

### Prerequisites

Before we begin, you may want to check out the previous posts in this series so that you can get the right tools installed and configured on your local development machine:

You can set up a local git repository with all of the code by cloning ‘article-1-clearing-the-screen’ or by downloading it as a ZIP from GitHub: https://github.com/learnopengles/airhockey/tree/article-1-clearing-the-screen.

For a “friendlier” introduction to OpenGL ES 2 using Java as the development language of choice, you can also check out Android Lesson One: Getting Started or OpenGL ES 2 for Android: A Quick-Start Guide.

### Updating the platform-independent code

In this section, we’ll cover all of the new changes to the platform-independent core code that we’ll be making to support the new features. The first thing that we’ll do is move things around, so that they follow this new structure:

/src/common => rename to /src/core

/src/android => rename to /src/platform/android

/src/ios => rename to /src/platform/ios

/src/emscripten => rename to /src/platform/emscripten

We’ll also rename glwrapper.h to platform_gl.h for all platforms. This will help to keep our source code more organized as we add more features and source files.

To start off, let’s cover all of the source files that go into /src/core.

Let’s begin with buffer.h:

```#include "platform_gl.h"

#define BUFFER_OFFSET(i) ((void*)(i))

GLuint create_vbo(const GLsizeiptr size, const GLvoid* data, const GLenum usage);```

We’ll use `create_vbo` to upload data into a vertex buffer object. `BUFFER_OFFSET()` is a helper macro that we’ll use to pass the right offsets to `glVertexAttribPointer()`.
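`BUFFER_OFFSET()` earns its keep once vertex attributes are interleaved in a single VBO, where each attribute starts at a byte offset rather than at zero. Here's a sketch using a hypothetical interleaved `Vertex` layout; `offsetof` computes the byte offsets to pass:

```c
#include <assert.h>
#include <stddef.h>

#define BUFFER_OFFSET(i) ((void*)(i))

// A hypothetical interleaved layout: 3 position floats, then 2 texture coordinates.
typedef struct {
	float position[3];
	float texture_coordinates[2];
} Vertex;

// With a bound VBO holding an array of Vertex, the attribute setup would be:
//   glVertexAttribPointer(a_position_location, 3, GL_FLOAT, GL_FALSE,
//       sizeof(Vertex), BUFFER_OFFSET(offsetof(Vertex, position)));
//   glVertexAttribPointer(a_texture_coordinates_location, 2, GL_FLOAT, GL_FALSE,
//       sizeof(Vertex), BUFFER_OFFSET(offsetof(Vertex, texture_coordinates)));
```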

Let’s follow up with the implementation in buffer.c:

```#include "buffer.h"
#include "platform_gl.h"
#include <assert.h>
#include <stdlib.h>

GLuint create_vbo(const GLsizeiptr size, const GLvoid* data, const GLenum usage) {
assert(data != NULL);
GLuint vbo_object;
glGenBuffers(1, &vbo_object);
assert(vbo_object != 0);

glBindBuffer(GL_ARRAY_BUFFER, vbo_object);
glBufferData(GL_ARRAY_BUFFER, size, data, usage);
glBindBuffer(GL_ARRAY_BUFFER, 0);

return vbo_object;
}```

First, we generate a new OpenGL vertex buffer object, and then we bind to it and upload the data from `data` into the VBO. We also assert that the data is not null and that we successfully created a new vertex buffer object. Why do we assert instead of returning an error code? There are a couple of reasons for that:

1. In the context of a game, there isn’t really a reasonable course of action that we can take in the event that creating a new VBO fails. Something is going to fail to display properly, so our game experience isn’t going to be as intended. We would also never expect this to fail, unless we’re abusing the platform and trying to do too much for the target hardware.
2. Returning an error means that we now have to expand our code by handling the error and checking for the error at the other end, perhaps cascading that across several function calls. This adds a lot of maintenance burden with little gain.

I have been greatly influenced by the excellent series on this topic over at the Bitsquid blog.

`assert()` is only compiled into the program in debug mode by default, so in release mode, the application will just continue to run and might end up crashing on bad data. To avoid this, when going into production, you may want to create a special `assert()` that works in release mode and does a little bit more, perhaps showing a dialog box to the user before crashing and writing out a log to a file, so that it can be sent off to the developers.
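One way to sketch such a release-mode assert is below. The names are hypothetical, and a production version might show a dialog or write a crash log before aborting, as suggested above:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

// Unlike assert(), this is always compiled in: log the failure, then abort.
#define CRASH(msg) do { \
	fprintf(stderr, "Fatal: %s (%s:%d)\n", (msg), __FILE__, __LINE__); \
	abort(); \
} while (0)

// A release-mode assert built on top of CRASH.
#define RELEASE_ASSERT(cond) do { \
	if (!(cond)) CRASH("Assertion failed: " #cond); \
} while (0)
```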

Next, let’s create shader.h:

```#include "platform_gl.h"

GLuint compile_shader(const GLenum type, const GLchar* source, const GLint length);
GLuint link_program(const GLuint vertex_shader, const GLuint fragment_shader);
GLuint build_program(
const GLchar* vertex_shader_source, const GLint vertex_shader_source_length,
const GLchar* fragment_shader_source, const GLint fragment_shader_source_length);

/* Should be called just before using a program to draw, if validation is needed. */
GLint validate_program(const GLuint program);```

Here, we have methods to compile a shader and to link two shaders into an OpenGL shader program. We also have a helper method here for validating a program, if we want to do that for debugging reasons.

Let’s begin the implementation for shader.c:

```#include "shader.h"
#include "platform_gl.h"
#include "platform_log.h"
#include <assert.h>
#include <stdlib.h>
#include <string.h>

static void log_v_fixed_length(const GLchar* source, const GLint length) {
if (LOGGING_ON) {
char log_buffer[length + 1];
memcpy(log_buffer, source, length);
log_buffer[length] = '\0';

DEBUG_LOG_WRITE_V(TAG, log_buffer);
}
}

static void log_shader_info_log(GLuint shader_object_id) {
if (LOGGING_ON) {
GLint log_length;
glGetShaderiv(shader_object_id, GL_INFO_LOG_LENGTH, &log_length);
GLchar log_buffer[log_length];
glGetShaderInfoLog(shader_object_id, log_length, NULL, log_buffer);

DEBUG_LOG_WRITE_V(TAG, log_buffer);
}
}

static void log_program_info_log(GLuint program_object_id) {
if (LOGGING_ON) {
GLint log_length;
glGetProgramiv(program_object_id, GL_INFO_LOG_LENGTH, &log_length);
GLchar log_buffer[log_length];
glGetProgramInfoLog(program_object_id, log_length, NULL, log_buffer);

DEBUG_LOG_WRITE_V(TAG, log_buffer);
}
}```

We’ve added some helper functions to help us log the shader and program info logs when logging is enabled. We’ll define `LOGGING_ON` and the other logging functions in other include files, soon. Let’s continue:

```GLuint compile_shader(const GLenum type, const GLchar* source, const GLint length) {
assert(source != NULL);
GLuint shader_object_id = glCreateShader(type);
GLint compile_status;

assert(shader_object_id != 0);

glShaderSource(shader_object_id, 1, (const GLchar **)&source, &length);
glCompileShader(shader_object_id);
glGetShaderiv(shader_object_id, GL_COMPILE_STATUS, &compile_status);

if (LOGGING_ON) {
DEBUG_LOG_WRITE_D(TAG, "Results of compiling shader source:");
log_v_fixed_length(source, length);
log_shader_info_log(shader_object_id);
}

assert(compile_status != 0);

return shader_object_id;
}```

We create a new shader object, pass in the source, compile it, and if everything was successful, we then return the shader ID. Now we need a method for linking two shaders together into an OpenGL program:

```GLuint link_program(const GLuint vertex_shader, const GLuint fragment_shader) {
GLuint program_object_id = glCreateProgram();
GLint link_status;

assert(program_object_id != 0);

glAttachShader(program_object_id, vertex_shader);
glAttachShader(program_object_id, fragment_shader);
glLinkProgram(program_object_id);
glGetProgramiv(program_object_id, GL_LINK_STATUS, &link_status);

if (LOGGING_ON) {
log_program_info_log(program_object_id);
}

assert(link_status != 0);

return program_object_id;
}```

To link the program, we pass in two OpenGL shader objects, one for the vertex shader and one for the fragment shader, and then we link them together. If all was successful, then we return the program object ID.

```GLuint build_program(
const GLchar* vertex_shader_source, const GLint vertex_shader_source_length,
const GLchar* fragment_shader_source, const GLint fragment_shader_source_length) {
assert(vertex_shader_source != NULL);
assert(fragment_shader_source != NULL);

GLuint vertex_shader = compile_shader(
GL_VERTEX_SHADER, vertex_shader_source, vertex_shader_source_length);
GLuint fragment_shader = compile_shader(
GL_FRAGMENT_SHADER, fragment_shader_source, fragment_shader_source_length);
return link_program(vertex_shader, fragment_shader);
}```

This helper method takes in the source for a vertex shader and a fragment shader, and returns the linked program object. Let’s add the second helper method:

```GLint validate_program(const GLuint program) {
if (LOGGING_ON) {
int validate_status;

glValidateProgram(program);
glGetProgramiv(program, GL_VALIDATE_STATUS, &validate_status);
DEBUG_LOG_PRINT_D(TAG, "Results of validating program: %d", validate_status);
log_program_info_log(program);
return validate_status;
}

return 0;
}```

We can use `validate_program()` for debugging purposes, if we want some extra info about a program during a specific moment in our rendering code.

Now we need some code to load in raw data into a texture. Let’s add the following into a new file called texture.h:

```#include "platform_gl.h"

GLuint load_texture(
const GLsizei width, const GLsizei height,
const GLenum type, const GLvoid* pixels);```

Let’s follow that up with the implementation in texture.c:

```#include "texture.h"
#include "platform_gl.h"
#include <assert.h>

GLuint load_texture(
const GLsizei width, const GLsizei height,
const GLenum type, const GLvoid* pixels) {
GLuint texture_object_id;
glGenTextures(1, &texture_object_id);
assert(texture_object_id != 0);

glBindTexture(GL_TEXTURE_2D, texture_object_id);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(
GL_TEXTURE_2D, 0, type, width, height, 0, type, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);

glBindTexture(GL_TEXTURE_2D, 0);
return texture_object_id;
}```

This is pretty straightforward and not currently customized for special cases: it just loads in the raw data in `pixels` into the texture, assuming that each component is 8-bit. It then sets up the texture for trilinear mipmapping.

For this post, we’ll package our texture asset as a PNG file, and use libpng to decode the file into raw data. For that we’ll need to add some wrapper code around libpng so that we can decode a PNG file into raw data suitable for upload into an OpenGL texture.

Let’s create a new file called image.h, with the following contents:

```#include "platform_gl.h"

typedef struct {
const int width;
const int height;
const int size;
const GLenum gl_color_format;
const void* data;
} RawImageData;

/* Returns the decoded image data, or aborts if there's an error during decoding. */
RawImageData get_raw_image_data_from_png(const void* png_data, const int png_data_size);
void release_raw_image_data(const RawImageData* data);```

We’ll use `get_raw_image_data_from_png()` to read in the PNG data from `png_data` and return the raw data in a struct. When we no longer need to keep that raw data around, we can call `release_raw_image_data()` to release the associated resources.

Let’s start writing the implementation in image.c:

```#include "image.h"
#include "platform_log.h"
#include <assert.h>
#include <png.h>
#include <string.h>
#include <stdlib.h>

typedef struct {
const png_byte* data;
const png_size_t size;
} DataHandle;

typedef struct {
const DataHandle data;
png_size_t offset;
} ReadDataHandle;

typedef struct {
const png_uint_32 width;
const png_uint_32 height;
const int color_type;
} PngInfo;```

We’ve started off with the includes and a few structs that we’ll be using locally. Let’s continue with a few function prototypes:

```static void read_png_data_callback(
png_structp png_ptr, png_byte* png_data, png_size_t read_length);
static PngInfo read_and_update_info(const png_structp png_ptr, const png_infop info_ptr);
static DataHandle read_entire_png_image(
const png_structp png_ptr, const png_infop info_ptr, const png_uint_32 height);
static GLenum get_gl_color_format(const int png_color_format);```

We’ll be using these as local helper functions. Now we can add the implementation for `get_raw_image_data_from_png()`:

```RawImageData get_raw_image_data_from_png(const void* png_data, const int png_data_size) {
assert(png_data != NULL && png_data_size > 8);
assert(png_check_sig((void*)png_data, 8));

png_structp png_ptr = png_create_read_struct(
PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
assert(png_ptr != NULL);
png_infop info_ptr = png_create_info_struct(png_ptr);
assert(info_ptr != NULL);

ReadDataHandle png_data_handle = (ReadDataHandle) {{png_data, png_data_size}, 0};
png_set_read_fn(png_ptr, &png_data_handle, read_png_data_callback);

if (setjmp(png_jmpbuf(png_ptr))) {
CRASH("Error reading PNG file!");
}

const PngInfo png_info = read_and_update_info(png_ptr, info_ptr);
const DataHandle raw_image = read_entire_png_image(
png_ptr, info_ptr, png_info.height);

png_read_end(png_ptr, info_ptr);
png_destroy_read_struct(&png_ptr, &info_ptr, NULL);

return (RawImageData) {
png_info.width,
png_info.height,
raw_image.size,
get_gl_color_format(png_info.color_type),
raw_image.data};
}```

There’s a lot going on here, so let’s explain each part in turn:

```	assert(png_data != NULL && png_data_size > 8);
assert(png_check_sig((void*)png_data, 8));```

This checks that the PNG data is present and has a valid header.

```	png_structp png_ptr = png_create_read_struct(
PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
assert(png_ptr != NULL);
png_infop info_ptr = png_create_info_struct(png_ptr);
assert(info_ptr != NULL);```

This initializes the PNG structures that we’ll use to read in the rest of the data.

```	ReadDataHandle png_data_handle = (ReadDataHandle) {{png_data, png_data_size}, 0};
	png_set_read_fn(png_ptr, &png_data_handle, read_png_data_callback);```

As the PNG data is parsed, libpng will call `read_png_data_callback()` for each part of the PNG file. Since we’re reading in the PNG file from memory, we’ll use `ReadDataHandle` to wrap this memory buffer so that we can read from it as if it were a file.

```	if (setjmp(png_jmpbuf(png_ptr))) {
		CRASH("Error reading PNG file!");
	}```

This is how libpng does its error handling. If something goes wrong, then `setjmp` will return true and we’ll enter the body of the if statement. We want to handle this like an assert, so we just crash the program. We’ll define the `CRASH` macro later on.

`	const PngInfo png_info = read_and_update_info(png_ptr, info_ptr);`

We’ll use one of our helper functions here to parse the PNG information, such as the color format, and convert the PNG into a format that we want.

```	const DataHandle raw_image = read_entire_png_image(
png_ptr, info_ptr, png_info.height);```

We’ll use another helper function here to read in and decode the PNG image data.

```	png_read_end(png_ptr, info_ptr);
	png_destroy_read_struct(&png_ptr, &info_ptr, NULL);

return (RawImageData) {
png_info.width,
png_info.height,
raw_image.size,
get_gl_color_format(png_info.color_type),
raw_image.data};```

Once reading is complete, we clean up the PNG structures and then we return the data inside of a `RawImageData` struct.

Let’s define our helper methods now:

```static void read_png_data_callback(
png_structp png_ptr, png_byte* raw_data, png_size_t read_length) {
ReadDataHandle* handle = png_get_io_ptr(png_ptr);
const png_byte* png_src = handle->data.data + handle->offset;

memcpy(raw_data, png_src, read_length);
handle->offset += read_length;
}```

`read_png_data_callback()` will be called by libpng to read from the memory buffer. To read from the right place in the memory buffer, we store an offset and we increase that offset every time that `read_png_data_callback()` is called.

```static PngInfo read_and_update_info(const png_structp png_ptr, const png_infop info_ptr)
{
png_uint_32 width, height;
int bit_depth, color_type;

png_get_IHDR(
png_ptr, info_ptr, &width, &height, &bit_depth, &color_type, NULL, NULL, NULL);

// Convert transparency to full alpha
if (png_get_valid(png_ptr, info_ptr, PNG_INFO_tRNS))
png_set_tRNS_to_alpha(png_ptr);

// Convert grayscale, if needed.
if (color_type == PNG_COLOR_TYPE_GRAY && bit_depth < 8)
png_set_expand_gray_1_2_4_to_8(png_ptr);

// Convert paletted images, if needed.
if (color_type == PNG_COLOR_TYPE_PALETTE)
png_set_palette_to_rgb(png_ptr);

// Add an alpha channel, if there is none.
// Rationale: GL_RGBA is faster than GL_RGB on many GPUs.
if (color_type == PNG_COLOR_TYPE_PALETTE || color_type == PNG_COLOR_TYPE_RGB)
png_set_add_alpha(png_ptr, 0xFF, PNG_FILLER_AFTER);

// Ensure 8-bit packing
if (bit_depth < 8)
png_set_packing(png_ptr);
else if (bit_depth == 16)
png_set_scale_16(png_ptr);

png_read_update_info(png_ptr, info_ptr);

color_type = png_get_color_type(png_ptr, info_ptr);

return (PngInfo) {width, height, color_type};
}```

This helper function reads in the PNG data, and then it asks libpng to perform several transformations based on the PNG type:

• Transparency information is converted into a full alpha channel.
• Grayscale images are converted to 8-bit.
• Paletted images are converted to full RGB.
• RGB images get an alpha channel added, if none is present.
• Color channels narrower than 8 bits are expanded to 8 bits, and 16-bit channels are scaled down to 8 bits.

The PNG is then updated with the new transformations and the new color type is stored into `color_type`.
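Conceptually, expanding a low-bit-depth sample to 8 bits is a rescale from the source range onto 0–255; a 1-bit value of 1 and a 4-bit value of 15 both map to 255. This helper is just an illustration of that idea, not libpng’s actual code:

```c
#include <assert.h>

/* Scales an n-bit sample (n <= 8) into the full 0..255 range,
 * rounding to the nearest value. */
static unsigned char expand_to_8_bit(unsigned int value, int bit_depth) {
    unsigned int max_value = (1u << bit_depth) - 1u;
    return (unsigned char) ((value * 255u + max_value / 2u) / max_value);
}
```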

For the next step, we’ll add a helper function to decode the PNG image data into raw image data:

```static DataHandle read_entire_png_image(
const png_structp png_ptr,
const png_infop info_ptr,
const png_uint_32 height)
{
const png_size_t row_size = png_get_rowbytes(png_ptr, info_ptr);
const int data_length = row_size * height;
assert(row_size > 0);

png_byte* raw_image = malloc(data_length);
assert(raw_image != NULL);

png_byte* row_ptrs[height];

png_uint_32 i;
for (i = 0; i < height; i++) {
row_ptrs[i] = raw_image + i * row_size;
}

png_read_image(png_ptr, row_ptrs);

return (DataHandle) {raw_image, data_length};
}```

First, we allocate a block of memory large enough to hold the decoded image data. Since libpng wants to decode things line by line, we also need to set up an array on the stack that contains a set of pointers into this image data, one pointer for each line. We can then call `png_read_image()` to decode all of the PNG data, and then we return that as a `DataHandle`.
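The row-pointer setup is worth seeing in isolation: each pointer is offset into the single contiguous allocation by a multiple of the row size. The helper name here is illustrative:

```c
#include <assert.h>
#include <stdlib.h>

/* Fills row_ptrs so that entry i points at the start of row i inside
 * the contiguous image buffer, as png_read_image() expects. */
static void fill_row_pointers(unsigned char* image, size_t row_size,
                              size_t height, unsigned char** row_ptrs) {
    size_t i;
    for (i = 0; i < height; i++) {
        row_ptrs[i] = image + i * row_size;
    }
}
```

Since the rows all alias one allocation, a single `free()` of the image buffer releases everything.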

Let’s add the last helper method:

```static GLenum get_gl_color_format(const int png_color_format) {
assert(png_color_format == PNG_COLOR_TYPE_GRAY
|| png_color_format == PNG_COLOR_TYPE_RGB_ALPHA
|| png_color_format == PNG_COLOR_TYPE_GRAY_ALPHA);

switch (png_color_format) {
case PNG_COLOR_TYPE_GRAY:
return GL_LUMINANCE;
case PNG_COLOR_TYPE_RGB_ALPHA:
return GL_RGBA;
case PNG_COLOR_TYPE_GRAY_ALPHA:
return GL_LUMINANCE_ALPHA;
}

return 0;
}```

This function will read in the PNG color format and return the matching OpenGL color format. We expect that after the transformations that we did, the PNG color format will be either `PNG_COLOR_TYPE_GRAY`, `PNG_COLOR_TYPE_GRAY_ALPHA`, or `PNG_COLOR_TYPE_RGB_ALPHA`, so we assert against those types.

```void release_raw_image_data(const RawImageData* data) {
assert(data != NULL);
free((void*)data->data);
}```

We’ll call this when we’re done with the raw data and can return the associated memory to the heap.

### The benefits of using libpng versus platform-specific code

At this point, you might be asking why we didn’t simply use what each platform offers us, such as `BitmapFactory.decode???` on Android, where `???` is one of the decode methods. Using platform-specific code means that we would have to duplicate the code for each platform, so on Android we would wrap some code around `BitmapFactory`, and on the other platforms we would do something else. This might be a good idea if the platform-specific code were better at the job; however, in personal testing on the Nexus 7, using `BitmapFactory` actually seems to be a lot slower than just using libpng directly.

Here are the timings I observed for loading a single PNG file from the assets folder and uploading it into an OpenGL texture:

```iPhone 5, libpng:       ~28ms
Nexus 7, libpng:        ~35ms
Nexus 7, BitmapFactory: ~93ms
```

To reduce possible sources of slowdown, I avoided JNI and had the Java code upload the data directly into a texture, and return the texture object ID to C. I also used `inScaled = false` and placed the image in the assets folder to avoid extra scaling; if someone has extra insight into this issue, I would definitely love to hear it! I can only surmise that there must be a lot of extra stuff going on behind the scenes, or that the overhead of doing this from Java using the Dalvik VM is just so great that it results in that much of a slowdown. The Nexus 7 is a powerful Android device, so these timings are going to be much worse on slower Android devices. Since libpng is faster than the platform-specific alternative, at least on Android, and since maintaining one set of code is easier than maintaining separate code for each platform, I’ve decided to just use libpng on all platforms for PNG image decoding.

Just for fun, here are the emscripten numbers on a MacBook Air with a 1.7 GHz Intel Core i5 and 4GB 1333 MHz DDR3 RAM, loading an uncompressed HTML with embedded resources from the local filesystem:

```Chrome 28, first time: ~318ms
Firefox 22:            ~27ms
```

Interestingly enough, the code ran faster when it was compiled without the closure compiler and LLVM LTO.

#### Wrapping up the rest of the changes to the core folder

Let’s wrap up the rest of the changes to the core folder by adding the following files:

config.h:

`#define LOGGING_ON 1`

We’ll use this to control whether logging should be turned on or off.

macros.h:

`#define UNUSED(x) (void)(x)`

This will help us suppress compiler warnings related to unused parameters, which is useful for JNI methods that are called from Java.

asset_utils.h:

```#include "platform_gl.h"

GLuint load_png_asset_into_texture(const char* relative_path);

GLuint build_program_from_assets(
const char* vertex_shader_path, const char* fragment_shader_path);```

We’ll use these helper methods in game.c to make it easier to load in the texture and shaders.

asset_utils.c:

```#include "asset_utils.h"
#include "image.h"
#include "platform_asset_utils.h"
#include "shader.h"
#include "texture.h"
#include <assert.h>
#include <stdlib.h>

GLuint load_png_asset_into_texture(const char* relative_path) {
assert(relative_path != NULL);

const FileData png_file = get_asset_data(relative_path);
const RawImageData raw_image_data =
get_raw_image_data_from_png(png_file.data, png_file.data_length);
const GLuint texture_object_id = load_texture(
raw_image_data.width, raw_image_data.height,
raw_image_data.gl_color_format, raw_image_data.data);

release_raw_image_data(&raw_image_data);
release_asset_data(&png_file);

return texture_object_id;
}

GLuint build_program_from_assets(
const char* vertex_shader_path, const char* fragment_shader_path) {
assert(vertex_shader_path != NULL);
assert(fragment_shader_path != NULL);

const FileData vertex_shader_source = get_asset_data(vertex_shader_path);
const FileData fragment_shader_source = get_asset_data(fragment_shader_path);
const GLuint program_object_id = build_program(
vertex_shader_source.data, vertex_shader_source.data_length,
fragment_shader_source.data, fragment_shader_source.data_length);

release_asset_data(&vertex_shader_source);
release_asset_data(&fragment_shader_source);

return program_object_id;
}```

This is the implementation for asset_utils.h. We’ll use `load_png_asset_into_texture()` to load a PNG file from the assets folder into an OpenGL texture, and we’ll use `build_program_from_assets()` to load in two shaders from the assets folder and compile and link them into an OpenGL shader program.

#### Updating game.c

We’ll need to update game.c to use all of the new code that we’ve added. Delete everything that’s there and replace it with the following start to our new code:

```#include "game.h"
#include "asset_utils.h"
#include "buffer.h"
#include "image.h"
#include "platform_gl.h"
#include "platform_asset_utils.h"
#include "texture.h"

static GLuint texture;
static GLuint buffer;
static GLuint program;

static GLint a_position_location;
static GLint a_texture_coordinates_location;
static GLint u_texture_unit_location;

// position X, Y, texture S, T
static const float rect[] = {-1.0f, -1.0f, 0.0f, 0.0f,
-1.0f,  1.0f, 0.0f, 1.0f,
1.0f, -1.0f, 1.0f, 0.0f,
1.0f,  1.0f, 1.0f, 1.0f};```

We’ve added our includes, a few local variables to hold the OpenGL objects and shader attribute and uniform locations, and an array of floats which contains a set of positions and texture coordinates for a rectangle that will completely fill the screen. We’ll use that to draw our texture onto the screen.
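With interleaved data like this, each vertex occupies four floats, so the stride passed to `glVertexAttribPointer()` will be `4 * sizeof(float)`, and the texture coordinates begin two floats into each vertex. The arithmetic can be sketched in plain C, with no OpenGL calls (the helper name is illustrative):

```c
#include <assert.h>

/* Reads one component of one attribute of vertex `index` from an
 * interleaved position-X,Y / texture-S,T array, mirroring how OpenGL
 * walks a VBO with a stride of 4 floats. `offset` is 0 for position
 * and 2 for texture coordinates. */
static float read_attribute(const float* vertices, int index,
                            int offset, int component) {
    const int stride = 4; /* floats per vertex: X, Y, S, T */
    return vertices[index * stride + offset + component];
}
```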

Let’s continue the code:

```void on_surface_created() {
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
}

void on_surface_changed() {
texture = load_png_asset_into_texture("textures/air_hockey_surface.png");
buffer = create_vbo(sizeof(rect), rect, GL_STATIC_DRAW);
program = build_program_from_assets("shaders/shader.vsh", "shaders/shader.fsh");

a_position_location = glGetAttribLocation(program, "a_Position");
a_texture_coordinates_location =
glGetAttribLocation(program, "a_TextureCoordinates");
u_texture_unit_location = glGetUniformLocation(program, "u_TextureUnit");
}```

In `on_surface_created()`, we set the clear color just as before. In `on_surface_changed()`, we load in a texture from textures/air_hockey_surface.png, we create a VBO from the data stored in `rect`, and then we build an OpenGL shader program from the shaders located at shaders/shader.vsh and shaders/shader.fsh. Once we have the program loaded, we use it to grab the attribute and uniform locations out of the shader.

We haven’t yet defined the code to load in the actual assets from the file system, since a good part of that is platform-specific. When we do, we’ll take care to set things up so that these relative paths “just work”.

Let’s complete game.c:

```void on_draw_frame() {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glUseProgram(program);

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(u_texture_unit_location, 0);

glBindBuffer(GL_ARRAY_BUFFER, buffer);
glVertexAttribPointer(a_position_location, 2, GL_FLOAT, GL_FALSE,
4 * sizeof(GL_FLOAT), BUFFER_OFFSET(0));
glVertexAttribPointer(a_texture_coordinates_location, 2, GL_FLOAT, GL_FALSE,
4 * sizeof(GL_FLOAT), BUFFER_OFFSET(2 * sizeof(GL_FLOAT)));
glEnableVertexAttribArray(a_position_location);
glEnableVertexAttribArray(a_texture_coordinates_location);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

glBindBuffer(GL_ARRAY_BUFFER, 0);
}```

In the draw loop, we clear the screen, set the shader program, bind the texture and VBO, set up the attributes using `glVertexAttribPointer()`, and then draw to the screen with `glDrawArrays()`. If you’ve looked at the Java tutorials before, one thing you’ll notice is that it’s a bit easier to use `glVertexAttribPointer()` from C than it is from Java. For one thing, if we were using client-side arrays, we could just pass in the array without worrying about any `ByteBuffer`s; for another, we can use the `sizeof` operator to get the size of a datatype in bytes, so there’s no need to hardcode that.

This wraps up everything for the core folder, so in the next few steps, we’re going to add in the necessary platform wrappers to get this working on Android.

### Adding the common platform code

These new files should go in /airhockey/src/platform/common:

platform_file_utils.h

```#pragma once
typedef struct {
const long data_length;
const void* data;
const void* file_handle;
} FileData;

FileData get_file_data(const char* path);
void release_file_data(const FileData* file_data);```

We’ll use this to read data from the file system on iOS and emscripten. We’ll also use `FileData` for our Android asset reading code. We won’t define the implementation of the functions for now since we won’t need them for Android.
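When we do implement `get_file_data()` for iOS and emscripten, a plain stdio version will be enough. The following is a hedged sketch of what such an implementation could look like; the struct and function names carry a `_sketch` suffix to mark them as illustrations, and the real project may differ in details:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    long data_length;
    void* data;
    void* file_handle; /* unused in this sketch */
} FileDataSketch;

/* Reads an entire file into a heap-allocated buffer. */
static FileDataSketch get_file_data_sketch(const char* path) {
    FILE* stream = fopen(path, "rb");
    assert(stream != NULL);

    fseek(stream, 0, SEEK_END);
    const long length = ftell(stream);
    fseek(stream, 0, SEEK_SET);

    void* buffer = malloc((size_t)length);
    assert(buffer != NULL);
    fread(buffer, 1, (size_t)length, stream);
    fclose(stream);

    return (FileDataSketch) {length, buffer, NULL};
}

static void release_file_data_sketch(const FileDataSketch* file_data) {
    assert(file_data != NULL);
    free(file_data->data);
}
```

The Android version can skip all of this, since the asset manager hands us a buffer directly.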

platform_asset_utils.h

```#include "platform_file_utils.h"

FileData get_asset_data(const char* relative_path);
void release_asset_data(const FileData* file_data);```

We’ll use this to read in assets. For Android this will be specialized code since it will use the `AssetManager` class to read files straight from the APK file.

platform_log.h

```#include "platform_macros.h"
#include "config.h"

void _debug_log_v(const char* tag, const char* text, ...) PRINTF_ATTRIBUTE(2, 3);
void _debug_log_d(const char* tag, const char* text, ...) PRINTF_ATTRIBUTE(2, 3);
void _debug_log_w(const char* tag, const char* text, ...) PRINTF_ATTRIBUTE(2, 3);
void _debug_log_e(const char* tag, const char* text, ...) PRINTF_ATTRIBUTE(2, 3);

#define DEBUG_LOG_PRINT_V(tag, fmt, ...) do { if (LOGGING_ON) _debug_log_v(tag, "%s:%d:%s(): " fmt, __FILE__, __LINE__, __func__, __VA_ARGS__); } while (0)
#define DEBUG_LOG_PRINT_D(tag, fmt, ...) do { if (LOGGING_ON) _debug_log_d(tag, "%s:%d:%s(): " fmt, __FILE__, __LINE__, __func__, __VA_ARGS__); } while (0)
#define DEBUG_LOG_PRINT_W(tag, fmt, ...) do { if (LOGGING_ON) _debug_log_w(tag, "%s:%d:%s(): " fmt, __FILE__, __LINE__, __func__, __VA_ARGS__); } while (0)
#define DEBUG_LOG_PRINT_E(tag, fmt, ...) do { if (LOGGING_ON) _debug_log_e(tag, "%s:%d:%s(): " fmt, __FILE__, __LINE__, __func__, __VA_ARGS__); } while (0)

#define DEBUG_LOG_WRITE_V(tag, text) DEBUG_LOG_PRINT_V(tag, "%s", text)
#define DEBUG_LOG_WRITE_D(tag, text) DEBUG_LOG_PRINT_D(tag, "%s", text)
#define DEBUG_LOG_WRITE_W(tag, text) DEBUG_LOG_PRINT_W(tag, "%s", text)
#define DEBUG_LOG_WRITE_E(tag, text) DEBUG_LOG_PRINT_E(tag, "%s", text)

#define CRASH(e) do { DEBUG_LOG_WRITE_E("Assert", #e); __builtin_trap(); } while (0)```

This contains a bunch of macros to help us do logging from our core game code. `CRASH()` is a special macro that will log the message passed to it, then call `__builtin_trap()` to stop execution. We used this macro above when we were loading in the PNG file.
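The `"%s:%d:%s(): " fmt` trick in those macros relies on the preprocessor concatenating adjacent string literals, so the file, line, and function are prepended to every message. A standalone sketch of the same pattern, using `snprintf` into a buffer in place of the platform log call (all names here are illustrative):

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for a platform logging function: formats the message into
 * out so that it can be inspected. */
static void format_log(char* out, size_t out_size, const char* fmt, ...) {
    va_list args;
    va_start(args, fmt);
    vsnprintf(out, out_size, fmt, args);
    va_end(args);
}

/* Prepends file, line, and function to the message, as the
 * DEBUG_LOG_PRINT_* macros above do. */
#define SKETCH_LOG(out, out_size, fmt, ...) \
    format_log(out, out_size, "%s:%d:%s(): " fmt, \
               __FILE__, __LINE__, __func__, __VA_ARGS__)
```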

platform_macros.h

```#if defined(__GNUC__)
#define PRINTF_ATTRIBUTE(format_pos, arg_pos) __attribute__((format(printf, format_pos, arg_pos)))
#else
#define PRINTF_ATTRIBUTE(format_pos, arg_pos)
#endif```

This macro lets the compiler check the format strings that we pass to our log functions against their arguments, on compilers that support the `format` attribute.

### Updating the Android code

For the Android target, we have a bit of cleanup to do first. Let’s open up the Android project in Eclipse, get rid of GameLibJNIWrapper.java and update RendererWrapper.java as follows:

```package com.learnopengles.airhockey;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import android.content.Context;
import android.opengl.GLSurfaceView.Renderer;

import com.learnopengles.airhockey.platform.PlatformFileUtils;

public class RendererWrapper implements Renderer {
static {
System.loadLibrary("game");
}

private final Context context;

public RendererWrapper(Context context) {
this.context = context;
}

@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
PlatformFileUtils.init_asset_manager(context.getAssets());
on_surface_created();
}

@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
on_surface_changed(width, height);
}

@Override
public void onDrawFrame(GL10 gl) {
on_draw_frame();
}

private static native void on_surface_created();

private static native void on_surface_changed(int width, int height);

private static native void on_draw_frame();
}```

We’ve moved the native methods into `RendererWrapper` itself. The new `RendererWrapper` wants a `Context` passed into its constructor, so give it one by updating the constructor call in MainActivity.java as follows:

`glSurfaceView.setRenderer(new RendererWrapper(this));`

For Android, we’ll be using the `AssetManager` to read in assets that are compiled directly into the APK file. We’ll need a way to pass a reference to the `AssetManager` to our C code, so let’s create a new class in a new package called `com.learnopengles.airhockey.platform` called `PlatformFileUtils`, and add the following code:

```package com.learnopengles.airhockey.platform;

import android.content.res.AssetManager;

public class PlatformFileUtils {
public static native void init_asset_manager(AssetManager assetManager);
}```

We are calling `init_asset_manager()` from `RendererWrapper.onSurfaceCreated()`, which you can see just a few lines above.

#### Updating the JNI code

We’ll also need to add platform-specific JNI code to the jni folder in the android folder. Let’s start off with platform_asset_utils.c:

```#include "platform_asset_utils.h"
#include "macros.h"
#include "platform_log.h"
#include <android/asset_manager_jni.h>
#include <assert.h>

static AAssetManager* asset_manager;

JNIEXPORT void JNICALL Java_com_learnopengles_airhockey_platform_PlatformFileUtils_init_1asset_1manager(
JNIEnv * env, jclass jclazz, jobject java_asset_manager) {
UNUSED(jclazz);
asset_manager = AAssetManager_fromJava(env, java_asset_manager);
}

FileData get_asset_data(const char* relative_path) {
assert(relative_path != NULL);
AAsset* asset =
AAssetManager_open(asset_manager, relative_path, AASSET_MODE_STREAMING);
assert(asset != NULL);

return (FileData) { AAsset_getLength(asset), AAsset_getBuffer(asset), asset };
}

void release_asset_data(const FileData* file_data) {
assert(file_data != NULL);
assert(file_data->file_handle != NULL);
AAsset_close((AAsset*)file_data->file_handle);
}```

We use `get_asset_data()` to wrap Android’s native asset manager and return the data to the calling code, and we release the data when `release_asset_data()` is called. The advantage of doing things like this is that the asset manager can choose to optimize data loading by mapping the file into memory, and we can return that mapped data directly to the caller.

platform_log.c

```#include "platform_log.h"
#include <android/log.h>
#include <stdio.h>
#include <stdlib.h>

#define ANDROID_LOG_VPRINT(priority)	\
va_list arg_ptr; \
va_start(arg_ptr, fmt); \
__android_log_vprint(priority, tag, fmt, arg_ptr); \
va_end(arg_ptr);

void _debug_log_v(const char *tag, const char *fmt, ...) {
ANDROID_LOG_VPRINT(ANDROID_LOG_VERBOSE);
}

void _debug_log_d(const char *tag, const char *fmt, ...) {
ANDROID_LOG_VPRINT(ANDROID_LOG_DEBUG);
}

void _debug_log_w(const char *tag, const char *fmt, ...) {
ANDROID_LOG_VPRINT(ANDROID_LOG_WARN);
}

void _debug_log_e(const char *tag, const char *fmt, ...) {
ANDROID_LOG_VPRINT(ANDROID_LOG_ERROR);
}```

This code wraps Android’s native logging facilities.

Finally, let’s rename jni.c to renderer_wrapper.c and update it to the following:

```#include "game.h"
#include "macros.h"
#include <jni.h>

/* These functions are called from Java. */

JNIEXPORT void JNICALL Java_com_learnopengles_airhockey_RendererWrapper_on_1surface_1created(
JNIEnv * env, jclass cls) {
UNUSED(env);
UNUSED(cls);
on_surface_created();
}

JNIEXPORT void JNICALL Java_com_learnopengles_airhockey_RendererWrapper_on_1surface_1changed(
JNIEnv * env, jclass cls, jint width, jint height) {
UNUSED(env);
UNUSED(cls);
UNUSED(width);
UNUSED(height);
on_surface_changed();
}

JNIEXPORT void JNICALL Java_com_learnopengles_airhockey_RendererWrapper_on_1draw_1frame(
JNIEnv* env, jclass cls) {
UNUSED(env);
UNUSED(cls);
on_draw_frame();
}```

Nothing has really changed here; we just use the `UNUSED()` macro (defined earlier in macros.h in the core folder) to suppress some unnecessary compiler warnings.

### Updating the NDK build files

We’re almost ready to build & test, just a few things left to be done. Download libpng 1.6.2 from http://www.libpng.org/pub/png/libpng.html and place it in /src/3rdparty/libpng. To configure libpng, copy pnglibconf.h.prebuilt from libpng/scripts/ to libpng/ and remove the .prebuilt extension.

To compile libpng with the NDK, let’s add a build script called Android.mk to the libpng folder, as follows:

```LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

LOCAL_MODULE := libpng
LOCAL_SRC_FILES = png.c \
pngerror.c \
pngget.c \
pngmem.c \
pngrio.c \
pngrtran.c \
pngrutil.c \
pngset.c \
pngtrans.c \
pngwio.c \
pngwrite.c \
pngwtran.c \
pngwutil.c
LOCAL_EXPORT_C_INCLUDES := $(LOCAL_PATH)
LOCAL_EXPORT_LDLIBS := -lz

include $(BUILD_STATIC_LIBRARY)
```

This build script will tell the NDK tools to build a static library called libpng that is linked against zlib, which is built into Android. It also sets up the right variables so that we can easily import this library into our own projects, and we won’t even have to do anything special because the right includes and libs are already exported.

Let’s also update the Android.mk file in our jni folder:

```LOCAL_PATH := $(call my-dir)
PROJECT_ROOT_PATH := $(LOCAL_PATH)/../../../
CORE_RELATIVE_PATH := ../../../core

include $(CLEAR_VARS)

LOCAL_MODULE    := game
LOCAL_CFLAGS    := -Wall -Wextra
LOCAL_SRC_FILES := platform_asset_utils.c \
platform_log.c \
renderer_wrapper.c \
$(CORE_RELATIVE_PATH)/asset_utils.c \
$(CORE_RELATIVE_PATH)/buffer.c \
$(CORE_RELATIVE_PATH)/game.c \
$(CORE_RELATIVE_PATH)/image.c \
$(CORE_RELATIVE_PATH)/shader.c \
$(CORE_RELATIVE_PATH)/texture.c

LOCAL_C_INCLUDES := $(PROJECT_ROOT_PATH)/platform/common/
LOCAL_C_INCLUDES += $(PROJECT_ROOT_PATH)/core/
LOCAL_STATIC_LIBRARIES := libpng
LOCAL_LDLIBS := -lGLESv2 -llog -landroid

include $(BUILD_SHARED_LIBRARY)

$(call import-module,libpng)```

Our new build script links in the new files that we’ve created in core, and it also imports libpng from the 3rdparty folder and builds it as a static library that is then linked into our Android application.

We also need to link our shared assets folder into the Eclipse project:

1. Delete the existing assets folder from the project.
2. Right-click the project and select Properties. In the window that appears, select Resource->Linked Resources and click New….
3. Enter ‘ASSETS_LOC’ as the name, and ‘\${PROJECT_LOC}/../../../assets’ as the location. Once that’s done, click OK until the Properties window is closed.
4. Right-click the project again and select New->Folder, enter ‘assets’ as the name, select Advanced, select Link to alternate location (Linked Folder), select Variables…, select ASSETS_LOC, and select OK, then Finish.

You should now have a new assets folder that is linked to the assets folder that we created in the airhockey root. More information can be found on Stack Overflow: How to link assets/www folder in Eclipse / Phonegap / Android project?

### Running the app

We should be able to check out the new code now. If you run the app on your Android emulator or device, it should look similar to the following image:

The texture looks a bit stretched/squashed, because we are currently asking OpenGL to fill the screen with that texture. With a basic framework in place, we can start adding some more detail in future lessons and start turning this into an actual game.

### Debugging NDK code

While developing this project, I had to hook up a debugger because something was going wrong in the PNG loading code, and I just wasn’t sure what. It turned out that I had confused a `png_bytep*` with a `png_byte*`: the ‘p’ suffix in the first one means that it’s already a pointer, so I didn’t need to add another star. I had some issues using the debugger at first, so here are some tips that might help you out if you want to hook it up:

1. Your project absolutely cannot have any spaces in its path. Otherwise, the debugger will inexplicably fail to connect.
2. The native code needs to be built with NDK_DEBUG=1; see “Debugging native applications” on this page: Using the NDK plugin.
3. Android will not wait for gdb to connect before executing the code. Add SystemClock.sleep(10000); to RendererWrapper’s onSurfaceCreated() method to add a sufficient delay to hit your breakpoints.

Once that’s done, you can start debugging from Eclipse by right-clicking the project and selecting Debug As->Android Native Application.

### Exploring further

The full source code for this lesson can be found at the GitHub project. For a “friendlier” introduction to OpenGL ES 2 that is focused on Java and Android, see Android Lesson One: Getting Started or OpenGL ES 2 for Android: A Quick-Start Guide.

What could we do to further streamline the code? If we were using C++, we could take advantage of destructors to create, for example, a FileData that cleans itself up when it goes out of scope. I’d also like to make the structs private somehow, as their internals don’t really need to be exposed to clients. What else would you do?

In the next two posts, we’ll look at adding support for iOS and emscripten. Now that we’ve built up this base, it actually won’t take too much work!

## Calling OpenGL from C on the Web by Using Emscripten, Sharing Common Code with Android and iOS

In the last two posts, we started building up a simple system to reuse a common set of C code in Android and iOS:

In this post, we’ll also add support for emscripten, an LLVM-to-JavaScript compiler that can convert C and C++ code into JavaScript. Emscripten is quite a neat piece of technology, and has led to further improvements to JavaScript engines, such as asm.js. Check out all of the demos over at the wiki.

### Prerequisites

For this post, you’ll need to have Emscripten installed and configured; we’ll cover installation instructions further below. It’ll also be helpful if you’ve completed the first two posts in this series: OpenGL from C on Android by using the NDK and Calling OpenGL from C on iOS, Sharing Common Code with Android. If not, then you can also download the code from GitHub and follow along.

### Installing emscripten

#### Installing on Windows (tested on Windows 8)

There is a set of detailed instructions available at https://github.com/kripken/emscripten/wiki/Using-Emscripten-on-Windows. There’s no need to build anything from source, as there are prebuilt binaries for everything you need.

Here are a few gotchas that you might run into during the install:

• The GCC and Clang archives need to be extracted to the same location, such as C:\mingw64.
• The paths in .emscripten should be specified with forward slashes, as in ‘C:/mingw64’, or double backward slashes, as in ‘C:\\mingw64’.
• TEMP_DIR in .emscripten should be set to a valid path, such as ‘C:\\Windows\\Temp’.

You can then test the install by entering the following commands into a command prompt from the emscripten directory:

`python emcc tests\hello_world.cpp -o hello_world.html`
`hello_world.html`

#### Installing on Mac OS X (tested on OS X 10.8.4)

The instructions over at https://gist.github.com/dweekly/5873953 should get you up and running. Instead of `brew install node`, you can also enter `sudo port install nodejs`, if using MacPorts. I installed emscripten and LLVM into the /opt directory.

First you should run emcc from the emscripten directory to create a default config file in ~/.emscripten. After configuring ~/.emscripten and checking that all paths are correct, you can test the install by entering the following into a terminal shell from the emscripten directory:

`./emcc tests/hello_world.cpp -o hello_world.html`
`open hello_world.html`

#### Installing on Ubuntu Linux (tested on Ubuntu 13.04)

The following commands should be entered into a terminal shell; they were adapted from https://earthserver.com/Setting_up_emscripten_development_environment_on_Linux:

##### Installing prerequisites

`sudo apt-get update; sudo apt-get install build-essential openjdk-7-jdk openjdk-7-jre-headless git`

##### Installing node.js:

Download the latest node.js from http://nodejs.org/, extract it, and then build & install it with the following commands from inside the nodejs source directory:

`./configure`
`make`
`sudo make install`

##### Installing LLVM

`sudo apt-get install llvm clang`

##### Installing emscripten

`sudo mkdir /opt/emscripten`
`sudo chmod 777 /opt/emscripten`
`cd /opt`
`git clone git://github.com/kripken/emscripten.git emscripten`

##### Configuring emscripten

`cd emscripten`
`./emcc`

This command will print out a listing with the auto-detected paths for LLVM and other utilities. Check that all paths are correct, and edit ~/.emscripten if any are not.

You can then test out the install by entering the following commands:

`./emcc tests/hello_world.cpp -o hello_world.html`
`xdg-open hello_world.html`

If all goes well, you should then see a browser window open with “hello, world!” printed out in a box.

Let’s start by creating a new folder called emscripten in the airhockey folder. In that new folder, let’s create a new source file called main.c, beginning with the following contents:

```#include <stdlib.h>
#include <stdio.h>
#include <GL/glfw.h>
#include <emscripten/emscripten.h>
#include "game.h"

int init_gl();
void do_frame();
void shutdown_gl();

int main()
{
if (init_gl() == GL_TRUE) {
on_surface_created();
on_surface_changed();
emscripten_set_main_loop(do_frame, 0, 1);
}

shutdown_gl();

return 0;
}
```

In this C source file, we’ve declared a few functions, and then we’ve defined the main body of our program. The program will begin by calling `init_gl()` (a function that we’ll define further below) to initialize OpenGL, then it will call `on_surface_created()` and `on_surface_changed()` from our common code, and then it will call a special emscripten function, `emscripten_set_main_loop()`, which can simulate an infinite loop by using the browser’s `requestAnimationFrame` mechanism.
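Unlike a native game loop, `emscripten_set_main_loop()` returns control to the browser between frames rather than blocking in a `while` loop. A common way to unify the two styles behind `#ifdef __EMSCRIPTEN__` is sketched below; this is a general pattern, not code from this project, and the `_sketch` names are illustrative (the frame callback here stops itself after three frames just so the loop terminates):

```c
#include <assert.h>

static int frames_drawn = 0;
static int keep_running = 1;

static void do_frame_sketch(void) {
    frames_drawn++;
    if (frames_drawn >= 3) {
        keep_running = 0; /* a real game would run until the user quits */
    }
}

/* Hands the frame callback to the browser on emscripten, or spins an
 * ordinary loop on native builds. Returns the number of frames run. */
static int run_main_loop_sketch(void) {
#ifdef __EMSCRIPTEN__
    emscripten_set_main_loop(do_frame_sketch, 0, 1);
#else
    while (keep_running) {
        do_frame_sketch();
    }
#endif
    return frames_drawn;
}
```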

Let’s complete the rest of the source file:

```int init_gl()
{
const int width = 480,
height = 800;

if (glfwInit() != GL_TRUE) {
printf("glfwInit() failed\n");
return GL_FALSE;
}

if (glfwOpenWindow(width, height, 8, 8, 8, 8, 16, 0, GLFW_WINDOW) != GL_TRUE) {
printf("glfwOpenWindow() failed\n");
return GL_FALSE;
}

return GL_TRUE;
}

void do_frame()
{
on_draw_frame();
glfwSwapBuffers();
}

void shutdown_gl()
{
glfwTerminate();
}
```

In the rest of this code, we use GLFW, an OpenGL library for managing OpenGL contexts, creating windows, and handling input. Emscripten has special support for GLFW built into it, so that the calls will be translated to matching JavaScript code on compilation.

Like we did for Android and iOS, we also need to define where the OpenGL headers are stored for our common code. Save the following into a new file called glwrapper.h in airhockey/emscripten/:

```#include <GLES2/gl2.h>
```

### Building the code and running it in a browser

To build the program, run the following command in a terminal shell from airhockey/emscripten/:

`emcc -I. -I../common main.c ../common/game.c -o airhockey.html`

In the GitHub project, there’s also a Makefile which will build airhockey.html when `emmake make` is called. This Makefile can also be used on Windows by running `python emmake mingw32-make`, putting the right paths where appropriate. To see the code in action, just open up airhockey.html in a browser.

When we ask emscripten to generate an HTML file, it will generate an HTML file that contains the embedded code, which you can see further below (WebGL support is required to see the OpenGL code in action):

#### Exploring further

The full source code for this lesson can be found at the GitHub project. Now that we have a base setup in Android, iOS, and emscripten, we can start fleshing out our project in the next few posts. Emscripten is pretty neat, and I definitely recommend checking out the samples over at https://github.com/kripken/emscripten/wiki!

## Calling OpenGL from C on iOS, Sharing Common Code with Android

In the last post, we covered how to call OpenGL from C on Android by using the NDK; in this post, we’ll call into the same common code from an Objective-C codebase which will run on an iOS device.

### Prerequisites

• A Mac with a suitable IDE installed.
• The iOS Simulator, or a provisioned iPhone or iPad.

You’ll also need to have completed the first post in this series. If not, then you can also download the code from GitHub so that you can follow along.

We’ll be using Xcode in this lesson. There are other IDEs available, such as AppCode. If you don’t have a Mac available for development, there is more info on alternatives available here:

For this article, I used Xcode 4.6.3 with the iOS 6.1 Simulator.

### Getting started

We’ll create a new project from an Xcode template with support for OpenGL already configured. You can follow along all of the code at the GitHub project.

To create the new project, open Xcode, and select File->New->Project…. When asked to choose a template, select Application under iOS, and then select OpenGL Game and select Next. Enter ‘Air Hockey’ as the Product Name, enter whatever you prefer for the Organization Name and Company Identifier, select Universal next to Devices, and make sure that Use Storyboards and Use Automatic Reference Counting are both checked, then select Next.

Place the project in a new folder called ios inside of the airhockey folder that we worked with in the previous post. This means that we should now have three folders inside of airhockey: android, common, and ios. Don’t check Create local git repository for this project, as we already set up a git repository in the previous post.

Once the project’s been created, you should see a new folder called Air Hockey inside of the ios folder, and inside of Air Hockey, there’ll be another folder called Air Hockey, as well as a project folder called Air Hockey.xcodeproj.

#### Flattening the Xcode project

I personally prefer to flatten this all out and put everything in the ios folder, and the following steps will show you how to do this; please feel free to skip this section if you don’t mind having the nested folders:

1. Close the project in Xcode, and then move all of the files inside of the second Air Hockey folder so that they are directly inside of the ios folder.
2. Move Air Hockey.xcodeproj so that it’s also inside of the ios folder. The extra Air Hockey folders can now be deleted.
3. Right-click Air Hockey.xcodeproj in the Finder and select Show Package Contents.
4. Edit project.pbxproj in a text editor, and delete all occurrences of ‘Air Hockey/’.
5. Go back to the ios folder and open up Air Hockey.xcodeproj in Xcode.
6. Select View->Navigator->Show Project Navigator and View->Utilities->Show File Inspector.
7. Select Air Hockey in the Project Navigator on the left. On the right in the File Inspector, click the button under Relative to Group, to the right of Air Hockey, select some random folder (this is to work around a bug) and select Choose, then click it again and select the ios folder this time.

The project should now be able to build OK, with all files in the ios folder. More information can be found here: How do you move an Xcode 4.2 project file to another folder?

### Adding a reference to the common code

Here’s how we add our common code to the project:

1. Right-click the project root in the Project Navigator (the item that looks like “Air Hockey, 1 target, iOS SDK 6.1”).
2. Select Add Files to “Air Hockey”….
3. Select the common folder, which will be located next to the ios folder, make sure that Copy items into destination group’s folder (if needed) is not checked, that Create groups for any added folders is selected, and that Air Hockey is selected next to Add to targets, then select Add.

You should now see the common folder appear in the Project Navigator, with game.h and game.c inside.

### Understanding how iOS manages OpenGL through the `GLKit` framework

When we created a new project with the OpenGL Game template, Xcode set things up so that when the application is launched, it displays an OpenGL view on the screen, and drives that view with a special view controller. A view controller in iOS manages a set of views, and can be thought of as being sort of like the iOS counterpart of an Android Activity.

When the application is launched, the OS reads the storyboard file, which tells it to create a new view controller that is subclassed from `GLKViewController` and add it to the application’s window. This view controller is part of the `GLKit` framework and provides an OpenGL ES rendering loop. It can be configured with a preferred frame rate, and it can also automatically handle application-level events, such as pausing the rendering loop when the application is about to go to the background.

That `GLKViewController` contains a `GLKView` as its root view, which is what creates and manages the actual OpenGL framebuffer. This `GLKView` is automatically linked to the `GLKViewController`, so that when it’s time to draw a new frame, it will call a method called `drawInRect:` in our `GLKViewController`.

Before moving on to the next step, you may want to check out the default project by running it in the simulator, just to see what it looks like.

### Calling our common code from the view controller

The default code in the view controller does a lot more than we need, since it creates an entire demo scene. We want to keep things simple for now and just see that we can call OpenGL from C and wrap that with the view controller, so let’s open up ViewController.m, delete everything, and start off by adding the following code:

```#import "ViewController.h"
#include "game.h"

@interface ViewController () {
}

@property (strong, nonatomic) EAGLContext *context;

- (void)setupGL;

@end
```

This includes game.h so that we can call our common functions, defines a new property to hold the EAGL context, and declares a method called `setupGL`, which we’ll define soon. Let’s continue the code:

```@implementation ViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];

    if (!self.context) {
        NSLog(@"Failed to create ES context");
    }

    GLKView *view = (GLKView *)self.view;
    view.context = self.context;
    view.drawableDepthFormat = GLKViewDrawableDepthFormat24;

    [self setupGL];
}

- (void)dealloc
{
    if ([EAGLContext currentContext] == self.context) {
        [EAGLContext setCurrentContext:nil];
    }
}
```

Once the `GLKView` has been loaded into memory, `viewDidLoad` will be called so that we can initialize our OpenGL context. We initialize an OpenGL ES 2 context here and assign it to the `context` property by calling:

`self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2]`

This allocates a new instance of an `EAGLContext`, which manages all of the state and resources required to render using OpenGL ES. We then initialize that instance by calling `initWithAPI:`, passing in a special token which tells it to initialize the context for OpenGL ES 2 rendering.

For those of you not used to Objective-C syntax, here’s what this could look like if it were using Java syntax:

`this.context = new EAGLContext().initWithAPI(kEAGLRenderingAPIOpenGLES2);`

Once we have an `EAGLContext`, we assign it to the view, we configure the view’s depth buffer format, and then we call the following to do further OpenGL setup:

`[self setupGL]`

We’ll define this method further below. `dealloc` will be called when the view controller is destroyed, so there we unset the current `EAGLContext` if it’s still ours by calling the following:

`[EAGLContext setCurrentContext:nil]`

Let’s complete the code for ViewController.m:

```
- (void)setupGL
{
    [EAGLContext setCurrentContext:self.context];
    on_surface_created();
    on_surface_changed();
}

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    on_draw_frame();
}

@end
```

Here is where we call our game code to do the actual rendering. In `setupGL`, we set the current `EAGLContext` so that we have a valid context to use for drawing, and then we call `on_surface_created()` and `on_surface_changed()` from our common code. Every time a new frame needs to be drawn, `drawInRect:` will be called, so there we call `on_draw_frame()`.

Why don’t we also need to set the context in `drawInRect:`? This method is actually a delegate method declared in `GLKViewDelegate` and called by the `GLKView`, and the view takes care of setting the context and framebuffer target for us before it calls our delegate. For those of you from the Java world, this is like having our class implement an interface and passing ourselves as a listener to another class, so that it can call us back via that interface. Our view controller automatically set itself as the delegate when it was linked to the `GLKView` by the storyboard.

We don’t have to do things this way — we could also subclass `GLKView` instead and override its `drawRect:` method. Delegation is simply a preferred pattern in iOS when subclassing isn’t required.
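For readers more comfortable with C than with Objective-C, the delegate relationship can be sketched with a plain function pointer. This is only an analogy, and every name in it is invented for illustration:

```c
#include <assert.h>

/* The "view" holds a pointer to a draw callback, mirroring how a
   GLKView holds a delegate and calls it once per frame. */
typedef void (*draw_delegate)(void);

static int frames_drawn = 0;

/* Plays the role of our view controller's drawInRect: method. */
static void controller_draw_frame(void) {
    frames_drawn++;
}

/* Plays the role of the GLKView: it would bind the context and
   framebuffer first, then call back into the delegate. */
static void view_render_frame(draw_delegate delegate) {
    /* (context and framebuffer setup would happen here) */
    delegate();
}
```

The key point is the inversion of control: the view owns the rendering loop and calls back into whoever registered with it, which is exactly the relationship between `GLKView` and its delegate.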

As a quick reminder, here’s what we had defined in our three functions from game.c:

```void on_surface_created() {
    glClearColor(1.0f, 0.0f, 0.0f, 0.0f);
}

void on_surface_changed() {
    // No-op
}

void on_draw_frame() {
    glClear(GL_COLOR_BUFFER_BIT);
}
```

So, when we actually run our project, we should expect the screen to get cleared to red.

Before we can build and run the project, we’ll need to add a glwrapper.h header to the project, like we did for the Android project in the previous post. In the same folder as ViewController.m, add a new header file called glwrapper.h, and add the following contents:

```#include <OpenGLES/ES2/gl.h>
```

### Building and running the project

We should now be able to build and run the project in the iOS simulator. Click the play button to run the app and launch the simulator. Once it’s launched, you should see a screen similar to the following image:

And that’s it! By using `GLKit`, we can easily wrap our OpenGL code and call it from Objective-C.

To tell iOS and the App Store that our application requires OpenGL ES 2.0 and shouldn’t be offered to unsupported devices, we can add the key ‘opengles-2’ to Air Hockey-Info.plist, inside Required device capabilities.

#### Exploring further

The full source code for this lesson can be found at the GitHub project. For further reading, I recommend the following excellent intro to `GLKit`, which goes into more detail on using `GLKView`, `GLKViewController` and other areas of `GLKit`:

Beginning OpenGL ES 2.0 with GLKit Part 1

In the next part of this series, we’ll take a look at using emscripten to create a new project that also calls into our common code and compiles it for the web. I’m coming to iOS from a Java and Android background and am new to Objective-C, so please let me know if anything doesn’t make sense here, and I’ll go and fix it up. 🙂