## Finishing Up Our Native Air Hockey Project With Touch Events and Basic Collision Detection

In this post, we're going to wrap up our air hockey project by adding touch event handling and basic collision detection, with support for Android, iOS, and emscripten.

### Prerequisites

This lesson continues the air hockey project series, building upon the code from GitHub for ‘article-3-matrices-and-objects’. Here are the previous posts in this series:

### Updating our game code for touch interaction

The first thing we’ll do is update the core to add touch interaction to the game. We’ll first need to add some helper functions to a new core file called geometry.h.

#### geometry.h

Let’s start off with the following code:

```
#include "linmath.h"
#include <string.h>

typedef struct {
    vec3 point;
    vec3 vector;
} Ray;

typedef struct {
    vec3 point;
    vec3 normal;
} Plane;

typedef struct {
    vec3 center;
    float radius;
} Sphere;
```

These are a few `typedef`s that build upon linmath.h to add a few basic types that we’ll use in our code. Let’s wrap up geometry.h:

```
static inline int sphere_intersects_ray(Sphere sphere, Ray ray);
static inline float distance_between(vec3 point, Ray ray);
static inline void ray_intersection_point(vec3 result, Ray ray, Plane plane);

static inline int sphere_intersects_ray(Sphere sphere, Ray ray) {
    if (distance_between(sphere.center, ray) < sphere.radius)
        return 1;
    return 0;
}

static inline float distance_between(vec3 point, Ray ray) {
    vec3 p1_to_point;
    vec3_sub(p1_to_point, point, ray.point);
    vec3 p2_to_point;
    vec3 translated_ray_point;
    vec3_add(translated_ray_point, ray.point, ray.vector);
    vec3_sub(p2_to_point, point, translated_ray_point);

    // The length of the cross product gives the area of an imaginary
    // parallelogram having the two vectors as sides. A parallelogram can be
    // thought of as consisting of two triangles, so this is the same as
    // twice the area of the triangle defined by the two vectors.
    // http://en.wikipedia.org/wiki/Cross_product#Geometric_meaning
    vec3 cross_product;
    vec3_mul_cross(cross_product, p1_to_point, p2_to_point);
    float area_of_triangle_times_two = vec3_len(cross_product);
    float length_of_base = vec3_len(ray.vector);

    // The area of a triangle is also equal to (base * height) / 2. In
    // other words, the height is equal to (area * 2) / base. The height
    // of this triangle is the distance from the point to the ray.
    float distance_from_point_to_ray = area_of_triangle_times_two / length_of_base;
    return distance_from_point_to_ray;
}

// http://en.wikipedia.org/wiki/Line-plane_intersection
// This also treats rays as if they were infinite. It will return a
// point full of NaNs if there is no intersection point.
static inline void ray_intersection_point(vec3 result, Ray ray, Plane plane) {
    vec3 ray_to_plane_vector;
    vec3_sub(ray_to_plane_vector, plane.point, ray.point);

    float scale_factor = vec3_mul_inner(ray_to_plane_vector, plane.normal)
                       / vec3_mul_inner(ray.vector, plane.normal);

    vec3 intersection_point;
    vec3 scaled_ray_vector;
    vec3_scale(scaled_ray_vector, ray.vector, scale_factor);
    vec3_add(intersection_point, ray.point, scaled_ray_vector);
    memcpy(result, intersection_point, sizeof(intersection_point));
}
```

We’ll do a line-sphere intersection test to see if we’ve touched the mallet using our fingers or a mouse. Once we’ve grabbed the mallet, we’ll do a line-plane intersection test to determine where to place the mallet on the board.

#### game.h

We’ll need two new function prototypes in game.h:

```
void on_touch_press(float normalized_x, float normalized_y);
void on_touch_drag(float normalized_x, float normalized_y);
```

#### game.c

Now we can begin the implementation in game.c. Add the following in the appropriate places to the top of the file:

```
#include "geometry.h"
// ...
static const float puck_radius = 0.06f;
static const float mallet_radius = 0.08f;

static const float left_bound = -0.5f;
static const float right_bound = 0.5f;
static const float far_bound = -0.8f;
static const float near_bound = 0.8f;
// ...
static mat4x4 inverted_view_projection_matrix;

static int mallet_pressed;
static vec3 blue_mallet_position;
static vec3 previous_blue_mallet_position;
static vec3 puck_position;
static vec3 puck_vector;

static Ray convert_normalized_2D_point_to_ray(float normalized_x, float normalized_y);
static void divide_by_w(vec4 vector);
static float clamp(float value, float min, float max);
```

We’ll now begin with the code for handling a touch press:

```
void on_touch_press(float normalized_x, float normalized_y) {
    Ray ray = convert_normalized_2D_point_to_ray(normalized_x, normalized_y);

    // Now test if this ray intersects with the mallet by creating a
    // bounding sphere that wraps the mallet.
    Sphere mallet_bounding_sphere = (Sphere) {
        {blue_mallet_position[0],
         blue_mallet_position[1],
         blue_mallet_position[2]},
        mallet_height / 2.0f};

    // If the ray intersects (if the user touched a part of the screen that
    // intersects the mallet's bounding sphere), then set malletPressed =
    // true.
    mallet_pressed = sphere_intersects_ray(mallet_bounding_sphere, ray);
}

static Ray convert_normalized_2D_point_to_ray(float normalized_x, float normalized_y) {
    // We'll convert these normalized device coordinates into world-space
    // coordinates. We'll pick a point on the near and far planes, and draw a
    // line between them. To do this transform, we need to first multiply by
    // the inverse matrix, and then we need to undo the perspective divide.
    vec4 near_point_ndc = {normalized_x, normalized_y, -1, 1};
    vec4 far_point_ndc = {normalized_x, normalized_y,  1, 1};

    vec4 near_point_world, far_point_world;
    mat4x4_mul_vec4(near_point_world, inverted_view_projection_matrix, near_point_ndc);
    mat4x4_mul_vec4(far_point_world, inverted_view_projection_matrix, far_point_ndc);

    // Why are we dividing by W? We multiplied our vector by an inverse
    // matrix, so the W value that we end up with is actually the *inverse* of
    // what the projection matrix would create. By dividing all 3 components
    // by W, we effectively undo the hardware perspective divide.
    divide_by_w(near_point_world);
    divide_by_w(far_point_world);

    // We don't care about the W value anymore, because our points are now
    // in world coordinates.
    vec3 near_point_ray = {near_point_world[0], near_point_world[1], near_point_world[2]};
    vec3 far_point_ray = {far_point_world[0], far_point_world[1], far_point_world[2]};
    vec3 vector_between;
    vec3_sub(vector_between, far_point_ray, near_point_ray);
    return (Ray) {
        {near_point_ray[0], near_point_ray[1], near_point_ray[2]},
        {vector_between[0], vector_between[1], vector_between[2]}};
}

static void divide_by_w(vec4 vector) {
    vector[0] /= vector[3];
    vector[1] /= vector[3];
    vector[2] /= vector[3];
}
```

This code first takes normalized touch coordinates which it receives from the Android, iOS or emscripten front ends, and then turns those touch coordinates into a 3D ray in world space. It then intersects the 3D ray with a bounding sphere for the mallet to see if we’ve touched the mallet.
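The perspective "undivide" is the easiest step to check in isolation. This small sketch reproduces `divide_by_w()` on its own: a homogeneous point (2, 4, 6, 2) recovers the position (1, 2, 3):

```c
typedef float vec4[4];

/* After multiplying by the inverse view projection matrix, the w
 * component holds the inverse of the projection's perspective divide,
 * so dividing x, y, and z by w recovers the world-space position. */
static void divide_by_w(vec4 v) {
    v[0] /= v[3];
    v[1] /= v[3];
    v[2] /= v[3];
}
```

With both the near and far points undivided this way, subtracting one from the other gives the ray's direction vector.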

Let’s continue with the code for handling a touch drag:

```
void on_touch_drag(float normalized_x, float normalized_y) {
    if (mallet_pressed == 0)
        return;

    Ray ray = convert_normalized_2D_point_to_ray(normalized_x, normalized_y);
    // Define a plane representing our air hockey table.
    Plane plane = (Plane) {{0, 0, 0}, {0, 1, 0}};

    // Find out where the touched point intersects the plane
    // representing our table. We'll move the mallet along this plane.
    vec3 touched_point;
    ray_intersection_point(touched_point, ray, plane);

    memcpy(previous_blue_mallet_position, blue_mallet_position,
           sizeof(blue_mallet_position));

    // Clamp to bounds
    blue_mallet_position[0] =
        clamp(touched_point[0], left_bound + mallet_radius, right_bound - mallet_radius);
    blue_mallet_position[1] = mallet_height / 2.0f;
    blue_mallet_position[2] =
        clamp(touched_point[2], 0.0f + mallet_radius, near_bound - mallet_radius);

    // Now test if mallet has struck the puck.
    vec3 mallet_to_puck;
    vec3_sub(mallet_to_puck, puck_position, blue_mallet_position);
    float distance = vec3_len(mallet_to_puck);

    if (distance < (puck_radius + mallet_radius)) {
        // The mallet has struck the puck. Now send the puck flying
        // based on the mallet velocity.
        vec3_sub(puck_vector, blue_mallet_position, previous_blue_mallet_position);
    }
}

static float clamp(float value, float min, float max) {
    return fmin(max, fmax(value, min));
}
```

Once we’ve grabbed the mallet, we move it across the air hockey table by intersecting the new touch point with the table to determine the new position on the table. We then move the mallet to that new position. We also check if the mallet has struck the puck, and if so, we use the movement distance to calculate the puck’s new velocity.

We next need to update the lines that initialize our objects inside `on_surface_created()` as follows:

```
puck = create_puck(puck_radius, puck_height, 32, puck_color);
red_mallet = create_mallet(mallet_radius, mallet_height, 32, red);
blue_mallet = create_mallet(mallet_radius, mallet_height, 32, blue);

blue_mallet_position[0] = 0;
blue_mallet_position[1] = mallet_height / 2.0f;
blue_mallet_position[2] = 0.4f;
puck_position[0] = 0;
puck_position[1] = puck_height / 2.0f;
puck_position[2] = 0;
puck_vector[0] = 0;
puck_vector[1] = 0;
puck_vector[2] = 0;
```

The new linmath.h has merged in the custom code we added to our matrix_helper.h, so we no longer need that file. As part of those changes, our perspective method call in `on_surface_changed()` now needs the angle entered in radians, so let’s update that method call as follows:

```
mat4x4_perspective(projection_matrix, deg_to_radf(45),
                   (float) width / (float) height, 1.0f, 10.0f);
```

We can then update `on_draw_frame()` to add the new movement code. Let’s first add the following to the top, right after the call to `glClear()`:

```
// Translate the puck by its vector
vec3_add(puck_position, puck_position, puck_vector);

// If the puck struck a side, reflect it off that side.
if (puck_position[0] < left_bound + puck_radius
 || puck_position[0] > right_bound - puck_radius) {
    puck_vector[0] = -puck_vector[0];
    vec3_scale(puck_vector, puck_vector, 0.9f);
}
if (puck_position[2] < far_bound + puck_radius
 || puck_position[2] > near_bound - puck_radius) {
    puck_vector[2] = -puck_vector[2];
    vec3_scale(puck_vector, puck_vector, 0.9f);
}

// Clamp the puck position.
puck_position[0] =
    clamp(puck_position[0], left_bound + puck_radius, right_bound - puck_radius);
puck_position[2] =
    clamp(puck_position[2], far_bound + puck_radius, near_bound - puck_radius);

// Friction factor
vec3_scale(puck_vector, puck_vector, 0.99f);
```

This code will update the puck’s position and cause it to go bouncing around the table. We’ll also need to add the following after the call to `mat4x4_mul(view_projection_matrix, projection_matrix, view_matrix);`:

`mat4x4_invert(inverted_view_projection_matrix, view_projection_matrix);`

This sets up the inverted view projection matrix, which we need for turning the normalized touch coordinates back into world space coordinates.

Let’s finish up the changes to game.c by updating the following calls to `position_object_in_scene()`:

```
position_object_in_scene(blue_mallet_position[0], blue_mallet_position[1],
                         blue_mallet_position[2]);
// ...
position_object_in_scene(puck_position[0], puck_position[1], puck_position[2]);
```

### Adding touch events to Android

With these changes in place, we now need to link in the touch events from each platform. We’ll start off with Android:

#### MainActivity.java

In MainActivity.java, we first need to update the way that we create the renderer in `onCreate()`:

```
final RendererWrapper rendererWrapper = new RendererWrapper(this);
// ...
glSurfaceView.setRenderer(rendererWrapper);
```

Then, still inside `onCreate()`, let's attach a touch listener to the view:

```
glSurfaceView.setOnTouchListener(new OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        if (event != null) {
            // Convert touch coordinates into normalized device
            // coordinates, keeping in mind that Android's Y
            // coordinates are inverted.
            final float normalizedX = (event.getX() / (float) v.getWidth()) * 2 - 1;
            final float normalizedY = -((event.getY() / (float) v.getHeight()) * 2 - 1);

            if (event.getAction() == MotionEvent.ACTION_DOWN) {
                glSurfaceView.queueEvent(new Runnable() {
                    @Override
                    public void run() {
                        rendererWrapper.handleTouchPress(normalizedX, normalizedY);
                    }});
            } else if (event.getAction() == MotionEvent.ACTION_MOVE) {
                glSurfaceView.queueEvent(new Runnable() {
                    @Override
                    public void run() {
                        rendererWrapper.handleTouchDrag(normalizedX, normalizedY);
                    }});
            }

            return true;
        } else {
            return false;
        }
    }});
```

This touch listener takes the incoming touch events from the user, converts them into normalized coordinates in OpenGL’s normalized device coordinate space, and then calls the renderer wrapper which will pass the event on into our native code.
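The same normalization appears on all three platforms, so here's a small C sketch of the formula on its own (the function names are illustrative, not from the project): x and y in window coordinates map to [-1, 1], with y negated because window y grows downward while NDC y grows upward:

```c
/* Window coordinates to OpenGL normalized device coordinates. */
static float to_normalized_x(float x, float width) {
    return (x / width) * 2.0f - 1.0f;
}

/* Window Y grows downward, NDC Y grows upward, hence the negation. */
static float to_normalized_y(float y, float height) {
    return -((y / height) * 2.0f - 1.0f);
}
```

On an 800x600 view, a touch at the top-left corner (0, 0) maps to (-1, 1) and the bottom-right corner maps to (1, -1).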

#### RendererWrapper.java

We’ll need to add the following to RendererWrapper.java:

```
public void handleTouchPress(float normalizedX, float normalizedY) {
    on_touch_press(normalizedX, normalizedY);
}

public void handleTouchDrag(float normalizedX, float normalizedY) {
    on_touch_drag(normalizedX, normalizedY);
}

private static native void on_touch_press(float normalized_x, float normalized_y);

private static native void on_touch_drag(float normalized_x, float normalized_y);
```

#### renderer_wrapper.c

We’ll also need to add the following to renderer_wrapper.c in our jni folder:

```
JNIEXPORT void JNICALL Java_com_learnopengles_airhockey_RendererWrapper_on_1touch_1press(
    JNIEnv* env, jclass cls, jfloat normalized_x, jfloat normalized_y) {
    UNUSED(env);
    UNUSED(cls);
    on_touch_press(normalized_x, normalized_y);
}

JNIEXPORT void JNICALL Java_com_learnopengles_airhockey_RendererWrapper_on_1touch_1drag(
    JNIEnv* env, jclass cls, jfloat normalized_x, jfloat normalized_y) {
    UNUSED(env);
    UNUSED(cls);
    on_touch_drag(normalized_x, normalized_y);
}
```

We now have everything in place for Android. If we run the app, it should look similar to the screenshot below:

### Adding touch events to iOS

To add support for iOS, we need to update ViewController.m to handle touch events. To do that and update the frame rate at the same time, let's add the following to `viewDidLoad:` before the call to `[self setupGL]`:

```
view.userInteractionEnabled = YES;
self.preferredFramesPerSecond = 60;
```

To listen to the touch events, we need to override a few methods. Let’s add the following methods before `- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect`:

```
static CGPoint getNormalizedPoint(UIView* view, CGPoint locationInView)
{
    const float normalizedX = (locationInView.x / view.bounds.size.width) * 2.f - 1.f;
    const float normalizedY = -((locationInView.y / view.bounds.size.height) * 2.f - 1.f);
    return CGPointMake(normalizedX, normalizedY);
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    [super touchesBegan:touches withEvent:event];
    UITouch* touchEvent = [touches anyObject];
    CGPoint locationInView = [touchEvent locationInView:self.view];
    CGPoint normalizedPoint = getNormalizedPoint(self.view, locationInView);
    on_touch_press(normalizedPoint.x, normalizedPoint.y);
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    [super touchesMoved:touches withEvent:event];
    UITouch* touchEvent = [touches anyObject];
    CGPoint locationInView = [touchEvent locationInView:self.view];
    CGPoint normalizedPoint = getNormalizedPoint(self.view, locationInView);
    on_touch_drag(normalizedPoint.x, normalizedPoint.y);
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    [super touchesEnded:touches withEvent:event];
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
    [super touchesCancelled:touches withEvent:event];
}
```

This is similar to the Android code in that it takes the input touch event, converts it to OpenGL’s normalized device coordinate space, and then sends it on to our game code.

Our iOS app should look similar to the following image:

### Adding touch events to emscripten

Adding support for emscripten is just as easy. Let's first add the following to the top of main.c:

```
static void handle_input();
// ...
int is_dragging;
```

At the beginning of `do_frame()`, add a call to `handle_input();`:

```
static void do_frame()
{
    handle_input();
    // ...
```

Add the following for `handle_input`:

```
static void handle_input()
{
    glfwPollEvents();
    const int left_mouse_button_state = glfwGetMouseButton(GLFW_MOUSE_BUTTON_1);
    if (left_mouse_button_state == GLFW_PRESS) {
        int x_pos, y_pos;
        glfwGetMousePos(&x_pos, &y_pos);
        const float normalized_x = ((float) x_pos / (float) width) * 2.f - 1.f;
        const float normalized_y = -(((float) y_pos / (float) height) * 2.f - 1.f);

        if (is_dragging == 0) {
            is_dragging = 1;
            on_touch_press(normalized_x, normalized_y);
        } else {
            on_touch_drag(normalized_x, normalized_y);
        }
    } else {
        is_dragging = 0;
    }
}
```

This code sets `is_dragging` depending on whether we've just pressed the primary mouse button or are currently dragging the mouse, and calls either `on_touch_press` or `on_touch_drag` accordingly. The code to normalize the coordinates is the same as on Android and iOS; indeed, a case could be made for abstracting it into the common game code and just passing in the raw coordinates relative to the view size.
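The press-versus-drag bookkeeping can be isolated into a tiny state machine (the function and its return codes below are illustrative, not part of the project): the first frame with the button down is a press, every following frame is a drag, and releasing the button resets the state:

```c
/* 1 = new press, 2 = drag continuing, 0 = button up. */
static int is_dragging = 0;

static int classify_mouse_state(int button_pressed) {
    if (button_pressed) {
        if (is_dragging == 0) {
            is_dragging = 1;
            return 1; /* would call on_touch_press() */
        }
        return 2; /* would call on_touch_drag() */
    }
    is_dragging = 0;
    return 0; /* button released; nothing to do */
}
```

Feeding it a down-down-up-down sequence yields press, drag, release, press, which matches the behavior of `handle_input()` above.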

After compiling with `emmake make`, we should get output similar to the below:

### Exploring further

That concludes our air hockey project! The full source code for this lesson can be found at the GitHub project. For a more in-depth look at the concepts behind the project from a Java Android perspective, see OpenGL ES 2 for Android: A Quick-Start Guide. To explore further, there are many things you could add, like improved graphics, support for sound, a simple AI, multiplayer (on the same device), scoring, or a menu system.

Whether you end up using a commercial cross-platform solution like Unity or Corona, or whether you decide to go the independent route, I hope this series was helpful to you and, most importantly, that you enjoy your future projects and have a lot of fun with them. 🙂

## Adding a 3D Perspective and Object Rendering to Our Air Hockey Project in Native C Code

For this post in the air hockey series, we’ll learn how to render our scene from a 3D perspective, as well as how to add a puck and two mallets to the scene. We’ll also see how easy it is to bring these changes to Android, iOS, and emscripten.

### Prerequisites

This lesson continues the air hockey project series, building upon the code from GitHub for ‘article-2-loading-png-file’. Here are the previous posts in this series:

### Adding support for a matrix library

The first thing we’ll do is add support for a matrix library so we can use the same matrix math on all three platforms, and then we’ll introduce the changes to our code from the top down. There are a lot of libraries out there, so I decided to use linmath.h by Wolfgang Draxinger for its simplicity and compactness. Since it’s on GitHub, we can easily add it to our project by running the following git command from the root airhockey/ folder:

`git submodule add https://github.com/datenwolf/linmath.h.git src/3rdparty/linmath`

### Updating our game code

We’ll introduce all of the changes from the top down, so let’s begin by replacing everything inside game.c as follows:

```
#include "game.h"
#include "game_objects.h"
#include "asset_utils.h"
#include "buffer.h"
#include "image.h"
#include "linmath.h"
#include "math_helper.h"
#include "matrix.h"
#include "platform_gl.h"
#include "platform_asset_utils.h"
#include "program.h"
#include "texture.h"

static const float puck_height = 0.02f;
static const float mallet_height = 0.15f;

static Table table;
static Puck puck;
static Mallet red_mallet;
static Mallet blue_mallet;

static TextureProgram texture_program;
static ColorProgram color_program;

static mat4x4 projection_matrix;
static mat4x4 model_matrix;
static mat4x4 view_matrix;

static mat4x4 view_projection_matrix;
static mat4x4 model_view_projection_matrix;

static void position_table_in_scene();
static void position_object_in_scene(float x, float y, float z);
```

We’ve added all of the new includes, constants, variables, and function declarations that we’ll need for our new game code. We’ll use `Table`, `Puck`, and `Mallet` to represent our drawable objects, `TextureProgram` and `ColorProgram` to represent our shader programs, and the `mat4x4` (a datatype from linmath.h) matrices for our OpenGL matrices. In our draw loop, we’ll call position_table_in_scene() to position the table, and position_object_in_scene() to position our other objects.

For those of you who have also followed the Java tutorials from OpenGL ES 2 for Android: A Quick-Start Guide, you’ll recognize that this has a lot in common with the air hockey project from the first part of the book. The code for that project can be freely downloaded from The Pragmatic Bookshelf.

#### `on_surface_created()`

```
void on_surface_created() {
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glEnable(GL_DEPTH_TEST);

    table = create_table(
        load_png_asset_into_texture("textures/air_hockey_surface.png"));

    vec4 puck_color = {0.8f, 0.8f, 1.0f, 1.0f};
    vec4 red = {1.0f, 0.0f, 0.0f, 1.0f};
    vec4 blue = {0.0f, 0.0f, 1.0f, 1.0f};

    puck = create_puck(0.06f, puck_height, 32, puck_color);
    red_mallet = create_mallet(0.08f, mallet_height, 32, red);
    blue_mallet = create_mallet(0.08f, mallet_height, 32, blue);

    texture_program = get_texture_program(build_program_from_assets(
        "shaders/texture_shader.vsh", "shaders/texture_shader.fsh"));
    color_program = get_color_program(build_program_from_assets(
        "shaders/color_shader.vsh", "shaders/color_shader.fsh"));
}
```

Our new `on_surface_created()` enables depth-testing, initializes the table, puck, and mallets, and loads in the shader programs.

#### `on_surface_changed(int width, int height)`

```
void on_surface_changed(int width, int height) {
    glViewport(0, 0, width, height);
    mat4x4_perspective(projection_matrix, 45, (float) width / (float) height, 1, 10);
    mat4x4_look_at(view_matrix, 0.0f, 1.2f, 2.2f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f);
}
```

Our new `on_surface_changed(int width, int height)` now takes in two parameters for the width and the height. It sets up a perspective projection matrix, and then sets up the view matrix slightly above and behind the origin, with an eye position of (0, 1.2, 2.2).

#### `on_draw_frame()`

```
void on_draw_frame() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    mat4x4_mul(view_projection_matrix, projection_matrix, view_matrix);

    position_table_in_scene();
    draw_table(&table, &texture_program, model_view_projection_matrix);

    position_object_in_scene(0.0f, mallet_height / 2.0f, -0.4f);
    draw_mallet(&red_mallet, &color_program, model_view_projection_matrix);

    position_object_in_scene(0.0f, mallet_height / 2.0f, 0.4f);
    draw_mallet(&blue_mallet, &color_program, model_view_projection_matrix);

    // Draw the puck.
    position_object_in_scene(0.0f, puck_height / 2.0f, 0.0f);
    draw_puck(&puck, &color_program, model_view_projection_matrix);
}
```

Our new `on_draw_frame()` positions and draws the table, mallets, and the puck.

Because we changed the definition of `on_surface_changed()`, we also have to change the declaration in game.h. Change `void on_surface_changed();` to `void on_surface_changed(int width, int height);`.

```
static void position_table_in_scene() {
    // The table is defined in terms of X & Y coordinates, so we rotate it
    // 90 degrees to lie flat on the XZ plane.
    mat4x4 rotated_model_matrix;
    mat4x4_identity(model_matrix);
    mat4x4_rotate_X(rotated_model_matrix, model_matrix, deg_to_radf(-90.0f));
    mat4x4_mul(
        model_view_projection_matrix, view_projection_matrix, rotated_model_matrix);
}

static void position_object_in_scene(float x, float y, float z) {
    mat4x4_identity(model_matrix);
    mat4x4_translate_in_place(model_matrix, x, y, z);
    mat4x4_mul(model_view_projection_matrix, view_projection_matrix, model_matrix);
}
```

These functions update the matrices to let us position the table, puck, and mallets in the scene. We’ll define all of the extra functions that we need soon.

Now we’ll start drilling down into each part of the program and make the changes necessary for our game code to work. Let’s begin by updating our shaders. First, let’s rename our vertex shader shader.vsh to texture_shader.vsh and update it as follows:

```
uniform mat4 u_MvpMatrix;

attribute vec4 a_Position;
attribute vec2 a_TextureCoordinates;

varying vec2 v_TextureCoordinates;

void main()
{
    v_TextureCoordinates = a_TextureCoordinates;
    gl_Position = u_MvpMatrix * a_Position;
}
```

We’ll also need a new set of shaders to render our puck and mallets. Let’s add the following new shaders:

```
uniform mat4 u_MvpMatrix;

attribute vec4 a_Position;

void main()
{
    gl_Position = u_MvpMatrix * a_Position;
}
```

```
precision mediump float;

uniform vec4 u_Color;

void main()
{
    gl_FragColor = u_Color;
}
```

### Creating our game objects

Now we’ll add support for generating and drawing our game objects. Let’s begin with game_objects.h:

```
#include "platform_gl.h"
#include "program.h"
#include "linmath.h"

typedef struct {
    GLuint texture;
    GLuint buffer;
} Table;

typedef struct {
    vec4 color;
    GLuint buffer;
    int num_points;
} Puck;

typedef struct {
    vec4 color;
    GLuint buffer;
    int num_points;
} Mallet;

Table create_table(GLuint texture);
void draw_table(const Table* table, const TextureProgram* texture_program, mat4x4 m);

Puck create_puck(float radius, float height, int num_points, vec4 color);
void draw_puck(const Puck* puck, const ColorProgram* color_program, mat4x4 m);

Mallet create_mallet(float radius, float height, int num_points, vec4 color);
void draw_mallet(const Mallet* mallet, const ColorProgram* color_program, mat4x4 m);
```

We’ve defined three C structs to hold the data for our table, puck, and mallets, and we’ve declared functions to create and draw these objects.

#### Drawing a table

Let’s continue with game_objects.c:

```
#include "game_objects.h"
#include "buffer.h"
#include "platform_gl.h"
#include "program.h"
#include "linmath.h"
#include <math.h>

// Triangle fan
// position X, Y, texture S, T
static const float table_data[] = { 0.0f,  0.0f, 0.5f, 0.5f,
                                   -0.5f, -0.8f, 0.0f, 0.9f,
                                    0.5f, -0.8f, 1.0f, 0.9f,
                                    0.5f,  0.8f, 1.0f, 0.1f,
                                   -0.5f,  0.8f, 0.0f, 0.1f,
                                   -0.5f, -0.8f, 0.0f, 0.9f};

Table create_table(GLuint texture) {
    return (Table) {texture,
        create_vbo(sizeof(table_data), table_data, GL_STATIC_DRAW)};
}

void draw_table(const Table* table, const TextureProgram* texture_program, mat4x4 m)
{
    glUseProgram(texture_program->program);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, table->texture);
    glUniformMatrix4fv(texture_program->u_mvp_matrix_location, 1,
        GL_FALSE, (GLfloat*)m);
    glUniform1i(texture_program->u_texture_unit_location, 0);

    glBindBuffer(GL_ARRAY_BUFFER, table->buffer);
    glVertexAttribPointer(texture_program->a_position_location, 2, GL_FLOAT,
        GL_FALSE, 4 * sizeof(GL_FLOAT), BUFFER_OFFSET(0));
    glVertexAttribPointer(texture_program->a_texture_coordinates_location, 2, GL_FLOAT,
        GL_FALSE, 4 * sizeof(GL_FLOAT), BUFFER_OFFSET(2 * sizeof(GL_FLOAT)));
    glEnableVertexAttribArray(texture_program->a_position_location);
    glEnableVertexAttribArray(texture_program->a_texture_coordinates_location);
    glDrawArrays(GL_TRIANGLE_FAN, 0, 6);

    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
```

After the includes, this is the code to create and draw the table data. This is essentially the same as what we had before, with the coordinates adjusted a bit to change the table into a rectangle.

#### Generating circles and cylinders

Before we can draw a puck or a mallet, we’ll need to add some helper functions to draw a circle or a cylinder. Let’s define those now:

```
static inline int size_of_circle_in_vertices(int num_points) {
    return 1 + (num_points + 1);
}

static inline int size_of_open_cylinder_in_vertices(int num_points) {
    return (num_points + 1) * 2;
}
```

We first need two helper functions to calculate the size of a circle or a cylinder in terms of vertices. A circle drawn as a triangle fan has one vertex for the center, `num_points` vertices around the circle, and one more vertex to close the circle. An open-ended cylinder drawn as a triangle strip doesn’t have a center point, but it does have two vertices for each point around the circle, and two more vertices to close off the circle.
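To make the counts concrete, here are the same two helpers evaluated for the 32 points our game code passes to `create_puck()` and `create_mallet()` (the function bodies are copied from above for a standalone check):

```c
/* Center vertex + num_points rim vertices + one repeated vertex to
 * close the triangle fan. */
static int size_of_circle_in_vertices(int num_points) {
    return 1 + (num_points + 1);
}

/* Two rim vertices per point for the triangle strip, with the first
 * pair repeated at the end to close the loop. */
static int size_of_open_cylinder_in_vertices(int num_points) {
    return (num_points + 1) * 2;
}
```

With `num_points` of 32, a circle takes 34 vertices and an open cylinder takes 66.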

```
static inline int gen_circle(float* out, int offset,
    float center_x, float center_y, float center_z,
    float radius, int num_points)
{
    out[offset++] = center_x;
    out[offset++] = center_y;
    out[offset++] = center_z;

    int i;
    for (i = 0; i <= num_points; ++i) {
        float angle_in_radians = ((float) i / (float) num_points)
            * ((float) M_PI * 2.0f);
        out[offset++] = center_x + radius * cosf(angle_in_radians);
        out[offset++] = center_y;
        out[offset++] = center_z + radius * sinf(angle_in_radians);
    }

    return offset;
}
```

This code will generate a circle, given a center point, a radius, and the number of points around the circle.

```
static inline int gen_cylinder(float* out, int offset,
    float center_x, float center_y, float center_z,
    float height, float radius, int num_points)
{
    const float y_start = center_y - (height / 2.0f);
    const float y_end = center_y + (height / 2.0f);

    int i;
    for (i = 0; i <= num_points; i++) {
        float angle_in_radians = ((float) i / (float) num_points)
            * ((float) M_PI * 2.0f);

        float x_position = center_x + radius * cosf(angle_in_radians);
        float z_position = center_z + radius * sinf(angle_in_radians);

        out[offset++] = x_position;
        out[offset++] = y_start;
        out[offset++] = z_position;

        out[offset++] = x_position;
        out[offset++] = y_end;
        out[offset++] = z_position;
    }

    return offset;
}
```

This code will generate the vertices for an open-ended cylinder. Note that for both the circle and the cylinder, the loop goes from 0 to `num_points`, so the first and last points around the circle are duplicated in order to close the loop around the circle.

#### Drawing a puck

Let’s add the code to generate and draw the puck:

```
Puck create_puck(float radius, float height, int num_points, vec4 color)
{
    float data[(size_of_circle_in_vertices(num_points)
        + size_of_open_cylinder_in_vertices(num_points)) * 3];

    int offset = gen_circle(data, 0, 0.0f, height / 2.0f, 0.0f, radius, num_points);
    gen_cylinder(data, offset, 0.0f, 0.0f, 0.0f, height, radius, num_points);

    return (Puck) {{color[0], color[1], color[2], color[3]},
        create_vbo(sizeof(data), data, GL_STATIC_DRAW),
        num_points};
}
```

A puck contains one open-ended cylinder, and a circle to top off that cylinder.

```
void draw_puck(const Puck* puck, const ColorProgram* color_program, mat4x4 m)
{
    glUseProgram(color_program->program);

    glUniformMatrix4fv(color_program->u_mvp_matrix_location, 1, GL_FALSE, (GLfloat*)m);
    glUniform4fv(color_program->u_color_location, 1, puck->color);

    glBindBuffer(GL_ARRAY_BUFFER, puck->buffer);
    glVertexAttribPointer(color_program->a_position_location, 3, GL_FLOAT,
        GL_FALSE, 0, BUFFER_OFFSET(0));
    glEnableVertexAttribArray(color_program->a_position_location);

    int circle_vertex_count = size_of_circle_in_vertices(puck->num_points);
    int cylinder_vertex_count = size_of_open_cylinder_in_vertices(puck->num_points);

    glDrawArrays(GL_TRIANGLE_FAN, 0, circle_vertex_count);
    glDrawArrays(GL_TRIANGLE_STRIP, circle_vertex_count, cylinder_vertex_count);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
```

To draw the puck, we pass in the uniforms and attributes, and then we draw the circle as a triangle fan, and the cylinder as a triangle strip.

#### Drawing a mallet

Let’s continue with the code to create and draw a mallet:

```
Mallet create_mallet(float radius, float height, int num_points, vec4 color)
{
    float data[(size_of_circle_in_vertices(num_points) * 2
        + size_of_open_cylinder_in_vertices(num_points) * 2) * 3];

    float base_height = height * 0.25f;
    float handle_height = height * 0.75f;

    int offset = gen_circle(data, 0, 0.0f, -base_height, 0.0f, radius, num_points);
    offset = gen_circle(data, offset,
        0.0f, height * 0.5f, 0.0f,
        radius / 3.0f, num_points);
    offset = gen_cylinder(data, offset,
        0.0f, -base_height - base_height / 2.0f, 0.0f,
        base_height, radius, num_points);
    gen_cylinder(data, offset,
        0.0f, height * 0.5f - handle_height / 2.0f, 0.0f,
        handle_height, radius / 3.0f, num_points);

    return (Mallet) {{color[0], color[1], color[2], color[3]},
        create_vbo(sizeof(data), data, GL_STATIC_DRAW),
        num_points};
}
```

A mallet contains two circles and two open-ended cylinders, positioned and sized so that the mallet’s base is wider and shorter than the mallet’s handle.
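As a sanity check on the size of the mallet's `data` array, the arithmetic can be written out on its own (the function name is illustrative): two circles plus two open-ended cylinders, three floats (x, y, z) per vertex:

```c
/* Total floats needed for one mallet's vertex data. */
static int mallet_vertex_data_floats(int num_points) {
    int circle = 1 + (num_points + 1);           /* triangle fan */
    int open_cylinder = (num_points + 1) * 2;    /* triangle strip */
    return (circle * 2 + open_cylinder * 2) * 3;
}
```

For the 32 points used by our game code, that's (34 x 2 + 66 x 2) x 3 = 600 floats.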

```
void draw_mallet(const Mallet* mallet, const ColorProgram* color_program, mat4x4 m)
{
    glUseProgram(color_program->program);

    glUniformMatrix4fv(color_program->u_mvp_matrix_location, 1, GL_FALSE, (GLfloat*)m);
    glUniform4fv(color_program->u_color_location, 1, mallet->color);

    glBindBuffer(GL_ARRAY_BUFFER, mallet->buffer);
    glVertexAttribPointer(color_program->a_position_location, 3, GL_FLOAT,
        GL_FALSE, 0, BUFFER_OFFSET(0));
    glEnableVertexAttribArray(color_program->a_position_location);

    int circle_vertex_count = size_of_circle_in_vertices(mallet->num_points);
    int cylinder_vertex_count = size_of_open_cylinder_in_vertices(mallet->num_points);
    int start_vertex = 0;

    glDrawArrays(GL_TRIANGLE_FAN, start_vertex, circle_vertex_count);
    start_vertex += circle_vertex_count;
    glDrawArrays(GL_TRIANGLE_FAN, start_vertex, circle_vertex_count);
    start_vertex += circle_vertex_count;
    glDrawArrays(GL_TRIANGLE_STRIP, start_vertex, cylinder_vertex_count);
    start_vertex += cylinder_vertex_count;
    glDrawArrays(GL_TRIANGLE_STRIP, start_vertex, cylinder_vertex_count);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
```

Drawing the mallet is similar to drawing the puck, except that now we draw two circles and two cylinders.

We’ll need to add a helper function that we’re currently using in game.c; create a new header file called math_helper.h, and add the following code:

```#include <math.h>

static inline float deg_to_radf(float deg) {
return deg * (float)M_PI / 180.0f;
}```

Since C’s trigonometric functions expect passed-in values to be in radians, we’ll use this function to convert degrees into radians, where needed.

While linmath.h contains a lot of useful functions, there are a few missing that we need for our game code. Create a new header file called matrix.h, and begin by adding the following code, all adapted from Android’s OpenGL `Matrix` class:

```#include "linmath.h"
#include <math.h>
#include <string.h>

/* Adapted from Android's OpenGL Matrix.java. */

static inline void mat4x4_perspective(mat4x4 m, float y_fov_in_degrees,
float aspect, float n, float f)
{
const float angle_in_radians = (float) (y_fov_in_degrees * M_PI / 180.0);
const float a = (float) (1.0 / tan(angle_in_radians / 2.0));

m[0][0] = a / aspect;
m[0][1] = 0.0f;
m[0][2] = 0.0f;
m[0][3] = 0.0f;

m[1][0] = 0.0f;
m[1][1] = a;
m[1][2] = 0.0f;
m[1][3] = 0.0f;

m[2][0] = 0.0f;
m[2][1] = 0.0f;
m[2][2] = -((f + n) / (f - n));
m[2][3] = -1.0f;

m[3][0] = 0.0f;
m[3][1] = 0.0f;
m[3][2] = -((2.0f * f * n) / (f - n));
m[3][3] = 0.0f;
}```

We’ll use `mat4x4_perspective()` to set up a perspective projection matrix.

```static inline void mat4x4_translate_in_place(mat4x4 m, float x, float y, float z)
{
int i;
for (i = 0; i < 4; ++i) {
m[3][i] += m[0][i] * x
+  m[1][i] * y
+  m[2][i] * z;
}
}```

This helper function lets us translate a matrix in place.

```static inline void mat4x4_look_at(mat4x4 m,
float eyeX, float eyeY, float eyeZ,
float centerX, float centerY, float centerZ,
float upX, float upY, float upZ)
{
// See the OpenGL GLU documentation for gluLookAt for a description
// of the algorithm. We implement it in a straightforward way:

float fx = centerX - eyeX;
float fy = centerY - eyeY;
float fz = centerZ - eyeZ;

// Normalize f
vec3 f_vec = {fx, fy, fz};
float rlf = 1.0f / vec3_len(f_vec);
fx *= rlf;
fy *= rlf;
fz *= rlf;

// compute s = f x up (x means "cross product")
float sx = fy * upZ - fz * upY;
float sy = fz * upX - fx * upZ;
float sz = fx * upY - fy * upX;

// and normalize s
vec3 s_vec = {sx, sy, sz};
float rls = 1.0f / vec3_len(s_vec);
sx *= rls;
sy *= rls;
sz *= rls;

// compute u = s x f
float ux = sy * fz - sz * fy;
float uy = sz * fx - sx * fz;
float uz = sx * fy - sy * fx;

m[0][0] = sx;
m[0][1] = ux;
m[0][2] = -fx;
m[0][3] = 0.0f;

m[1][0] = sy;
m[1][1] = uy;
m[1][2] = -fy;
m[1][3] = 0.0f;

m[2][0] = sz;
m[2][1] = uz;
m[2][2] = -fz;
m[2][3] = 0.0f;

m[3][0] = 0.0f;
m[3][1] = 0.0f;
m[3][2] = 0.0f;
m[3][3] = 1.0f;

mat4x4_translate_in_place(m, -eyeX, -eyeY, -eyeZ);
}```

We can use `mat4x4_look_at()` like a camera, and use it to position the scene in a certain way.

We’re almost done with the changes to our core code. Let’s wrap up those changes by adding the following code:

#### program.h

```#pragma once
#include "platform_gl.h"

typedef struct {
GLuint program;

GLint a_position_location;
GLint a_texture_coordinates_location;
GLint u_mvp_matrix_location;
GLint u_texture_unit_location;
} TextureProgram;

typedef struct {
GLuint program;

GLint a_position_location;
GLint u_mvp_matrix_location;
GLint u_color_location;
} ColorProgram;

TextureProgram get_texture_program(GLuint program);
ColorProgram get_color_program(GLuint program);```

#### program.c

```#include "program.h"
#include "platform_gl.h"

TextureProgram get_texture_program(GLuint program)
{
return (TextureProgram) {
program,
glGetAttribLocation(program, "a_Position"),
glGetAttribLocation(program, "a_TextureCoordinates"),
glGetUniformLocation(program, "u_MvpMatrix"),
glGetUniformLocation(program, "u_TextureUnit")};
}

ColorProgram get_color_program(GLuint program)
{
return (ColorProgram) {
program,
glGetAttribLocation(program, "a_Position"),
glGetUniformLocation(program, "u_MvpMatrix"),
glGetUniformLocation(program, "u_Color")};
}```

We first need to update Android.mk and add the following to `LOCAL_SRC_FILES`:

```$(CORE_RELATIVE_PATH)/game_objects.c \
$(CORE_RELATIVE_PATH)/program.c \```

We also need to add a new `LOCAL_C_INCLUDES`:

`LOCAL_C_INCLUDES += $(PROJECT_ROOT_PATH)/3rdparty/linmath/`

We then need to update renderer_wrapper.c and change the call to `on_surface_changed();` to `on_surface_changed(width, height);`. Once we’ve done that, we should be able to run the app on our Android device, and it should look similar to the following image:

For iOS, we just need to open up the Xcode project and add the necessary references to linmath.h and our new core files to the appropriate folder groups, and then we need to update ViewController.m and change `on_surface_changed();` to the following:

`on_surface_changed([[self view] bounds].size.width, [[self view] bounds].size.height);`

Once we run the app, it should look similar to the following image:

For emscripten, we need to update the Makefile and add the following lines to `SOURCES`:

```		  ../../core/game_objects.c \
../../core/program.c \```

We’ll also need to add the following lines to `OBJECTS`:

```		  ../../core/game_objects.o \
../../core/program.o \```

We then just need to update main.c, move the constants `width` and `height` from inside `init_gl()` to outside the function near the top of the file, and update the call to `on_surface_changed();` to `on_surface_changed(width, height);`. We can then build the file by calling `emmake make`, which should produce a file that looks as follows:

See how easy that was? Now that we have a minimal cross-platform framework in place, it’s very easy for us to bring changes to the core code across to each platform.

### Exploring further

The full source code for this lesson can be found at the GitHub project. In the next post, we’ll take a look at user input so we can move our mallet around the screen.

## Loading a PNG into Memory and Displaying It as a Texture with OpenGL ES 2: Adding Support for Emscripten

In the last two posts in this series, we added support for loading a PNG file into OpenGL as a texture, and then we displayed that texture on the screen:

In this post, we’ll add support for emscripten.

### Prerequisites

To complete this lesson, you’ll need to have completed Loading a PNG into Memory and Displaying It as a Texture with OpenGL ES 2: Adding Support for iOS. The previous emscripten post, Calling OpenGL from C on the Web by Using Emscripten, Sharing Common Code with Android and iOS, covers emscripten installation and setup.

You can also just download the completed project for this part of the series from GitHub and check out the code from there.

### Updating the emscripten code

There’s just one new file that we need to add to /airhockey/src/platform/emscripten/, which is platform_asset_utils.c:

```#include "platform_asset_utils.h"
#include "platform_file_utils.h"
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

FileData get_asset_data(const char* relative_path) {
assert(relative_path != NULL);
return get_file_data(relative_path);
}

void release_asset_data(const FileData* file_data) {
assert(file_data != NULL);
release_file_data(file_data);
}```

For emscripten, there is nothing special to do since it supports a virtual file system using pre-embedded resources. For loading assets, all we need to do here is just forward the calls on to the platform-independent file loading functions that we defined in the previous post.

Since emscripten doesn’t have zlib built into it like Android and iOS do, we’ll need to add that as a third-party dependency. Download zlib 1.2.8 from http://zlib.net/ and extract it to /airhockey/src/3rdparty/libzlib. We won’t need to do anything else to get it to compile.

#### Updating the Makefile

To get things to compile and run, let’s replace the Makefile in the emscripten directory with the following contents:

```CFLAGS = -O2 -I. -I../../core -I../common -I../../3rdparty/libpng -I../../3rdparty/libzlib -Wall -Wextra
LDFLAGS = --embed-file ../../../assets@/

OBJECTS = main.o \
platform_asset_utils.o \
../common/platform_log.o \
../common/platform_file_utils.o \
../../core/buffer.o \
../../core/asset_utils.o \
../../core/game.o \
../../core/image.o \
../../core/texture.o \
../../3rdparty/libpng/png.o \
../../3rdparty/libpng/pngerror.o \
../../3rdparty/libpng/pngget.o \
../../3rdparty/libpng/pngmem.o \
../../3rdparty/libpng/pngrio.o \
../../3rdparty/libpng/pngrtran.o \
../../3rdparty/libpng/pngrutil.o \
../../3rdparty/libpng/pngset.o \
../../3rdparty/libpng/pngtrans.o \
../../3rdparty/libpng/pngwio.o \
../../3rdparty/libpng/pngwrite.o \
../../3rdparty/libpng/pngwtran.o \
../../3rdparty/libpng/pngwutil.o \
../../3rdparty/libzlib/crc32.o \
../../3rdparty/libzlib/deflate.o \
../../3rdparty/libzlib/infback.o \
../../3rdparty/libzlib/inffast.o \
../../3rdparty/libzlib/inflate.o \
../../3rdparty/libzlib/inftrees.o \
../../3rdparty/libzlib/trees.o \
../../3rdparty/libzlib/zutil.o
TARGET = airhockey.html

all: $(TARGET)

$(TARGET): $(OBJECTS)
	$(CC) $(CFLAGS) -o $@ $(LDFLAGS) $(OBJECTS)

clean:
	$(RM) $(TARGET) $(OBJECTS)```

This Makefile specifies all of the required object files. When we run `make`, it will find the source files automatically and compile them into objects, using the `CFLAGS` that we’ve defined above. When we run this Makefile through `emmake`, `emcc` will use the `--embed-file ../../../assets@/` line to package all of the assets into our target HTML file; the `@/` syntax tells emscripten to place the assets at the root of the virtual file system. More information can be found at the emscripten wiki.

If your emscripten is configured and ready to go, then you can build the program by running `emmake` as follows, changing the path to `emmake` to reflect where you’ve installed emscripten.

`MacBook-Air:emscripten user$ /opt/emscripten/emmake make -j 8`

If all went well, airhockey.html should look similar to the following HTML:

For information on installing and configuring emscripten, see Calling OpenGL from C on the Web by Using Emscripten, Sharing Common Code with Android and iOS or the emscripten tutorial.

#### Exploring further

The full source code for this lesson can be found at the GitHub project. For the next few posts, we’re going to start doing more with our base so that we can start making this look more like an actual air hockey game!

## Calling OpenGL from C on the Web by Using Emscripten, Sharing Common Code with Android and iOS

In the last two posts, we started building up a simple system to reuse a common set of C code in Android and iOS:

In this post, we’ll also add support for emscripten, an LLVM-to-JavaScript compiler that can convert C and C++ code into JavaScript. Emscripten is quite a neat piece of technology, and has led to further improvements to JavaScript engines, such as asm.js. Check out all of the demos over at the wiki.

### Prerequisites

For this post, you’ll need to have Emscripten installed and configured; we’ll cover installation instructions further below. It’ll also be helpful if you’ve completed the first two posts in this series: OpenGL from C on Android by using the NDK and Calling OpenGL from C on iOS, Sharing Common Code with Android. If not, then you can also download the code from GitHub and follow along.

### Installing emscripten

#### Installing on Windows (tested on Windows 8)

There is a set of detailed instructions available at https://github.com/kripken/emscripten/wiki/Using-Emscripten-on-Windows. There’s no need to build anything from source, as there are prebuilt binaries for everything you need.

Here are a few gotchas that you might run into during the install:

• The GCC and Clang archives need to be extracted to the same location, such as C:\mingw64.
• The paths in .emscripten should be specified with forward slashes, as in ‘C:/mingw64’, or double backward slashes, as in ‘C:\\mingw64’.
• TEMP_DIR in .emscripten should be set to a valid path, such as ‘C:\\Windows\\Temp’.

You can then test the install by entering the following commands into a command prompt from the emscripten directory:

`python emcc tests\hello_world.cpp -o hello_world.html`
`hello_world.html`

#### Installing on Mac OS X (tested on OS X 10.8.4)

The instructions over at https://gist.github.com/dweekly/5873953 should get you up and running. Instead of `brew install node`, you can also enter `sudo port install nodejs`, if using MacPorts. I installed emscripten and LLVM into the /opt directory.

First you should run emcc from the emscripten directory to create a default config file in ~/.emscripten. After configuring ~/.emscripten and checking that all paths are correct, you can test the install by entering the following into a terminal shell from the emscripten directory:

`./emcc tests/hello_world.cpp -o hello_world.html`
`open hello_world.html`

#### Installing on Ubuntu Linux (tested on Ubuntu 13.04)

The following commands should be entered into a terminal shell; they were adapted from https://earthserver.com/Setting_up_emscripten_development_environment_on_Linux:

##### Installing prerequisites

`sudo apt-get update; sudo apt-get install build-essential openjdk-7-jdk openjdk-7-jre-headless git`

##### Installing node.js:

Download the latest node.js from http://nodejs.org/, extract it, and then build & install it with the following commands from inside the nodejs source directory:

`./configure`
`make`
`sudo make install`

##### Installing LLVM

`sudo apt-get install llvm clang`

##### Installing emscripten

`sudo mkdir /opt/emscripten`
`sudo chmod 777 /opt/emscripten`
`cd /opt`
`git clone git://github.com/kripken/emscripten.git emscripten`

##### Configuring emscripten

`cd emscripten`
`./emcc`

This command will print out a listing with the auto-detected paths for LLVM and other utilities. Check that all paths are correct, and edit ~/.emscripten if any are not.

You can then test out the install by entering the following commands:

`./emcc tests/hello_world.cpp -o hello_world.html`
`xdg-open hello_world.html`

If all goes well, you should then see a browser window open with “hello, world!” printed out in a box.

Let’s start by creating a new folder called emscripten in the airhockey folder. In that new folder, let’s create a new source file called main.c, beginning with the following contents:

```#include <stdlib.h>
#include <stdio.h>
#include <GL/glfw.h>
#include <emscripten/emscripten.h>
#include "game.h"

int init_gl();
void do_frame();
void shutdown_gl();

int main()
{
if (init_gl() == GL_TRUE) {
on_surface_created();
on_surface_changed();
emscripten_set_main_loop(do_frame, 0, 1);
}

shutdown_gl();

return 0;
}
```

In this C source file, we’ve declared a few functions, and then we’ve defined the main body of our program. The program will begin by calling `init_gl()` (a function that we’ll define further below) to initialize OpenGL, then it will call `on_surface_created()` and `on_surface_changed()` from our common code, and then it will call a special emscripten function, `emscripten_set_main_loop()`, which can simulate an infinite loop by using the browser’s `requestAnimationFrame` mechanism.

Let’s complete the rest of the source file:

```int init_gl()
{
const int width = 480,
height = 800;

if (glfwInit() != GL_TRUE) {
printf("glfwInit() failed\n");
return GL_FALSE;
}

if (glfwOpenWindow(width, height, 8, 8, 8, 8, 16, 0, GLFW_WINDOW) != GL_TRUE) {
printf("glfwOpenWindow() failed\n");
return GL_FALSE;
}

return GL_TRUE;
}

void do_frame()
{
on_draw_frame();
glfwSwapBuffers();
}

void shutdown_gl()
{
glfwTerminate();
}
```

In the rest of this code, we use GLFW, an OpenGL library for managing OpenGL contexts, creating windows, and handling input. Emscripten has special support for GLFW built into it, so that the calls will be translated to matching JavaScript code on compilation.

Like we did for Android and iOS, we also need to define where the OpenGL headers are stored for our common code. Save the following into a new file called glwrapper.h in airhockey/emscripten/:

```#include <GLES2/gl2.h>
```

### Building the code and running it in a browser

To build the program, run the following command in a terminal shell from airhockey/emscripten/:

`emcc -I. -I../common main.c ../common/game.c -o airhockey.html`

In the GitHub project, there’s also a Makefile which will build airhockey.html when `emmake make` is called. This Makefile can also be used on Windows by running `python emmake mingw32-make`, putting the right paths where appropriate. To see the code in action, just open up airhockey.html in a browser.

When we ask emscripten to generate an HTML file, it will generate an HTML file that contains the embedded code, which you can see further below (WebGL support is required to see the OpenGL code in action):

#### Exploring further

The full source code for this lesson can be found at the GitHub project. Now that we have a base setup in Android, iOS, and emscripten, we can start fleshing out our project in the next few posts. Emscripten is pretty neat, and I definitely recommend checking out the samples over at https://github.com/kripken/emscripten/wiki!

## WebGL Lesson One: Getting Started

This is the first tutorial for learning OpenGL ES 2 on the web, using WebGL. In this lesson, we’ll look at how to create a basic WebGL instance and display stuff to the screen, as well as what you need in order to view WebGL in your browser. There will also be an introduction to shaders and matrices.

#### What is WebGL?

Previously, if you wanted to do real-time 3D graphics on the web, your only real option was to use a plugin such as Java or Flash. However, there is currently a push to bring hardware-accelerated graphics to the web called WebGL. WebGL is based on OpenGL ES 2, which means that we’ll need to use shaders. Since WebGL runs inside a web browser, we’ll also need to use JavaScript to control it.

#### Prerequisites

You’ll need a browser that supports WebGL, and you should also have the most recent drivers installed for your video card. You can visit Get WebGL to see if your browser supports WebGL and if not, it will tell you where you can get a browser that supports it.

The latest stable releases of Chrome and Firefox support WebGL, so you can always start there.

This lesson uses the following third-party libraries:

• webgl-utils.js — for basic initialization of an OpenGL context and rendering on browser request.
• glMatrix.js — for matrix operations.

#### Assumptions

The reader should be familiar with programming and 3D concepts on a basic level. The Khronos WebGL Public Wiki is a good place to start out.

#### Getting started

As I write this, I am also learning WebGL, so we’ll be learning together! We’ll look at how to get a context and start drawing stuff to the screen, and we’ll more or less follow lesson one for Android as this lesson is based on it. For those of you who followed the Android lesson, you may remember that getting an OpenGL context consisted of creating an activity and setting the content view to a GLSurfaceView object. We also provided a class which overrode GLSurfaceView.Renderer and provided methods which were called by the system.

With WebGL, it is just as easy to get things setup and running. The webgl-utils.js script provides us with two functions to get things going:

```function setupWebGL(canvas, opt_attribs);

function window.requestAnimFrame(callback, element);```

The setupWebGL() function takes care of initializing WebGL for us, as well as pointing the user to a browser that supports WebGL or further troubleshooting if there were errors initializing WebGL. More info on the optional parameters can be found at the WebGL Specification page, section 5.2.1.

The second function provides a cross-browser way of setting up a render callback. The browser will call the function provided in the callback parameter at a regular interval. The element parameter lets the browser know for which element the callback is firing.

In our script we have a function main() which is our main entry point, and is called once at the end of the script. In this function, we initialize WebGL with the following calls:

```    // Try to get a WebGL context
canvas = document.getElementById("canvas");

// We don't need a depth buffer.
// See https://www.khronos.org/registry/webgl/specs/1.0/ Section 5.2
gl = WebGLUtils.setupWebGL(canvas, { depth: false });```

If the calls were successful, then we go on to initialize our model data and set up our rendering callback.

#### Visualizing a 3D world

Like in lesson one for Android, we need to define our model data as an array of floating point numbers. These numbers can represent vertex positions, colors, or anything else that we need. Unlike OpenGL ES 2 on Android, WebGL does not support client-side buffers. This means that we need to load all of the data into WebGL using vertex buffer objects (VBOs). Thankfully, this is a pretty trivial step and it will be explained in more detail further below.

Before we transfer the data into WebGL, we’ll define it in client memory first using the Float32Array datatype. These typed arrays are an attempt to increase the performance of Javascript by adding typing information.

```		// Define points for equilateral triangles.
trianglePositions = new Float32Array([
// X, Y, Z,
-0.5, -0.25, 0.0,
0.5, -0.25, 0.0,
0.0, 0.559016994, 0.0]);

// This triangle is red, green, and blue.
triangle1Colors = new Float32Array([
// R, G, B, A
1.0, 0.0, 0.0, 1.0,
0.0, 0.0, 1.0, 1.0,
0.0, 1.0, 0.0, 1.0]);

...

```

All of the triangles can share the same position data, but we’ll define a different set of colors for each triangle.

##### Setting up initial parameters

After defining basic model data, our main function calls startRendering(), which takes care of setting up the viewport, building the shaders, and starting the rendering loop.

###### Setting up the viewport and projection matrix

First, we configure the viewport to be the same size as the canvas viewport. Note that this assumes a canvas that doesn’t change size, since we’re only doing this once.

```	// Set the OpenGL viewport to the same size as the canvas.
gl.viewport(0, 0, canvas.clientWidth, canvas.clientHeight);```

```	// Create a new perspective projection matrix. The height will stay the same
// while the width will vary as per aspect ratio.
var ratio = canvas.clientWidth / canvas.clientHeight;
var left = -ratio;
var right = ratio;
var bottom = -1.0;
var top = 1.0;
var near = 1.0;
var far = 10.0;

mat4.frustum(left, right, bottom, top, near, far, projectionMatrix);```

###### Configuring the view matrix and default parameters

Setting up the viewport and configuring the projection matrix is something we should do whenever the canvas has changed size. The next step is to set the default clear color as well as the view matrix.

```	// Set the background clear color to gray.
gl.clearColor(0.5, 0.5, 0.5, 1.0);

/* Configure camera */
// Position the eye behind the origin.
var eyeX = 0.0;
var eyeY = 0.0;
var eyeZ = 1.5;

// We are looking toward the distance
var lookX = 0.0;
var lookY = 0.0;
var lookZ = -5.0;

// Set our up vector. This is where our head would be pointing were we holding the camera.
var upX = 0.0;
var upY = 1.0;
var upZ = 0.0;

// Set the view matrix. This matrix can be said to represent the camera position.
var eye = vec3.create();
eye[0] = eyeX; eye[1] = eyeY; eye[2] = eyeZ;

var center = vec3.create();
center[0] = lookX; center[1] = lookY; center[2] = lookZ;

var up = vec3.create();
up[0] = upX; up[1] = upY; up[2] = upZ;

mat4.lookAt(eye, center, up, viewMatrix);```

In WebGL we can embed shaders in a few ways: we can embed them as a JavaScript string, we can embed them into the HTML of the page that contains the script, or we can put them in a separate file and link to that file from our script. In this lesson, we take the second approach:

```<script id="vertex_shader" type="x-shader/x-vertex">
uniform mat4 u_MVPMatrix;
...
</script>

<script id="fragment_shader" type="x-shader/x-fragment">
precision mediump float;
...
</script>```

We can then read in these scripts using the following code snippet (a fragment of the shader-loading helper; `shaderHandle` is the handle previously returned by `gl.createShader()`):

```// Read the embedded shader from the document.
var shaderSource = document.getElementById(sourceScriptId);

if (!shaderSource)
{
throw("Error: shader script '" + sourceScriptId + "' not found");
}

// Pass in the shader source.
gl.shaderSource(shaderHandle, shaderSource.text);```
I mentioned a bit earlier that WebGL doesn’t support client-side buffers, so we need to upload our data into WebGL itself using buffer objects. This is actually pretty straightforward:

```    // Create buffers in OpenGL's working memory.
trianglePositionBufferObject = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, trianglePositionBufferObject);
gl.bufferData(gl.ARRAY_BUFFER, trianglePositions, gl.STATIC_DRAW);

triangleColorBufferObject1 = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, triangleColorBufferObject1);
gl.bufferData(gl.ARRAY_BUFFER, triangle1Colors, gl.STATIC_DRAW);

...

```

First we create a buffer object using createBuffer(), then we bind the buffer. Then we pass in the data using gl.bufferData() and tell OpenGL that this buffer will be used for static drawing; this hints to OpenGL that we will not be updating this buffer often.

###### WebGL versus OpenGL ES 2

You may have noticed that the WebGL API is a bit different than the base OpenGL ES 2 API: functions and variable names have had their “gl” or “GL_” prefixes removed. This actually makes the API a bit cleaner to use and read. At the same time, some functions have been modified a bit to mesh better with the JavaScript environment.

###### Setting up a rendering callback

We finally kick off the rendering loop by calling window.requestAnimFrame():

```	// Tell the browser we want render() to be called whenever it's time to draw another frame.
window.requestAnimFrame(render, canvas);```

##### Rendering to the screen

The code to render to the screen is pretty much a direct port of the lesson one code for Android. One main difference is that we call window.requestAnimFrame() at the end to request another animation frame.

```	// Request another frame
window.requestAnimFrame(render, canvas);```

#### Recap

If everything went well, you should end up with an animated canvas like the one just below:

Your browser does not support the canvas tag. This is a static example of what would be seen.

If you would like more explanations behind the shaders or other aspects of the program, please be sure to check out lesson one for Android.

##### Debugging

Debugging in JavaScript in the browser is a little more difficult than within an integrated environment such as Eclipse, but it can be done using tools such as Chrome’s inspector. You can also use the WebGL Inspector, which is a plugin that lets you delve into WebGL’s internals and get a better idea of what’s going on.

##### Embedding into WordPress

WebGL can easily be embedded into your posts and pages! You need a canvas, script includes for any third-party libraries, and a script body for your main script (this can also be an include).

Example of a canvas:
`<pre><canvas id="canvas" width="550" height="375">Your browser does not support the canvas tag. This is a static example of what would be seen.</canvas></pre>`

Example of a script include:
`<pre><script type="text/javascript" src="http://www.learnopengles.com/wordpress/wp-content/uploads/2011/06/webgl-utils.js"></script></pre>`

Example of an embedded script:
`<pre><script type="text/javascript"> /** * Lesson_one.js */ ... </script></pre>`

The `<pre>` tag is important; otherwise WordPress will mangle your scripts and insert random paragraph tags and other stuff inside. Also, once you’ve inserted this code, you have to stick to using the HTML editor, as the visual editor will also mangle or delete your scripts.

#### Exploring further

Try changing the animation speed, vertex points, or colors, and see what happens!

The full source code for this lesson can be downloaded from the project site on GitHub.

Don’t hesitate to ask any questions or offer feedback, and thanks for stopping by!

```

uniform mat4 u_MVPMatrix;   // A constant representing the combined model/view/projection matrix.

attribute vec4 a_Position;  // Per-vertex position information we will pass in.
attribute vec4 a_Color;     // Per-vertex color information we will pass in.

varying vec4 v_Color;       // This will be passed into the fragment shader.

void main()                 // The entry point for our vertex shader.
{
v_Color = a_Color;      // Pass the color through to the fragment shader.
// It will be interpolated across the triangle.

// gl_Position is a special variable used to store the final position.
// Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
gl_Position = u_MVPMatrix * a_Position;
}

precision mediump float;       // Set the default precision to medium. We don't need as high of a
// precision in the fragment shader.
varying vec4 v_Color;          // This is the color from the vertex shader interpolated across the
// triangle per fragment.
void main()                    // The entry point for our fragment shader.
{
gl_FragColor = v_Color;    // Pass the color directly through the pipeline.
}

/**
* Lesson_one.js
*/

// We make use of the WebGL utility library, which was downloaded from here:
// https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/demos/common/webgl-utils.js
//
// It defines two functions which we use here:
//
// // Creates a WebGL context.
// WebGLUtils.setupWebGL(canvas);
//
// Requests an animation callback. See: https://developer.mozilla.org/en/DOM/window.requestAnimationFrame
// window.requestAnimFrame(callback, node);
//
// We also make use of the glMatrix file which can be downloaded from here:
//

/** Hold a reference to the WebGLContext */
var gl = null;

/** Hold a reference to the canvas DOM object. */
var canvas = null;

/**
* Store the model matrix. This matrix is used to move models from object space (where each model can be thought
* of being located at the center of the universe) to world space.
*/
var modelMatrix = mat4.create();

/**
* Store the view matrix. This can be thought of as our camera. This matrix transforms world space to eye space;
* it positions things relative to our eye.
*/
var viewMatrix = mat4.create();

/** Store the projection matrix. This is used to project the scene onto a 2D viewport. */
var projectionMatrix = mat4.create();

/** Allocate storage for the final combined matrix. This will be passed into the shader program. */
var mvpMatrix = mat4.create();

/** Store our model data in a Float32Array buffer. */
var trianglePositions;
var triangle1Colors;
var triangle2Colors;
var triangle3Colors;

/** Store references to the vertex buffer objects (VBOs) that will be created. */
var trianglePositionBufferObject;
var triangleColorBufferObject1;
var triangleColorBufferObject2;
var triangleColorBufferObject3;

/** This will be used to pass in the transformation matrix. */
var mvpMatrixHandle;

/** This will be used to pass in model position information. */
var positionHandle;

/** This will be used to pass in model color information. */
var colorHandle;

/** Size of the position data in elements. */
var positionDataSize = 3;

/** Size of the color data in elements. */
var colorDataSize = 4;

// Helper function to compile a shader from a script embedded in the document
function loadShader(sourceScriptId, type)
{
var shaderHandle = gl.createShader(type);
var error;

if (shaderHandle != 0)
{
// Read the embedded shader from the document.
var shaderSource = document.getElementById(sourceScriptId);

if (!shaderSource)
{
throw("Error: shader script '" + sourceScriptId + "' not found");
}

// Pass in the shader source.
gl.shaderSource(shaderHandle, shaderSource.text);

// Compile the shader.
gl.compileShader(shaderHandle);

// Get the compilation status.
var compiled = gl.getShaderParameter(shaderHandle, gl.COMPILE_STATUS);

// If the compilation failed, delete the shader.
if (!compiled)
{
error = gl.getShaderInfoLog(shaderHandle);
gl.deleteShader(shaderHandle);
shaderHandle = 0;
}
}

if (shaderHandle == 0)
{
throw("Error creating shader " + sourceScriptId + ": " + error);
}

return shaderHandle;
}

// Helper function to link a program
function linkProgram(vertexShader, fragmentShader)
{
// Create a program object and store the handle to it.
var programHandle = gl.createProgram();

if (programHandle != 0)
{
// Bind the vertex shader to the program.
gl.attachShader(programHandle, vertexShader);

// Bind the fragment shader to the program.
gl.attachShader(programHandle, fragmentShader);

// Bind attributes
gl.bindAttribLocation(programHandle, 0, "a_Position");
gl.bindAttribLocation(programHandle, 1, "a_Color");

// Link the two shaders together into a program.
gl.linkProgram(programHandle);

// Get the link status.
var linked = gl.getProgramParameter(programHandle, gl.LINK_STATUS);

// If the link failed, delete the program.
if (!linked)
{
gl.deleteProgram(programHandle);
programHandle = 0;
}
}

if (programHandle == 0)
{
throw("Error creating program.");
}

return programHandle;
}

// Called when we have the context
function startRendering()
{
/* Configure viewport */
// Set the OpenGL viewport to the same size as the canvas.
gl.viewport(0, 0, canvas.clientWidth, canvas.clientHeight);

// Create a new perspective projection matrix. The height will stay the same
// while the width will vary as per aspect ratio.
var ratio = canvas.clientWidth / canvas.clientHeight;
var left = -ratio;
var right = ratio;
var bottom = -1.0;
var top = 1.0;
var near = 1.0;
var far = 10.0;

mat4.frustum(left, right, bottom, top, near, far, projectionMatrix);

/* Configure general parameters */

// Set the background clear color to gray.
gl.clearColor(0.5, 0.5, 0.5, 1.0);

/* Configure camera */
// Position the eye behind the origin.
var eyeX = 0.0;
var eyeY = 0.0;
var eyeZ = 1.5;

// We are looking toward the distance
var lookX = 0.0;
var lookY = 0.0;
var lookZ = -5.0;

// Set our up vector. This is where our head would be pointing were we holding the camera.
var upX = 0.0;
var upY = 1.0;
var upZ = 0.0;

// Set the view matrix. This matrix can be said to represent the camera position.
var eye = vec3.create();
eye[0] = eyeX; eye[1] = eyeY; eye[2] = eyeZ;
var center = vec3.create();
center[0] = lookX; center[1] = lookY; center[2] = lookZ;
var up = vec3.create();
up[0] = upX; up[1] = upY; up[2] = upZ;
mat4.lookAt(eye, center, up, viewMatrix);

// Load and compile the vertex and fragment shaders, then link them into a
// program object. (The script element ids are assumed to match the shaders
// embedded in the page.)
var vertexShaderHandle = loadShader("vertex_shader", gl.VERTEX_SHADER);
var fragmentShaderHandle = loadShader("fragment_shader", gl.FRAGMENT_SHADER);
var programHandle = linkProgram(vertexShaderHandle, fragmentShaderHandle);

// Set program handles. These will later be used to pass in values to the program.
mvpMatrixHandle = gl.getUniformLocation(programHandle, "u_MVPMatrix");
positionHandle = gl.getAttribLocation(programHandle, "a_Position");
colorHandle = gl.getAttribLocation(programHandle, "a_Color");

// Tell OpenGL to use this program when rendering.
gl.useProgram(programHandle);

// Create buffers in OpenGL's working memory.
trianglePositionBufferObject = gl.createBuffer();
//    checkError();
gl.bindBuffer(gl.ARRAY_BUFFER, trianglePositionBufferObject);
//    checkError();
gl.bufferData(gl.ARRAY_BUFFER, trianglePositions, gl.STATIC_DRAW);
//    checkError();

triangleColorBufferObject1 = gl.createBuffer();
//    checkError();
gl.bindBuffer(gl.ARRAY_BUFFER, triangleColorBufferObject1);
//    checkError();
gl.bufferData(gl.ARRAY_BUFFER, triangle1Colors, gl.STATIC_DRAW);
//    checkError();

triangleColorBufferObject2 = gl.createBuffer();
//    checkError();
gl.bindBuffer(gl.ARRAY_BUFFER, triangleColorBufferObject2);
//    checkError();
gl.bufferData(gl.ARRAY_BUFFER, triangle2Colors, gl.STATIC_DRAW);
//    checkError();

triangleColorBufferObject3 = gl.createBuffer();
//    checkError();
gl.bindBuffer(gl.ARRAY_BUFFER, triangleColorBufferObject3);
//    checkError();
gl.bufferData(gl.ARRAY_BUFFER, triangle3Colors, gl.STATIC_DRAW);
//    checkError();

// Tell the browser we want render() to be called whenever it's time to draw another frame.
window.requestAnimFrame(render, canvas);
}

// Callback called each time the browser wants us to draw another frame
function render(time)
{
// Clear the canvas
gl.clear(gl.COLOR_BUFFER_BIT);

// Do a complete rotation every 10 seconds.
var time = Date.now() % 10000;
var angleInDegrees = (360.0 / 10000.0) * time;
var angleInRadians = angleInDegrees / 57.2957795;

var xyz = vec3.create();

// Draw the triangle facing straight on.
mat4.identity(modelMatrix);
xyz[0] = 0; xyz[1] = 0; xyz[2] = 1;
mat4.rotate(modelMatrix, angleInRadians, xyz);
drawTriangle(triangleColorBufferObject1);

// Draw one translated a bit down and rotated to be flat on the ground.
mat4.identity(modelMatrix);
xyz[0] = 0; xyz[1] = -1; xyz[2] = 0;
mat4.translate(modelMatrix, xyz);
mat4.rotateX(modelMatrix, 90 / 57.2957795);
xyz[0] = 0; xyz[1] = 0; xyz[2] = 1;
mat4.rotate(modelMatrix, angleInRadians, xyz);
drawTriangle(triangleColorBufferObject2);

// Draw one translated a bit to the right and rotated to be facing to the left.
mat4.identity(modelMatrix);
xyz[0] = 1; xyz[1] = 0; xyz[2] = 0;
mat4.translate(modelMatrix, xyz);
mat4.rotateY(modelMatrix, 90 / 57.2957795);
xyz[0] = 0; xyz[1] = 0; xyz[2] = 1;
mat4.rotate(modelMatrix, angleInRadians, xyz);
drawTriangle(triangleColorBufferObject3);

// Send the commands to WebGL
gl.flush();

// Request another frame
window.requestAnimFrame(render, canvas);
}

function checkError()
{
var error = gl.getError();

if (error)
{
throw("error: " + error);
}
}

// Draws a triangle from the given vertex data.
function drawTriangle(triangleColorBufferObject)
{
// Pass in the position information
//	console.log("positionHandle=" +  positionHandle);
//	console.log("colorHandle=" +  colorHandle);
gl.enableVertexAttribArray(positionHandle);
//    checkError();

gl.bindBuffer(gl.ARRAY_BUFFER, trianglePositionBufferObject);
gl.vertexAttribPointer(positionHandle, positionDataSize, gl.FLOAT, false,
0, 0);
//    checkError();

// Pass in the color information
gl.enableVertexAttribArray(colorHandle);
//    checkError();

gl.bindBuffer(gl.ARRAY_BUFFER, triangleColorBufferObject);
gl.vertexAttribPointer(colorHandle, colorDataSize, gl.FLOAT, false,
0, 0);
//    checkError();

// This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
// (which now contains view * model).
mat4.multiply(viewMatrix, modelMatrix, mvpMatrix);
// Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);

// This multiplies that result by the projection matrix, and stores the final result in the MVP matrix
// (which now contains projection * view * model).
mat4.multiply(projectionMatrix, mvpMatrix, mvpMatrix);

// Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);

gl.uniformMatrix4fv(mvpMatrixHandle, false, mvpMatrix);
//    checkError();
gl.drawArrays(gl.TRIANGLES, 0, 3);
//    checkError();
//    console.log("Made it past one frame");
}

// Main entry point
function main()
{
// Try to get a WebGL context
canvas = document.getElementById("canvas");

// We don't need a depth buffer. See https://www.khronos.org/registry/webgl/specs/1.0/ Section 5.2 for more info.
gl = WebGLUtils.setupWebGL(canvas, { depth: false });

if (gl != null)
{
// Init model data.

// Define points for equilateral triangles.
trianglePositions = new Float32Array([
// X, Y, Z,
-0.5, -0.25, 0.0,
0.5, -0.25, 0.0,
0.0, 0.559016994, 0.0]);

// This triangle is red, green, and blue.
triangle1Colors = new Float32Array([
// R, G, B, A
1.0, 0.0, 0.0, 1.0,
0.0, 0.0, 1.0, 1.0,
0.0, 1.0, 0.0, 1.0]);

// This triangle is yellow, cyan, and magenta.
triangle2Colors = new Float32Array([
// R, G, B, A
1.0, 1.0, 0.0, 1.0,
0.0, 1.0, 1.0, 1.0,
1.0, 0.0, 1.0, 1.0]);

// This triangle is white, gray, and black.
triangle3Colors = new Float32Array([
// R, G, B, A
1.0, 1.0, 1.0, 1.0,
0.5, 0.5, 0.5, 1.0,
0.0, 0.0, 0.0, 1.0]);

startRendering();
}
}

// Execute the main entry point
main();
```
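The rotation in `render()` is driven purely by wall-clock time: `Date.now() % 10000` maps each 10-second window onto 0–360 degrees, which is then converted to radians, since glMatrix's rotation functions expect radians. Here's that timing math pulled out into a small standalone sketch (the function name is just for illustration):

```javascript
// Standalone sketch of the timing math used in render():
// one full 360-degree rotation every 10 000 ms, converted to radians.
function rotationAngleRadians(nowMs) {
    var time = nowMs % 10000;                   // position within the 10 s cycle
    var angleInDegrees = (360.0 / 10000.0) * time;
    return angleInDegrees / 57.2957795;         // degrees -> radians (180 / pi)
}

// 2 500 ms into any cycle is a quarter turn: 90 degrees, ~pi/2 radians.
console.log(rotationAngleRadians(2500));       // ~1.5707963
```

Because the angle is derived from absolute time rather than a per-frame delta, the rotation speed stays consistent even if the browser drops or delays animation frames.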