In this post, we’re going to wrap up our air hockey project by adding touch event handling and basic collision detection, with support for Android, iOS, and emscripten.
The first thing we’ll do is update the core to add touch interaction to the game. To do that, we’ll need some helper functions in a new core file called geometry.h.
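The beginning of geometry.h isn’t shown in this excerpt; based on how the types are used below, it probably looks something like this sketch (the includes and field names are inferred from the rest of this post):

#include <string.h>
#include "linmath.h"

typedef struct {
    vec3 center;
    float radius;
} Sphere;

typedef struct {
    vec3 point;
    vec3 vector;
} Ray;

typedef struct {
    vec3 point;
    vec3 normal;
} Plane;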
These typedefs build on linmath.h to add the basic types that we’ll use in our code. Now let’s wrap up geometry.h:
static inline int sphere_intersects_ray(Sphere sphere, Ray ray);
static inline float distance_between(vec3 point, Ray ray);
static inline void ray_intersection_point(vec3 result, Ray ray, Plane plane);
static inline int sphere_intersects_ray(Sphere sphere, Ray ray) {
    if (distance_between(sphere.center, ray) < sphere.radius)
        return 1;
    return 0;
}
static inline float distance_between(vec3 point, Ray ray) {
    vec3 p1_to_point;
    vec3_sub(p1_to_point, point, ray.point);

    vec3 p2_to_point;
    vec3 translated_ray_point;
    vec3_add(translated_ray_point, ray.point, ray.vector);
    vec3_sub(p2_to_point, point, translated_ray_point);

    // The length of the cross product gives the area of an imaginary
    // parallelogram having the two vectors as sides. A parallelogram can be
    // thought of as consisting of two triangles, so this is the same as
    // twice the area of the triangle defined by the two vectors.
    // http://en.wikipedia.org/wiki/Cross_product#Geometric_meaning
    vec3 cross_product;
    vec3_mul_cross(cross_product, p1_to_point, p2_to_point);
    float area_of_triangle_times_two = vec3_len(cross_product);
    float length_of_base = vec3_len(ray.vector);

    // The area of a triangle is also equal to (base * height) / 2. In
    // other words, the height is equal to (area * 2) / base. The height
    // of this triangle is the distance from the point to the ray.
    float distance_from_point_to_ray = area_of_triangle_times_two / length_of_base;
    return distance_from_point_to_ray;
}
// http://en.wikipedia.org/wiki/Line-plane_intersection
// This also treats rays as if they were infinite. It will return a
// point full of NaNs if there is no intersection point.
static inline void ray_intersection_point(vec3 result, Ray ray, Plane plane) {
    vec3 ray_to_plane_vector;
    vec3_sub(ray_to_plane_vector, plane.point, ray.point);

    float scale_factor = vec3_mul_inner(ray_to_plane_vector, plane.normal)
                       / vec3_mul_inner(ray.vector, plane.normal);

    vec3 intersection_point;
    vec3 scaled_ray_vector;
    vec3_scale(scaled_ray_vector, ray.vector, scale_factor);
    vec3_add(intersection_point, ray.point, scaled_ray_vector);
    memcpy(result, intersection_point, sizeof(intersection_point));
}
We’ll do a line-sphere intersection test to see if we’ve touched the mallet using our fingers or a mouse. Once we’ve grabbed the mallet, we’ll do a line-plane intersection test to determine where to place the mallet on the board.
We’ll now begin with the code for handling a touch press:
void on_touch_press(float normalized_x, float normalized_y) {
    Ray ray = convert_normalized_2D_point_to_ray(normalized_x, normalized_y);

    // Now test if this ray intersects with the mallet by creating a
    // bounding sphere that wraps the mallet.
    Sphere mallet_bounding_sphere = (Sphere) {
        {blue_mallet_position[0],
         blue_mallet_position[1],
         blue_mallet_position[2]},
        mallet_height / 2.0f};

    // If the ray intersects (if the user touched a part of the screen that
    // intersects the mallet's bounding sphere), then set mallet_pressed to
    // true.
    mallet_pressed = sphere_intersects_ray(mallet_bounding_sphere, ray);
}
static Ray convert_normalized_2D_point_to_ray(float normalized_x, float normalized_y) {
    // We'll convert these normalized device coordinates into world-space
    // coordinates. We'll pick a point on the near and far planes, and draw a
    // line between them. To do this transform, we need to first multiply by
    // the inverse matrix, and then we need to undo the perspective divide.
    vec4 near_point_ndc = {normalized_x, normalized_y, -1, 1};
    vec4 far_point_ndc = {normalized_x, normalized_y, 1, 1};

    vec4 near_point_world, far_point_world;
    mat4x4_mul_vec4(near_point_world, inverted_view_projection_matrix, near_point_ndc);
    mat4x4_mul_vec4(far_point_world, inverted_view_projection_matrix, far_point_ndc);

    // Why are we dividing by W? We multiplied our vector by an inverse
    // matrix, so the W value that we end up with is actually the *inverse* of
    // what the projection matrix would create. By dividing all 3 components
    // by W, we effectively undo the hardware perspective divide.
    divide_by_w(near_point_world);
    divide_by_w(far_point_world);

    // We don't care about the W value anymore, because our points are now
    // in world coordinates.
    vec3 near_point_ray = {near_point_world[0], near_point_world[1], near_point_world[2]};
    vec3 far_point_ray = {far_point_world[0], far_point_world[1], far_point_world[2]};
    vec3 vector_between;
    vec3_sub(vector_between, far_point_ray, near_point_ray);

    return (Ray) {
        {near_point_ray[0], near_point_ray[1], near_point_ray[2]},
        {vector_between[0], vector_between[1], vector_between[2]}};
}
static void divide_by_w(vec4 vector) {
    vector[0] /= vector[3];
    vector[1] /= vector[3];
    vector[2] /= vector[3];
}
This code first takes the normalized touch coordinates that it receives from the Android, iOS, or emscripten front ends and turns them into a 3D ray in world space. It then intersects that ray with a bounding sphere around the mallet to see if we’ve touched the mallet.
Let’s continue with the code for handling a touch drag:
void on_touch_drag(float normalized_x, float normalized_y) {
    if (mallet_pressed == 0)
        return;

    Ray ray = convert_normalized_2D_point_to_ray(normalized_x, normalized_y);

    // Define a plane representing our air hockey table.
    Plane plane = (Plane) {{0, 0, 0}, {0, 1, 0}};

    // Find out where the touched point intersects the plane
    // representing our table. We'll move the mallet along this plane.
    vec3 touched_point;
    ray_intersection_point(touched_point, ray, plane);

    memcpy(previous_blue_mallet_position, blue_mallet_position,
           sizeof(blue_mallet_position));

    // Clamp to bounds
    blue_mallet_position[0] =
        clamp(touched_point[0], left_bound + mallet_radius, right_bound - mallet_radius);
    blue_mallet_position[1] = mallet_height / 2.0f;
    blue_mallet_position[2] =
        clamp(touched_point[2], 0.0f + mallet_radius, near_bound - mallet_radius);

    // Now test if mallet has struck the puck.
    vec3 mallet_to_puck;
    vec3_sub(mallet_to_puck, puck_position, blue_mallet_position);
    float distance = vec3_len(mallet_to_puck);

    if (distance < (puck_radius + mallet_radius)) {
        // The mallet has struck the puck. Now send the puck flying
        // based on the mallet velocity.
        vec3_sub(puck_vector, blue_mallet_position, previous_blue_mallet_position);
    }
}
static float clamp(float value, float min, float max) {
    return fmin(max, fmax(value, min));
}
Once we’ve grabbed the mallet, we move it across the air hockey table by intersecting each new touch point with the table to determine the mallet’s new position, and then we move the mallet there. We also check if the mallet has struck the puck, and if so, we use the mallet’s movement vector to give the puck its new velocity.
We next need to update the lines that initialize our objects inside on_surface_created() as follows:
The new linmath.h has merged in the custom code we added to our matrix_helper.h, so we no longer need that file. As part of those changes, our perspective method call in on_surface_changed() now needs the angle entered in radians, so let’s update that method call as follows:
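The updated call isn’t shown in this excerpt, but it presumably looks something like the following sketch (the 45-degree field of view and the near and far plane values are assumptions carried over from the previous post):

mat4x4_perspective(projection_matrix, deg_to_radf(45.0f),
                   (float) width / (float) height, 1.0f, 10.0f);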
We can then update on_draw_frame() to add the new movement code. Let’s first add the following to the top, right after the call to glClear():
// Translate the puck by its vector
vec3_add(puck_position, puck_position, puck_vector);

// If the puck struck a side, reflect it off that side.
if (puck_position[0] < left_bound + puck_radius
    || puck_position[0] > right_bound - puck_radius) {
    puck_vector[0] = -puck_vector[0];
    vec3_scale(puck_vector, puck_vector, 0.9f);
}
if (puck_position[2] < far_bound + puck_radius
    || puck_position[2] > near_bound - puck_radius) {
    puck_vector[2] = -puck_vector[2];
    vec3_scale(puck_vector, puck_vector, 0.9f);
}

// Clamp the puck position.
puck_position[0] =
    clamp(puck_position[0], left_bound + puck_radius, right_bound - puck_radius);
puck_position[2] =
    clamp(puck_position[2], far_bound + puck_radius, near_bound - puck_radius);

// Friction factor
vec3_scale(puck_vector, puck_vector, 0.99f);
This code will update the puck’s position and cause it to go bouncing around the table. We’ll also need to add the following right after the call to mat4x4_mul(view_projection_matrix, projection_matrix, view_matrix):
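That extra line isn’t shown in this excerpt; it computes the inverted view projection matrix used by convert_normalized_2D_point_to_ray(), and with linmath.h it would be:

mat4x4_invert(inverted_view_projection_matrix, view_projection_matrix);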
With these changes in place, we now need to link in the touch events from each platform. We’ll start off with Android:
MainActivity.java
In MainActivity.java, we first need to update the way that we create the renderer in onCreate():
final RendererWrapper rendererWrapper = new RendererWrapper(this);
// ...
glSurfaceView.setRenderer(rendererWrapper);
Let’s add the touch listener:
glSurfaceView.setOnTouchListener(new OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        if (event != null) {
            // Convert touch coordinates into normalized device
            // coordinates, keeping in mind that Android's Y
            // coordinates are inverted.
            final float normalizedX = (event.getX() / (float) v.getWidth()) * 2 - 1;
            final float normalizedY = -((event.getY() / (float) v.getHeight()) * 2 - 1);

            if (event.getAction() == MotionEvent.ACTION_DOWN) {
                glSurfaceView.queueEvent(new Runnable() {
                    @Override
                    public void run() {
                        rendererWrapper.handleTouchPress(normalizedX, normalizedY);
                    }});
            } else if (event.getAction() == MotionEvent.ACTION_MOVE) {
                glSurfaceView.queueEvent(new Runnable() {
                    @Override
                    public void run() {
                        rendererWrapper.handleTouchDrag(normalizedX, normalizedY);
                    }});
            }

            return true;
        } else {
            return false;
        }
    }});
This touch listener takes the incoming touch events from the user, converts them into normalized coordinates in OpenGL’s normalized device coordinate space, and then calls the renderer wrapper which will pass the event on into our native code.
RendererWrapper.java
We’ll need to add the following to RendererWrapper.java:
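The Java additions aren’t reproduced in this excerpt: handleTouchPress() and handleTouchDrag() simply forward the normalized coordinates to native methods. On the C side (for example in jni/renderer_wrapper.c), the matching JNI bindings might look roughly like the sketch below; the package, class, and method names here are placeholders and have to match however the native methods are actually declared in Java:

#include <jni.h>
#include "game.h"

/* Placeholder names: adjust the Java_<package>_<class>_<method> parts to
   match the package and class that declare the native methods. */
JNIEXPORT void JNICALL Java_com_example_airhockey_RendererWrapper_nativeOnTouchPress(
    JNIEnv* env, jobject object, jfloat normalized_x, jfloat normalized_y) {
    on_touch_press(normalized_x, normalized_y);
}

JNIEXPORT void JNICALL Java_com_example_airhockey_RendererWrapper_nativeOnTouchDrag(
    JNIEnv* env, jobject object, jfloat normalized_x, jfloat normalized_y) {
    on_touch_drag(normalized_x, normalized_y);
}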
We now have everything in place for Android, and if we run the app, it should look similar to the image below:
Adding support for iOS
To add support for iOS, we need to update ViewController.m to handle touch events. To do that and update the frame rate at the same time, let’s add the following to viewDidLoad:, before the call to [self setupGL]:
To listen to the touch events, we need to override a few methods. Let’s add the following methods before - (void)glkView:(GLKView *)view drawInRect:(CGRect)rect:
This is similar to the Android code in that it takes the input touch event, converts it to OpenGL’s normalized device coordinate space, and then sends it on to our game code.
Our iOS app should look similar to the following image:
Adding support for emscripten
Adding support for emscripten is just as easy. Let’s first add the following to the top of main.c:
static void handle_input();
// ...
int is_dragging;
At the beginning of do_frame(), add a call to handle_input();:
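The body of handle_input() isn’t shown in this excerpt. A minimal sketch, assuming main.c uses the GLFW-style windowing and input calls that emscripten provides (and the file-scope width and height values that were moved out of init_gl()), might look like this:

static void handle_input()
{
    // Poll the current state of the left mouse button.
    if (glfwGetMouseButton(GLFW_MOUSE_BUTTON_LEFT) == GLFW_PRESS) {
        int x_pos, y_pos;
        glfwGetMousePos(&x_pos, &y_pos);

        // Convert to normalized device coordinates, flipping Y.
        const float normalized_x = ((float) x_pos / (float) width) * 2.0f - 1.0f;
        const float normalized_y = -(((float) y_pos / (float) height) * 2.0f - 1.0f);

        if (is_dragging == 0) {
            // A new press; see if it grabbed the mallet.
            is_dragging = 1;
            on_touch_press(normalized_x, normalized_y);
        } else {
            on_touch_drag(normalized_x, normalized_y);
        }
    } else {
        is_dragging = 0;
    }
}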
This code sets is_dragging depending on whether we’ve just clicked the primary mouse button or whether we’re currently dragging the mouse, and calls either on_touch_press or on_touch_drag accordingly. The code to normalize the coordinates is the same as on Android and iOS; indeed, a case could be made for abstracting it into the common game code and just passing in the raw coordinates relative to the view size.
After compiling with emmake make, we should get output similar to the below:
Exploring further
That concludes our air hockey project! The full source code for this lesson can be found at the GitHub project. You can find a more in-depth look at the concepts behind the project from the perspective of Java Android in OpenGL ES 2 for Android: A Quick-Start Guide. For exploring further, there are many things you could add, like improved graphics, support for sound, a simple AI, multiplayer (on the same device), scoring, or a menu system.
Whether you end up using a commercial cross-platform solution like Unity or Corona, or whether you decide to go the independent route, I hope this series was helpful to you and, most importantly, that you enjoy your future projects and have a lot of fun with them. 🙂
From the libgdx blog: “Intel Developer Zone and are holding a couple of code fests over the next few weeks in Berlin, New York and Santa Clara. During these events, Intel will help you port your Android apps using native code to x86, free of charge!”
Zero to Sixty in One Second – the developer & designer behind acko.net has redesigned his header and website using WebGL, and I have to say that it looks very cool.
For this post in the air hockey series, we’ll learn how to render our scene from a 3D perspective, as well as how to add a puck and two mallets to the scene. We’ll also see how easy it is to bring these changes to Android, iOS, and emscripten.
The first thing we’ll do is add support for a matrix library so we can use the same matrix math on all three platforms, and then we’ll introduce the changes to our code from the top down. There are a lot of libraries out there, so I decided to use linmath.h by Wolfgang Draxinger for its simplicity and compactness. Since it’s on GitHub, we can easily add it to our project by running the following git command from the root airhockey/ folder:
We’ve added all of the new includes, constants, variables, and function declarations that we’ll need for our new game code. We’ll use Table, Puck, and Mallet to represent our drawable objects, TextureProgram and ColorProgram to represent our shader programs, and the mat4x4 (a datatype from linmath.h) matrices for our OpenGL matrices. In our draw loop, we’ll call position_table_in_scene() to position the table, and position_object_in_scene() to position our other objects.
For those of you who have also followed the Java tutorials from OpenGL ES 2 for Android: A Quick-Start Guide, you’ll recognize that this has a lot in common with the air hockey project from the first part of the book. The code for that project can be freely downloaded from The Pragmatic Bookshelf.
Our new on_surface_changed(int width, int height) now takes two parameters for the width and the height; it sets up a projection matrix, and then sets up the view matrix to be slightly above and behind the origin, with an eye position of (0, 1.2, 2.2).
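Putting that together, the body of on_surface_changed() probably looks roughly like this (a sketch built from the matrix helpers described later in this post; the 45-degree field of view and the near and far planes are assumptions):

void on_surface_changed(int width, int height) {
    glViewport(0, 0, width, height);

    // The helper adapted from Android's perspectiveM() takes the angle in degrees.
    mat4x4_perspective(projection_matrix, 45,
                       (float) width / (float) height, 1.0f, 10.0f);

    // Position the eye slightly above and behind the origin, looking at the origin.
    mat4x4_look_at(view_matrix,
                   0.0f, 1.2f, 2.2f,
                   0.0f, 0.0f, 0.0f,
                   0.0f, 1.0f, 0.0f);
}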
Our new on_draw_frame() positions and draws the table, mallets, and the puck.
Because we changed the definition of on_surface_changed(), we also have to change the declaration in game.h. Change void on_surface_changed(); to void on_surface_changed(int width, int height);.
Adding new helper functions
static void position_table_in_scene() {
    // The table is defined in terms of X & Y coordinates, so we rotate it
    // 90 degrees to lie flat on the XZ plane.
    mat4x4 rotated_model_matrix;
    mat4x4_identity(model_matrix);
    mat4x4_rotate_X(rotated_model_matrix, model_matrix, deg_to_radf(-90.0f));
    mat4x4_mul(
        model_view_projection_matrix, view_projection_matrix, rotated_model_matrix);
}

static void position_object_in_scene(float x, float y, float z) {
    mat4x4_identity(model_matrix);
    mat4x4_translate_in_place(model_matrix, x, y, z);
    mat4x4_mul(model_view_projection_matrix, view_projection_matrix, model_matrix);
}
These functions update the matrices to let us position the table, puck, and mallets in the scene. We’ll define all of the extra functions that we need soon.
Adding new shaders
Now we’ll start drilling down into each part of the program and make the changes necessary for our game code to work. Let’s begin by updating our shaders. First, let’s rename our vertex shader shader.vsh to texture_shader.vsh and update it as follows:
After the imports, this is the code to create and draw the table data. This is essentially the same as what we had before, with the coordinates adjusted a bit to change the table into a rectangle.
Generating circles and cylinders
Before we can draw a puck or a mallet, we’ll need to add some helper functions to draw a circle or a cylinder. Let’s define those now:
We first need two helper functions to calculate the size of a circle or a cylinder in terms of vertices. A circle drawn as a triangle fan has one vertex for the center, num_points vertices around the circle, and one more vertex to close the circle. An open-ended cylinder drawn as a triangle strip doesn’t have a center point, but it does have two vertices for each point around the circle, and two more vertices to close off the circle.
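Those two size helpers aren’t reproduced in this excerpt; based on the description above, they’re essentially the following (the function names are assumptions):

// One vertex for the center, one per point, plus one more to close the fan.
static inline int size_of_circle_in_vertices(int num_points) {
    return 1 + (num_points + 1);
}

// Two vertices per point, plus two more to close the strip.
static inline int size_of_open_cylinder_in_vertices(int num_points) {
    return (num_points + 1) * 2;
}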
static inline int gen_circle(float* out, int offset,
    float center_x, float center_y, float center_z,
    float radius, int num_points)
{
    out[offset++] = center_x;
    out[offset++] = center_y;
    out[offset++] = center_z;

    int i;
    for (i = 0; i <= num_points; ++i) {
        float angle_in_radians = ((float) i / (float) num_points)
                               * ((float) M_PI * 2.0f);
        out[offset++] = center_x + radius * cos(angle_in_radians);
        out[offset++] = center_y;
        out[offset++] = center_z + radius * sin(angle_in_radians);
    }

    return offset;
}
This code will generate a circle, given a center point, a radius, and the number of points around the circle.
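The cylinder generator itself isn’t included above; a sketch that follows the same pattern as gen_circle() might look like this (the name gen_cylinder and its parameter list are assumptions):

static inline int gen_cylinder(float* out, int offset,
    float center_x, float center_y, float center_z,
    float height, float radius, int num_points)
{
    const float y_start = center_y - (height / 2.0f);
    const float y_end = center_y + (height / 2.0f);

    int i;
    for (i = 0; i <= num_points; ++i) {
        float angle_in_radians = ((float) i / (float) num_points)
                               * ((float) M_PI * 2.0f);
        float x_position = center_x + radius * cos(angle_in_radians);
        float z_position = center_z + radius * sin(angle_in_radians);

        // Two vertices per point: one on the bottom edge, one on the top.
        out[offset++] = x_position;
        out[offset++] = y_start;
        out[offset++] = z_position;

        out[offset++] = x_position;
        out[offset++] = y_end;
        out[offset++] = z_position;
    }

    return offset;
}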
This code will generate the vertices for an open-ended cylinder. Note that for both the circle and the cylinder, the loop goes from 0 to num_points, so the first and last points around the circle are duplicated in order to close the loop around the circle.
A mallet contains two circles and two open-ended cylinders, positioned and sized so that the mallet’s base is wider and shorter than the mallet’s handle.
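We also need a small degree-to-radian conversion helper; it isn’t shown in this excerpt, but given how it’s called elsewhere in this post (deg_to_radf), it’s essentially:

static inline float deg_to_radf(float deg) {
    return deg * (float) M_PI / 180.0f;
}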
Since C’s trigonometric functions expect passed-in values to be in radians, we’ll use this function to convert degrees into radians, where needed.
Adding matrix helper functions
While linmath.h contains a lot of useful functions, there are a few missing that we need for our game code. Create a new header file called matrix.h, and begin by adding the following code, all adapted from Android’s OpenGL Matrix class:
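The perspective helper itself isn’t reproduced here; adapted from Android’s perspectiveM(), a sketch using linmath.h’s column-major mat4x4 layout might look like this (taking the field of view in degrees, as the Android version does):

static inline void mat4x4_perspective(mat4x4 m, float y_fov_in_degrees,
                                      float aspect, float n, float f)
{
    const float angle_in_radians = (float) (y_fov_in_degrees * M_PI / 180.0);
    const float a = 1.0f / (float) tan(angle_in_radians / 2.0f);

    m[0][0] = a / aspect;
    m[0][1] = 0.0f;
    m[0][2] = 0.0f;
    m[0][3] = 0.0f;

    m[1][0] = 0.0f;
    m[1][1] = a;
    m[1][2] = 0.0f;
    m[1][3] = 0.0f;

    m[2][0] = 0.0f;
    m[2][1] = 0.0f;
    m[2][2] = -((f + n) / (f - n));
    m[2][3] = -1.0f;

    m[3][0] = 0.0f;
    m[3][1] = 0.0f;
    m[3][2] = -((2.0f * f * n) / (f - n));
    m[3][3] = 0.0f;
}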
We’ll use mat4x4_perspective() to set up a perspective projection matrix.
static inline void mat4x4_translate_in_place(mat4x4 m, float x, float y, float z)
{
    int i;
    for (i = 0; i < 4; ++i) {
        m[3][i] += m[0][i] * x
                 + m[1][i] * y
                 + m[2][i] * z;
    }
}
This helper function lets us translate a matrix in place.
static inline void mat4x4_look_at(mat4x4 m,
    float eyeX, float eyeY, float eyeZ,
    float centerX, float centerY, float centerZ,
    float upX, float upY, float upZ)
{
    // See the OpenGL GLUT documentation for gluLookAt for a description
    // of the algorithm. We implement it in a straightforward way:
    float fx = centerX - eyeX;
    float fy = centerY - eyeY;
    float fz = centerZ - eyeZ;

    // Normalize f
    vec3 f_vec = {fx, fy, fz};
    float rlf = 1.0f / vec3_len(f_vec);
    fx *= rlf;
    fy *= rlf;
    fz *= rlf;

    // compute s = f x up (x means "cross product")
    float sx = fy * upZ - fz * upY;
    float sy = fz * upX - fx * upZ;
    float sz = fx * upY - fy * upX;

    // and normalize s
    vec3 s_vec = {sx, sy, sz};
    float rls = 1.0f / vec3_len(s_vec);
    sx *= rls;
    sy *= rls;
    sz *= rls;

    // compute u = s x f
    float ux = sy * fz - sz * fy;
    float uy = sz * fx - sx * fz;
    float uz = sx * fy - sy * fx;

    m[0][0] = sx;
    m[0][1] = ux;
    m[0][2] = -fx;
    m[0][3] = 0.0f;

    m[1][0] = sy;
    m[1][1] = uy;
    m[1][2] = -fy;
    m[1][3] = 0.0f;

    m[2][0] = sz;
    m[2][1] = uz;
    m[2][2] = -fz;
    m[2][3] = 0.0f;

    m[3][0] = 0.0f;
    m[3][1] = 0.0f;
    m[3][2] = 0.0f;
    m[3][3] = 1.0f;

    mat4x4_translate_in_place(m, -eyeX, -eyeY, -eyeZ);
}
We can use mat4x4_look_at() like a camera: given an eye position, a point to look at, and an up vector, it positions and orients the scene accordingly.
Adding shader program wrappers
We’re almost done with the changes to our core code. Let’s wrap up those changes by adding the following code:
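The full listing isn’t included in this excerpt, but based on how TextureProgram and ColorProgram were described earlier, the wrapper types are probably structs along these lines (a sketch; the exact field names are assumptions, and the GL types come in via glwrapper.h):

typedef struct {
    GLuint program;

    GLint a_position_location;
    GLint a_texture_coordinates_location;
    GLint u_mvp_matrix_location;
    GLint u_texture_unit_location;
} TextureProgram;

typedef struct {
    GLuint program;

    GLint a_position_location;
    GLint u_mvp_matrix_location;
    GLint u_color_location;
} ColorProgram;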
We then need to update renderer_wrapper.c and change the call to on_surface_changed(); to on_surface_changed(width, height);. Once we’ve done that, we should be able to run the app on our Android device, and it should look similar to the following image:
Adding support for iOS
For iOS, we just need to open up the Xcode project and add the necessary references to linmath.h and our new core files to the appropriate folder groups, and then we need to update ViewController.m and change on_surface_changed(); to the following:
Adding support for emscripten
For emscripten, we just need to update main.c, move the constants width and height from inside init_gl() to outside the function near the top of the file, and update the call to on_surface_changed(); to on_surface_changed(width, height);. We can then build the project by calling emmake make, which should produce a page that looks as follows:
See how easy that was? Now that we have a minimal cross-platform framework in place, it’s very easy for us to bring changes to the core code across to each platform.
Exploring further
The full source code for this lesson can be found at the GitHub project. In the next post, we’ll take a look at user input so we can move our mallet around the screen.
As some of you may already know, Apple recently announced the iPhone 5s & 5c at their annual iPhone event, and one of the new updates is that the iPhone 5s will also be coming with support for OpenGL ES 3.0! Google announced support for OpenGL ES 3.0 with their release of Android 4.3 not too long ago, so the new version is slowly making its way onto mobile devices.
OpenGL ES 3.0 is backwards-compatible with OpenGL ES 2.0, so everything you learned about OpenGL ES 2.0 still applies. This post from Phoronix goes into more detail about what the new version brings: A Look At OpenGL ES 3.0: Lots Of Good Stuff.
On to the roundup:
Ghoshehsoft’s Blog – A look at many topics related to OpenGL ES 2.0 on Android.
Project Anarchy – “Project Anarchy is a complete end to end game engine and state-of-the-art toolset for mobile. Project Anarchy also comprises a vibrant game development community centered right here at www.projectanarchy.com. Project Anarchy includes an entirely free license to ship your game on iOS, Android and Tizen platforms.”
The first thing we’ll do is add new supporting files to the common platform code, as we’ll need them for both iOS and emscripten. These new files should go in /airhockey/src/platform/common:
We’ll use these two functions to read data from a file and return it in a memory buffer, and release that buffer when we no longer need to keep it around. For iOS & emscripten, our asset loading code will wrap these file loading functions.
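The functions themselves aren’t listed in this excerpt; a minimal sketch built on the C standard library (the FileData struct and the names get_file_data / release_file_data are assumptions) could look like this:

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    long data_length;
    void* data;
} FileData;

// Read an entire file into a heap-allocated buffer.
static FileData get_file_data(const char* path) {
    FileData file_data = {0, NULL};
    FILE* stream = fopen(path, "rb");
    if (stream == NULL)
        return file_data;

    fseek(stream, 0, SEEK_END);
    file_data.data_length = ftell(stream);
    fseek(stream, 0, SEEK_SET);

    file_data.data = malloc(file_data.data_length);
    fread(file_data.data, 1, file_data.data_length, stream);
    fclose(stream);
    return file_data;
}

// Free the buffer returned by get_file_data().
static void release_file_data(const FileData* file_data) {
    free(file_data->data);
}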
To load in an asset that’s been bundled with the application, we first prefix the path with ‘/assets/’, and then we use the mainBundle of the application to get the path for the resource. Once we’ve done that, we can use the regular file reading code that we’ve defined in platform_file_utils.c.
iOS experts: When I was researching how to do this, I wasn’t sure if this was the best way or even the right way, but it does seem to work. I’d love to know if there’s another way to do this that is more appropriate, perhaps just by grabbing the path of the application and concatenating that with the relative path?
Aside from adding this new file, we just need to add some references to the project and then we’ll be able to compile & run the app.
Adding the libpng files
Right-click the project and select Add Files to “Air Hockey”…. Add the following C files from the libpng folder, and add them as a new folder group:
Remove the common folder group that may be left there from the last lesson, and then add all of the files from the core folder as a new folder group. Do the same for all of the files in /platform/common. Finally, add the assets folder as a folder reference, not as a folder group. That will link the assets folder directly into the project and package those files with the application.
We’ll also need to link to libz.dylib. To do this, click on the ‘airhockey’ target, select Build Phases, expand Link Binary With Libraries, and add a reference to ‘libz.dylib’.
The Xcode Project Navigator should look similar to the below:
It might make more sense to link in the libpng sources as a static library somehow, but I found that this compiled very fast even from a clean build. Once you run the application in the simulator, it should look similar to the following image:
Now the same code that we used in Android is running on iOS to load in a texture, with very little work required to customize it for iOS! One of the advantages of this approach is that we can also take advantage of the vastly superior debugging and profiling capabilities of Xcode (as compared to what you get in Eclipse with the NDK!), and Xcode can also build the project far faster than the Android tools can, leading to quicker iteration times.
Exploring further
The full source code for this lesson can be found at the GitHub project. In the next post, we’ll cover an emscripten target, and we’ll see that it also won’t take much work to support. As always, let me know your feedback. 🙂
In the last post, we covered how to call OpenGL from C on Android by using the NDK; in this post, we’ll call into the same common code from an Objective-C codebase which will run on an iOS device.
We’ll be using Xcode in this lesson. There are other IDEs available, such as AppCode. If you don’t have a Mac available for development, there is more info on alternatives available here:
For this article, I used Xcode 4.6.3 with the iOS 6.1 Simulator.
Getting started
We’ll create a new project from an Xcode template with support for OpenGL already configured. You can follow along all of the code at the GitHub project.
To create the new project, open Xcode, and select File->New->Project…. When asked to choose a template, select Application under iOS, and then select OpenGL Game and select Next. Enter ‘Air Hockey’ as the Product Name, enter whatever you prefer for the Organization Name and Company Identifier, select Universal next to Devices, and make sure that Use Storyboards and Use Automatic Reference Counting are both checked, then select Next.
Place the project in a new folder called ios inside of the airhockey folder that we worked with from the previous post. This means that we should now have three folders inside of airhockey: android, common, and ios. Don’t check Create local git repository for this project, as we already setup a git repository in the previous post.
Once the project’s been created, you should see a new folder called Air Hockey inside of the ios folder, and inside of Air Hockey, there’ll be another folder called Air Hockey, as well as a project folder called Air Hockey.xcodeproj.
Flattening the Xcode project
I personally prefer to flatten this all out and put everything in the ios folder, and the following steps will show you how to do this; please feel free to skip this section if you don’t mind having the nested folders:
Close the project in Xcode, and then move all of the files inside of the second Air Hockey folder so that they are directly inside of the ios folder.
Move Air Hockey.xcodeproj so that it’s also inside of the ios folder. The extra Air Hockey folders can now be deleted.
Right-click Air Hockey.xcodeproj in the Finder and select Show Package Contents.
Edit project.pbxproj in a text editor, and delete all occurrences of ‘Air Hockey/’.
Go back to the ios folder and open up Air Hockey.xcodeproj in Xcode.
Select View->Navigator->Show Project Navigator and View->Utilities->Show File Inspector.
Select Air Hockey in the Project Navigator on the left. On the right in the File Inspector, click the button under Relative to Group, to the right of Air Hockey, select some random folder (this is to work around a bug) and select Choose, then click it again and select the ios folder this time.
Right-click the project root in the Project Navigator (the item that looks like “Air Hockey, 1 target, iOS SDK 6.1”).
Select Add Files to “Air Hockey”….
Select the common folder, which will be located next to the ios folder, make sure that Copy items into destination group’s folder (if needed) is not checked, that Create groups for any added folders is selected, and that Air Hockey is selected next to Add to targets, then select Add.
You should now see the common folder appear in the Project Navigator, with game.h and game.c inside.
Understanding how iOS manages OpenGL through the GLKit framework
When we created a new project with the OpenGL Game template, Xcode set things up so that when the application is launched, it displays an OpenGL view on the screen, and drives that view with a special view controller. A view controller in iOS manages a set of views, and can be thought of as being sort of like the iOS counterpart of an Android Activity.
When the application is launched, the OS reads the storyboard file, which tells it to create a new view controller that is subclassed from GLKViewController and add it to the application’s window. This view controller is part of the GLKit framework and provides an OpenGL ES rendering loop. It can be configured with a preferred frame rate, and it can also automatically handle application-level events, such as pausing the rendering loop when the application is about to go to the background.
That GLKViewController contains a GLKView as its root view, which is what creates and manages the actual OpenGL framebuffer. This GLKView is automatically linked to the GLKViewController, so that when it’s time to draw a new frame, it will call a method called drawInRect: in our GLKViewController.
Before moving on to the next step, you may want to check out the default project by running it in the simulator, just to see what it looks like.
Calling our common code from the view controller
The default code in the view controller does a lot more than we need, since it creates an entire demo scene. We want to keep things simple for now and just see that we can call OpenGL from C and wrap that with the view controller, so let’s open up ViewController.m, delete everything, and start off by adding the following code:
This includes game.h so that we can call our common functions, defines a new property to hold the EAGL context, and declares a method called setupGL: which we’ll define soon. Let’s continue the code:
Once the GLKView has been loaded into memory, viewDidLoad: will be called so that we can initialize our OpenGL context. We initialize an OpenGL ES 2 context here and assign it to the context property by calling:
This allocates a new instance of an EAGLContext, which manages all of the state and resources required to render using OpenGL ES. We then initialize that instance by calling initWithAPI:, passing in a special token which tells it to initialize the context for OpenGL ES 2 rendering.
For those of you not used to Objective-C syntax, here’s what this could look like if it were using Java syntax:
this.context = new EAGLContext().initWithAPI(kEAGLRenderingAPIOpenGLES2);
Once we have an EAGLContext, we assign it to the view, we configure the view’s depth buffer format, and then we call the following to do further OpenGL setup:
[self setupGL]
We’ll define this method further below. dealloc: will be called when the view controller is destroyed, so there we release the EAGLContext if needed by calling the following:
Here is where we call our game code to do the actual rendering. In setupGL:, we set the EAGLContext so that we have a valid context to use for drawing, and then we call on_surface_created() and on_surface_changed() from our common code. Every time a new frame needs to be drawn, drawInRect: will be called, so there we call on_draw_frame().
Why don’t we also need to set the context from drawInRect:? This method is actually a delegate method which is declared in GLKViewDelegate and called by the GLKView, and the view takes care of setting the context and framebuffer target for us before it calls our delegate. For those of you from the Java world, this is like having our class implement an interface and passing ourselves as a listener to another class, so that it can call us back via that interface. Our view controller automatically set itself as the delegate when it was linked to the GLKView by the storyboard.
We don’t have to do things this way — we could also subclass GLKView instead and override its drawRect: method. Delegation is simply a preferred pattern in iOS when subclassing isn’t required.
As a quick reminder, here’s what we had defined in our three functions from game.c:
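Those functions, from the earlier posts in this series, were essentially along these lines:

void on_surface_created() {
    // Clear to red so we can tell that the common code is being called.
    glClearColor(1.0f, 0.0f, 0.0f, 0.0f);
}

void on_surface_changed() {
    // No-op for now.
}

void on_draw_frame() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}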
So, when we actually run our project, we should expect the screen to get cleared to red.
Before we can build and run the project, we’ll need to add a glwrapper.h header to the project, like we did for the Android project in the previous post. In the same folder as ViewController.m, add a new header file called glwrapper.h, and add the following contents:
#include <OpenGLES/ES2/gl.h>
Building and running the project
We should now be able to build and run the project in the iOS simulator. Click the play button to run the app and launch the simulator. Once it’s launched, you should see a screen similar to the following image:
And that’s it! By using GLKit, we can easily wrap our OpenGL code and call it from Objective-C.
To tell iOS and the App Store that our application should not be displayed to unsupported devices, we can add the key ‘opengles-2’ to Air Hockey-Info.plist, inside Required device capabilities.
Exploring further
The full source code for this lesson can be found at the GitHub project. For further reading, I recommend the following excellent intro to GLKit, which goes into more detail on using GLKView, GLKViewController and other areas of GLKit:
In the next part of this series, we’ll take a look at using emscripten to create a new project that also calls into our common code and compiles it for the web. I am coming to iOS from a background in Java and Android and I am new to iOS and Objective-C, so please let me know if anything doesn’t make sense here, and I’ll go and fix it up. 🙂