Lately, I’ve been more interested in learning about emulators and interpreters, down to the way that CPUs work at a low level. In university, the furthest down we got was C++ and we didn’t spend that much time there, so everything under that has always felt a little like black magic to me. To overcome that, I took a look into what it would take to write a toy emulator.
After briefly considering an NES emulator as my first project, I discovered the CHIP-8 VM and decided that it would make a really neat starting point. There are lots of resources for the CHIP-8, and its design is simple enough that an implementation can be completed over a weekend or even faster.
I decided to write the emulator using Rust, a systems programming language that I’ve been dabbling in on and off whenever I feel like taking a break from Java. In the past, I would have dabbled in C or C++ but as time goes by, I feel that it makes more sense to focus on Rust.
Why?
Like C and C++, Rust compiles down to native code and runs without the overhead of a VM while also giving you full access to the platform underneath. Unlike C and C++, it also has some nice features that make it feel modern and fresh, like memory safety without garbage collection, super-charged enums, and a great build system that has built-in support for dependencies, unit testing, and more.
In fact, the only real beef I have with Rust is that it’s still young, so its support for mobile development is not quite up to par with the C and C++ support provided by Apple and Google. I’m sure this will improve with time. (After spending some time with Swift, I would also wish for no semicolons and maybe nicer optional unwrapping, but, at least with the semicolons, that ship has probably already sailed. :))
By writing the interpreter in Rust and taking advantage of its unit testing features, it was easy to build up the CHIP-8 VM incrementally. I only ended up with a few pesky logic bugs, caused by uncertainty about the correct behavior in a couple of places and by a misuse of Rust's slicing syntax.
Once I was done with the Rust implementation, I just copy-pasted the whole thing into JavaScript, which you can try out at the end of this post. The code is available on GitHub, and here are a couple of other implementations worth a look:
I would also recommend checking out nand2tetris, a really interesting course which teaches you how to build your own toy CPU from NAND gates. The course can be audited for free on Coursera.
In the future, instead of copy/pasting to JS, I might be able to just compile straight to WebAssembly:
In this post in the air hockey series, we’re going to wrap up our air hockey project and add touch event handling and basic collision detection with support for Android, iOS, and emscripten.
The first thing we’ll do is update the core to add touch interaction to the game. We’ll first need to add some helper functions to a new core file called geometry.h.
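The type definitions themselves aren't shown in this excerpt, but a plausible reconstruction follows from how the functions below use them (field names like `sphere.center`, `ray.vector`, and `plane.normal` appear in the code):

```c
/* In linmath.h, vec3 is a typedef for float[3]; repeated here so this
 * sketch is self-contained. */
typedef float vec3[3];

/* A sphere is a center point plus a radius. */
typedef struct { vec3 center; float radius; } Sphere;

/* A ray is a starting point plus a direction vector. */
typedef struct { vec3 point; vec3 vector; } Ray;

/* A plane is a point on the plane plus a normal vector. */
typedef struct { vec3 point; vec3 normal; } Plane;
```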
These are a few typedefs that build upon linmath.h to add a few basic types that we’ll use in our code. Let’s wrap up geometry.h:
static inline int sphere_intersects_ray(Sphere sphere, Ray ray);
static inline float distance_between(vec3 point, Ray ray);
static inline void ray_intersection_point(vec3 result, Ray ray, Plane plane);
static inline int sphere_intersects_ray(Sphere sphere, Ray ray) {
if (distance_between(sphere.center, ray) < sphere.radius)
return 1;
return 0;
}
static inline float distance_between(vec3 point, Ray ray) {
vec3 p1_to_point;
vec3_sub(p1_to_point, point, ray.point);
vec3 p2_to_point;
vec3 translated_ray_point;
vec3_add(translated_ray_point, ray.point, ray.vector);
vec3_sub(p2_to_point, point, translated_ray_point);
// The length of the cross product gives the area of an imaginary
// parallelogram having the two vectors as sides. A parallelogram can be
// thought of as consisting of two triangles, so this is the same as
// twice the area of the triangle defined by the two vectors.
// http://en.wikipedia.org/wiki/Cross_product#Geometric_meaning
vec3 cross_product;
vec3_mul_cross(cross_product, p1_to_point, p2_to_point);
float area_of_triangle_times_two = vec3_len(cross_product);
float length_of_base = vec3_len(ray.vector);
// The area of a triangle is also equal to (base * height) / 2. In
// other words, the height is equal to (area * 2) / base. The height
// of this triangle is the distance from the point to the ray.
float distance_from_point_to_ray = area_of_triangle_times_two / length_of_base;
return distance_from_point_to_ray;
}
// http://en.wikipedia.org/wiki/Line-plane_intersection
// This also treats rays as if they were infinite. It will return a
// point full of NaNs if there is no intersection point.
static inline void ray_intersection_point(vec3 result, Ray ray, Plane plane) {
vec3 ray_to_plane_vector;
vec3_sub(ray_to_plane_vector, plane.point, ray.point);
float scale_factor = vec3_mul_inner(ray_to_plane_vector, plane.normal)
/ vec3_mul_inner(ray.vector, plane.normal);
vec3 intersection_point;
vec3 scaled_ray_vector;
vec3_scale(scaled_ray_vector, ray.vector, scale_factor);
vec3_add(intersection_point, ray.point, scaled_ray_vector);
memcpy(result, intersection_point, sizeof(intersection_point));
}
We’ll do a line-sphere intersection test to see if we’ve touched the mallet using our fingers or a mouse. Once we’ve grabbed the mallet, we’ll do a line-plane intersection test to determine where to place the mallet on the board.
We’ll now begin with the code for handling a touch press:
void on_touch_press(float normalized_x, float normalized_y) {
Ray ray = convert_normalized_2D_point_to_ray(normalized_x, normalized_y);
// Now test if this ray intersects with the mallet by creating a
// bounding sphere that wraps the mallet.
Sphere mallet_bounding_sphere = (Sphere) {
{blue_mallet_position[0],
blue_mallet_position[1],
blue_mallet_position[2]},
mallet_height / 2.0f};
// If the ray intersects (if the user touched a part of the screen that
// intersects the mallet's bounding sphere), then set mallet_pressed
// to true.
mallet_pressed = sphere_intersects_ray(mallet_bounding_sphere, ray);
}
static Ray convert_normalized_2D_point_to_ray(float normalized_x, float normalized_y) {
// We'll convert these normalized device coordinates into world-space
// coordinates. We'll pick a point on the near and far planes, and draw a
// line between them. To do this transform, we need to first multiply by
// the inverse matrix, and then we need to undo the perspective divide.
vec4 near_point_ndc = {normalized_x, normalized_y, -1, 1};
vec4 far_point_ndc = {normalized_x, normalized_y, 1, 1};
vec4 near_point_world, far_point_world;
mat4x4_mul_vec4(near_point_world, inverted_view_projection_matrix, near_point_ndc);
mat4x4_mul_vec4(far_point_world, inverted_view_projection_matrix, far_point_ndc);
// Why are we dividing by W? We multiplied our vector by an inverse
// matrix, so the W value that we end up with is actually the *inverse* of
// what the projection matrix would create. By dividing all 3 components
// by W, we effectively undo the hardware perspective divide.
divide_by_w(near_point_world);
divide_by_w(far_point_world);
// We don't care about the W value anymore, because our points are now
// in world coordinates.
vec3 near_point_ray = {near_point_world[0], near_point_world[1], near_point_world[2]};
vec3 far_point_ray = {far_point_world[0], far_point_world[1], far_point_world[2]};
vec3 vector_between;
vec3_sub(vector_between, far_point_ray, near_point_ray);
return (Ray) {
{near_point_ray[0], near_point_ray[1], near_point_ray[2]},
{vector_between[0], vector_between[1], vector_between[2]}};
}
static void divide_by_w(vec4 vector) {
vector[0] /= vector[3];
vector[1] /= vector[3];
vector[2] /= vector[3];
}
This code first takes the normalized touch coordinates received from the Android, iOS, or emscripten front ends and turns them into a 3D ray in world space. It then intersects that ray with the mallet's bounding sphere to see if we've touched the mallet.
Let’s continue with the code for handling a touch drag:
void on_touch_drag(float normalized_x, float normalized_y) {
if (mallet_pressed == 0)
return;
Ray ray = convert_normalized_2D_point_to_ray(normalized_x, normalized_y);
// Define a plane representing our air hockey table.
Plane plane = (Plane) {{0, 0, 0}, {0, 1, 0}};
// Find out where the touched point intersects the plane
// representing our table. We'll move the mallet along this plane.
vec3 touched_point;
ray_intersection_point(touched_point, ray, plane);
memcpy(previous_blue_mallet_position, blue_mallet_position,
sizeof(blue_mallet_position));
// Clamp to bounds
blue_mallet_position[0] =
clamp(touched_point[0], left_bound + mallet_radius, right_bound - mallet_radius);
blue_mallet_position[1] = mallet_height / 2.0f;
blue_mallet_position[2] =
clamp(touched_point[2], 0.0f + mallet_radius, near_bound - mallet_radius);
// Now test if mallet has struck the puck.
vec3 mallet_to_puck;
vec3_sub(mallet_to_puck, puck_position, blue_mallet_position);
float distance = vec3_len(mallet_to_puck);
if (distance < (puck_radius + mallet_radius)) {
// The mallet has struck the puck. Now send the puck flying
// based on the mallet velocity.
vec3_sub(puck_vector, blue_mallet_position, previous_blue_mallet_position);
}
}
static float clamp(float value, float min, float max) {
return fmin(max, fmax(value, min));
}
Once we’ve grabbed the mallet, we move it across the air hockey table by intersecting the new touch point with the table to determine the new position on the table. We then move the mallet to that new position. We also check if the mallet has struck the puck, and if so, we use the movement distance to calculate the puck’s new velocity.
We next need to update the lines that initialize our objects inside on_surface_created() as follows:
The new linmath.h has merged in the custom code we added to our matrix_helper.h, so we no longer need that file. As part of those changes, our perspective method call in on_surface_changed() now needs the angle entered in radians, so let’s update that method call as follows:
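That updated call might look like the following sketch; the field of view and clip plane values here are assumptions, since the original listing isn't shown:

```c
mat4x4_perspective(projection_matrix, deg_to_radf(45.0f),
                   (float) width / (float) height, 1.0f, 10.0f);
```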
We can then update on_draw_frame() to add the new movement code. Let’s first add the following to the top, right after the call to glClear():
// Translate the puck by its vector
vec3_add(puck_position, puck_position, puck_vector);
// If the puck struck a side, reflect it off that side.
if (puck_position[0] < left_bound + puck_radius
|| puck_position[0] > right_bound - puck_radius) {
puck_vector[0] = -puck_vector[0];
vec3_scale(puck_vector, puck_vector, 0.9f);
}
if (puck_position[2] < far_bound + puck_radius
|| puck_position[2] > near_bound - puck_radius) {
puck_vector[2] = -puck_vector[2];
vec3_scale(puck_vector, puck_vector, 0.9f);
}
// Clamp the puck position.
puck_position[0] =
clamp(puck_position[0], left_bound + puck_radius, right_bound - puck_radius);
puck_position[2] =
clamp(puck_position[2], far_bound + puck_radius, near_bound - puck_radius);
// Friction factor
vec3_scale(puck_vector, puck_vector, 0.99f);
This code will update the puck’s position and cause it to go bouncing around the table. We’ll also need to add the following after the call to mat4x4_mul(view_projection_matrix, projection_matrix, view_matrix);:
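That snippet isn't reproduced here; since convert_normalized_2D_point_to_ray() reads inverted_view_projection_matrix, the missing line presumably inverts the combined matrix each frame. A guess, using linmath.h's mat4x4_invert():

```c
/* Assumed: recompute the inverse so that
 * convert_normalized_2D_point_to_ray() can unproject touch points. */
mat4x4_invert(inverted_view_projection_matrix, view_projection_matrix);
```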
With these changes in place, we now need to link in the touch events from each platform. We’ll start off with Android:
MainActivity.java
In MainActivity.java, we first need to update the way that we create the renderer in onCreate():
final RendererWrapper rendererWrapper = new RendererWrapper(this);
// ...
glSurfaceView.setRenderer(rendererWrapper);
Let’s add the touch listener:
glSurfaceView.setOnTouchListener(new OnTouchListener() {
@Override
public boolean onTouch(View v, MotionEvent event) {
if (event != null) {
// Convert touch coordinates into normalized device
// coordinates, keeping in mind that Android's Y
// coordinates are inverted.
final float normalizedX = (event.getX() / (float) v.getWidth()) * 2 - 1;
final float normalizedY = -((event.getY() / (float) v.getHeight()) * 2 - 1);
if (event.getAction() == MotionEvent.ACTION_DOWN) {
glSurfaceView.queueEvent(new Runnable() {
@Override
public void run() {
rendererWrapper.handleTouchPress(normalizedX, normalizedY);
}});
} else if (event.getAction() == MotionEvent.ACTION_MOVE) {
glSurfaceView.queueEvent(new Runnable() {
@Override
public void run() {
rendererWrapper.handleTouchDrag(normalizedX, normalizedY);
}});
}
return true;
} else {
return false;
}
}});
This touch listener takes the incoming touch events from the user, converts them into normalized coordinates in OpenGL’s normalized device coordinate space, and then calls the renderer wrapper which will pass the event on into our native code.
RendererWrapper.java
We’ll need to add the following to RendererWrapper.java:
We now have everything in place for Android, and if we run the app, it should look similar to the screenshot below:
Air Hockey with touch, running on a Galaxy Nexus
Adding support for iOS
To add support for iOS, we need to update ViewController.m and add support for touch events. To do that and update the frame rate at the same time, let’s add the following to viewDidLoad: before the call to [self setupGL]:
To listen to the touch events, we need to override a few methods. Let’s add the following methods before - (void)glkView:(GLKView *)view drawInRect:(CGRect)rect:
This is similar to the Android code in that it takes the input touch event, converts it to OpenGL’s normalized device coordinate space, and then sends it on to our game code.
Our iOS app should look similar to the following image:
Air Hockey with touch, on iOS
Adding support for emscripten
Adding support for emscripten is just as easy. Let’s first add the following to the top of main.c:
static void handle_input();
// ...
int is_dragging;
At the beginning of do_frame(), add a call to handle_input():
This code sets is_dragging depending on whether we just pressed the primary mouse button or are currently dragging the mouse, and calls either on_touch_press or on_touch_drag accordingly. The code to normalize the coordinates is the same as on Android and iOS; indeed, a case could be made for abstracting it into the common game code and just passing in the raw coordinates relative to the view size.
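The mouse-handling code itself isn't shown here, but the shared normalization math can be sketched as a pure helper (the function name is made up; in the real handle_input() it would presumably be fed the mouse position from GLFW):

```c
/* Convert window coordinates (origin at the top-left) into OpenGL
 * normalized device coordinates in [-1, 1], flipping the Y axis. */
static void normalize_point(float x, float y, float width, float height,
                            float* normalized_x, float* normalized_y) {
    *normalized_x = (x / width) * 2.0f - 1.0f;
    *normalized_y = -((y / height) * 2.0f - 1.0f);
}
```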
After compiling with emmake make, we should get output similar to the following:
Exploring further
That concludes our air hockey project! The full source code for this lesson can be found at the GitHub project. You can find a more in-depth look at the concepts behind the project from the perspective of Java Android in OpenGL ES 2 for Android: A Quick-Start Guide. For exploring further, there are many things you could add, like improved graphics, support for sound, a simple AI, multiplayer (on the same device), scoring, or a menu system.
Whether you end up using a commercial cross-platform solution like Unity or Corona, or whether you decide to go the independent route, I hope this series was helpful to you and, most importantly, that you enjoy the projects ahead and have a lot of fun with them. 🙂
For this post in the air hockey series, we’ll learn how to render our scene from a 3D perspective, as well as how to add a puck and two mallets to the scene. We’ll also see how easy it is to bring these changes to Android, iOS, and emscripten.
The first thing we’ll do is add support for a matrix library so we can use the same matrix math on all three platforms, and then we’ll introduce the changes to our code from the top down. There are a lot of libraries out there, so I decided to use linmath.h by Wolfgang Draxinger for its simplicity and compactness. Since it’s on GitHub, we can easily add it to our project by running the following git command from the root airhockey/ folder:
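The exact command isn't shown in this excerpt; one way to do it, assuming a submodule layout (the destination path here is a guess), would be:

```shell
git submodule add https://github.com/datenwolf/linmath.h.git src/3rdparty/linmath
```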
We’ve added all of the new includes, constants, variables, and function declarations that we’ll need for our new game code. We’ll use Table, Puck, and Mallet to represent our drawable objects, TextureProgram and ColorProgram to represent our shader programs, and the mat4x4 (a datatype from linmath.h) matrices for our OpenGL matrices. In our draw loop, we’ll call position_table_in_scene() to position the table, and position_object_in_scene() to position our other objects.
For those of you who have also followed the Java tutorials from OpenGL ES 2 for Android: A Quick-Start Guide, you’ll recognize that this has a lot in common with the air hockey project from the first part of the book. The code for that project can be freely downloaded from The Pragmatic Bookshelf.
Our new on_surface_changed(int width, int height) now takes two parameters for the width and the height. It sets up a projection matrix, and then sets up the view matrix to be slightly above and behind the origin, with an eye position of (0, 1.2, 2.2).
Our new on_draw_frame() positions and draws the table, mallets, and the puck.
Because we changed the definition of on_surface_changed(), we also have to change the declaration in game.h. Change void on_surface_changed(); to void on_surface_changed(int width, int height);.
Adding new helper functions
static void position_table_in_scene() {
// The table is defined in terms of X & Y coordinates, so we rotate it
// 90 degrees to lie flat on the XZ plane.
mat4x4 rotated_model_matrix;
mat4x4_identity(model_matrix);
mat4x4_rotate_X(rotated_model_matrix, model_matrix, deg_to_radf(-90.0f));
mat4x4_mul(
model_view_projection_matrix, view_projection_matrix, rotated_model_matrix);
}
static void position_object_in_scene(float x, float y, float z) {
mat4x4_identity(model_matrix);
mat4x4_translate_in_place(model_matrix, x, y, z);
mat4x4_mul(model_view_projection_matrix, view_projection_matrix, model_matrix);
}
These functions update the matrices to let us position the table, puck, and mallets in the scene. We’ll define all of the extra functions that we need soon.
Adding new shaders
Now we’ll start drilling down into each part of the program and make the changes necessary for our game code to work. Let’s begin by updating our shaders. First, let’s rename our vertex shader shader.vsh to texture_shader.vsh and update it as follows:
After the imports, this is the code to create and draw the table data. This is essentially the same as what we had before, with the coordinates adjusted a bit to change the table into a rectangle.
Generating circles and cylinders
Before we can draw a puck or a mallet, we’ll need to add some helper functions to draw a circle or a cylinder. Let’s define those now:
We first need two helper functions to calculate the size of a circle or a cylinder in terms of vertices. A circle drawn as a triangle fan has one vertex for the center, num_points vertices around the circle, and one more vertex to close the circle. An open-ended cylinder drawn as a triangle strip doesn’t have a center point, but it does have two vertices for each point around the circle, and two more vertices to close off the circle.
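Those two helpers aren't shown above, but based on that description they reduce to simple arithmetic (the names follow the book's Java version and are assumptions here):

```c
/* One center vertex, num_points around the rim, plus one repeated
 * vertex to close the fan. */
static inline int size_of_circle_in_vertices(int num_points) {
    return 1 + (num_points + 1);
}

/* A strip with a top and a bottom vertex for each rim point, plus two
 * more vertices to close the loop. */
static inline int size_of_open_cylinder_in_vertices(int num_points) {
    return (num_points + 1) * 2;
}
```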
static inline int gen_circle(float* out, int offset,
float center_x, float center_y, float center_z,
float radius, int num_points)
{
out[offset++] = center_x;
out[offset++] = center_y;
out[offset++] = center_z;
int i;
for (i = 0; i <= num_points; ++i) {
float angle_in_radians = ((float) i / (float) num_points)
* ((float) M_PI * 2.0f);
out[offset++] = center_x + radius * cos(angle_in_radians);
out[offset++] = center_y;
out[offset++] = center_z + radius * sin(angle_in_radians);
}
return offset;
}
This code will generate a circle, given a center point, a radius, and the number of points around the circle.
This code will generate the vertices for an open-ended cylinder. Note that for both the circle and the cylinder, the loop goes from 0 to num_points, so the first and last points around the circle are duplicated in order to close the loop around the circle.
A mallet contains two circles and two open-ended cylinders, positioned and sized so that the mallet’s base is wider and shorter than the mallet’s handle.
Since C’s trigonometric functions expect passed-in values to be in radians, we’ll use this function to convert degrees into radians, where needed.
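The function body isn't shown above; it's a one-liner along these lines:

```c
#include <math.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Convert an angle in degrees into radians. */
static inline float deg_to_radf(float degrees) {
    return degrees * (float) M_PI / 180.0f;
}
```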
Adding matrix helper functions
While linmath.h contains a lot of useful functions, there are a few missing that we need for our game code. Create a new header file called matrix.h, and begin by adding the following code, all adapted from Android's OpenGL Matrix class:
We'll use mat4x4_perspective() to set up a perspective projection matrix.
static inline void mat4x4_translate_in_place(mat4x4 m, float x, float y, float z)
{
int i;
for (i = 0; i < 4; ++i) {
m[3][i] += m[0][i] * x
+ m[1][i] * y
+ m[2][i] * z;
}
}
This helper function lets us translate a matrix in place.
static inline void mat4x4_look_at(mat4x4 m,
float eyeX, float eyeY, float eyeZ,
float centerX, float centerY, float centerZ,
float upX, float upY, float upZ)
{
// See the OpenGL GLU documentation for gluLookAt for a description
// of the algorithm. We implement it in a straightforward way:
float fx = centerX - eyeX;
float fy = centerY - eyeY;
float fz = centerZ - eyeZ;
// Normalize f
vec3 f_vec = {fx, fy, fz};
float rlf = 1.0f / vec3_len(f_vec);
fx *= rlf;
fy *= rlf;
fz *= rlf;
// compute s = f x up (x means "cross product")
float sx = fy * upZ - fz * upY;
float sy = fz * upX - fx * upZ;
float sz = fx * upY - fy * upX;
// and normalize s
vec3 s_vec = {sx, sy, sz};
float rls = 1.0f / vec3_len(s_vec);
sx *= rls;
sy *= rls;
sz *= rls;
// compute u = s x f
float ux = sy * fz - sz * fy;
float uy = sz * fx - sx * fz;
float uz = sx * fy - sy * fx;
m[0][0] = sx;
m[0][1] = ux;
m[0][2] = -fx;
m[0][3] = 0.0f;
m[1][0] = sy;
m[1][1] = uy;
m[1][2] = -fy;
m[1][3] = 0.0f;
m[2][0] = sz;
m[2][1] = uz;
m[2][2] = -fz;
m[2][3] = 0.0f;
m[3][0] = 0.0f;
m[3][1] = 0.0f;
m[3][2] = 0.0f;
m[3][3] = 1.0f;
mat4x4_translate_in_place(m, -eyeX, -eyeY, -eyeZ);
}
We can use mat4x4_look_at() like a camera, and use it to position the scene in a certain way.
Adding shader program wrappers
We're almost done with the changes to our core code. Let's wrap them up by adding the following code:
We then need to update renderer_wrapper.c and change the call to on_surface_changed(); to on_surface_changed(width, height);. Once we’ve done that, we should be able to run the app on our Android device, and it should look similar to the following image:
Air hockey, running on a Galaxy Nexus
Adding support for iOS
For iOS, we just need to open up the Xcode project and add the necessary references to linmath.h and our new core files to the appropriate folder groups, and then we need to update ViewController.m and change on_surface_changed(); to the following:
Adding support for emscripten
For emscripten, we just need to update main.c, move the constants width and height from inside init_gl() to near the top of the file, and update the call to on_surface_changed(); to on_surface_changed(width, height);. We can then build by calling emmake make, which should produce output that looks as follows:
See how easy that was? Now that we have a minimal cross-platform framework in place, it’s very easy for us to bring changes to the core code across to each platform.
Exploring further
The full source code for this lesson can be found at the GitHub project. In the next post, we’ll take a look at user input so we can move our mallet around the screen.
In the last two posts in this series, we added support for loading a PNG file into OpenGL as a texture, and then we displayed that texture on the screen:
For emscripten, there is nothing special to do since it supports a virtual file system using pre-embedded resources. For loading assets, all we need to do here is forward the calls on to the platform-independent file loading functions that we defined in the previous post.
Adding third-party dependencies
Since emscripten doesn’t have zlib built into it like Android and iOS do, we’ll need to add that as a third-party dependency. Download zlib 1.2.8 from http://zlib.net/ and extract it to /airhockey/src/3rdparty/libzlib. We won’t need to do anything else to get it to compile.
Updating the Makefile
To get things to compile and run, let’s replace the Makefile in the emscripten directory with the following contents:
This Makefile specifies all of the required object files. When we run make, it will find the source files automatically and compile them into objects, using the CFLAGS that we’ve defined above. When we run this Makefile through emmake, emcc will use the --embed-file ../../../assets@/ line to package all of the assets into our target HTML file; the @/ syntax tells emscripten to place the assets at the root of the virtual file system. More information can be found at the emscripten wiki.
If your emscripten is configured and ready to go, then you can build the program by running emmake as follows, changing the path to emmake to reflect where you’ve installed emscripten.
MacBook-Air:emscripten user$ /opt/emscripten/emmake make -j 8
If all went well, airhockey.html should look similar to the following:
The full source code for this lesson can be found at the GitHub project. For the next few posts, we’re going to start doing more with our base so that we can start making this look more like an actual air hockey game!
In this post, we’ll also add support for emscripten, an LLVM-to-JavaScript compiler that can convert C and C++ code into JavaScript. Emscripten is quite a neat piece of technology, and has led to further improvements to JavaScript engines, such as asm.js. Check out all of the demos over at the wiki.
The instructions over at https://gist.github.com/dweekly/5873953 should get you up and running. Instead of brew install node, you can also enter sudo port install nodejs, if using MacPorts. I installed emscripten and LLVM into the /opt directory.
First you should run emcc from the emscripten directory to create a default config file in ~/.emscripten. After configuring ~/.emscripten and checking that all paths are correct, you can test the install by entering the following into a terminal shell from the emscripten directory:
./emcc tests/hello_world.cpp -o hello_world.html
open hello_world.html
Installing on Ubuntu Linux (tested on Ubuntu 13.04)
Download the latest node.js from http://nodejs.org/, extract it, and then build & install it with the following commands from inside the nodejs source directory:
Running emcc for the first time will print out a listing with the auto-detected paths for LLVM and other utilities. Check that all paths are correct, and edit ~/.emscripten if any are not.
You can then test out the install by entering the following commands:
If all goes well, you should then see a browser window open with “hello, world!” printed out in a box.
Adding support for emscripten
Let's start by creating a new folder called emscripten in the airhockey folder. In that new folder, let's create a new source file called main.c, beginning with the following contents:
In this C source file, we've declared a few functions, and then we've defined the main body of our program. The program begins by calling init_gl() (a function that we'll define further below) to initialize OpenGL; it then calls on_surface_created() and on_surface_changed() from our common code, and finally it calls a special emscripten function, emscripten_set_main_loop(), which simulates an infinite loop by using the browser's requestAnimationFrame mechanism.
In the rest of this code, we use GLFW, a library for managing OpenGL contexts, creating windows, and handling input. Emscripten has special support for GLFW built in, so these calls will be translated to matching JavaScript code on compilation.
Like we did for Android and iOS, we also need to define where the OpenGL headers are stored for our common code. Save the following into a new file called glwrapper.h in airhockey/emscripten/:
#include <GLES2/gl2.h>
Building the code and running it in a browser
To build the program, run the following command in a terminal shell from airhockey/emscripten/:
In the GitHub project, there’s also a Makefile which will build airhockey.html when emmake make is called. This Makefile can also be used on Windows by running python emmake mingw32-make, putting the right paths where appropriate. To see the code in action, just open up airhockey.html in a browser.
When we ask emscripten to generate an HTML file, it will generate an HTML file that contains the embedded code, which you can see further below (WebGL support is required to see the OpenGL code in action):
Exploring further
The full source code for this lesson can be found at the GitHub project. Now that we have a base setup in Android, iOS, and emscripten, we can start fleshing out our project in the next few posts. Emscripten is pretty neat, and I definitely recommend checking out the samples over at https://github.com/kripken/emscripten/wiki!
Some of you have been curious about what the air hockey game from the book would be like if we brought it over to other platforms. I would like to find out, myself. 🙂 In the spirit of my last post about cross-platform development, I want to port the air hockey project over to a native cross-platform code base that can be built for Android and iOS, and even the web by using emscripten and WebGL. Everything will be open-source and available on GitHub.
Here are some of the things that we’ll have to figure out and learn along the way:
Setting up a simple build system for each platform.
Initializing OpenGL.
Adding support for basic touch and collision detection.
In the next post, we’ll take a look at setting up a simple build system to initialize OpenGL across these different platforms. Here are all of the posts for the series so far:
I’ve recently been spending time travelling overseas, taking a bit of a break after reaching an important milestone with the book, and also taking a bit of a rest from working for myself! The trip has been good so far, and I’ve even been keeping up to date with items from the RSS feed. Here is some of the news that I wanted to share with y’all, as well as to get your thoughts:
Book nearing production
OpenGL ES for Android: A Quick-Start Guide reached its final beta a couple of weeks ago, and is now being readied to be sent off to the printers. I would like to thank everyone again for their feedback and support; I am so grateful for it, and happy that the book is now going out the door. I’d also like to give a special thanks to Mario Zechner, the creator behind libgdx and Beginning Android Games, for generously contributing his foreword and a lot of valuable feedback!
Site news
Not too long ago, I decided to add a new forums section to the site to hopefully build up some more community involvement and get a two-way dialogue going; unfortunately, things didn’t quite take off. The forums have also suffered from spam and some technical issues, and recently I was even locked out of the forum administration. I have no idea what happened or how to fix it, so since the posting rate was low, I am just putting the forums on ice for now.
I’d still love to find a way to have some more discussions happening on the site. In which other ways do you believe that I could improve the site so that I could encourage this? I’d love to hear your thoughts.
Topics to explore further
I’ve also been thinking about new topics to explore and write about, as a lot of exciting things are happening with 3D on the mobile and web. One big trend that seems to be taking place: Native is making a comeback.
For many years, C and C++ were proclaimed to be dead languages, lingering around only for legacy reasons, and soon to be replaced by the glorious world of managed languages. Having started out my own development career in Java, I can agree that the Java world does have a lot of advantages. The language is easier to learn than a behemoth like C++, and, at least on the desktop, the performance on the JVM can even come close to rivalling native languages.
So, why the resurgence in C and C++? Here are some of my thoughts:
The world is not just limited to the desktop anymore, and there are more important platforms to target than ever before. C and C++ excel at cross-platform portability, as just about every platform has a C/C++ compiler. By contrast, the JVM and .NET runtimes are limited to certain platforms, and Android’s Dalvik VM is not as good as the JVM at producing fast, efficient JIT-compiled code. Yes, there are bytecode translators and commercial alternatives such as Xamarin’s Mono platform for mobile, but these come with their own sets of disadvantages.
Resource usage can be more important than programmer productivity. This is true in big, expensive data centers, and it’s also true on mobile, where smaller downloads and lower battery usage can lead to happier customers.
C and C++ are still king when it comes to fast, efficient compiled code that can be built almost anywhere. Other would-be native competitors lose out because they are either not as fast or not as widely available across platforms. When productivity becomes more important than performance, these alternatives also get squeezed out by the managed and scripting languages.
As much as C and C++ excel at the things they’re good at, they also come with a lot of legacy cruft. C++ is a huge language, and it gets larger with each new standard. On the other hand, at least the compilers give you some freedom. Don’t want to use the STL? Roll your own custom containers. Don’t want the cost and limitations of exception handling and RTTI? Compile with -fno-exceptions and -fno-rtti. Undefined behavior is another nasty issue that can rear its head, though compilers like Clang now feature additional tools to help catch and fix these errors. With data-oriented design and sensible error handling, C++ code can be both fast and maintainable.
Compiling C and C++ to the web
With tools like emscripten, you can now compile your C/C++ code to JavaScript and run it in a browser, and if you use the asm.js subset, it can actually run with very good performance, enough to run a modern 3D game using JavaScript and WebGL. I’ve always been skeptical of the whole “JavaScript everywhere” meme, because how can the web truly become an open computing platform if it forces the use of one language for everything? There’s no way a single language can be equally suitable for all tasks, and why would I want to develop a second code base just for the web? For this reason, I used to believe that Google’s Native Client held more promise, since it can run native code with almost no speed loss. Why use JavaScript when you can just execute directly on the CPU and bring your existing code over?
Now I see things a bit differently, and I think that the asm.js approach has a lot of merit to it. NaCl has been around for years now, and it still only runs in Google Chrome, and then only on certain platforms and only if the software is distributed through the Chrome store, or if the user enables a developer flag. The asm.js approach, by contrast, can run on every browser that supports modern JavaScript. This approach is also portable, meaning it will keep working into the foreseeable future, even on new device architectures; NaCl, on the other hand, is limited to the architectures it was originally compiled for. Portable NaCl is supposed to fix this, but it’s been a work-in-progress for years now, and given the experience with NaCl, it may never find its way to any browser besides Google Chrome. Combined with WebGL, compiling to JavaScript really opens up the web to a lot of new possibilities: you can deploy across the web without being tied to a single browser or plugin. The BananaBread demo shows just some of what is possible.
I’d like to learn more about writing OpenGL apps that can run on Android, iOS, and the web, all with a single code base in C++. I know that this is also possible with Java by using Google’s Web Toolkit and bytecode translators (after all, this is how libgdx does it), but I’d like to learn something different, outside of the Java sphere. Is this something that you guys would be interested in reading more about? This is all relatively new to me and I’m currently exploring, so, as always, I’m looking forward to your feedback. 🙂