Introducing Four: It’s WebGL, but Easier

Jason Petersen

WebGL has been around for a few years now, and we have watched it mature into the reliable and widely supported graphics technology it is today. With big companies like Google, Mozilla, and Microsoft advocating for its use, it’s hard not to be curious about it.

Since its specification was finalized in 2011, it has gained a lot of traction. With the help of frameworks like ThreeJS, BabylonJS, and PlayCanvas, the field has become less daunting. Thanks to them, WebGL is much easier to pick up, but it still demands a real learning effort, as it is a different discipline altogether.

This article will briefly introduce you to what WebGL is, and then I’ll cover Four, a framework I created to help developers delve quickly into the WebGL world. If you want to see what Four and WebGL can do for you, take a look at this simple demo I built.

What is WebGL?

WebGL is a graphics API based on the Open Graphics Library for Embedded Systems (OpenGL ES 2.0). It allows browsers that support it to render three-dimensional graphics inside the HTML canvas element. OpenGL ES 2.0 was chosen because it was a reputable open standard for computer graphics and, more importantly, because it was designed to perform well on embedded devices such as mobile phones and tablets. This was crucial given the broad range of devices on which modern browsers run.
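
If you want to check whether a browser supports WebGL before doing anything else, you can simply ask a canvas element for a rendering context. This standard feature-detection snippet is independent of Four:

// Ask the canvas for a WebGL rendering context; older browsers
// exposed it under the 'experimental-webgl' name.
var canvas = document.createElement('canvas');
var gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
console.log(gl ? 'WebGL is supported' : 'WebGL is not supported');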

The API itself is exposed through JavaScript. It is low level, so using it directly results in a lot of repetitive and complex code. In addition, typical OpenGL-based applications rely on programming paradigms and data structures that JavaScript was not designed for, such as classical object-oriented programming and operator overloading for fast matrix manipulation. This can be problematic for physical simulations that depend on manipulating large matrix structures. This is where Four comes in.
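
To put that repetition in perspective, here is the raw WebGL code needed just to compile a single shader. vertexSource is assumed to be a string holding GLSL source code:

var gl = document.querySelector('canvas').getContext('webgl');

// Each shader must be created, sourced, and compiled by hand,
// with the compile status checked explicitly.
var shader = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(shader, vertexSource);
gl.compileShader(shader);
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    console.error(gl.getShaderInfoLog(shader));
}

The same ceremony repeats for the fragment shader, for linking the program, and for every buffer upload. This is the kind of boilerplate Four absorbs for you.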

Introducing Four

Over the past three years of working on web-based physics simulations with WebGL, I have come to notice the lack of a web-based physics engine. This is probably due to the large amount of data manipulation such an engine requires. To address this problem, I started to develop Four.

Four is a framework for developing 3D content for the web. It spares you the burden of repetition and complexity, speeding up and simplifying development while still exposing the flexibility of the WebGL API. It does this through several layers, each built on top of the previous one, that give you access to a different level of abstraction. Depending on your needs, you can work at the most basic level or at a higher one. This allows you, as the developer, to focus on what is important: getting something on the screen.

Please note that Four uses the gl-matrix library for matrix and vector manipulation, and the library is bundled with the framework. To use Four, you therefore need some working knowledge of gl-matrix. If you don’t know what it is, I recommend taking a look at the gl-matrix documentation.
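
For example, this is how you would build a perspective projection matrix with gl-matrix. Notice that every operation writes into an out parameter instead of returning a new object, which avoids creating garbage during per-frame matrix math:

// Create an identity matrix, then turn it into a 45-degree
// perspective projection for a 500x500 canvas.
var projection = mat4.create();
mat4.perspective(projection, Math.PI / 4, 500 / 500, 0.1, 1000.0);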

Four is at a very early stage: I released the first public version only a few days ago. Its final goal is to use GPGPU techniques to shift the physics logic to the GPU, where it can execute on a powerful, parallel, multi-core processor architecture. This opens up the web to a world of performant three-dimensional physical simulations.

In this article I’m not going to cover GPGPU in detail. If you want to read more about this topic, I suggest the related Wikipedia page.

How Four is Structured

Fundamental to the framework are its three levels of abstraction: Bedrock, Masonry, and Rest. In the following sections, I’m going to cover each of these layers.

Bedrock

The Bedrock layer closely reflects the language of the WebGL API. It contains the scaffolding classes that you would use to set up shaders, link programs, and configure framebuffer render targets. A few examples are listed below, followed by a short sketch of how they fit together:

  • Shaders: Used to maintain the source code that defines the rendering routines
  • Programs: Objects to which shaders can be attached
  • Attributes and Uniforms: Maintain the variables defined in the shader source code with the attribute and uniform storage qualifiers, respectively
  • Framebuffers: Create render targets for your application. A generic framebuffer instance establishes a reference to the canvas as the destination for rendering
  • Textures: Storage containers for images usually mapped onto meshes to fake various details
  • Vertex Array Objects: Maintain the storage buffers for the vertex data to be processed in the shaders
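
As a rough sketch, setting up the basics at the Bedrock level might look like the following. It uses only classes that appear in the walkthrough later in this article, so treat it as an illustration rather than a complete API reference:

// A render target bound to the canvas.
var view = new Four.Framebuffer();

// Compile and link the shaders found under the given CSS class.
var program = new Four.Program({ selector: '.my-shader-class' });

// Buffer storage for the per-vertex data consumed by the shaders.
var buffers = new Four.VertexArrayObject({
    program: program,
    attributes: ['vec3 position']
});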

Masonry

Above the Bedrock layer reside the Masonry classes. They use the Bedrock layer to abstract new concepts to achieve various tasks, from rendering meshes to architecting structures.

Structures are particularly noteworthy. They employ a grouping similar to that of a struct in shaders, in that they collate uniforms, such as those of lights and cameras. A camera has, among other things, a projection matrix, a model-view matrix, and a normal matrix. All of these exist as uniforms in the shaders that render the scene. A camera structure generates those uniforms and expects their values to exist as properties of the structure with the same names. Binding the structure then automatically applies these values to the generated uniforms. What makes this appealing is that the camera structure can expose additional functionality for computing and updating its uniforms at render time.
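
On the shader side, such a structure maps onto a GLSL struct whose members Four populates as uniforms. The camera struct used in the vertex shader later in this article illustrates the idea:

// Binding the camera structure in JavaScript fills each of these
// members with the matching uniform value at render time.
struct camera {
    mat4 projectionMatrix;
    mat4 modelViewMatrix;
    mat3 normalMatrix;
};

uniform camera u_camera;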

Rest

At the highest level is the Rest abstraction. This hides much of the low-level functionality to help you develop content quickly. Examples from this layer include various forms of lighting and material effects. It is important to note that your shaders will need to support the rendering capabilities of this layer; the details can be found on the respective pages of the Four documentation. You can also use structures from the Masonry layer to develop new abstractions for the Rest layer.

Now that I have given you an overview of the framework, it’s time to see it in action.

Getting Started with Four

The first thing you need to do is to download Four from its GitHub repository. Once done, include the script in your HTML page:

<script src="path/to/four.min.js"></script>

At this point, you need to include an HTML canvas element:

<canvas width="500" height="500"></canvas>

The canvas is the viewport to which the final scene is rendered. If the width or height attributes are not set, the framework falls back to the respective dimensions of the viewport.
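
For example, a bare canvas element like this one would be sized to match the viewport:

<canvas></canvas>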

With this in place, you’re ready to use Four. To help you in understanding how it works, let’s see an example.

Rendering and Rotating a Mesh

The first step is to create a handler for the canvas using a framebuffer.

var view = new Four.Framebuffer();

Next, we create a program, to which the shaders used to model and render the scene are attached. The selector argument specifies a CSS class selector that points to the location of the shaders in the HTML.

var program = new Four.Program({ selector: '.my-shader-class' });

Next, we construct a mesh, a light source to illuminate the scene, and a perspective camera through which to view it.

var camera = new Four.PerspectiveCamera({
    program: program,
    location: [50, 50, 50]
});

var light = new Four.Light({
    program: program,
    location: [10, 10, 10]
});

var mesh = new Four.Mesh({
    buffers: new Four.VertexArrayObject({
        program: program,
        attributes: ['vec3 position']
    }),
    vertices: [], // populate with the vertex data of your mesh
    material: new Four.Material({
        program: program,
        diffuse: 0x9F8A60
    })
});

The final snippet adds the mesh to a scene and renders it to the view. The pre-render routine passed to the scene rotates the scene around the mesh by 0.25 degrees every frame.

var scene = new Four.Scene();

scene.put(mesh);
scene.render(view, camera, function() {
    program.bind();
    light.bind();

    scene.rotation += 0.25;
});

With this code we can create a scene, add a mesh to it, and light it up. To conclude our example, we have to write the shaders that generate the output. Let’s do this!

The Shaders

Alongside the canvas and the JavaScript, you need the shader scripts. Shaders are programs that run on the GPU to model and render the data provided by the mesh. They are written in the OpenGL Shading Language (GLSL), and both a vertex and a fragment shader are required.

The shaders should be included using “shader script tags” in the HTML. A shader tag takes one of two forms:

<!-- Vertex shader -->
<script class="my-shader-class" type="x-shader/x-vertex"></script>

<!-- Fragment shader -->
<script class="my-shader-class" type="x-shader/x-fragment"></script>

It’s important that their class attributes have the same value as the selector passed to the program in the JavaScript above. Applying the same class to a vertex shader and a fragment shader pairs them for linking into a program.

The vertex shader executes once for every vertex passed in through the a_position attribute. The output of the vertex shader is assigned to the built-in variable gl_Position.

<script class="your-shader-class" type="x-shader/x-vertex">
    #version 100
    precision lowp float;

    struct camera {
        mat4 projectionMatrix;
        mat4 modelViewMatrix;
        mat3 normalMatrix;
    }

    uniform camera u_camera;
    attribute vec3 a_position;    

    void main() {
        gl_Position = camera.projectionMatrix * camera.modelViewMatrix *
                      vec4(a_position, 1.0);
    }
</script>

Between the vertex and fragment processors, two things need to happen before the scene can be rendered. First, the vertices output by the vertex processor are connected to construct the mesh. Second, the resulting primitives are rasterized into fragments, each of which is then shaded with the color the fragment processor outputs in gl_FragColor.

<script class="your-shader-class" type="x-shader/x-fragment">
	#version 100 
    precision lowp float;
    
    void main() {
        gl_FragColor = vec4(1.0);
    }
</script>

With our rendering pipeline completed, our scene can be rendered to the view.

The Future of Four

As I mentioned in the introduction, Four is at a very early stage, so it needs more work before we can move on to building the physics engine. In the upcoming versions, you can expect the following features to be added:

  • Mesh defaults for basic geometries, e.g. cube, tetrahedron, sphere, and so on
  • Reflection mapping
  • Shadow mapping
  • Normal mapping
  • Additional mesh loaders
  • Keyframe animation
  • Effects, such as bloom and cel shading
  • And more…

Conclusion

WebGL is a technology for rendering 3D content on the web, but its API can be difficult to use. Four is a framework that tries to abstract away this difficulty so you can focus on your content. With its few layers of abstraction, it is flexible to the needs of the developer. It also encourages developers to break down these abstractions to increase their understanding of how graphics applications work.