Conway’s Game Of Life – Cellular Automata and Renderbuffers in Three.js


Simple rules can generate structured, complex systems. And beautiful images often follow. This is the basic idea behind the Game of Life, a cellular automaton devised by British mathematician John Horton Conway in 1970. Often referred to simply as ‘Life’, it is perhaps one of the most popular and well-known examples of cellular automata. There are many examples and tutorials on the web that go over implementing it, such as this one by Daniel Shiffman.

But in many of these examples the computation runs on the CPU, limiting the possible complexity and number of cells in the system. So this article will go over implementing the Game of Life in WebGL, which allows for GPU-accelerated computation (= more complex and detailed images). Writing raw WebGL on your own can be very painful, so we will implement it using three.js, a WebGL graphics library. This will require some advanced rendering techniques, so some basic familiarity with Three.js and GLSL will be helpful along the way.

Cellular Automata

The Game of Life is a cellular automaton, so it makes sense to take a more abstract look at what that means. It is related to automata theory in theoretical computer science, but really it is just about making up some simple rules. A cellular automaton is a model of a system consisting of automata, called cells, that are interconnected through some simple logic, which allows modeling complex behavior. A cellular automaton has the following characteristics:

  • Cells live on a grid that can be 1D or higher-dimensional (in our game of life this is a 2D grid of pixels)
  • Each cell has only one current state. In our example there are only two possibilities: 0 or 1 / dead or alive
  • Each cell has a neighborhood, a list of adjacent cells.

The basic working principle of a cellular automaton usually includes the following steps:

  • An initial (global) state is selected by specifying a state for each cell.
  • A new generation is formed according to a rule that determines the new state of each cell based on:
    • the cell’s current state
    • the states of the cells in its neighborhood
The state of a cell together with its neighbors determines the state in the next generation.

As already mentioned, the Game of Life is based on a 2D grid. In its initial state there are cells that are either alive or dead. We generate the next generation of cells according to only four rules:

  • Any living cell with fewer than two living neighbors dies, as if by underpopulation.
  • Any living cell with two or three living neighbors survives to the next generation.
  • Any living cell with more than three living neighbors dies, as if due to overpopulation.
  • Any dead cell with exactly three living neighbors becomes a living cell, as if by reproduction.

Conway’s Game of Life uses a Moore neighborhood, which is made up of the current cell and the eight cells that surround it, so those are the ones we’ll look at in this example. There are many variations and possibilities, and Life is actually Turing complete, but this post is about implementing it in WebGL with Three.js, so we’ll stick to a fairly basic version. Feel free to research more on your own.
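As a point of reference before we move to the GPU version, here is a minimal, plain-JavaScript sketch of one generation step. The nextGeneration name and the 2D-array representation are just for illustration; the rest of the article does this same work per pixel in a shader.

// Reference only: one Game of Life step on the CPU.
// `grid` is assumed to be a 2D array of 0s and 1s.
function nextGeneration(grid) {
	const rows = grid.length;
	const cols = grid[0].length;
	// Copy the grid so we read the old state while writing the new one
	const next = grid.map((row) => row.slice());

	for (let y = 0; y < rows; y++) {
		for (let x = 0; x < cols; x++) {
			// Count living cells in the Moore neighborhood (8 surrounding cells)
			let neighbors = 0;
			for (let dy = -1; dy <= 1; dy++) {
				for (let dx = -1; dx <= 1; dx++) {
					if (dx === 0 && dy === 0) continue;
					const ny = y + dy;
					const nx = x + dx;
					if (ny >= 0 && ny < rows && nx >= 0 && nx < cols) {
						neighbors += grid[ny][nx];
					}
				}
			}

			const alive = grid[y][x] === 1;
			if (alive && (neighbors < 2 || neighbors > 3)) {
				next[y][x] = 0; // under- or overpopulation
			} else if (!alive && neighbors === 3) {
				next[y][x] = 1; // reproduction
			}
			// otherwise the cell keeps its current state
		}
	}

	return next;
}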

Three.js

Now that most of the theory is out of the way, we can finally start implementing the game of life.

Three.js is a fairly high-level WebGL library, but it lets you decide how deep you want to go. It provides a lot of options to control how scenes are structured and rendered, and allows users to get closer to the WebGL API by writing custom shaders in GLSL and passing buffer attributes.

Each cell in the Game of Life needs information about its neighborhood. But in WebGL all fragments are processed by the GPU simultaneously, so while the fragment shader is processing one pixel, there is no way to directly access information about another fragment. But there is a workaround. If we pass a texture to the fragment shader, we can easily query neighboring pixels in that texture, as long as we know its width and height. This is the same idea that allows all kinds of post-processing effects to be applied to rendered scenes.

We will start with the initial state of the system. To get any interesting results, we need non-uniform initial conditions. In this example we’ll place the cells randomly on the screen, so we’ll render a simple noise texture for the first frame. Of course we could start with another type of noise, but this is the easiest way to get going.

/**
 * Sizes
 */
const sizes = {
	width: window.innerWidth,
	height: window.innerHeight
};

/**
 * Scenes
 */
//Scene will be rendered to the screen
const scene = new THREE.Scene();

/**
 * Textures
 */
//The generated noise texture
const dataTexture = createDataTexture();

/**
 * Meshes
 */
// Geometry
const geometry = new THREE.PlaneGeometry(2, 2);

//Screen resolution
const resolution = new THREE.Vector3(sizes.width, sizes.height, window.devicePixelRatio);

//Screen Material
const quadMaterial = new THREE.ShaderMaterial({
	uniforms: {
		uTexture: { value: dataTexture },
		uResolution: {
			value: resolution
		}
	},
	vertexShader: document.getElementById('vertexShader').textContent,
	fragmentShader: document.getElementById('fragmentShader').textContent
});

// Meshes
const mesh = new THREE.Mesh(geometry, quadMaterial);
scene.add(mesh);

/**
 * Animate
 */

const tick = () => {
	//The texture will get rendered to the default framebuffer
	renderer.render(scene, camera);

	// Call tick again on the next frame
	window.requestAnimationFrame(tick);
};

tick();

This code simply initializes a Three.js scene and adds a 2D plane to fill the screen (the snippet doesn’t show all the basic boilerplate code). The plane is given a shader material, which for now does nothing but display a texture in its fragment shader. We generate that texture using a DataTexture. It would also be possible to load an image as a texture, in which case we would need to keep track of the exact texture dimensions. Since the scene will occupy the entire screen, creating a texture with the viewport dimensions is the simpler solution for this tutorial. For now, the scene is rendered to the default framebuffer (the device screen).
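The createDataTexture helper isn’t shown in the snippet above. A minimal sketch of what it could look like, assuming a viewport-sized RGBA float texture whose red channel holds a random dead/alive value (the helper in the actual demo may differ in its details):

// Sketch of a helper that fills a viewport-sized texture with random cells
function createDataTexture() {
	// Each pixel needs 4 float components (RGBA)
	const size = sizes.width * sizes.height;
	const data = new Float32Array(size * 4);

	for (let i = 0; i < size; i++) {
		const stride = i * 4;
		// Randomly mark the cell as alive (1) or dead (0) in the red channel
		const value = Math.random() < 0.5 ? 1 : 0;
		data[stride] = value;     // r
		data[stride + 1] = 0;     // g
		data[stride + 2] = 0;     // b
		data[stride + 3] = 1;     // a
	}

	const texture = new THREE.DataTexture(
		data,
		sizes.width,
		sizes.height,
		THREE.RGBAFormat,
		THREE.FloatType
	);
	texture.needsUpdate = true;

	return texture;
}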

See the Pen by Jason Andrew (@jasonandrewth) on CodePen.

Framebuffers

When writing WebGL applications, whether with the vanilla API or a higher-level library like Three.js, the results are rendered to the default WebGL framebuffer, which is the device screen (as done above).

But there is also the option of creating framebuffers that render off-screen, to image buffers in the GPU’s memory. These can then be used like regular textures for any purpose. This idea is used in WebGL to create advanced post-processing effects such as depth of field, bloom, etc., by applying various effects to the scene once it has been rendered. In Three.js we can do this by using a THREE.WebGLRenderTarget. We’ll call our framebuffer renderBufferA.

/**
 * Scenes
 */
//Scene will be rendered to the screen
const scene = new THREE.Scene();
//Create a second scene that will be rendered to the off-screen buffer
const bufferScene = new THREE.Scene();

/**
 * Render Buffers
 */
// Create a new framebuffer we will use to render to
// the GPU memory
let renderBufferA = new THREE.WebGLRenderTarget(sizes.width, sizes.height, {
	// Below settings hold the uv coordinates and retain precision.
	minFilter: THREE.NearestFilter,
	magFilter: THREE.NearestFilter,
	format: THREE.RGBAFormat,
	type: THREE.FloatType,
	stencilBuffer: false
});

//Screen Material
const quadMaterial = new THREE.ShaderMaterial({
	uniforms: {
		//Now the screen material won't get a texture initially
		//The idea is that this texture will be rendered off-screen
		uTexture: { value: null },
		uResolution: {
			value: resolution
		}
	},
	vertexShader: document.getElementById('vertexShader').textContent,
	fragmentShader: document.getElementById('fragmentShader').textContent
});

//off-screen Framebuffer will receive a new ShaderMaterial
// Buffer Material
const bufferMaterial = new THREE.ShaderMaterial({
	uniforms: {
		uTexture: { value: dataTexture },
		uResolution: {
			value: resolution
		}
	},
	vertexShader: document.getElementById('vertexShader').textContent,
	//For now this fragment shader does the same as the one used above
	fragmentShader: document.getElementById('fragmentShaderBuffer').textContent
});

/**
 * Animate
 */

const tick = () => {
	// Explicitly set renderBufferA as the framebuffer to render to
	//the output of this rendering pass will be stored in the texture associated with renderBufferA
	renderer.setRenderTarget(renderBufferA);
	// This will render into the off-screen texture
	renderer.render(bufferScene, camera);

	mesh.material.uniforms.uTexture.value = renderBufferA.texture;
	//This will set the default framebuffer (i.e. the screen) back to being the output
	renderer.setRenderTarget(null);
	//Render to screen
	renderer.render(scene, camera);

	// Call tick again on the next frame
	window.requestAnimationFrame(tick);
};

tick();

There is nothing to see on screen yet, because the scene is now rendered into an off-screen buffer.

See the Pen by Jason Andrew (@jasonandrewth) on CodePen.

To display the texture generated in the previous step on the fullscreen plane, we need to access it as a texture in the animation loop.

//In the animation loop before rendering to the screen
mesh.material.uniforms.uTexture.value = renderBufferA.texture;

And that’s all it takes to bring the noise back, except that now it is rendered off-screen and the output of that render is used as a texture on the plane that is rendered to the screen.

See the Pen by Jason Andrew (@jasonandrewth) on CodePen.

Ping Pong

Now that the data is provided via a texture, shaders can be used to perform computations on that texture data. Within GLSL, textures are read-only: we cannot write directly to our input textures, we can only “sample” them. By using an off-screen framebuffer, however, we can use the output of the shader itself to write to a texture. Then, if we do multiple render passes in a row, the output of one pass becomes the input for the next. So we create two off-screen buffers. This technique is called ping pong buffering. We create a kind of simple ring buffer, where after every frame we swap the off-screen buffer that is being read from with the off-screen buffer that is being written to. We can then use the off-screen buffer that was just written to and display it on the screen. This lets us perform iterative computation on the GPU, which is useful for all kinds of effects.

To achieve this in Three.js, we first need to create a second framebuffer. We’ll call it renderBufferB. The ping-pong swap itself then happens in the animation loop.

//Add another framebuffer
let renderBufferB = new THREE.WebGLRenderTarget(
    sizes.width,
    sizes.height,
    {
        minFilter: THREE.NearestFilter,
        magFilter: THREE.NearestFilter,
        format: THREE.RGBAFormat,
        type: THREE.FloatType,
        stencilBuffer: false
    }
);

    //At the end of each animation loop

    // Ping-pong the framebuffers by swapping them
    // at the end of each frame render
    // Now prepare for the next cycle by swapping renderBufferA and renderBufferB
    // so that the previous frame's *output* becomes the next frame's *input*
    const temp = renderBufferA;
    renderBufferA = renderBufferB;
    renderBufferB = temp;
    //output becomes input
    bufferMaterial.uniforms.uTexture.value = renderBufferB.texture;

Now that the render buffers are swapped every frame, the output will look the same, but it is possible, for example, to log the textures and verify which one is being passed to the on-screen plane. Here’s a more in-depth look at ping pong buffers in WebGL.
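Putting the snippets together, the animation loop might now look roughly like this (a consolidated sketch of the code above; the commented-out console.log is just an optional sanity check that shows the two textures alternating):

const tick = () => {
	// 1. Run the simulation step off-screen into renderBufferA
	renderer.setRenderTarget(renderBufferA);
	renderer.render(bufferScene, camera);

	// 2. Display the freshly written texture on the fullscreen plane
	mesh.material.uniforms.uTexture.value = renderBufferA.texture;
	renderer.setRenderTarget(null);
	renderer.render(scene, camera);

	// Optional sanity check: the logged uuid alternates between the two render targets
	// console.log(mesh.material.uniforms.uTexture.value.uuid);

	// 3. Swap the buffers so this frame's output becomes the next frame's input
	const temp = renderBufferA;
	renderBufferA = renderBufferB;
	renderBufferB = temp;
	bufferMaterial.uniforms.uTexture.value = renderBufferB.texture;

	// Queue the next frame
	window.requestAnimationFrame(tick);
};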

See the Pen by Jason Andrew (@jasonandrewth) on CodePen.

Game of Life

From here on it’s all about implementing the actual Game of Life. Since the rules are so simple, the resulting code is not too complex either, and there are many good resources that go over coding it, so I’ll just go over the key ideas. All the logic for this sits in the fragment shader that is rendered off-screen, which produces the texture for the next frame.

As mentioned earlier, we want to access neighboring fragments (or pixels) through the passed texture. This is done with nested for loops in the GetNeighbours function. We skip our current cell and check the 8 surrounding pixels by sampling the texture at an offset. Then we check whether the pixel’s r value is above 0.5, which means it is alive, and if so increment the count of living neighbors.

//GLSL in fragment shader
precision mediump float;
//The input texture
uniform sampler2D uTexture;
//Screen resolution
uniform vec3 uResolution;

// uv coordinates passed from vertex shader
varying vec2 vUvs;

float GetNeighbours(vec2 p) {
    float count = 0.0;

    for(float y = -1.0; y <= 1.0; y++) {
        for(float x = -1.0; x <= 1.0; x++) {

            if(x == 0.0 && y == 0.0)
                continue;

            // Scale the offset down
            vec2 offset = vec2(x, y) / uResolution.xy;
            // Apply offset and sample texture
            vec4 lookup = texture2D(uTexture, p + offset);
            // Accumulate the result
            count += lookup.r > 0.5 ? 1.0 : 0.0;
        }
    }

    return count;
}
Based on this count we can apply the rules. (Note how we can use the standard UV coordinates here because the texture we created initially fills the screen. If we had started with an image texture of arbitrary dimensions, we would have to scale the coordinates according to the exact pixel size so that each offset maps to exactly one texel, i.e. a value between 0.0 and 1.0.)

//In the main function
    vec3 color = vec3(0.0);

    float neighbors = 0.0;

    neighbors += GetNeighbours(vUvs);

    bool alive = texture2D(uTexture, vUvs).x > 0.5;

    //cell is alive
    if(alive && (neighbors == 2.0 || neighbors == 3.0)) {

      //Any live cell with two or three live neighbours lives on to the next generation.
      color = vec3(1.0, 0.0, 0.0);

    //cell is dead
    } else if (!alive && (neighbors == 3.0)) {

      //Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
      color = vec3(1.0, 0.0, 0.0);

    }

    //In all other cases cell remains dead or dies so color stays at 0
    gl_FragColor = vec4(color, 1.0);

And that’s basically it: a working Game of Life using only GPU shaders, written in Three.js. The texture is sampled every frame via the ping pong buffers, which creates the next generation of our cellular automaton, so there is no need to pass any extra variables tracking time or frames in order to animate.

see pen Disturbance: Base Frequency by Jason Andrew (@jasonandrewth) Feather codepen.light

In short, we first looked at the basic ideas behind cellular automata, a very powerful model of computation used to generate complex behavior. We were then able to implement this in Three.js using framebuffers and ping pong buffering. Now there are nearly endless possibilities for taking it further; try adding different rules or mouse interaction, for example.
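As a hypothetical starting point for mouse interaction (none of this is part of the demo code), you could pass the normalized pointer position into the off-screen material as an extra uniform and use it in the buffer fragment shader, for example to spawn living cells around the cursor. The uMouse name here is just an example:

// Hypothetical example: expose the pointer position to the buffer shader.
// The buffer fragment shader would also need to declare `uniform vec2 uMouse;`,
// and the uniform is best added together with the others when bufferMaterial is created.
bufferMaterial.uniforms.uMouse = { value: new THREE.Vector2(-1, -1) };

window.addEventListener('pointermove', (event) => {
	// Convert to 0..1 UV-style coordinates (y flipped to match texture space)
	bufferMaterial.uniforms.uMouse.value.set(
		event.clientX / sizes.width,
		1.0 - event.clientY / sizes.height
	);
});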
