2D Global Illumination with Monte Carlo
This is a simple approach, easy to implement, and so it's a good starting point before diving into more complex methods.
Monte Carlo methods are a way to approximate the value of integrals. They're based on the idea of randomly sampling the function and taking the average of the results. They're particularly useful for approximating integrals where the function is too complex or perhaps not even known analytically (a "black box").
In this example I can pick random points on the x-axis between a and b, and for a sufficiently large number of points, the average will be a reasonable estimate of the integral. The formula looks like this:

∫ₐᵇ f(x) dx ≈ (b − a) / N · Σᵢ₌₁ᴺ f(xᵢ)

where N is the number of samples and the xᵢ are random points in the interval [a, b].
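As a sketch, the estimator can be written as a small GLSL function to match the rest of the article. Here `f` stands for the black-box function being integrated and `rand01` is an assumed helper returning a pseudo-random value in [0, 1); neither is part of the article's code.

```glsl
// Sketch: Monte Carlo estimate of the integral of f over [a, b].
// `f` and `rand01` are assumed helpers, not part of the article's code.
float monteCarloIntegrate(float a, float b, int N) {
  float sum = 0.0;
  for (int i = 0; i < N; ++i) {
    // Pick a random point in [a, b] and sample the function there.
    float x = a + rand01(float(i)) * (b - a);
    sum += f(x);
  }
  // The average sample value, scaled by the length of the interval.
  return (b - a) * sum / float(N);
}
```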
Global Illumination
Similarly, for global illumination we want to integrate the light coming from all possible directions in space at a specific point. If the scene is even slightly more complex than a single object, it quickly becomes impossible to solve analytically.
Instead we'll use a Monte Carlo method: pick random directions, see if we hit any light sources, and average the results from all directions. In my case, I first wanted to learn how to implement this for simple 2D scenes, mostly on the GPU.
First off, let's start by setting up a simple scene with an SDF. I've put a circular light, represented in red, and two objects in blue. Turn on the SDF visualization to 'see' the actual distance field.
I've talked about SDFs in a previous article about simple shapes SDFs, but to recap, an SDF is a function that returns the distance to the closest point of an object. It's positive outside the object and negative inside. In the example above, the SDF allows us to draw the shapes, but it becomes more interesting when we introduce the raymarching algorithm.
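As a refresher, here is how the two SDFs used later in this article are commonly implemented. This is a sketch: the author's actual circleSDF and boxSDF from the earlier article may differ slightly.

```glsl
// Distance from `pos` to a circle of radius `radius` centered at the origin:
// positive outside, negative inside, zero on the edge.
float circleSDF(vec2 pos, float radius) {
  return length(pos) - radius;
}

// Distance to an axis-aligned box centered at the origin, with half-extents
// `size`. The max/min terms handle the outside and inside cases.
float boxSDF(vec2 pos, vec2 size) {
  vec2 d = abs(pos) - size;
  return length(max(d, vec2(0.0))) + min(max(d.x, d.y), 0.0);
}
```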
Raymarching
Raymarching is a way to do raytracing by progressing along the ray of light. For Monte Carlo, we would start at the pixel we want to calculate light for, and pick a random direction.
We then ask the question: “how far can I travel without hitting any objects?”. The Signed Distance Field gives us the answer as it is the distance to the closest object, even though we may not ever hit that object.
In the example above, you can see that on the first iteration, we know that the closest object is the lower circle, at a certain distance (a positive signed distance), so we only travel that distance.
We then repeat that process a number of times, with a maximum number of steps. For example we might decide to stop after 20 iterations. We may also stop if we've traveled too far away, because it might be that we'll never hit anything.
In the example above, on the second iteration we are still closest to the lower circle, so again we only travel that far. But on the third iteration, we are closest to the right circle. We travel that distance, and then we find that the signed distance is zero because we're right on the edge of that right circle. That's a hit!
If this circle is a light source, we know there is a direct line of sight to the pixel. So we can calculate how much the light contributes to the pixel.
Putting it together
Let's talk about what this looks like in GLSL. Here's my attempt at the raymarching:
void raymarch(vec2 rayDir, inout vec3 color) {
  // Initialize the position to the position of the pixel we're rendering.
  vec2 position = v_uv;
  // We start at the pixel, so we are at a distance of 0 along the ray.
  float distance = 0.0f;
  for(int j = 0; j < 50; ++j) {
    float lightSD, objSD;
    // First we get the signed distance to any object in the scene at the
    // current position. `sceneSDF` fills up the `lightSD` and `objSD`
    // variables so we can distinguish. `sd` is the distance to either.
    float sd = sceneSDF(position, lightSD, objSD);
    if(lightSD <= EPSILON) {
      // Hit a light source, which contributes to the color of the pixel.
      color += u_lightColor * u_lightIntensity / (distance * distance);
      break;
    }
    if(objSD <= EPSILON) {
      // Hit an object, for now we don't do reflections.
      break;
    }
    // Move along the ray by the distance to the nearest object. This may or
    // may not put us on the surface of that object, because at that stage
    // we don't know "where" this object is (above? below?).
    position += rayDir * sd;
    distance += sd;
  }
}
You'll notice we use an EPSILON here to check if the signed distance is zero (or negative). This is because as we're marching along the ray, small numerical errors accumulate, and we need to account for them. For example, in Figure 3, when we're on the edge of the right circle, the value of objSD might be 0.0001, or it might be -0.0001, but rarely exactly zero.
When calculating the signed distance we call a function sceneSDF. This is implemented by combining the various shapes we need:
float sceneSDF(vec2 pos, out float lightSD, out float objSD) {
  lightSD = circleSDF(pos + vec2(0.15f, -0.3f), u_lightRadius);
  float leftCircleSD = circleSDF(pos + vec2(-0.3f, 0.0f), 0.10f);
  float rightBoxSD = boxSDF(pos + vec2(0.3f, 0.2f), vec2(0.15f, 0.15f));
  objSD = min(leftCircleSD, rightBoxSD);
  return min(lightSD, objSD);
}
The final Monte Carlo algo
Now that we can raymarch around the scene, we can implement the Monte Carlo method. We’ll send a number of rays uniformly around the pixel, and accumulate the light contributions from each ray.
For each pixel we'll use a slightly different starting angle for the rays to avoid ‘banding’, where the rays of neighboring pixels are all aligned and create visible macro patterns.
// We start with pure black, which is essentially the ambient light of
// our scene.
vec3 color = vec3(0.0f);
// We start with a random angle for the first ray.
float startAngle = rand(v_uv) * TWO_PI;
for(int i = 0; i < u_numSamples; ++i) {
  // Then we'll shoot rays uniformly around the pixel.
  float rayAngle = startAngle + float(i) / float(u_numSamples) * TWO_PI;
  vec2 rayDir = vec2(cos(rayAngle), sin(rayAngle));
  // Raymarch along the ray to accumulate light into the color.
  raymarch(rayDir, color);
}
// Average the samples.
color /= float(u_numSamples);
At the end we average the samples to get a final color. This means if one ray hit the light source and all others missed, the final color will be a fairly dark pixel. If a pixel is very close to the light source and many rays hit the light, the final color is more likely to be bright.
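The rand function used to pick the starting angle isn't shown in this article. A common choice in shaders is the classic sine-based hash below; this is an assumption, and the author's version may differ.

```glsl
// Classic one-liner hash: maps a 2D coordinate to a pseudo-random value
// in [0, 1). Good enough for decorrelating ray angles between pixels.
float rand(vec2 uv) {
  return fract(sin(dot(uv, vec2(12.9898, 78.233))) * 43758.5453);
}
```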
Here’s the final result of this approach:
You'll notice that it's very noisy. This is because we're only sampling a limited number of rays per pixel. If you increase the number of rays, the result gets smoother, but the performance drops dramatically.
We could do temporal accumulation to average samples over time with different random angles each frame, but that takes some time to settle. There are other options to denoise and improve the performance, but that'll be a topic for another article.
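As a quick sketch of what temporal accumulation could look like, we can blend the current noisy result into the running average from previous frames. Here u_prevFrame and u_frameCount are assumed uniforms, not part of the code above.

```glsl
// Sketch: blend the current noisy color into an accumulation buffer.
// With weight 1 / (frameCount + 1) this converges to the running average
// of all frames, so the noise settles down over time.
vec3 prev = texture(u_prevFrame, v_uv).rgb;
vec3 accumulated = mix(prev, color, 1.0 / (float(u_frameCount) + 1.0));
```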