Dynamic Monte-Carlo is a method used in physics to simulate things where you know the energy, but not the detailed rules for the dynamics. The core of the algorithm is to pick a random location, make a change, and then figure out whether to keep that change or throw it out.

A basic example is the Ising model, a model of spins in magnets that you could just as well use for terrain generation, roving clouds of fog, etc. The components are:

– A grid of points that can take on discrete values (1 or -1)

– A way to evaluate the change in energy due to flipping one of the values

– A rule for picking a site, flipping it, and deciding based on the change in energy whether to accept or reject the flip

In the Ising model, the local change in energy from flipping one site is dE = 2*s*N, where s is the old value at the site and N is the sum of values over its nearest neighbors. This comes from an energy that is a sum of terms of the form E = -s_i*s_j over neighboring pairs (in physics negative is ‘good’, so this energy tries to make all the sites have the same value): the site contributes -s*N before the flip and +s*N after, so the difference is 2*s*N.

For the Ising model (and in general for models of equilibrium physical systems) there’s a simple rule for accept/reject: if dE is negative, keep the change; otherwise, revert it with a probability 1-exp(-dE/T), where T is the temperature. Once you’ve tested a number of sites equal to the number of sites in the system, you’ve advanced time by one step. The conversion factor is important if you want consistent time between systems of different sizes.
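The whole loop fits in a few lines. Here's a minimal sketch of one Metropolis step for the Ising model; the grid setup, the periodic boundaries, and names like `deltaE` and `metropolisStep` are my own choices for illustration, not part of any standard library:

```javascript
// RES x RES grid s of +1/-1 values, initialized randomly.
var RES = 32;
var s = [];
for (var i = 0; i < RES; i++) {
  s[i] = [];
  for (var j = 0; j < RES; j++) s[i][j] = Math.random() < 0.5 ? 1 : -1;
}

// Sum of the four nearest-neighbor values, with periodic wrap-around.
function neighborSum(x, y) {
  return s[(x + 1) % RES][y] + s[(x - 1 + RES) % RES][y] +
         s[x][(y + 1) % RES] + s[x][(y - 1 + RES) % RES];
}

// dE = 2*s*N for flipping the site at (x, y).
function deltaE(x, y) {
  return 2 * s[x][y] * neighborSum(x, y);
}

// One accept/reject test at a random site.
function metropolisStep(T) {
  var x = Math.floor(Math.random() * RES);
  var y = Math.floor(Math.random() * RES);
  var dE = deltaE(x, y);
  // Keep the flip if dE <= 0; otherwise keep it with probability exp(-dE/T).
  if (dE <= 0 || Math.random() < Math.exp(-dE / T)) {
    s[x][y] = -s[x][y];
  }
}

// RES*RES tested sites = one unit of time, so time is
// comparable between systems of different sizes.
function sweep(T) {
  for (var n = 0; n < RES * RES; n++) metropolisStep(T);
}
```

Calling `sweep(T)` repeatedly then advances the system one time step at a time at temperature `T`.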

The result is that over time, the system will relax towards a state that has the right energy for the temperature T that you choose. Okay, so that’s interesting maybe but why is it useful?

Well, putting aside solving any real physics problems, this is a way to generate dynamically evolving patterns. If the temperature is very high, this gives you random noise. As you lower the temperature that noise will start to coalesce into blobs of +1 and -1, and at some point (the critical temperature, which is about 2.27 for a square lattice) you’ll get a fractal pattern.

The resultant blobs could be used as land-masses. I’ve used them in a roguelike to have sort of evolving, fragmented clouds of fog that move about. But you can also make the energy more interesting to get more interesting patterns. For example, let’s say I use a square grid with four neighbors per site and make the energy look like:

`E = (s>0)*abs(N)`

The corresponding change in energy is a little more complicated, because you have to sum over all 5 affected sites when one site changes. At this point, unless performance is an issue, I’d just make the change and calculate the energy before and after with a sum over the affected neighbors. E.g., if the site is at x,y and s is the RES x RES array of values, then the following bit of JavaScript gives the idea:

```javascript
function bound(x) { // Clamp values to stay within the grid
  if (x >= RES) return RES - 1;
  if (x < 0) return 0;
  return x;
}

function getNeighbors(x, y) { // Sum of the neighbor values at x,y
  var xm, ym, N = 0;
  // There’s a cheaper way to get the nearest neighbors, but I’m trying to keep the code short here.
  for (ym = y - 1; ym <= y + 1; ym++)
    for (xm = x - 1; xm <= x + 1; xm++) {
      if (Math.abs(xm - x) + Math.abs(ym - y) == 1) // Only the four nearest neighbors
        N += s[bound(xm)][bound(ym)];
    }
  return N;
}

function getEnergy(x, y, val) { // Gets the energy at x,y if s[x][y] = val
  var E = 0, xm, ym;
  var oldVal = s[x][y];
  s[x][y] = val;
  if (val > 0)
    E = Math.abs(getNeighbors(x, y));
  for (ym = y - 1; ym <= y + 1; ym++)
    for (xm = x - 1; xm <= x + 1; xm++) {
      if (Math.abs(xm - x) + Math.abs(ym - y) == 1)
        if (s[bound(xm)][bound(ym)] > 0) {
          var N = getNeighbors(bound(xm), bound(ym));
          E += Math.abs(N);
        }
    }
  s[x][y] = oldVal;
  return E;
}
```

What does this do? Well, if s=-1 the site has no impact on the energy. If s=1, the site wants to configure itself so that N=0, which means that it should have the same number of +1 and -1 neighbors. That means that you’re going to end up with shapes that have two links per site – so you should get something like the walls of a maze.

This method is basically the core of a whole family of techniques built around the accept/reject premise. If you want there to always be the same number of +1’s and -1’s, then instead of flipping a site you can exchange it with a neighbor and then decide whether to accept or reject the swap. This gives you a conservation law. You can even throw out the energy entirely and just use the accept/reject idea on some non-energetic rule. You can also add more states, multiple overlapping fields, etc.
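The exchange variant can be sketched like this. It's a minimal illustration under my own conventions (periodic boundaries, a checkerboard starting grid, names like `exchangeStep`), using the same Ising-style local energy -s*N:

```javascript
// Conserved dynamics: swap a site with a random neighbor instead of flipping it.
var RES = 16;
var s = [];
for (var i = 0; i < RES; i++) {
  s[i] = [];
  for (var j = 0; j < RES; j++) s[i][j] = (i + j) % 2 ? 1 : -1; // checkerboard
}

function neighborSum(x, y) { // four nearest neighbors, periodic boundaries
  return s[(x + 1) % RES][y] + s[(x - 1 + RES) % RES][y] +
         s[x][(y + 1) % RES] + s[x][(y - 1 + RES) % RES];
}

function totalOnes() { // count of +1 sites -- conserved by the swap move
  var c = 0;
  for (var x = 0; x < RES; x++)
    for (var y = 0; y < RES; y++) if (s[x][y] > 0) c++;
  return c;
}

function exchangeStep(T) {
  var x = Math.floor(Math.random() * RES);
  var y = Math.floor(Math.random() * RES);
  var dirs = [[1, 0], [-1, 0], [0, 1], [0, -1]];
  var d = dirs[Math.floor(Math.random() * 4)];
  var x2 = (x + d[0] + RES) % RES, y2 = (y + d[1] + RES) % RES;
  if (s[x][y] === s[x2][y2]) return; // swapping equal values changes nothing
  // Energy of the two affected sites before and after, with E_local = -s*N.
  // The shared bond is counted twice but identically in both sums, so it
  // cancels in the difference.
  var before = -s[x][y] * neighborSum(x, y) - s[x2][y2] * neighborSum(x2, y2);
  var tmp = s[x][y]; s[x][y] = s[x2][y2]; s[x2][y2] = tmp;
  var after = -s[x][y] * neighborSum(x, y) - s[x2][y2] * neighborSum(x2, y2);
  var dE = after - before;
  if (!(dE <= 0 || Math.random() < Math.exp(-dE / T))) {
    tmp = s[x][y]; s[x][y] = s[x2][y2]; s[x2][y2] = tmp; // reject: swap back
  }
}
```

Because every move either swaps a +1 with a -1 or does nothing, the number of +1 sites never changes, which is exactly the conservation law mentioned above.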

For example, a question that came up on r/gamedev recently had to do with building a completely connected land-mass for a game by adding cells to the boundaries of the existing land. This is basically a form of Monte-Carlo – pick a random site, see if it borders land, and if so then turn it into land, otherwise don’t. By adding something like an energy to this process, the details of the shape of the land can be subtly altered. If water that borders land on two sides and water on the other two does not like to change into land, you’ll get rivers.
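That land-growing process can be sketched as follows. This is an illustration, not anyone's production code: the grid size, the seed placement, the 0.9 resistance parameter, and names like `growStep` are all my own choices:

```javascript
// Grow a connected land-mass by accept/reject: a water cell may become land
// only if it already borders land.
var RES = 24;
var grid = []; // 1 = land, 0 = water
for (var i = 0; i < RES; i++) {
  grid[i] = [];
  for (var j = 0; j < RES; j++) grid[i][j] = 0;
}
grid[RES >> 1][RES >> 1] = 1; // seed island in the middle

function landNeighbors(x, y) { // land cells among the 4 neighbors (off-grid counts as water)
  var n = 0;
  if (x + 1 < RES && grid[x + 1][y]) n++;
  if (x - 1 >= 0 && grid[x - 1][y]) n++;
  if (y + 1 < RES && grid[x][y + 1]) n++;
  if (y - 1 >= 0 && grid[x][y - 1]) n++;
  return n;
}

function growStep() {
  var x = Math.floor(Math.random() * RES);
  var y = Math.floor(Math.random() * RES);
  if (grid[x][y]) return;       // already land
  var n = landNeighbors(x, y);
  if (n === 0) return;          // not on the boundary: reject
  // Energy-like bias: water pinched between exactly two land cells resists
  // becoming land, which tends to leave river-like channels behind.
  if (n === 2 && Math.random() < 0.9) return;
  grid[x][y] = 1;               // accept
}

function landCount() {
  var c = 0;
  for (var x = 0; x < RES; x++)
    for (var y = 0; y < RES; y++) c += grid[x][y];
  return c;
}
```

Since a cell only ever becomes land while touching existing land, every accepted cell is connected to the seed, so the land-mass stays in one piece by construction.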

Now, this method does have disadvantages. It’s computationally very expensive compared to, e.g., just throwing together a few noise functions. I would be cautious about using it in anything that has to be real-time unless you’re doing a GPU implementation of the algorithm, and these algorithms in particular are less efficient on GPUs than, e.g., partial differential equations, due to the random site selection. So bear that in mind.

Next time I’ll extend some of these ideas to things like forest fire models and population dynamics, and we’ll talk a bit about using evolution in games.

**About the author:**

**The Hermit** is the owner of a game development group called Urban Hermit Games, working on an HTML5 RPG game called *Travelogue*. *Check out his game development blog.*