Imagine that we have an entity which does nothing until a random time in the future. At regular intervals it generates a random number, and once the random number is small enough, it does whatever it was waiting for. Maybe it then returns to the same idle state, polling the random numbers periodically again. In code:

```
void() gizmo_idle =
{
    // nextthink is an absolute time, so schedule the next poll 0.1s from now
    self.nextthink = time + 0.1;
    self.think = gizmo_idle;
    if (random() < 0.1)
        gizmo_activate();
};
```

We keep calling this code again and again – surely there's got to be a better way? *Okay, I realise that there's rarely an actual need to optimise things in QuakeC for performance, so just go with it for now. We'll find there are benefits beyond reducing the amount of busy work our code does. The most QuakeC-intensive entity in Quoth – so expensive it rockets to the top of the profile chart after adding just one to a map – had repeated idle polling like this before optimisation.*

Time for our first bit of maths. The process above – testing a random number against a fixed threshold until we get a success – means the number of attempts follows the Geometric Distribution. Read that link to see that the average number of trials needed for a success in our code is 1 / 0.1 = 10. Since we are testing every 0.1 seconds, we expect the gizmo to activate after an average of 1 second. We can graph the probability of getting a success:

The probability is decreasing each time – this is because each column shows the probability of success after *exactly* that many attempts. The chance of succeeding after precisely 3 attempts is reduced due to the possibility of succeeding after 1 or 2! The chance of succeeding after *3 or fewer* attempts can be found by adding up the heights of the first 3 bars: roughly 0.1 + 0.09 + 0.081 ≈ 0.271.
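To make those numbers concrete, here's a small sketch (in Python rather than QuakeC, purely for illustration – the function names are my own): the bar heights follow the geometric distribution, the "k or fewer" probability is one minus the chance of k straight failures, and a quick simulation confirms the average of 10 attempts.

```python
import random

def prob_success_at(k, p=0.1):
    """Height of bar k: fail k-1 times, then succeed once."""
    return (1.0 - p) ** (k - 1) * p

def prob_success_by(k, p=0.1):
    """Chance of success within k attempts: 1 - P(k straight failures)."""
    return 1.0 - (1.0 - p) ** k

print([round(prob_success_at(k), 3) for k in (1, 2, 3)])  # [0.1, 0.09, 0.081]
print(round(prob_success_by(3), 3))                        # 0.271

def attempts(p=0.1):
    """Simulate the polling entity: count polls until random() < p."""
    n = 1
    while random.random() >= p:
        n += 1
    return n

random.seed(1)
runs = [attempts() for _ in range(100_000)]
print(sum(runs) / len(runs))  # close to the expected 1 / p = 10 polls
```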

Here’s the big idea: generate a target value from `random()`, and count how many of the bars we need to “stack” to reach a height greater than the target. The chance of stopping in each bar is equal to the height of that bar, so “number of attempts” has the same distribution as “number of bars”. We can work backwards from number of bars to calculate the corresponding nextthink time, e.g. if we generate 5 bars then wait for 0.5 seconds.
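The bar-stacking idea can be sketched directly (again in Python for illustration – `bars_to_reach` is a name of my own invention, not Quoth code):

```python
import random

def bars_to_reach(target, p=0.1):
    """Stack bar heights (the geometric probabilities) until the
    running total exceeds the target value."""
    k = 0
    total = 0.0
    height = p                 # height of the first bar
    while total <= target:
        k += 1
        total += height
        height *= (1.0 - p)    # each bar is (1-p) times the previous one
    return k

# One random() call picks the number of bars; multiplying by the
# 0.1s poll interval turns that into a nextthink delay.
random.seed(2)
r = random.random()
print(0.1 * bars_to_reach(r))
```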

This plan is ok, but it’s fiddly to calculate and sum the heights of the bars. What we can do is replace the difficult staircase shape of the original graph with a smooth curve that approximates it. We can appeal to the magic of calculus to get a formula that gives us the area under the curve – replacing the total height from above with the area, to give us the probability of a success. We can *invert* that formula to go straight from a `random()` value to the time that it corresponds to.

The smooth curve we’ll be using comes from the Exponential Distribution. If you’ve got a background in stats, you’ll know both the geometric and exponential distributions have the memoryless property, so this is a sensible substitution. We can certainly see from the graph that it has the right kind of shape:

Now the maths bit happens: we integrate, invert, and swap one of the variables about to get a formula which calculates an end-time when you input a random value:

**t = -A log(r)**

Here **A** is the expected length of time between activations, **r** is a seed value between 0 and 1 which should be generated from `random()`, and **t** is the resulting time that the entity should wait for.
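As a quick sanity check of the formula (a Python sketch, not Quoth code – `random_delay` and its re-roll guard are my own), sampling **t = -A log(r)** many times should give delays averaging **A**:

```python
import math
import random

def random_delay(A, rng=random.random):
    """t = -A * log(r): an exponentially distributed delay with mean A."""
    r = rng()
    while r == 0.0:        # log(0) is undefined; re-roll this measure-zero case
        r = rng()
    return -A * math.log(r)

# The sample mean should settle near the chosen average A = 1.0 seconds.
random.seed(3)
samples = [random_delay(1.0) for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to 1.0
```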

So what do we gain? Well, a few little things:

- We only have to call `random()` once per activation
- It is known in advance how long the entity will take to fire
- The intervals between activations can now be of arbitrary length, rather than multiples of 0.1 seconds
- It’s much cooler (arguably)

The only missing piece is the log function, which isn’t available as a QuakeC builtin. The following code suffices:

```
float(float x) special_log =
{
    local float n, xp, sum;
    n = 1;
    xp = x;
    while (xp > 0.001 * n)  // bound the size of the next term
    {
        sum = sum + xp / n;
        n = n + 1;
        xp = xp * x;
    }
    return sum;
};
```

*n.b. the name special_log is a warning that this function is only suitable for our purposes here. In reality it calculates **-log(1 - x)**. The minus sign is actually helpful: it saves us applying one later on. The 1 - x part has no impact because x is a random value between 0 and 1, so 1 - x is equally random. Lastly, this function only behaves correctly for values between 0 and 1, so it’s lucky we don’t feed it anything else.*
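To see that the series really does converge to -log(1 - x), here's a direct Python port of special_log (the port is mine, for illustration only) compared against the real thing:

```python
import math

def special_log(x):
    """Port of the QuakeC series sum of x**n / n, which converges
    to -log(1 - x). Only meaningful for 0 < x < 1, matching the
    caveats described above."""
    n = 1
    xp = x
    total = 0.0
    while xp > 0.001 * n:   # bound the size of the next term
        total += xp / n
        n += 1
        xp *= x
    return total

for x in (0.1, 0.5, 0.9):
    print(x, special_log(x), -math.log(1.0 - x))
```

Note that the truncation bound makes this an approximation: the error grows as x approaches 1, but stays small enough for game-timing purposes.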

That’s really all there is to today’s trick. When you want to delay an entity before it thinks, generate a random number, call `special_log` on that random value, then multiply the result by the average waiting time you desire. Set the entity’s think time that far in the future – job done! I won’t insult your skills by writing out the code for that. Instead I’ll close with how knowing in advance how long the entity sleeps can help in subtle ways. It makes it easy to “clamp” the delay, creating a minimum or maximum value, without any “book-keeping”. It also lets you give an entity multiple random processes running in parallel: store the next activation time for each process in a separate field, then every time the entity thinks, choose the smallest of the activation times as the nextthink – and set the matching think function. I’ll expand on multiple think tracks in a future post.
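For instance, clamping becomes a one-liner once the whole delay is known up front. A Python sketch (the name `clamped_delay` and its parameters are mine, not Quoth code):

```python
import math
import random

def clamped_delay(A, lo, hi, rng=random.random):
    """Exponential delay with mean A, clamped into [lo, hi].
    Because the delay is computed in one go, no book-keeping
    across repeated polls is needed."""
    # random() is in [0, 1), so 1 - r is in (0, 1] and log never sees 0.
    t = -A * math.log(1.0 - rng())
    return min(max(t, lo), hi)

random.seed(4)
print(clamped_delay(1.0, 0.2, 3.0))
```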

*Bonus Content: Here’s an alternative log function. It’s a bit more complicated but it avoids all the caveats of the special_log. It may even be quicker – each step takes more computation but convergence may be more rapid.*

```
float(float x) log =
{
    local float n, xp, sum, term;
    n = 1;
    xp = (x - 1) / (x + 1);
    term = 2 * xp;
    xp = xp * xp;
    do
    {
        sum = sum + term / (2*n - 1);   // odd denominators 1, 3, 5, ...
        term = term * xp;
        n = n + 1;
    } while (fabs(term) > 0.0005 * n);  // bound the size of the next term
    return sum;
};
```
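Here's a Python port of that series for checking it against a reference log (the port is mine, purely for verification; it mirrors the QuakeC above):

```python
import math

def series_log(x):
    """Log via the atanh series: 2 * sum of y**(2n-1) / (2n-1),
    where y = (x - 1) / (x + 1). Valid for any x > 0."""
    n = 1
    y = (x - 1.0) / (x + 1.0)
    term = 2.0 * y
    y2 = y * y
    total = 0.0
    while True:
        total += term / (2 * n - 1)   # odd denominators 1, 3, 5, ...
        term *= y2
        n += 1
        if abs(term) <= 0.0005 * n:   # bound the size of the next term
            break
    return total

for x in (0.5, 2.0, 10.0):
    print(x, series_log(x), math.log(x))
```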