How I calculate spell casting successes


The idea for this article came about during my research and design phase, while I was looking at different ways to implement mana as one of the costs of casting spells.

During that research I quickly realised I had no way to determine whether a spell could be successfully cast or not. Thankfully my game is still at the Alpha stage (at the time of writing this article), so there wasn’t any real impact on either the gameplay or the design/coding structure.

My initial attempts at using Google to search for a formula, or articles on putting one together, weren’t very successful. In fact it was quite frustrating, as people kept talking around what I wanted – hence this article.

It wasn’t until I looked into how existing games approached this subject that I got a sense of what could be achieved.

The games I looked into were:

  • Nethack
  • Morrowind
  • Dungeons and Dragons 3rd edition

And they all provided inspiration to a greater or lesser degree.

I even took a detailed look into the d20 System (which underpins the D&D 3rd edition game – in case you didn’t know), but I found nothing of real interest for calculating a spell success formula.

At this point it is worth mentioning that I settled on using a modified form of the Nethack approach with some influence from Morrowind.

One interesting point that came from my research was how the different games (and I) approached the notion of determining success, which boiled down to the perspective of those games’ designers/developers.

Did they see the calculation as a way of determining successful spell casting, or did they see it as a way to minimise the chance of failing to cast a spell?

This rather simple question is quite an important design point, as it will influence the attributes you want to use in your calculations. If, for example, you want to generate a number that minimises failure, then you would take a starting point of “this spell cannot be cast” or, put another way, “there’s a 100% certainty of not casting this spell”. You would then construct a formula that reduces this number towards zero (or as close as possible). Your final calculation would revolve around getting a result that is higher than this spell cast failure number, which would indicate the spell has been successfully cast.

If, on the other hand, you take the design standpoint of “I want the caster to succeed at casting this spell”, then you would start with a spell cast success number of zero and apply a different set of attributes to increase that number as high as possible. Your final calculation would then revolve around achieving a result that is lower than this spell cast success number, which would indicate the spell has been successfully cast.
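To make the difference concrete, here is a minimal sketch of both comparisons in Python. The numbers and the caster_bonuses variable are made up purely for illustration:

    import random

    caster_bonuses = 65  # made-up total of the caster's positive modifiers

    # Approach 1: minimise failure. Start from a 100% certainty of
    # failure, reduce it, and succeed on a roll ABOVE what remains.
    failure_chance = 100 - caster_bonuses
    roll = random.randint(1, 100)
    cast_ok = roll > failure_chance

    # Approach 2: build up success. Start from zero, increase it,
    # and succeed on a roll BELOW the success number.
    success_chance = 0 + caster_bonuses
    roll = random.randint(1, 100)
    cast_ok = roll < success_chance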

Which approach did I take?

I took the “I want the caster to succeed at casting this spell” approach.

One important thing that I learnt from researching these games is that you need to determine exactly which key elements (or attributes) will impact your calculation routines.

In my game the key elements for determining whether a spell is successfully cast are:

  • The spell’s current tier level
  • The caster’s primary spell casting stat
  • The spellcaster’s skill level for that spell
  • The spellcaster’s experience level for that spell
  • The spellcaster’s luck attribute
  • Any penalties currently incurred by the spellcaster

There were a few more elements I could have included, such as further stats taken from the spellcaster and other environmental restrictions, but I felt that the above set provides enough variability and should (hopefully) keep the player from fully understanding how things are calculated.
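As a rough illustration, those key elements could be grouped into a couple of simple structures. All of the names here are my own invention for the sketch, not the game’s actual code:

    from dataclasses import dataclass

    @dataclass
    class SpellcasterState:
        primary_stat: int      # primary spell casting stat
        spell_skill: int       # skill level for this specific spell
        spell_experience: int  # experience level for this specific spell
        luck: int              # luck attribute
        penalties: int = 0     # total penalties currently in effect

    @dataclass
    class Spell:
        name: str
        tier: int              # the spell's current tier level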

How do these key elements influence the spell success calculation routines?

Well, not to give too much away, each of the key attributes, apart from the penalties attribute, provides a positive input into the calculations. All of the key attributes are included at different points in the calculations. I can, however, tell you that only a small portion of the spellcaster’s luck attribute will be used during the calculations.
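For example, the luck contribution could be scaled down like this (the 0.1 factor is purely a guess to show the idea, not the game’s actual figure):

    def luck_bonus(luck: int) -> float:
        # Only a small portion of the luck attribute contributes;
        # the 0.1 scaling factor is a made-up example value.
        return luck * 0.1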

I can be more transparent with the steps my game goes through to produce a final result (a rough sketch of the whole pipeline follows the list):

1. Determine the base spell casting chance; this uses the primary spell casting attribute of the caster.

2. Determine whether the spell is easy or difficult for the caster to cast; this is a calculated value rather than a simple “natural” value.

3. Calculate the easy/difficult casting modifier, which gives us an intermediary value that affects how the next step is actually calculated.

4. Add the base casting chance to the calculated modifier, which gives us our first view of a final spell casting success number.

5. Calculate any spell penalties that apply to the caster at the point of casting; this can include any equipment worn, the status of the caster, plus any existing spell buffs and debuffs.

6. Add everything together, sprinkle in a little bit of luck, and we have our final spell casting success number, which is then used in the final calculation to determine whether the spell has been successfully cast and how much meta-energy is required.

7. Compare our number against the outcome of the great RNG god and see what the final result is.
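Pulling those steps together, here is a rough end-to-end sketch. Every formula, constant, and threshold in it is an assumption of mine to illustrate the flow – the game’s real calculations are different (and the meta-energy cost is left out entirely):

    import random

    def spell_success(primary_stat, spell_tier, skill, experience,
                      luck, penalties):
        # 1. Base casting chance from the caster's primary stat
        #    (the x2 scaling is a placeholder).
        base_chance = primary_stat * 2

        # 2. Easy or difficult? Here: difficult when the spell's
        #    tier outstrips the caster's skill with it.
        difficult = spell_tier > skill

        # 3. The easy/difficult modifier, an intermediary value
        #    that changes how the next step behaves.
        modifier = (skill + experience) // (2 if difficult else 1)

        # 4. First view of the spell casting success number.
        success_number = base_chance + modifier

        # 5. Subtract penalties from equipment, caster status,
        #    and any buffs/debuffs in effect.
        success_number -= penalties

        # 6. Sprinkle in a little luck (only a small portion of it).
        success_number += luck * 0.1

        # 7. Consult the great RNG god: a roll below the success
        #    number means the spell is cast, otherwise it fizzles.
        return random.randint(1, 100) < success_number

    # Example: a mid-level caster attempting a tier-3 spell.
    if spell_success(primary_stat=14, spell_tier=3, skill=5,
                     experience=4, luck=12, penalties=6):
        print("The spell was cast!")
    else:
        print("The spell fizzled away.")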

And that is that, our spell has either been cast or it has fizzled away.

What do you think? Does this seem like a good way to calculate the success chance of casting a spell? Do you know of a different method? Either way, let me know by posting a comment – I’d love to hear about it.

P.S.

Whilst writing this article I got to thinking: why couldn’t each race have a different luck modifier, i.e. each race applies more or less luck than the others?

Another thought I had was to create a spell that improves your luck for a period of time.

Till next time, happy coding.
