Roguelike Intelligence - Genetic Algorithms and Evolving State Machine AIs

Roguelike Game AI, part 3A:
        Genetic Algorithms and evolving stateless AIs.

    Now we're going to get to the first technique that
I'd refer to as "Artificial Intelligence" rather than
"Pseudointelligence."  The difference between these two
terms, colloquially, is that when you're working with an
Artificial Intelligence technique, you can be genuinely
surprised by the results, whereas a pseudointelligence simply
does what you tell it.  But, as far as most people are
concerned, if it's what decides monster actions in a game,
they're happy to call it "AI" and such distinctions are
merely pedantic.

    Keep in mind that from this point onward, we are
going "above and beyond the call" as far as implementing
roguelike monsters.  I do this kind of engineering for a
living, and I'm not using it (at least not yet) even in my
own game.

    The fundamental algorithm for most kinds of
artificial intelligence, when boiled down to first
principles, is disarmingly simple:

        1. Develop some structure capable of complicated
           behavior that's also capable of being adjusted or
           tweaked just about infinitely.

        2. If it doesn't do what you want, make small
           changes in it in a way that's expected to improve
           its behavior.

        3. If it does do what you want, stop the program.

        4. goto 2.


    In this article, I'm going to first give a general
introduction to genetic algorithms and then talk about how
to apply them to the kind of stateless and state-machine
AI's I've talked about in the first couple of articles.  The
stateless-AI's and state-machine AI's constitute a structure
capable of complicated behavior and also capable of being
tweaked or adjusted a lot.  For a genetic algorithm, the
process of "adjusting it in a way that's expected to improve
its behavior" means "adjusting members of a population
randomly and then biasing survival vs. reproduction against
the ones whose behavior is not improved."

    Here is how a genetic algorithm, in general, works.
You have a "population" of individuals.  Each individual is
judged according to a fitness function that says how good it
is.  You then pick a couple of the "better" individuals, and
make a new individual by combining their sequences in some
way.  You replace one of the "worse" individuals with this
new individual, and the cycle repeats.  Every so often, you
pick some individual in the population and change something
about that individual at random.
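
    As a rough sketch, that whole loop might look something
like this in Python, with flat lists of numbers standing in
for individuals and a made-up fitness function:

    import random

    def evolve(fitness, genome_len=8, pop_size=30, steps=2000,
               mutation_rate=0.1):
        # A "population" of individuals; each is a flat list of numbers.
        pop = [[random.uniform(-10, 10) for _ in range(genome_len)]
               for _ in range(pop_size)]
        for _ in range(steps):
            # Pick a couple of the "better" individuals as parents.
            parents = sorted(random.sample(pop, 4), key=fitness)[-2:]
            # Combine their sequences (one-point crossover here).
            cut = random.randrange(genome_len)
            child = parents[0][:cut] + parents[1][cut:]
            # Every so often, change something about it at random.
            if random.random() < mutation_rate:
                child[random.randrange(genome_len)] += random.gauss(0, 1)
            # Replace one of the "worse" individuals with the new one.
            losers = random.sample(range(pop_size), 4)
            worst = min(losers, key=lambda i: fitness(pop[i]))
            pop[worst] = child
        return max(pop, key=fitness)

    # Toy fitness: higher is better; genomes with values near 4 win.
    best = evolve(lambda g: -sum((x - 4) ** 2 for x in g))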

    There are an infinite number of variations on this
general theme.  The parameters that the engineer has to work
with are population size, mutation rate, fitness function,
gene map, degree of elitism, and method of combining
individuals.  Many systems change one or more of these
parameters during the run.  Some discussion of what these
parameters are and how they influence the runs (read:
guidance for troubleshooting) will help you figure things
out and avoid some of the more common mistakes.

        Population size is the number of individuals in the
population.  If it's too big, your genetic algorithm will
take a long time to converge on a solution.  If it's too
small, your genetic algorithm will quickly find a "solution"
but it may not be a very good one.

    The fitness function is absolutely critical. What
you're going for is that, in most cases, small changes in
the individuals should map to small changes in the overall
fitness, and similar values at similar locations in the
genomes should usually mean similar things.  But it's often
very hard to visualize in advance exactly how the fitness
function will map things.  For an extreme case, consider a
fitness function for finding a key in a modern cipher
system; basically, if the individual is the key, its
fitness is perfect and if it's anything at all else, its
fitness is perfectly wrong. A genetic algorithm can get
no "traction" here, because there's nothing to base fitness
decisions on.  For another extreme case, consider a fitness
function that looks at one number in the genome, takes four
minus the number, and squares it, regarding the lowest
numbers as the best fitness.  This presents a simple fitness
landscape with one "valley" centered on four.  Every change
to that number will map to a change in the fitness, and every
change that makes an individual more fit will bring it closer
to the global optimum.  The first system will never converge
except by blind luck; the second will converge trivially.
Roguelike-AI fitness functions are particularly hard to come
up with, but I'll talk more about that later.
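
    To make the contrast concrete, those two extremes might
look something like this (toy code; the secret_key here is
just a placeholder):

    # Extreme 1: all-or-nothing.  Every wrong key scores the same,
    # so the GA has no gradient at all to climb.
    def cipher_fitness(individual, secret_key):
        return 1.0 if individual == secret_key else 0.0

    # Extreme 2: one smooth "valley" centered on four.  (Sign is
    # flipped so that, as above, higher scores are better.)
    def valley_fitness(individual):
        return -(4 - individual[0]) ** 2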

    The mutation rate has to be considered in
combination with the mutation types and the fitness
landscape.  If you imagine that the fitness landscape has
broad curves and few local optima, you'll want a high
mutation rate with mutations that make smallish changes;
this is called an "exploitation" or "hill climbing"
strategy, because it "exploits" the area that the individual
is in, making new individuals very near it in the
expectation that those with higher fitness scores (those
"higher on the hillside") will survive to the next
generation, until the optimum is reached.  If you imagine
that the fitness landscape is very complex, you'll want a
lower mutation rate, with mutations that make larger
changes.  This is called an "exploration" strategy,
because it makes individuals that land further away
from their parents.  Early in a run, lots of these will find
themselves at a better level of fitness than their parent,
and will in turn become parents themselves; late in the run
almost all mutants die.  This is the factor that is most
commonly twiddled during a run.  In a tactic called
"simulated annealing", there are periodic 'cataclysms'
during the run where the mutation rate is raised far above
its normal level to put individuals out there all over the
fitness landscape, then returned to a low level so that the
unfit ones die out over succeeding generations.  Many
practitioners also start with a sustained, high mutation
rate and then gradually reduce it during the run.
What you need to know about mutation rate is not where to
set it, but rather what indicates that it's too high or too
low.  If the mutation rate is too low, then you'll get
convergence, but usually on some local optimum.  Successive
runs will find "solutions" of significantly differing
quality.  If the mutation rate is too high, you'll get
non-convergence; you'll have some highly fit individuals in
your population, but the proportion of highly-fit to other
individuals won't be increasing as generations pass.  When
it's just right, you'll get solutions in different runs that
have very similar (about as good as there is) fitness.
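
    As an illustration (the numbers here are arbitrary, not
a recommendation), a mutation-rate schedule that starts
high, decays during the run, and spikes for periodic
"cataclysms" might look like:

    import math

    def mutation_rate(generation, base=0.02, start=0.30,
                      half_life=200.0, cataclysm_every=1000,
                      spike=0.50):
        # Decay from the starting rate toward a low base rate...
        rate = base + (start - base) * math.exp(-generation / half_life)
        # ...with periodic "cataclysms" that scatter individuals all
        # over the fitness landscape before settling back down.
        if cataclysm_every and generation and generation % cataclysm_every == 0:
            rate = spike
        return rate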

    The degree of elitism expresses how much of a
reproductive advantage an individual in this population has
as a result of being more fit.  The classic "newbie mistake"
in genetic algorithms is to always breed the very most fit
individuals -- absolute elitism.  This doesn't work.
Selection entirely at random also doesn't work, as it
confers no advantage to the more fit individuals.  When
you're selecting individuals to breed, or individuals to
eliminate, or both, fitness has to influence the decision,
but it can't dominate it.  Even the "worst" individual in
the population should have some chance of having offspring,
and even the "best" should have some odds of failing to make
it to the next generation.

    What I usually do is pick three individuals at
random, order them from best to worst in fitness, then
assign chances to breed and chances to be replaced for each
depending on their ranking within the three.  For example:

       "normal" elitism   "small" elitism  "slight" elitism
        %breed %replace   %breed %replace  %breed  %replace
best        50    20         40   25          35    32
medium      30    30         35   35          34    33
worst       20    50         25   40          31    35

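    As a sketch of that three-way tournament (using the
"normal" column of weights, and assuming list-of-numbers
genomes and a fitness function as in the earlier sketch),
one steady-state step might be:

    import random

    def tournament_step(pop, fitness,
                        breed_weights=(50, 30, 20),
                        replace_weights=(20, 30, 50)):
        # Pick three individuals at random, ranked best to worst.
        trio = sorted(random.sample(range(len(pop)), 3),
                      key=lambda i: fitness(pop[i]), reverse=True)
        # Fitness influences, but never dominates, who breeds and
        # who gets replaced.
        mom, dad = random.choices(trio, weights=breed_weights, k=2)
        victim = random.choices(trio, weights=replace_weights, k=1)[0]
        # Combine the parents and overwrite the victim.
        cut = random.randrange(len(pop[mom]))
        pop[victim] = pop[mom][:cut] + pop[dad][cut:]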

    A high degree of elitism will eliminate potentially
valuable genetic material before that material gets
considered in combination with other things it could be
combined with; a low degree of elitism will preserve and
experiment with genetic material in different combinations,
gradually eliminating it only as the system finds absolutely no use
for it.  The more complicated the problem you're trying to
solve (the more complex the fitness function and the more
complexity-rich the combinations of genes are), the lower
the degree of elitism you want.  You need to give the system
time to make its experiments.  Elitism has to be considered
in combination with mutation rate; mutation introduces new
genes, and if mutation is high and elitism is low new genes
will be introduced faster than the experiments with
different combinations can play themselves out. A very low
degree of elitism requires a low mutation rate to work in
your favor.  If elitism is too low for the current mutation
rate, you'll get non-convergence; however, unlike the
situation where the mutation rate is too high in general,
the fitness of even the most-fit individuals in the
population will fluctuate wildly, sometimes hitting high
notes but often just wandering.

    The gene map and the method of combination also have
to be considered together.  The gene map is usually a vector
or sequence of values, and the values control different
things.  The more those things influence each other's value
in the fitness function, the more likely you want it to be
that they are transferred *together* to an offspring.  This
is not actually essential, and it doesn't mean the
difference between convergence and nonconvergence; but it
can make the convergence process significantly faster.  The
most common method of combination for a vector is
"crossover," which means that a copy is made of one of the
parents, and then, values from the other parent are copied
over them from a randomly selected "beginning" point to an
"ending" point.  The result of this is that values closest
to each other in the gene map are also most likely to be
transferred to offspring together.  And what that means in
practical terms is that you want to put values that form a
particular strategy or behavior that's important, close
together in the gene map.
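
    A bare-bones version of that crossover on flat vector
genomes might read:

    import random

    def crossover(parent_a, parent_b):
        # Start from a copy of one parent...
        child = list(parent_a)
        # ...then overwrite a randomly chosen contiguous span with
        # values from the other parent.  Genes that sit close
        # together in the map tend to travel together.
        start = random.randrange(len(child))
        end = random.randrange(start, len(child))
        child[start:end + 1] = parent_b[start:end + 1]
        return child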




    Now how do we apply this technology to developing
AI's for roguelike monsters?  Well, it's hard.  Some
problems I can answer, like what would be a good gene map.
Others, like finding a good fitness function, are hard but I
can discuss a couple of approaches to them.

    First of all, let's talk about evolving
stateless-AI's of the sort I talked about in article 1.
Each of these is, basically, a nested set of if-then
statements.  That is, they have a tree structure:

         test1              at left is an example of
         /   \              a stateless AI, each node of the
     test2  test3           tree is a test or an action.
      / \     |  \
  act1 test4 act2 test5       below is this tree as a vector,
        / \       /  \        where leftchild(n) = 2n and
     act3 act4  act5 act6     rightchild(n) = 2n+1.

test1 test2 test3 act1 test4 act2 test5  act3 act4 act5 act6
1      2     3    4    5      6    7      10   11   14   15

For example, test5 is found at index 7.  Its leftchild is
found at index 7x2=14, and its rightchild is found at 7x2+1
= 15.  Note that there is nothing at vector indexes 8 and 9,
nor 12 and 13.

    This remapping trick with trees is valuable; it lets
you code with just a number what the relationships of nodes
are within the tree.  You get isomorphic structure between
different individuals, which means the same gene will
usually mean the same thing in different individuals as
convergence happens.
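
    In code, that encoding might be a dictionary from index
to node, walked like this (the tests and actions here are
just placeholders, not the real ones from articles 1 and 2):

    def run_tree(genome, monster, default_action):
        # genome maps index -> node, with leftchild(n) = 2n and
        # rightchild(n) = 2n + 1.  Tests are callables returning
        # True/False; anything else is treated as an action.
        i = 1
        while i in genome:
            node = genome[i]
            if callable(node):
                i = 2 * i if node(monster) else 2 * i + 1
            else:
                return node           # reached an action leaf
        return default_action         # fell off the bottom

    # A tiny example in the same shape as the tree above, with
    # tests as lambdas and actions as strings:
    genome = {1: lambda m: m["hp"] < 5,      # test1
              2: lambda m: m["cornered"],    # test2
              3: "attack",                   # (right side collapsed)
              4: "fight", 5: "flee"}
    act = run_tree(genome, {"hp": 3, "cornered": False}, "wait")
    # act == "flee"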

    Also, you get the ability to use subtree swapping
for your combination function, which preserves entire
behavioral complexes from parent to offspring.  That is,
you just cut a branch off one parent tree and substitute a
branch randomly selected from the other parent tree.  That
preserves complexes of behavior that work together.  A good
mutation function is one that picks a node and replaces it
with a different node. You'd even have access to "small"
mutations, since you can replace actions or tests with
self-similar actions or tests having different parameters or
different implementations.  In practice, you'll want a
mutation that frequently picks "small" mutations and
occasionally picks "large" mutations. Just make sure that if
the mutation function substitutes a test for an action, all
the test's subbranches end in actions.
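
    Under the 2n/2n+1 indexing, with genomes stored as dicts
of index -> node as in the earlier sketch, subtree swapping
and a simple point mutation might look like this (including
the index-limit truncation discussed a bit further down):

    import random

    def branch(genome, root):
        # Collect every index in the subtree rooted at `root`.
        found, stack = [], [root]
        while stack:
            i = stack.pop()
            if i in genome:
                found.append(i)
                stack += [2 * i, 2 * i + 1]
        return found

    def swap_subtrees(parent_a, parent_b, max_index=4096):
        # Copy parent A, cut off a random branch, and graft in a
        # randomly chosen branch from parent B at the same spot.
        child = dict(parent_a)
        cut_at = random.choice(list(child))
        graft_from = random.choice(list(parent_b))
        for i in branch(child, cut_at):
            del child[i]
        for i in branch(parent_b, graft_from):
            # Replay the path from graft_from down to i, but
            # starting from cut_at in the child.
            path, j = [], i
            while j > graft_from:
                path.append(j % 2)        # 0 = left, 1 = right
                j //= 2
            new_i = cut_at
            for step in reversed(path):
                new_i = 2 * new_i + step
            if new_i <= max_index:        # truncate oversized branches
                child[new_i] = parent_b[i]
        return child

    def mutate(genome, node_pool):
        # Replace one randomly chosen node with another drawn from
        # the tests/actions allowed for this monster type.  (A real
        # version must also make sure a substituted test still leads
        # to actions.)
        genome[random.choice(list(genome))] = random.choice(node_pool)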

    Certain things you'll want to telescope into this
structure.  Remember that in the handwritten stateless-ai's
I had a test to "flight check" every action.  In this
structure, you don't want the flight checks to ever be
separated from the actions, so you'd make all actions have a
"left child" which could be another test or another action.
The child node would be executed if the specified action
couldn't be performed.  Now, that implies that these trees
are infinitely deep, which is not actually possible. So
you'll need some "default behavior" that happens when they
go off the bottom of the tree.  You can go simplistic and
pick "wait," relying on the GA to develop every bit of the
smarts on its own, or you can use the GA to amplify
handcoding by just having the "default" behavior be
executing the handwritten (and guaranteed terminating)
stateless-AI that you're trying to improve.
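
    That refinement only changes the tree walk slightly.  As
a sketch (the action objects and their "possible" flight
check are placeholders for whatever your game already has):

    def run_tree_with_fallbacks(genome, monster, fallback_ai):
        i = 1
        while i in genome:
            node = genome[i]
            if callable(node):              # a test: pick a child
                i = 2 * i if node(monster) else 2 * i + 1
            elif node.possible(monster):    # an action whose flight
                return node                 # check passes: do it
            else:                           # flight check failed:
                i = 2 * i                   # try the action's left child
        # Fell off the bottom: delegate to the handwritten,
        # guaranteed-terminating stateless AI (or just "wait").
        return fallback_ai(monster)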

    So, we have a gene mapping and a combination
function.  I'd make a decision that the "individuals" never
got more than 1024 nodes (for animals) or more than 4096
nodes (for "intelligent" monsters) in them and that the
index number of any node simply is not allowed to exceed
MAXINT.  I'd include code in the branch-swap function to
truncate any branch that wanted an index number higher than
that.  I'd pick a population size around 150, a "normal" or
"small" rate of elitism, restrict the range of tests and
behaviors to things I considered "appropriate and possible"
for that monster type, and then I'd be looking for a fitness
function.

    Fitness functions are hard to come up with for RPG game
opponents. Sad but true, the fate of most monsters in a
roguelike game is to be on screen for about twenty seconds
and then get slaughtered.  Now, you could take "survival
time" as your fitness criterion, but all you would teach
your monsters is to be very good at running away.  If they
got good enough at it, the player would never even see them.
And that's probably not what you want.  You could take "hits
of damage inflicted on the player" as the measure of monster
worth, but you run into another problem in that most
monsters don't inflict any damage at all, and some of the
most fun ones do their stuff in a form other than physical
injuries.

    About the best you can do before we cover modeling
the player in the fourth section of this series of articles
is a fitness function that evaluates, over a monster's
lifetime, what it has cost the player.  A few hitpoints?  One-
tenth of a charge off a fireball wand?  Six charges from a rod
of illumination?  A round of sword swinging?  A speed potion?
Food and time spent digging? Whatever it is, you have to
assign credit for getting the player to use it to one or more
monsters, and then try to decide its value to the player.
Remember that the last hitpoint has infinite value and the
first has a very small value, so the monster who hits a player
for 4 damage when he's low on HP should get more rewarded than
the monster who hit for 4 damage when he was full up.  Add
it all up and assume that the monster who cost the player the
most was probably the one with the best AI.  You'd need to
build this directly into your combat code; having selected the
next three "kobold AI's" to test in its GA, the system would
have to wait and use them in the next three kobolds the player
meets and see which ones did better or worse.
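
    The hitpoint part of that accounting wants a curve
rather than a flat per-point value.  One arbitrary choice of
curve, purely for illustration:

    def hp_damage_value(hp_before, damage):
        # Value the player's i-th remaining hitpoint at 1/i, so a
        # hit that cuts deep into a nearly-dead player's HP scores
        # far more than the same hit at full health.
        damage = min(damage, hp_before)     # don't count overkill
        return sum(1.0 / hp
                   for hp in range(hp_before, hp_before - damage, -1))

    # 4 damage at 5 HP:   hp_damage_value(5, 4)  ~= 1.28
    # 4 damage at 50 HP:  hp_damage_value(50, 4) ~= 0.08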

    Since there'd be a huge level of variance due to just
plain luck, I'd use a very high degree of elitism on the
principle that I'm just compensating for getting noisy
measurements anyway.

    But, the real-time test of interacting with the
player simply doesn't conclude enough experiments fast
enough to make a GA approach work really well.  You could
do it, but it would take a lot of games to develop good
monster AI's.  A "nice" fitness function, ideally, should
be something you can calculate without player input, so you
can leave the GA chewing on a problem overnight (or for a
week) and see what it comes up with, instead of waiting
for hundreds of games by dozens of playtesters, per run,
to provide the data.

    Basically, a genetic algorithm allows you to say
what goals to try to achieve without specifying the solution
-- but you have to be able to state those goals, and usually
the goals are hard to precisely define.  What normally
happens is that the first six or eight runs show you what's
wrong with the ways you've formulated your fitness function
by "raping" it in various ways with the simplest (and least
interesting) possible behavior that gets a good fitness
score in a way you didn't think of.  After that you start
getting results that point out subtle errors in your
conception of how the monsters' actions and tests worked, or
point out exploitable bugs in your combat code.  And once
you work through all of those, you'll start getting usable,
novel results and start runs where you do a lot of
tweaking of your GA parameters such as mutation rate,
elitism, etc.  A lot of GA runs will result in things that
are pathetic, silly, or too embarrassing to talk about.

        But don't despair; there is a worthwhile fitness
function that can be successfully applied to a roguelike
game monster AI.  I'll explain it, but we can't actually do
it until after part 4, Modeling the Player.

    The only worthwhile approach to a fitness function
for monsters is run plus status evaluation.  Basically, this
takes a monster and its situation and tries to project it
some number of rounds into the future with different AI's.
At the end of that time, the status of the different
"virtual monsters" (one for each AI) is evaluated, and the
one with the most improved (or least degraded) status is
regarded as the best for purposes of this test.  It's much
easier to write a status evaluation function than it is to
write an AI evaluation function, so this is a method of
leveraging what you can do relatively easily to get what is
much much harder to do.  The big problem with this approach
is, since you're mainly interested in situations that
involve the player, this requires you to guess what the
player is going to do for the next few rounds.  And that is
hard.
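
    In outline, that fitness test might look like the sketch
below.  The simulation step, the status evaluator, and the
player model are all assumed to exist already; they are the
hard parts, and the player model is the subject of part 4:

    def run_plus_status_fitness(monster, world, candidate_ais,
                                horizon, simulate_round,
                                evaluate_status, predict_player):
        # Project the same monster `horizon` rounds into the future
        # once per candidate AI, then score each "virtual monster"
        # by how much its status improved (or how little it
        # degraded) relative to where it started.
        baseline = evaluate_status(monster, world)
        scores = {}
        for ai in candidate_ais:
            m, w = monster.copy(), world.copy()
            for _ in range(horizon):
                simulate_round(m, w, ai, predict_player(w))
            scores[ai] = evaluate_status(m, w) - baseline
        return scores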


Ray Dellinger