I just began reading more about Markov Chain Generators today, and am really intrigued by the whole process of building one. From my understanding, the future state depends on the statistics of the past states leading up to the present one.
Example:
Hello World. Hello Dolly. Hello World.
"World" follows "Hello" ~66% of the time in that source.
If that is always the case, how then do you avoid outputting the same results each time? The statistical occurrences won't change with a static string, so am I right to assume that no variants will ever be generated unless the source data is modified in some way?

How could I get variations from a static source, respecting the statistical values yet allowing some flexibility? Using my example above, how do I allow my generator to follow "Hello" with "Dolly," when "Dolly" only follows "Hello" ~33% of the time?

I guess what I'm asking is: how do I base the probability of my next selection on the statistical frequency of the words that follow my present selection? That way, "Dolly" shows up ~33% of the time and "World" shows up ~66% of the time. Or am I completely lost here?
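If it helps clarify what I mean, here is a minimal sketch of the weighted draw I'm imagining (the dictionary of counts and the helper name are just my own illustration, not from any library):

```python
import random

# Transition counts observed in "Hello World. Hello Dolly. Hello World."
followers_of_hello = {"World.": 2, "Dolly.": 1}

def pick_next(counts):
    """Draw one follower with probability proportional to its observed count."""
    words = list(counts.keys())
    weights = list(counts.values())
    return random.choices(words, weights=weights, k=1)[0]

samples = [pick_next(followers_of_hello) for _ in range(3000)]
print(samples.count("World.") / 3000)  # roughly 0.66
print(samples.count("Dolly.") / 3000)  # roughly 0.33
```

As I understand it, the same effect falls out automatically if the generator just picks uniformly at random from the raw list of observed followers, since "World." would appear twice in that list and "Dolly." once.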