Keras Reinforcement Learning Projects

Simulating Random Walks

Stochastic processes involve systems that evolve over time (but also, more generally, in space) according to probabilistic laws. Such models describe complex real-world phenomena that may behave randomly; these phenomena are more common than we might believe. We encounter them whenever the quantities we are interested in cannot be predicted with absolute certainty. However, when such a phenomenon shows a variability of possible outcomes that can somehow be explained or described, we can introduce a probabilistic model of it.

For example, consider the motion involved in a random walk. We study an object that is constrained to move along a straight line in either of the two allowed directions. At each movement, it steps randomly to the right or to the left, with each step being of equal length and independent of the other steps. Think of a point that moves randomly forward or backward along a line at discrete intervals of time, covering a fixed distance at each interval: this is a random walk. A Markov chain is a stochastic process in which the evolution of the system depends only on its present state and not on its past states; it is characterized by a set of states and by the probabilities of transition between them. In this chapter, we will simulate a random walk using Markov chains through a Python code implementation.
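As a first taste of the code we will develop later in the chapter, the following minimal sketch simulates such a symmetric walk on a line (the random_walk function and its parameters are illustrative only, not the chapter's final implementation):

import random

def random_walk(num_steps, step_size=1):
    """Simulate a symmetric one-dimensional random walk.

    At each discrete time step, the walker moves one step to the
    right or to the left with equal probability. The next position
    depends only on the current one, which is the Markov property.
    """
    position = 0
    path = [position]
    for _ in range(num_steps):
        if random.random() < 0.5:
            position += step_size
        else:
            position -= step_size
        path.append(position)
    return path

# Example: positions visited during a 20-step walk starting at the origin
print(random_walk(20))

Because each move depends only on the current position, this simple walk is already a Markov chain with the integer positions as its states.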

In this chapter, we will cover the following topics:

  • Random walk
  • Random walk simulation
  • Basic probability concepts
  • Markov chain
  • Forecasting using a Markov chain
  • Markov chain text generator

By the end of the chapter, the reader will know the basic concepts of Markov processes and random walks, how random walk algorithms work, how to use a Markov chain to forecast the weather, and how to simulate random walks using Markov chains.