
Markov Chains

Suppose that in a small town there are three places to eat: two restaurants, one Chinese and one Mexican, and a pizza place. Everyone in town eats dinner at one of these places or at home.

Assume that 20% of those who eat at the Chinese restaurant go to the Mexican restaurant next time, 20% eat at home, and 30% go to the pizza place. Of those who eat at the Mexican restaurant, 10% go to the pizza place, 25% go to the Chinese restaurant, and 25% eat at home next time. Of those who eat at the pizza place, 30% go to the Chinese restaurant, 10% go to the Mexican restaurant, and 30% eat at home. Of those who eat at home, 20% go to the Chinese restaurant, 25% go to the Mexican restaurant, and 30% go to the pizza place. We call this situation a system. A person in town can eat dinner at one of these four places; each of them is called a state. In our example, the system has four states. We are interested in how successful these places will be in terms of their business. For example, after a given period of time, what percentage of people in town will go to the pizza place?

Suppose there is a physical or mathematical system that has $k$ possible states and, at any one time, the system is in one and only one of its $k$ states. Suppose further that at a given observation period, say the $n^{th}$ period, the probability of the system being in a particular state depends only on its state at the $(n-1)^{th}$ period. Such a system is called a Markov Chain or Markov process.

In the example above there are four states for the system. Define $a_{ij}$ to be the probability that the system is in state $i$ after it was in state $j$ (at any observation period). The matrix $A = (a_{ij})$ is called the transition matrix of the Markov Chain.

So the transition matrix for the example above is

$A= \left[ \begin{array}{rrrr}
.25&.20&.25&.30\\
.20&.30&.25&.30\\
.25&.20&.40&.10\\
.30&.30&.10&.30\\
\end{array}
\right]$

The first column represents the state of eating at home, the second column represents the state of eating at the Chinese restaurant, the third column represents the state of eating at the Mexican restaurant, and the fourth column represents the state of eating at the pizza place.

Similarly, the rows respectively represent eating at home, eating at the Chinese restaurant, eating at the Mexican restaurant, and eating at the pizza place.




\begin{displaymath}\begin{array}{rrrr}
\mbox{H} & \mbox{C} &\mbox{M} &\mbox{P} \\
\end{array}\end{displaymath}


\begin{displaymath}A =\left[ \begin{array}{rrrr}
.25&.20&.25&.30 \\
.20&.30&.25&.30 \\
.25&.20&.40&.10 \\
.30&.30&.10&.30 \\
\end{array}
\right]
\begin{array}{r}
\mbox{H}\\
\mbox{C}\\
\mbox{M}\\
\mbox{P}\\
\end{array}\end{displaymath}



Note that the sum of each column in this matrix is one. Any matrix with this property is called a stochastic matrix, a probability matrix, or a Markov matrix.
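As a quick numerical illustration (a minimal sketch in Python with NumPy, our own choice of tool rather than anything from the original notes), we can enter $A$ and confirm the column-sum property:

\begin{verbatim}
import numpy as np

# Transition matrix; rows/columns are ordered Home, Chinese, Mexican, Pizza.
# Entry A[i, j] is the probability of being in state i after state j.
A = np.array([
    [0.25, 0.20, 0.25, 0.30],   # to Home
    [0.20, 0.30, 0.25, 0.30],   # to Chinese
    [0.25, 0.20, 0.40, 0.10],   # to Mexican
    [0.30, 0.30, 0.10, 0.30],   # to Pizza
])

# Every column of a stochastic (Markov) matrix sums to one.
print(A.sum(axis=0))   # [1. 1. 1. 1.]
\end{verbatim}

We are interested in the following question: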

What is the probability that the system is in the $i^{th}$ state, at the $n^{th}$ observation?

To answer this question, we first define the state vector. For a Markov Chain with $k$ states, the state vector for observation period $n$ is the column vector defined by




\begin{displaymath}x^{(n)} = \left[ \begin{array}{c}
x_1\\
x_2\\
\vdots\\
x_k\\
\end{array}
\right]\end{displaymath}



where $x_i$ is the probability that the system is in the $i^{th}$ state at the time of observation. Note that the sum of the entries of the state vector has to be one. Any column vector $x$,


\begin{displaymath}x = \left[ \begin{array}{c}
x_1\\
x_2\\
\vdots\\
x_k\\
\end{array}
\right]\end{displaymath}



where $x_1 + x_2 + \dots + x_k = 1$, is called a probability vector. Returning to our example, suppose that at the beginning everyone eats at home; that is, the initial state vector $x^{(0)}$ is


\begin{displaymath}x^{(0)} = \left[ \begin{array}{r}
1\\
0\\
0\\
0\\
\end{array}
\right]\end{displaymath}

At the next observation period, say the end of the first week, the state vector will be

\begin{displaymath}x^{(1)} = A x^{(0)} = \left[ \begin{array}{r}
.25\\
.20\\
.25\\
.30\\
\end{array}
\right]\end{displaymath}



At the end of the $2^{nd}$ week the state vector is

\begin{displaymath}x^{(2)} = Ax^{(1)} = \left[ \begin{array}{rrrr}
.25&.20&.25&.30\\
.20&.30&.25&.30\\
.25&.20&.40&.10\\
.30&.30&.10&.30\\
\end{array}
\right] \left[ \begin{array}{r}
.25\\
.20\\
.25\\
.30\\
\end{array}
\right] = \left[ \begin{array}{r}
.2550\\
.2625\\
.2325\\
.2500\\
\end{array}
\right]\end{displaymath}


Note that we can compute $x^{(2)}$ directly from $x^{(0)}$ as


\begin{displaymath}x^{(2)} = Ax^{(1)} = A(Ax^{(0)}) = A^2 x^{(0)}.\end{displaymath}
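The same computation is easy to reproduce numerically; here is a short sketch in Python with NumPy (again our own tooling choice), which computes $x^{(1)}$ and $x^{(2)}$ both step by step and via the matrix power $A^2$:

\begin{verbatim}
import numpy as np
from numpy.linalg import matrix_power

A = np.array([
    [0.25, 0.20, 0.25, 0.30],
    [0.20, 0.30, 0.25, 0.30],
    [0.25, 0.20, 0.40, 0.10],
    [0.30, 0.30, 0.10, 0.30],
])
x0 = np.array([1.0, 0.0, 0.0, 0.0])   # everyone starts at home

x1 = A @ x0    # state vector after one period: [0.25 0.20 0.25 0.30]
x2 = A @ x1    # after two periods: [0.2550 0.2625 0.2325 0.2500]

# Computing x2 directly from x0 gives the same result:
print(np.allclose(x2, matrix_power(A, 2) @ x0))   # True
\end{verbatim}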



Similarly, we can find the state vectors for the $5^{th}$, $10^{th}$, and $20^{th}$ observation periods.


\begin{displaymath}x^{(5)} = A^5 x^{(0)} = \left[ \begin{array}{r}
.2495\\
.2634\\
.2339\\
.2532\\
\end{array}
\right]\end{displaymath}




\begin{displaymath}x^{(10)} = A^{10} x^{(0)} = \left[ \begin{array}{r}
.2495\\
.2634\\
.2339\\
.2532\\
\end{array}
\right]\end{displaymath}




\begin{displaymath}x^{(20)} = A^{20} x^{(0)} = \left[ \begin{array}{r}
.2495\\
.2634\\
.2339\\
.2532\\
\end{array}
\right]\end{displaymath}
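These repeated powers can be checked with the same kind of NumPy sketch (the printed vectors are rounded to four decimal places):

\begin{verbatim}
import numpy as np
from numpy.linalg import matrix_power

A = np.array([
    [0.25, 0.20, 0.25, 0.30],
    [0.20, 0.30, 0.25, 0.30],
    [0.25, 0.20, 0.40, 0.10],
    [0.30, 0.30, 0.10, 0.30],
])
x0 = np.array([1.0, 0.0, 0.0, 0.0])

# For n = 5, 10, 20, 30 the state vector barely changes:
for n in (5, 10, 20, 30):
    print(n, np.round(matrix_power(A, n) @ x0, 4))
# each line is approximately [0.2495 0.2634 0.2339 0.2532]
\end{verbatim}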



Computing

\begin{displaymath}x^{(30)} = A^{30} x^{(0)} = \left[ \begin{array}{r}
.2495\\
.2634\\
.2339\\
.2532\\
\end{array}
\right]\end{displaymath}

suggests that the state vector approaches some fixed vector as the number of observation periods increases. This is not the case for every Markov Chain. For example, if


\begin{displaymath}A = \left[ \begin{array}{rr}
0&1\\
1&0\\
\end{array}
\right]\end{displaymath}

, and

\begin{displaymath}x^{(0)} = \left[ \begin{array}{r}
1\\
0\\
\end{array}
\right]\end{displaymath}

,


we can compute the state vectors for different observation periods:


\begin{displaymath}x^{(1)} = \left[ \begin{array}{r}
0\\
1\\
\end{array}
\right],\qquad
x^{(2)} = \left[ \begin{array}{r}
1\\
0\\
\end{array}
\right],\qquad
x^{(3)} = \left[ \begin{array}{r}
0\\
1\\
\end{array}
\right],\; \ldots\end{displaymath}



These computations indicate that this system oscillates and does not approach any fixed vector.
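The oscillation is easy to see numerically with the same kind of sketch:

\begin{verbatim}
import numpy as np
from numpy.linalg import matrix_power

A = np.array([[0, 1],
              [1, 0]])
x0 = np.array([1, 0])

for n in range(1, 5):
    print(n, matrix_power(A, n) @ x0)
# 1 [0 1]
# 2 [1 0]
# 3 [0 1]
# 4 [1 0]   <- the state vector flips back and forth forever
\end{verbatim}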


Ali A. Daddel 2000-09-18