TechTorch


Finding the Stationary Distribution of a Markov Chain with a 3x3 Transition Matrix

January 07, 2025

Introduction to Markov Chains and Stationary Distribution

Markov chains are a powerful mathematical tool used to model systems that change over time. A Markov chain is defined by its transition matrix, which describes the probabilities of moving from one state to another. One of the most significant properties of a Markov chain is its stationary distribution, also known as the stable probabilities. This distribution represents the long-run behavior of the system and allows us to understand the probabilities of being in each state after a very long time.

Understanding the Stationary Distribution

The stationary distribution of a Markov chain is defined as a probability distribution over the states that remains unchanged under the action of the transition matrix. Mathematically, for a given transition matrix \(P\), the stationary distribution \(\pi\) satisfies the equation:

Stationary Distribution Equation

\( \pi P = \pi \)

This equation states that \(\pi\) is a left eigenvector of \(P\) corresponding to the eigenvalue 1. Additionally, the distribution must satisfy the normalization condition:

Normalization Condition

\( \pi_1 + \pi_2 + \cdots + \pi_n = 1 \)

where \( \pi_i \) represents the probability of being in state \(i\).
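These two conditions translate directly into a numerical recipe: \(\pi\) is a left eigenvector of \(P\) for eigenvalue 1, rescaled so its entries sum to 1. A minimal NumPy sketch, using a hypothetical 2-state transition matrix purely for illustration:

```python
import numpy as np

# Hypothetical 2-state transition matrix, used only to illustrate the method.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# pi P = pi means pi is a left eigenvector of P with eigenvalue 1,
# i.e. an ordinary (right) eigenvector of P transposed.
eigvals, eigvecs = np.linalg.eig(P.T)

# Select the eigenvector whose eigenvalue is (numerically) closest to 1.
idx = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, idx])

# Apply the normalization condition so the entries sum to 1.
pi = pi / pi.sum()

print(pi)  # → approximately [0.8333 0.1667], i.e. [5/6, 1/6]
```

Dividing by the sum both normalizes the vector and fixes its sign, since for an irreducible chain all entries of this eigenvector share the same sign.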

Example with a 3x3 Transition Matrix

Consider a Markov chain with a 3x3 transition matrix \(P\), given by:

\[
P = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0.7 & 0.3 & 0 \end{pmatrix}
\]
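Before working with \(P\), it is worth confirming that it is a valid (row-stochastic) transition matrix, i.e. that every row sums to 1. A quick NumPy check:

```python
import numpy as np

# The transition matrix from the example above.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.7, 0.3, 0.0]])

# Every row of a row-stochastic transition matrix must sum to 1.
print(P.sum(axis=1))  # → [1. 1. 1.]
```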

We need to find the stationary distribution for this matrix. Let's denote the stationary distribution as \( \pi = [\pi_1, \pi_2, \pi_3] \).

Setting Up the Equations

Using the stationary distribution equation, we have:

\( \pi P = \pi \)

Substituting the matrix \(P\) into this equation, we get:

\[
[\pi_1, \pi_2, \pi_3] \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0.7 & 0.3 & 0 \end{pmatrix} = [\pi_1, \pi_2, \pi_3]
\]

Multiplying the row vector \(\pi\) by each column of \(P\), we obtain the following system of equations:

\( \pi_1 = 0.7\,\pi_3 \)

\( \pi_2 = \pi_1 + 0.3\,\pi_3 \)

\( \pi_3 = \pi_2 \)

From the third equation, we have \( \pi_2 = \pi_3 \). Substituting this into the second equation, we get:

\( \pi_3 = \pi_1 + 0.3\,\pi_3 \)

which simplifies to \( \pi_1 = 0.7\,\pi_3 \), exactly the first equation. The system is therefore consistent but only determines \(\pi\) up to a scale factor: every component can be expressed in terms of \( \pi_3 \).

Using the Normalization Condition

To fix the scale, we substitute \( \pi_1 = 0.7\,\pi_3 \) and \( \pi_2 = \pi_3 \) into the normalization condition \( \pi_1 + \pi_2 + \pi_3 = 1 \):

\( 0.7\,\pi_3 + \pi_3 + \pi_3 = 2.7\,\pi_3 = 1 \)

so \( \pi_3 = 10/27 \), and therefore \( \pi_2 = \pi_3 = 10/27 \) and \( \pi_1 = 0.7 \times 10/27 = 7/27 \).

Therefore, the stationary distribution is:

\( \pi = [7/27,\ 10/27,\ 10/27] \approx [0.259,\ 0.370,\ 0.370] \)

This means that in the long run, the Markov chain spends about 25.9% of its time in state 1 and about 37.0% of its time in each of states 2 and 3.

Conclusion

In conclusion, we have found the stationary distribution of the Markov chain with the given transition matrix. The vector of stable probabilities (stationary distribution) is \( \pi = [7/27, 10/27, 10/27] \), which gives the long-run fraction of time the system spends in each of its three states.
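As an independent numerical check, power iteration can be used: this chain is irreducible and aperiodic, so repeatedly applying \(P\) to an arbitrary starting distribution converges to the stationary distribution. A NumPy sketch:

```python
import numpy as np

P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.7, 0.3, 0.0]])

# Power iteration: start from an arbitrary distribution and apply P repeatedly.
# For an irreducible, aperiodic chain this converges to the stationary distribution.
pi = np.array([1/3, 1/3, 1/3])
for _ in range(1000):
    pi = pi @ P

print(pi)                          # → approximately [0.2593 0.3704 0.3704]
print(np.array([7, 10, 10]) / 27)  # exact fractions 7/27, 10/27, 10/27
```

The iterate agrees with the hand-derived result to machine precision well before 1000 steps, since the subdominant eigenvalues of this \(P\) have magnitude \(\sqrt{0.7} \approx 0.84\).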