Let $p_i(t+1)$ be the probability that the state $S_i$ will occur on the $(t+1)$st repetition of the experiment, $1 \le i \le r$.
Since one of the states $S_1, \dots, S_r$ must occur on the $(t+1)$st repetition, $p_1(t+1) + p_2(t+1) + \dots + p_r(t+1) = 1$.
Let $p_j(t)$ be the probability that state $S_j$, $1 \le j \le r$, occurs on the $t$th repetition of the experiment.
There are $r$ ways to be in state $S_i$ at step $t+1$:
1. Step $t$ is $S_1$. The probability of getting $S_1$ on step $t$ is $p_1(t)$, and the probability of having $S_i$ after $S_1$ is $p_{1i}$. Therefore, by the multiplication principle, this occurs with probability $p_1(t)p_{1i}$.
2. We get $S_2$ on step $t$ and $S_i$ on step $t+1$. Then this occurs with probability $p_2(t)p_{2i}$.
...
r. The probability of occurrence of $S_i$ at step $t+1$ if $S_r$ occurred at step $t$ is $p_r(t)p_{ri}$.
Therefore, $p_i(t+1)$ is the sum of all these:
$p_i(t+1) = p_1(t)p_{1i} + \dots + p_r(t)p_{ri} = \sum_{j=1}^{r} p_j(t)p_{ji}, \quad i = 1, \dots, r.$
In matrix form,
$p(t+1) = p(t)P,$
where $p(t) = (p_1(t), \dots, p_r(t))$ is a (row) probability vector and $P = (p_{ij})$ is an $r \times r$ transition matrix.
So
$p(t+1) = (p_1(t), \dots, p_r(t)) \begin{pmatrix} p_{11} & \cdots & p_{1r} \\ \vdots & & \vdots \\ p_{r1} & \cdots & p_{rr} \end{pmatrix}.$
It is easy to check that this gives the same expression as the componentwise formula $p_i(t+1) = \sum_{j=1}^{r} p_j(t)p_{ji}$.
Stochastic matrices
The nonnegative $r \times r$ matrix $M = (m_{ij})$ is (row) stochastic if $\sum_{j=1}^{r} m_{ij} = 1$ for all $i = 1, \dots, r$.
Let $M$ be a stochastic matrix. Then all eigenvalues $\lambda$ of $M$ are such that $|\lambda| \le 1$. Furthermore, $\lambda = 1$ is an eigenvalue of $M$.
To see that $\lambda = 1$ is an eigenvalue, write the definition of a stochastic matrix, i.e., $M$ has row sums 1. In vector form, $M\mathbf{1} = \mathbf{1}$, where $\mathbf{1} = (1, \dots, 1)^T$. Now remember that $\lambda$ is an eigenvalue of $M$, with associated eigenvector $v \ne 0$, iff $Mv = \lambda v$. So, in the expression $M\mathbf{1} = \mathbf{1}$, we read an eigenvector, $\mathbf{1}$, and an eigenvalue, $1$.
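A quick numerical illustration in base R, with an arbitrary $3 \times 3$ stochastic matrix (the matrix entries are just example values):

```r
# For a stochastic matrix: spectral radius 1, and 1 is an eigenvalue
# with right eigenvector (1,...,1)^T.
M <- matrix(c(0.5, 0.5, 0.0,
              0.2, 0.3, 0.5,
              0.0, 0.4, 0.6),
            nrow = 3, byrow = TRUE)
ev <- eigen(M)$values
max(Mod(ev))              # spectral radius: 1
any(Mod(ev - 1) < 1e-12)  # 1 is an eigenvalue: TRUE
M %*% rep(1, 3)           # the all-ones vector is mapped to itself
```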
Asymptotic behaviour of a DTMC
Let $p(0)$ be the initial distribution (row) vector. Then
$p(1) = p(0)P.$
Iterating, we get that for any $t \in \mathbb{N}$,
$p(t+1) = p(t)P = (p(t-1)P)P = p(t-1)P^2 = \dots$
Therefore,
$p(t) = p(0)P^t.$
Additional properties of stochastic matrices
If $M_1, M_2$ are stochastic matrices, then $M_1 M_2$ is a stochastic matrix
If $M$ is a stochastic matrix, then for any $t \in \mathbb{N}$, $M^t$ is a stochastic matrix
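Both properties are easy to check numerically; a base-R sketch with two example $2 \times 2$ stochastic matrices:

```r
# Products and powers of stochastic matrices have row sums 1.
M1 <- matrix(c(0.5, 0.5,
               0.1, 0.9), nrow = 2, byrow = TRUE)
M2 <- matrix(c(0.3, 0.7,
               0.8, 0.2), nrow = 2, byrow = TRUE)
rowSums(M1 %*% M2)                      # 1 1
P5 <- Reduce(`%*%`, rep(list(M1), 5))   # M1^5
rowSums(P5)                             # 1 1
```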
Regular DTMC
Not much use in disease models
In disease models, the disease-free state (no infectious individuals) is absorbing, so chains are not regular
Worth taking a quick look at, though: regular DTMCs come into play when looking at movement between locations, for instance
Regular Markov chain
A regular Markov chain is one in which $P^k$ is positive for some integer $k > 0$, i.e., $P^k$ has only positive entries, no zero entries
A nonnegative matrix $M$ is primitive if, and only if, there is an integer $k > 0$ such that $M^k$ is positive
A Markov chain is regular if, and only if, its transition matrix $P$ is primitive
Important result for regular Markov chains
If $P$ is the transition matrix of a regular Markov chain, then
1. the powers $P^t$ approach a stochastic matrix $W$;
2. each row of $W$ is the same (row) vector $w = (w_1, \dots, w_r)$;
3. the components of $w$ are positive.
So if the Markov chain is regular,
$\lim_{t \to \infty} p(t) = \lim_{t \to \infty} p(0)P^t = p(0)W = w.$
Left and right eigenvectors
Let $A$ be an $n \times n$ matrix, $u, v$ be two column vectors, and $\lambda \in \mathbb{C}$. Then, if
$Av = \lambda v,$
$v$ is the (right) eigenvector corresponding to $\lambda$, and if
$u^T A = \lambda u^T,$
then $u$ is the left eigenvector corresponding to $\lambda$. Note that to a given (simple) eigenvalue there corresponds one left and one right eigenvector, up to scalar multiplication
The vector $w$ is in fact the left eigenvector corresponding to the eigenvalue 1 of $P$. (We already know that the (right) eigenvector corresponding to 1 is $\mathbf{1}$.)
To see this, remark that, if $p(t)$ converges to $w$, then $p(t+1) = p(t)P$ gives $w = wP$, so $w$ is a fixed point of the system. We thus write
$wP = w$
and solve for $w$, which amounts to finding $w$ as the left eigenvector corresponding to the eigenvalue 1
Alternatively, we can find $w^T$ as the (right) eigenvector associated to the eigenvalue 1 for the transpose of $P$:
$P^T w^T = w^T$
Now remember that when you compute an eigenvector, you get a result that is the eigenvector, up to a multiple
So the expression you obtain for $w$ might have to be normalized (you want a probability vector). Once you obtain $w$, check that the norm defined by
$\|w\| = \sum_{i=1}^{r} |w_i|$
is equal to one. If not, use
$w / \|w\|$
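In R, this is a short computation with eigen applied to the transpose; a sketch with an arbitrary stochastic matrix (example entries):

```r
# Left eigenvector of P for eigenvalue 1, computed as the (right)
# eigenvector of t(P), then normalised into a probability vector.
P <- matrix(c(0.5, 0.5, 0.0,
              0.2, 0.3, 0.5,
              0.0, 0.4, 0.6),
            nrow = 3, byrow = TRUE)
e <- eigen(t(P))
i <- which.min(Mod(e$values - 1))   # locate the eigenvalue 1
w <- Re(e$vectors[, i])
w <- w / sum(w)                     # normalise so that sum(w) = 1
w
w %*% P                             # equals w: it is a fixed point
```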
Linking matrix and graph theory
A digraph is strongly connected if there is a (directed) path between any ordered pair of vertices
A matrix $M$ is irreducible if there does not exist a permutation matrix $Q$ s.t. $Q^T M Q$ is block triangular
$M$ irreducible $\iff$ the digraph associated to $M$ is strongly connected
Another way to check regularity
A sufficient condition: a matrix $M$ is primitive if the associated connection graph is strongly connected, i.e., there is a path between any pair of states, and there is at least one positive entry on the diagonal of $M$
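Primitivity can also be checked by brute force on the zero pattern of the matrix, using Wielandt's bound (a primitive $n \times n$ matrix satisfies $M^k > 0$ for some $k \le (n-1)^2 + 1$); a base-R sketch:

```r
# Direct check of primitivity: look for a power of the 0/1 pattern
# of M that is everywhere positive (k up to Wielandt's bound).
is_primitive <- function(M) {
  n <- nrow(M)
  A <- (M > 0) * 1          # work with the 0/1 pattern only
  Ak <- A
  for (k in 1:((n - 1)^2 + 1)) {
    if (all(Ak > 0)) return(TRUE)
    Ak <- (Ak %*% A > 0) * 1
  }
  FALSE
}
# Irreducible with a positive diagonal entry: primitive
M <- matrix(c(0.5, 0.5, 0,
              0,   0,   1,
              1,   0,   0), nrow = 3, byrow = TRUE)
is_primitive(M)                                # TRUE
# Periodic two-cycle: irreducible but NOT primitive
is_primitive(matrix(c(0, 1, 1, 0), nrow = 2))  # FALSE
```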
Random walk v1.0 (regular case)
Drunk man's walk 1.0 (regular case)
chain of $r$ states $S_1, \dots, S_r$
if in state $S_i$, $1 < i < r$, probability 1/2 of going left (to $S_{i-1}$) and 1/2 of going right (to $S_{i+1}$)
if in state $S_1$, probability 1 of going to $S_2$
if in state $S_r$, probability 1 of going to $S_{r-1}$
Transition matrix for DMW 1.0
$P = \begin{pmatrix} 0 & 1 & & & \\ 1/2 & 0 & 1/2 & & \\ & \ddots & \ddots & \ddots & \\ & & 1/2 & 0 & 1/2 \\ & & & 1 & 0 \end{pmatrix}$
$P$ is irreducible (the walk can go from any state to any other), so the chain has a unique stationary distribution $w$. (Strictly speaking, $P$ is not primitive: the walker's position alternates parity, so $P$ has period 2 and the chain is not regular in the strict sense; $w$ still describes the long-run fraction of time spent in each state.)
We need to solve $wP = w$, that is,
$w_1 = \tfrac{1}{2}w_2, \quad w_2 = w_1 + \tfrac{1}{2}w_3, \quad w_i = \tfrac{1}{2}w_{i-1} + \tfrac{1}{2}w_{i+1} \ (3 \le i \le r-2), \quad w_{r-1} = \tfrac{1}{2}w_{r-2} + w_r, \quad w_r = \tfrac{1}{2}w_{r-1}.$
Express everything in terms of $w_1$:
$w_2 = 2w_1, \quad w_3 = 2w_1, \quad \dots, \quad w_{r-1} = 2w_1, \quad w_r = w_1.$
So we get
$w = (w_1, 2w_1, \dots, 2w_1, w_1).$
We have
$\sum_{i=1}^{r} w_i = 1.$
Since
$w_1 + 2(r-2)w_1 + w_1 = 2(r-1)w_1,$
to get a probability vector, we need to take
$w_1 = \frac{1}{2(r-1)}.$
So
$w = \left( \frac{1}{2(r-1)}, \frac{1}{r-1}, \dots, \frac{1}{r-1}, \frac{1}{2(r-1)} \right).$
Now assume we take an initial condition $p(0)$ with $p_1(0) = 1$, i.e., the walker starts in state 1. Then
$p(0)W = w,$
so the long-run distribution is $w$ regardless of the initial state.
Setting up the transition matrix
# Total population
nb_states = 10 # Small so we can see output
# Parameters
proba_left = 0.5
proba_right = 0.5
proba_stay = 1-(proba_left+proba_right)
# Make the transition matrix
T = mat.or.vec(nr = nb_states, nc = nb_states)
for (row in 2:(nb_states-1)) {
T[row,(row-1)] = proba_left
T[row,(row+1)] = proba_right
T[row, row] = proba_stay
}
# First row only has move right
T[1,2] = 1
# Last row only has move left
T[nb_states, (nb_states-1)] = 1
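As a sanity check (a sketch, with the matrix rebuilt so the snippet is self-contained), the stationary distribution computed numerically can be compared with the closed form obtained above:

```r
# Rebuild T for the DMW 1.0 with nb_states = 10, check that rows
# sum to 1, and compare the numerical stationary distribution with
# w = (1/(2(r-1)), 1/(r-1), ..., 1/(r-1), 1/(2(r-1))).
nb_states <- 10
T <- mat.or.vec(nr = nb_states, nc = nb_states)
for (row in 2:(nb_states - 1)) {
  T[row, row - 1] <- 0.5
  T[row, row + 1] <- 0.5
}
T[1, 2] <- 1
T[nb_states, nb_states - 1] <- 1
rowSums(T)                                  # all 1
e <- eigen(t(T))
w <- Re(e$vectors[, which.min(Mod(e$values - 1))])
w <- w / sum(w)
r <- nb_states
w_exact <- c(1/(2*(r-1)), rep(1/(r-1), r-2), 1/(2*(r-1)))
max(abs(w - w_exact))                       # essentially zero
```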
Analysis using markovchain library
Let us use markovchain to consider numerical aspects
library(markovchain)
mcRW <- new("markovchain",
states = sprintf("S_%d", 1:nb_states),
transitionMatrix = T,
name = "RW_reg")
Function                  Role
absorbingStates           absorbing states of the transition matrix, if any
steadyStates              the vector(s) of steady state(s) in matrix form
meanFirstPassageTime      matrix or vector of mean first passage times
meanRecurrenceTime        vector of mean number of steps to return to each recurrent state
hittingProbabilities      matrix of hitting probabilities for a Markov chain
meanAbsorptionTime        expected number of steps for a transient state to be absorbed by any recurrent class
absorptionProbabilities   probabilities of transient states being absorbed by each recurrent state
meanNumVisits             mean number of visits to a state
canonicForm               canonical form of the transition matrix
period                    the period of an irreducible DTMC
summary                   DTMC summary
> summary(mcRW)
RW_reg Markov chain that is composed by:
Closed classes:
S_1 S_2 S_3 S_4 S_5 S_6 S_7 S_8 S_9 S_10
Recurrent classes:
{S_1,S_2,S_3,S_4,S_5,S_6,S_7,S_8,S_9,S_10}
Transient classes:
NONE
The Markov chain is irreducible
The absorbing states are: NONE
Functions not useful in the regular case
absorbingStates returns no state (character(0))
meanAbsorptionTime returns an empty vector (named numeric(0))
meanNumVisits returns a matrix of Inf
hittingProbabilities is not useful (it is a matrix of 1's)
meanRecurrenceTime: outputs a named vector with the expected time to first return to a state, when the chain starts in that state. Only the recurrent states are present in the vector. If the matrix is ergodic (i.e., irreducible), then all states are present in the output, in the same order as the states of the Markov chain
Absorbing Markov chains
A state $S_i$ in a Markov chain is absorbing if, whenever it occurs on the $t$th generation of the experiment, it then occurs on every subsequent step. In other words, $S_i$ is absorbing if $p_{ii} = 1$ and $p_{ij} = 0$ for $j \ne i$
A Markov chain is absorbing if it has at least one absorbing state, and if from every state it is possible to go to an absorbing state
In an absorbing Markov chain, a state that is not absorbing is called transient
Some questions on absorbing chains
Suppose we have a chain like the following
1. Does the process eventually reach an absorbing state?
2. What is the average number of times spent in a transient state, if starting in a transient state?
3. What is the average number of steps before entering an absorbing state?
4. What is the probability of being absorbed by a given absorbing state, when there are more than one, when starting in a given transient state?
Reaching an absorbing state
Answer to question 1:
In an absorbing Markov chain, the probability of reaching an absorbing state is 1
Standard form of the transition matrix
For an absorbing chain with $a$ absorbing states and $\tau$ transient states, the transition matrix can be written (reordering states so the absorbing states come first) as
$P = \begin{pmatrix} I_a & \mathbf{0} \\ R & Q \end{pmatrix}
\quad
\begin{matrix} \leftarrow \text{absorbing states} \\ \leftarrow \text{transient states} \end{matrix}$
where $I_a$ is the $a \times a$ identity, $\mathbf{0}$ is an $a \times \tau$ zero matrix, $R$ is $\tau \times a$, and $Q$ is $\tau \times \tau$
The matrix $I_\tau - Q$ is invertible. Let
$N = (I_\tau - Q)^{-1}$
be the fundamental matrix of the Markov chain, let
$T_i = \sum_{j} N_{ij}$
be the sum of the entries on row $i$ of $N$, and let $B = NR$
Answers to our remaining questions:
$N_{ij}$ is the average number of times the process is in the $j$th transient state if it starts in the $i$th transient state
$T_i$ is the average number of steps before the process enters an absorbing state if it starts in the $i$th transient state
$B_{ij}$ is the probability of eventually entering the $j$th absorbing state if the process starts in the $i$th transient state
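These quantities are easy to compute numerically; a base-R sketch for a small instance of the absorbing walk studied below (5 states, states 1 and 5 absorbing, interior states moving left or right with probability 1/2):

```r
# Fundamental matrix computations for a small absorbing chain.
P <- mat.or.vec(5, 5)
P[1, 1] <- 1; P[5, 5] <- 1
for (row in 2:4) {
  P[row, row - 1] <- 0.5
  P[row, row + 1] <- 0.5
}
trans <- 2:4; absb <- c(1, 5)
Q <- P[trans, trans]
R <- P[trans, absb]
N <- solve(diag(3) - Q)  # fundamental matrix
N                        # expected visits to each transient state
rowSums(N)               # mean steps to absorption: 3 4 3
B <- N %*% R             # absorption probabilities
B                        # rows: (0.75,0.25), (0.5,0.5), (0.25,0.75)
```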
The inverse of the symmetric tridiagonal matrix
$I_\tau - Q = \begin{pmatrix} 1 & -1/2 & & \\ -1/2 & 1 & \ddots & \\ & \ddots & \ddots & -1/2 \\ & & -1/2 & 1 \end{pmatrix}$
is given by
$N_{ij} = \left( (I_\tau - Q)^{-1} \right)_{ij} = \begin{cases} \left(\tfrac{1}{2}\right)^{j-i} \theta_{i-1}\varphi_{j+1}/\theta_\tau, & i \le j \\ \left(\tfrac{1}{2}\right)^{i-j} \theta_{j-1}\varphi_{i+1}/\theta_\tau, & i > j \end{cases}$
Note that the $\theta_i$ satisfy the recurrence $\theta_i = \theta_{i-1} - \frac{1}{4}\theta_{i-2}$, with $\theta_0 = 1$ and $\theta_1 = 1$, and the $\varphi_i$ satisfy $\varphi_i = \varphi_{i+1} - \frac{1}{4}\varphi_{i+2}$, with $\varphi_{\tau+1} = 1$ and $\varphi_\tau = 1$
Taking a look at the first few terms in the sequence,
$\theta_0 = 1, \quad \theta_1 = 1, \quad \theta_2 = \frac{3}{4}, \quad \theta_3 = \frac{4}{8}, \quad \theta_4 = \frac{5}{16},$
we get the feeling that
$\theta_i = \frac{i+1}{2^i}.$
A little induction confirms this: the formula is true for $i = 0, 1$; assuming it holds up to $i$, then
$\theta_{i+1} = \theta_i - \frac{1}{4}\theta_{i-1} = \frac{i+1}{2^i} - \frac{1}{4}\cdot\frac{i}{2^{i-1}} = \frac{2(i+1) - i}{2^{i+1}} = \frac{i+2}{2^{i+1}},$
so the formula holds for all $i$'s
For the $\varphi_i$'s, we have $\varphi_{\tau+1} = 1$ and $\varphi_\tau = 1$. The same recurrence, run backwards from the other end, gives
$\varphi_i = \frac{\tau - i + 2}{2^{\tau - i + 1}}.$
In summary, in $N = (I_\tau - Q)^{-1}$, the following terms appear:
$\theta_{i-1} = \frac{i}{2^{i-1}}, \quad \varphi_{j+1} = \frac{\tau - j + 1}{2^{\tau - j}} \quad \text{and} \quad \theta_\tau = \frac{\tau + 1}{2^\tau}.$
We have, $\forall i, j$,
$N_{ij} = \begin{cases} \dfrac{2i(\tau + 1 - j)}{\tau + 1}, & i \le j \\[2mm] \dfrac{2j(\tau + 1 - i)}{\tau + 1}, & i > j \end{cases}$
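The closed form can be checked against a direct numerical inversion; a sketch for $\tau = 8$ transient states (the interior states of the absorbing walk set up below):

```r
# Check N_ij = 2 i (tau+1-j)/(tau+1) for i <= j (and symmetrically
# for i > j) against solve(I - Q), for tau = 8 transient states.
tau <- 8
Q <- mat.or.vec(tau, tau)
for (i in 1:tau) {
  if (i > 1)   Q[i, i - 1] <- 0.5
  if (i < tau) Q[i, i + 1] <- 0.5
}
N <- solve(diag(tau) - Q)
N_exact <- outer(1:tau, 1:tau,
                 function(i, j) ifelse(i <= j,
                                       2 * i * (tau + 1 - j),
                                       2 * j * (tau + 1 - i)) / (tau + 1))
max(abs(N - N_exact))   # essentially zero
```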
Setting up the transition matrix
# Total population
nb_states = 10 # Small so we see output
# Parameters
proba_left = 0.5
proba_right = 0.5
proba_stay = 1-(proba_left+proba_right)
# Make the transition matrix
T = mat.or.vec(nr = nb_states, nc = nb_states)
for (row in 2:(nb_states-1)) {
T[row,(row-1)] = proba_left
T[row,(row+1)] = proba_right
T[row, row] = proba_stay
}
# First and last rows only have stay
T[1,1] = 1
T[nb_states, nb_states] = 1
Analysis using markovchain library
Let us use markovchain to consider numerical aspects
library(markovchain)
mcRW <- new("markovchain",
states = sprintf("S_%d", 1:nb_states),
transitionMatrix = T,
name = "RW_abs")
> summary(mcRW)
RW_abs Markov chain that is composed by:
Closed classes:
S_1
S_10
Recurrent classes:
{S_1},{S_10}
Transient classes:
{S_2,S_3,S_4,S_5,S_6,S_7,S_8,S_9}
The Markov chain is not irreducible
The absorbing states are: S_1 S_10
Since $S = N - I$, we need only track the infected. To simulate as a DTMC, consider a random walk on $I \in \{0, 1, \dots, N\}$ ($\simeq$ Gambler's ruin problem)
Denote $\beta$ the transmission coefficient, $\gamma$ the recovery rate, and $\Delta t$ the time step
Transition matrix
To make things easy to see: Pop <- 5. (The code below also needs beta, gamma and Delta_t; for instance beta <- 0.5, gamma <- 1, Delta_t <- 1/6 reproduce the matrix shown in the canonical form output.)
# Make the transition matrix
T = mat.or.vec(nr = (Pop+1), nc = (Pop+1))
for (row in 2:Pop) {
I = row-1
mv_left = gamma*I*Delta_t          # Recoveries (I decreases)
mv_right = beta*I*(Pop-I)*Delta_t  # New infections (I increases)
T[row,(row-1)] = mv_left
T[row,(row+1)] = mv_right
}
# Last row only has move left
T[(Pop+1),Pop] = gamma*(Pop)*Delta_t
# Check that we don't have too large values
if (max(rowSums(T))>1) {
T = T/max(rowSums(T))
}
diag(T) = 1-rowSums(T)
Analysis using markovchain library
We saw in Lecture 07 how to simulate individual realisations using DTMCPack. Let us use markovchain to dig into the details
library(markovchain)
mcSIS <- new("markovchain",
states = sprintf("I_%d", 0:Pop),
transitionMatrix = T,
name = "SIS")
> summary(mcSIS)
SIS Markov chain that is composed by:
Closed classes:
I_0
Recurrent classes:
{I_0}
Transient classes:
{I_1,I_2,I_3,I_4,I_5}
The Markov chain is not irreducible
The absorbing states are: I_0
> canonicForm(mcSIS)
SIS
A 6 - dimensional discrete Markov Chain defined by the following states:
I_0, I_1, I_2, I_3, I_4, I_5
The transition matrix (by rows) is defined as follows:
I_0 I_1 I_2 I_3 I_4 I_5
I_0 1.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000
I_1 0.1666667 0.5000000 0.3333333 0.0000000 0.0000000 0.0000000
I_2 0.0000000 0.3333333 0.1666667 0.5000000 0.0000000 0.0000000
I_3 0.0000000 0.0000000 0.5000000 0.0000000 0.5000000 0.0000000
I_4 0.0000000 0.0000000 0.0000000 0.6666667 0.0000000 0.3333333
I_5 0.0000000 0.0000000 0.0000000 0.0000000 0.8333333 0.1666667
# The vector of steady states. Here, all mass should be in I_0
> steadyStates(mcSIS)
I_0 I_1 I_2 I_3 I_4 I_5
[1,] 1 0 0 0 0 0
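The fundamental-matrix machinery applies directly to the SIS chain; a base-R sketch (the parameter values are the example choices above) computing the mean time to extinction, which should agree with meanAbsorptionTime(mcSIS):

```r
# Absorption analysis for the SIS DTMC using the fundamental matrix.
Pop <- 5; beta <- 0.5; gamma <- 1; Delta_t <- 1/6  # example values
T <- mat.or.vec(nr = Pop + 1, nc = Pop + 1)
for (row in 2:Pop) {
  I <- row - 1
  T[row, row - 1] <- gamma * I * Delta_t             # recovery
  T[row, row + 1] <- beta * I * (Pop - I) * Delta_t  # new infection
}
T[Pop + 1, Pop] <- gamma * Pop * Delta_t
diag(T) <- 1 - rowSums(T)
trans <- 2:(Pop + 1)               # transient states I = 1..5
Q <- T[trans, trans]
N <- solve(diag(Pop) - Q)          # fundamental matrix
rowSums(N)                         # mean steps to extinction from I = 1..5
B <- N %*% T[trans, 1]             # extinction probabilities: all 1
```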
A continuous time Markov chain (CTMC) can be formulated in terms of
infinitesimal transition probabilities
branching process
time to next event
Here, time is in $\mathbb{R}_+$
For small $\Delta t$,
$p_{ij}(\Delta t) = \Pr(X(t + \Delta t) = j \mid X(t) = i) = q_{ij}\Delta t + o(\Delta t), \quad j \ne i,$
with $o(\Delta t)/\Delta t \to 0$ as $\Delta t \to 0$
Forward Kolmogorov equations
Assume we know $p(t) = (p_1(t), \dots, p_r(t))$. Then, conditioning on the state at time $t$,
$p_i(t + \Delta t) = \sum_{j} p_j(t)\, p_{ji}(\Delta t).$
Compute $(p_i(t + \Delta t) - p_i(t))/\Delta t$ and take $\Delta t \to 0$, giving
$\frac{d}{dt}p_i(t) = \sum_{j \ne i} p_j(t) q_{ji} - p_i(t) \sum_{j \ne i} q_{ij},$
the forward Kolmogorov equations associated to the CTMC
In vector form
Write the previous system as
$p'(t) = p(t)Q,$
with
$Q = (q_{ij}), \quad q_{ii} = -\sum_{j \ne i} q_{ij},$
the (infinitesimal) generator matrix. Of course, the rows of $Q$ sum to zero
Linking DTMC and CTMC for small $\Delta t$
DTMC:
$p(t + \Delta t) = p(t) P(\Delta t)$
for transition matrix $P(\Delta t)$. Let $\Delta t \to 0$; we obtain the Kolmogorov equations for the CTMC,
$p'(t) = p(t)Q,$
where
$Q = \lim_{\Delta t \to 0} \frac{P(\Delta t) - I}{\Delta t}.$
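A quick numerical illustration with an arbitrary 2-state generator (example rates):

```r
# For small Delta_t, P(Delta_t) = I + Delta_t * Q is a stochastic
# matrix, and (P - I)/Delta_t recovers the generator Q.
Q <- matrix(c(-2,  2,
               1, -1), nrow = 2, byrow = TRUE)  # row sums are 0
dt <- 1e-3
P <- diag(2) + dt * Q
rowSums(P)            # 1 1 : P is stochastic for dt small enough
min(P)                # nonnegative entries
(P - diag(2)) / dt    # recovers Q
```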
More on stochastic models in R
Parallelisation
To see multiple realisations: good idea to parallelise, then interpolate results. Write a function, e.g., run_one_sim, that runs one simulation, then call it many times in parallel
which gives 318.141 sec elapsed on a 6C/12T Intel(R) Core(TM) i9-8950HK CPU @ 2.90GHz (4.75× faster) or 12.067 sec elapsed versus 258.985 sec elapsed on a 32C/64T AMD Ryzen Threadripper 3970X 32-Core Processor (21.46× faster!)
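A minimal sketch of this pattern using the parallel package shipped with base R; run_one_sim here is a placeholder for an actual simulation function (mclapply forks on Unix-alikes; on Windows, use parLapply with a cluster instead):

```r
# Run many realisations in parallel and collect them in a list.
library(parallel)
run_one_sim <- function(i) {
  # placeholder: one stochastic realisation would go here
  sum(rnorm(1000))
}
n_sims <- 100
res <- mclapply(1:n_sims, run_one_sim, mc.cores = 2)
length(res)   # one result per realisation
```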