Consider the Markov process for the M/M/1 queue, as given in Figure 7.4.
a) Find the steady-state process probabilities (as a function of ρ = λ/µ) from (7.15) and also as the solution to (7.23). Verify that the two solutions are the same.
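As a sanity check for part a) (an added sketch, not part of the original exercise statement): whichever of (7.15) or (7.23) is used, the answer should reduce to the familiar geometric form for the process probabilities, with the embedded-chain probabilities obtained by weighting each p_i by the exit rate ν_i of state i (here ν_0 = λ and ν_i = λ + µ for i ≥ 1):

$$
p_i = (1-\rho)\rho^{\,i}, \quad i \ge 0; \qquad
\pi_i = \frac{p_i\,\nu_i}{\sum_j p_j\,\nu_j} =
\begin{cases}
\dfrac{1-\rho}{2}, & i = 0,\\[1ex]
\dfrac{(1-\rho)(1+\rho)}{2}\,\rho^{\,i-1}, & i \ge 1.
\end{cases}
$$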
b) For the remaining parts of the exercise, assume that ρ = λ/µ = 0.01, thus ensuring (for aiding intuition) that states 0 and 1 are much more probable than the other states. Assume that the process has been running for a very long time and is in steady state. Explain in your own words the difference between π1 (the steady-state probability of state 1 in the embedded chain) and p1 (the steady-state probability that the process is in state 1). More explicitly, what experiments could you perform (repeatedly) on the process to measure π1 and p1?
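One way to make the two measurements in part b) concrete is a short simulation; the sketch below is an added illustration (the function name simulate_mm1 and the choice λ = 0.01, µ = 1 are assumptions made here, not part of the exercise). It estimates π1 by counting how often the state equals 1 at transition epochs, and p1 by accumulating the time the process spends in state 1.

import random

def simulate_mm1(lam=0.01, mu=1.0, num_transitions=200_000, seed=1):
    """Estimate pi_1 (embedded chain) and p_1 (process) for the M/M/1 queue."""
    rng = random.Random(seed)
    state = 0
    epochs_in_state1 = 0      # transitions that find the chain in state 1 (for pi_1)
    time_in_state1 = 0.0      # total holding time spent in state 1 (for p_1)
    total_time = 0.0
    for _ in range(num_transitions):
        rate = lam if state == 0 else lam + mu    # exit rate of the current state
        hold = rng.expovariate(rate)              # exponential holding interval
        total_time += hold
        if state == 1:
            epochs_in_state1 += 1
            time_in_state1 += hold
        # embedded-chain transition: up with prob lam/(lam+mu), down otherwise
        if state == 0 or rng.random() < lam / (lam + mu):
            state += 1    # arrival
        else:
            state -= 1    # departure
    return epochs_in_state1 / num_transitions, time_in_state1 / total_time

pi1_hat, p1_hat = simulate_mm1()
print(f"pi_1 estimate ~ {pi1_hat:.3f},  p_1 estimate ~ {p1_hat:.4f}")

With ρ = 0.01 the two estimates differ dramatically: the embedded chain spends nearly half of its transitions in state 1 (π1 close to 1/2), while the process spends only about 1% of its time there (p1 close to 0.01).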
c) Now suppose you want to start the process in steady state. Show that it is impossible to choose initial probabilities so that both the process and the embedded chain start in steady state. Which version of steady state is closest to your intuitive view? (There is no correct answer here, but it is important to realize that the notion of steady state is not quite as simple as you might imagine.)
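A hint for part c) (an added note): the two steady-state vectors are related by weighting with the exit rates, so they could only coincide if all the ν_i were equal; here ν_0 = λ while ν_i = λ + µ for i ≥ 1, so

$$
\pi_i \;=\; \frac{p_i\,\nu_i}{\sum_j p_j\,\nu_j} \ne p_i \quad\text{for some } i,
$$

and no single choice of initial probabilities can make both the process and the embedded chain stationary from time 0.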
d) Let M(t) be the number of transitions (counting both arrivals and departures) that take place by time t in this Markov process, and assume that the embedded Markov chain starts in steady state at time 0. Let U1, U2, . . . be the sequence of holding intervals between transitions (with U1 being the time to the first transition). Show that these rv’s are identically distributed. Show by example that they are not independent (i.e., M(t) is not a renewal process).
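A sketch for the first claim in part d) (added here, not part of the exercise): because the embedded chain starts in steady state, the state occupied just before the nth transition has distribution π for every n, so each holding interval has the same two-term exponential mixture as its marginal,

$$
\Pr\{U_n > t\} \;=\; \sum_i \pi_i\,e^{-\nu_i t} \;=\; \pi_0\,e^{-\lambda t} + (1-\pi_0)\,e^{-(\lambda+\mu)t}, \qquad t \ge 0.
$$

For the second claim, note that an unusually long U1 makes state 0 (exit rate λ) by far the most likely starting state, which forces the next state to be 1 and thereby changes the conditional distribution of U2; the intervals are identically distributed but not independent.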