Past states of continuous-time Markov models for ecological communities

Math Biosci. 2008 Feb;211(2):299-313. doi: 10.1016/j.mbs.2007.08.006. Epub 2007 Sep 2.

Abstract

Discrete-time Markov chains are often used to model communities of sessile organisms. The community is described by a set of discrete states, which may represent species or groups of species. Transitions between states are modelled using a stochastic matrix. A recent study showed how the time-reversal of such a Markov chain can be used to estimate the distribution of time since the last occurrence of some state of interest (such as empty space) at a point, given the current state of the point. However, if the underlying process operates in continuous time but is observed at regular intervals, this distribution describes the time since the last possible observation of the state of interest, rather than the time since its last occurrence. We show how to obtain the distribution of time since the last occurrence of a state of interest for a continuous-time homogeneous Markov chain. The expected time since the last occurrence of an initial state can be interpreted as a measure of the successional rank of a state. We show how to distinguish between different ways in which a state can have high successional rank. We apply our results to a marine subtidal community.
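The reversal-and-first-passage idea summarised in the abstract can be sketched numerically for a stationary, time-homogeneous continuous-time chain. The sketch below is illustrative only: the three community states, the generator matrix Q, and the helper functions survival_time_since and expected_time_since are assumptions for the example, not the paper's data or implementation. Under these assumptions, the reversed generator has entries Q_hat[i, j] = pi[j] * Q[j, i] / pi[i], where pi is the stationary distribution, and the time since the last occurrence of a state of interest corresponds to a first-passage time to that state in the reversed chain.

    import numpy as np
    from scipy.linalg import expm

    # Hypothetical 3-state community: 0 = empty space, 1 = early coloniser, 2 = late coloniser.
    # Q is an assumed generator (rows sum to zero); it is not data from the paper.
    Q = np.array([[-0.6,  0.5,  0.1],
                  [ 0.2, -0.4,  0.2],
                  [ 0.1,  0.1, -0.2]])

    # Stationary distribution pi: solve pi Q = 0 subject to sum(pi) = 1.
    n = len(Q)
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)

    # Time-reversed generator: Q_hat[i, j] = pi[j] * Q[j, i] / pi[i].
    Q_hat = (Q.T * pi) / pi[:, None]

    def survival_time_since(Q_hat, target, i, t):
        """P(time since last occurrence of `target` > t | current state i):
        survival of the first-passage time to `target` in the reversed chain,
        computed by making `target` absorbing and exponentiating the generator."""
        Q_abs = Q_hat.copy()
        Q_abs[target, :] = 0.0          # state of interest becomes absorbing
        P_t = expm(Q_abs * t)           # transition probabilities over [0, t]
        return 1.0 - P_t[i, target]     # not yet absorbed => last occurrence > t ago

    def expected_time_since(Q_hat, target):
        """Expected time since last occurrence of `target`, given each other state:
        expected first-passage time in the reversed chain, from -Q_sub m = 1."""
        keep = [k for k in range(len(Q_hat)) if k != target]
        Q_sub = Q_hat[np.ix_(keep, keep)]
        m = np.linalg.solve(-Q_sub, np.ones(len(keep)))
        return dict(zip(keep, m))

    print(survival_time_since(Q_hat, target=0, i=2, t=5.0))
    print(expected_time_since(Q_hat, target=0))

In this reading, the expected first-passage times returned by expected_time_since play the role the abstract assigns to successional rank: states whose expected time since empty space is large sit later in succession under the assumed generator.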

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Animals
  • Ecosystem*
  • Maine
  • Markov Chains*
  • Models, Biological*