A Markov chain is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event.

In probability theory and related fields, a Markov process, named after the Russian mathematician Andrey Markov, is a stochastic process that satisfies the Markov property (sometimes characterized as “memorylessness”). Roughly speaking, a process satisfies the Markov property if one can make predictions for the future of the process based solely on its present state just as well as one could knowing the process’s full history, hence independently of such history; that is, conditional on the present state of the system, its future and past states are independent.

A Markov chain is a type of Markov process that has either a discrete state space or a discrete index set (often representing time), but the precise definition of a Markov chain varies. For example, it is common to define a Markov chain as a Markov process in either discrete or continuous time with a countable state space (thus regardless of the nature of time), but it is also common to define a Markov chain as having discrete time in either a countable or continuous state space (thus regardless of the state space).

Markov studied Markov processes in the early 20th century, publishing his first paper on the topic in 1906. Random walks on the integers and the gambler’s ruin problem are examples of Markov processes. Some variations of these processes were studied hundreds of years earlier in the context of independent variables. Two important examples of Markov processes are the Wiener process, also known as the Brownian motion process, and the Poisson process, which are considered the most important and central stochastic processes in the theory of stochastic processes; both were discovered repeatedly and independently, before and after 1906, in various settings. These two processes are Markov processes in continuous time, while random walks on the integers and the gambler’s ruin problem are examples of Markov processes in discrete time.

Markov chains have many applications as statistical models of real-world processes, such as studying cruise control systems in motor vehicles, queues of customers arriving at an airport, currency exchange rates, storage systems such as dams, and population growth of certain animal species. The algorithm known as PageRank, which was originally proposed for the internet search engine Google, is based on a Markov process.

The following overview lists the different instances of Markov processes for different levels of state space generality and for discrete time vs. continuous time:

- Discrete time, countable state space: (discrete-time) Markov chain on a countable or finite state space.
- Continuous time, countable state space: continuous-time Markov process or Markov jump process.
- Discrete time, continuous or general state space: Markov chain on a measurable state space (for example, a Harris chain).
- Continuous time, continuous or general state space: any continuous stochastic process with the Markov property (for example, the Wiener process).

Note that there is no definitive agreement in the literature on the use of some of the terms that signify special cases of Markov processes. Usually the term “Markov chain” is reserved for a process with a discrete set of times, that is, a discrete-time Markov chain (DTMC), but a few authors use the term “Markov process” to refer to a continuous-time Markov chain (CTMC) without explicit mention. In addition, there are other extensions of Markov processes that are referred to as such but do not necessarily fall within any of these four categories (see Markov model). Moreover, the time index need not necessarily be real-valued; as with the state space, there are conceivable processes that move through index sets built from other mathematical constructs. Notice that the general-state-space continuous-time Markov chain is so general that it has no designated term.

While the time parameter is usually discrete, the state space of a Markov chain does not have any generally agreed-upon restrictions: the term may refer to a process on an arbitrary state space.[39] However, many applications of Markov chains employ finite or countably infinite state spaces, which admit a more straightforward statistical analysis. Besides the time-index and state-space parameters, there are many other variations, extensions, and generalizations (see Variations). For simplicity, most of this article concentrates on the discrete-time, discrete state-space case, unless mentioned otherwise.

The changes of state of the system are called transitions.[1] The probabilities associated with various state changes are called transition probabilities. The process is characterized by a state space, a transition matrix describing the probabilities of particular transitions, and an initial state (or initial distribution) across the state space. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state, and the process does not terminate.
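A minimal sketch of these three ingredients in code; the state names, transition probabilities, and initial distribution below are invented purely for illustration:

```python
import random

# Hypothetical two-state chain; states, matrix, and initial
# distribution are invented for this example.
states = ["sunny", "rainy"]

# transition[i][j] = probability of moving from states[i] to states[j];
# each row sums to 1, so there is always a next state.
transition = [
    [0.9, 0.1],  # from "sunny"
    [0.5, 0.5],  # from "rainy"
]

initial = [0.8, 0.2]  # initial distribution over the state space


def sample_path(n_steps, rng=random):
    """Draw one realization of the chain for n_steps transitions."""
    i = rng.choices(range(len(states)), weights=initial)[0]
    path = [states[i]]
    for _ in range(n_steps):
        # the next state depends only on the current index i
        i = rng.choices(range(len(states)), weights=transition[i])[0]
        path.append(states[i])
    return path


path = sample_path(10)
print(path)
```

Because every row of the transition matrix sums to 1, the sampling loop can always draw a next state, matching the convention described above.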

A discrete-time random process involves a system which is in a certain state at each step, with the state changing randomly between steps. The steps are often thought of as moments in time, but they can equally well refer to physical distance or any other discrete measurement. Formally, the steps are the integers or natural numbers, and the random process is a mapping of these to states. The Markov property states that the conditional probability distribution for the system at the next step (and in fact at all future steps) depends only on the current state of the system, and not additionally on the state of the system at previous steps.

Since the system changes randomly, it is generally impossible to predict with certainty the state of a Markov chain at a given point in the future. However, the statistical properties of the system’s future can be predicted. In many applications, it is these statistical properties that are important.
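One such statistical property is the distribution of the chain many steps ahead, which can be computed exactly by propagating a distribution through the transition matrix; the 2×2 matrix below is an invented example:

```python
# Although an individual trajectory is random, the *distribution* of
# the chain n steps ahead is deterministic.  The transition matrix P
# here is invented for illustration.
P = [
    [0.9, 0.1],
    [0.5, 0.5],
]

def step(dist, P):
    """One step forward: new_dist[j] = sum_i dist[i] * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0]      # start in state 0 with certainty
for _ in range(50):    # iterate; the distribution settles down
    dist = step(dist, P)

print(dist)  # for this matrix, approaches the stationary distribution [5/6, 1/6]
```

For this particular matrix, the iteration converges to the stationary distribution (5/6, 1/6), which can be verified by solving πP = π directly.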

A famous Markov chain is the so-called “drunkard’s walk”, a random walk on the number line where, at each step, the position may change by +1 or −1 with equal probability. From any position there are two possible transitions, to the next or previous integer. The transition probabilities depend only on the current position, not on the manner in which the position was reached. For example, the transition probabilities from 5 to 4 and from 5 to 6 are both 0.5, and all other transition probabilities from 5 are 0. These probabilities are independent of whether the system was previously in 4 or 6.
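The drunkard’s walk can be sketched in a few lines; the step count and starting position below are arbitrary choices for the example:

```python
import random

def drunkards_walk(n_steps, start=0, rng=random):
    """Random walk on the integers: move +1 or -1 with equal
    probability at each step, depending only on the current position."""
    position = start
    path = [position]
    for _ in range(n_steps):
        position += rng.choice([+1, -1])
        path.append(position)
    return path

path = drunkards_walk(100)
# Every increment is +1 or -1, regardless of how the position was reached.
assert all(abs(b - a) == 1 for a, b in zip(path, path[1:]))
```

The update uses only `position`, never the earlier entries of `path`, which is exactly the memorylessness described above.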

Discrete-time Markov chain

A discrete-time Markov chain is a sequence of random variables X1, X2, X3, … with the Markov property, namely that the probability of moving to the next state depends only on the present state and not on the previous states:

{\displaystyle \Pr(X_{n+1}=x\mid X_{1}=x_{1},X_{2}=x_{2},\ldots ,X_{n}=x_{n})=\Pr(X_{n+1}=x\mid X_{n}=x_{n}),}

if both conditional probabilities are well defined, that is, if

{\displaystyle \Pr(X_{1}=x_{1},\ldots ,X_{n}=x_{n})>0.}

The possible values of Xi form a countable set S called the state space of the chain.
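This definition can be checked empirically on a simulated chain: the conditional frequency of the next state given the pair (previous, current) should match the frequency given the current state alone, up to sampling noise. The two-state transition matrix below is invented for the experiment:

```python
import random

random.seed(0)  # make the experiment reproducible

# Invented transition matrix over the state space S = {0, 1}:
# P[i][j] = Pr(X_{n+1} = j | X_n = i).
P = {0: [0.7, 0.3], 1: [0.4, 0.6]}

x = 0
path = [x]
for _ in range(200_000):
    x = random.choices([0, 1], weights=P[x])[0]
    path.append(x)

# empirical Pr(X_{n+1} = 1 | X_n = 0)
num = sum(1 for a, b in zip(path, path[1:]) if a == 0 and b == 1)
den = sum(1 for a in path[:-1] if a == 0)
print(num / den)  # close to P[0][1] = 0.3

# empirical Pr(X_{n+1} = 1 | X_{n-1} = 1, X_n = 0): conditioning on
# extra history should not change the estimate (Markov property)
num2 = sum(1 for a, b, c in zip(path, path[1:], path[2:])
           if a == 1 and b == 0 and c == 1)
den2 = sum(1 for a, b in zip(path, path[1:]) if a == 1 and b == 0)
print(num2 / den2)  # also close to 0.3
```

Both empirical frequencies estimate the same quantity, Pr(X_{n+1} = 1 | X_n = 0), because the extra conditioning on X_{n-1} carries no additional information.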