The nuances of trading: the Markov process of price formation in the market
Modern models of financial markets and their pricing processes usually represent the price dynamics over an elementary period of time as the result of two factors: a deterministic instantaneous increment and a random increment. The first factor includes compensation for perceived risk, as well as the effect on price of causes such as imitation and herding in the trading crowd. The second factor is the noise component of price dynamics, whose amplitude is called volatility; it can also be a systematic parameter controlled by imitation as well as by other factors. If the first factor of price formation is absent and volatility is constant, the second term by itself generates random-walk trajectories.
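To make this decomposition concrete, here is a minimal Python sketch, assuming a simple additive increment model; the name simulate_price, the parameters mu and sigma, and all numeric values are illustrative and not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_price(p0=1.30, mu=0.0, sigma=0.0005, dt=1.0, n_steps=1000):
    """Price path p[t+1] = p[t] + mu*dt + sigma*sqrt(dt)*eps[t]."""
    eps = rng.standard_normal(n_steps)                   # the random factor (noise)
    increments = mu * dt + sigma * np.sqrt(dt) * eps     # drift + noise per elementary step
    return p0 + np.concatenate(([0.0], np.cumsum(increments)))

path = simulate_price()   # with mu = 0 and constant sigma this is a pure random walk
```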
Introducing into the model the ubiquitous nonlinear dependence of volatility and of the instantaneous increment on past values of volatility and returns produces a great variety of possible price-evolution trajectories. Of particular interest here are the numerous mechanisms that lead to a nonlinear positive feedback of prices on themselves. It is known that positive feedback loops in financial markets play a large role in diagnosing price differences in the studied time series of the foreign exchange market.
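One standard (though by no means the only) way to write down such a dependence of today's volatility on past returns and past volatility is a GARCH(1,1)-type recursion; the sketch below is an illustration under that assumption, not the model used in the article, and the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_garch(n=1000, omega=1e-6, alpha=0.1, beta=0.85):
    """Returns r[t] whose variance var[t] feeds back on past returns and past variance."""
    r = np.zeros(n)
    var = np.full(n, omega / (1.0 - alpha - beta))   # start at the unconditional variance
    for t in range(1, n):
        var[t] = omega + alpha * r[t - 1] ** 2 + beta * var[t - 1]   # nonlinear feedback
        r[t] = np.sqrt(var[t]) * rng.standard_normal()
    return r, np.sqrt(var)

returns, vol = simulate_garch()   # volatility now clusters instead of staying constant
```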
Elementary Markov process
Let's start with the elementary Markov pricing process. But first, let us formalize the most important concepts.
— A finite stochastic process is called a process with independent values if:
(1) for any statement p whose truth depends only on the outcomes of the experiments before n,
P{f_n = s_j | p} = P{f_n = s_j}.
For such processes, knowledge of the outcomes of already observed experiments does not affect our prediction of the next experiment. For Markov processes this requirement is relaxed: the value of the immediately preceding outcome is allowed to affect the prediction.
— A finite Markov process is a finite stochastic process such that:
(2) for any statement p whose truth depends only on the outcomes of the experiments before n,
P{f_n = s_j | (f_(n-1) = s_i) ∧ p} = P{f_n = s_j | f_(n-1) = s_i}
(it is assumed that the events f_(n-1) = s_i and p are compatible).
Condition (2) is called the Markov property. Sometimes the term "Markov process" is reserved for a continuous-time stochastic process satisfying the Markov property, and the object defined by (2) is then called a Markov chain with a finite number of states.
— A finite Markov chain is a finite Markov process for which the transition probabilities p_ij(n) do not depend on n.
Here it is appropriate to recall that the transition probability at the n-th step, usually denoted p_ij(n), is
p_ij(n) = P[f_n = s_j | f_(n-1) = s_i].
A Markov chain can be thought of as a process that moves from state to state. It starts in state s_i with probability p_i(0). If at some moment it is in state s_i, then at the next "step" it moves to state s_j with probability p_ij. The initial probabilities can be understood as the probabilities of one or another possible "start".
The vector of initial probabilities together with the transition matrix completely defines the Markov chain as a stochastic process, since they are sufficient to construct a complete measure on the tree of outcomes (the branching of outcomes). The transition matrix of a Markov chain is the matrix P with elements p_ij. The initial probability vector (or initial distribution) is the vector π_0 = {p_j(0)} = {P[f_0 = s_j]}.
Therefore, if some probability vector π_0 and some probability matrix P are given, then there is a unique Markov chain (up to a relabeling of states) for which π_0 is the initial distribution and P is the transition matrix.
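As a sketch of how the pair (π_0, P) determines everything we can ask about the chain: the distribution over states after n steps is simply π_n = π_0 P^n. The two-state matrix below is an arbitrary illustration, not taken from the article.

```python
import numpy as np

def distribution_after(pi0, P, n):
    """Distribution over states after n steps: pi_n = pi_0 @ P^n."""
    pi = np.asarray(pi0, dtype=float)
    P = np.asarray(P, dtype=float)
    for _ in range(n):
        pi = pi @ P
    return pi

pi0 = [1.0, 0.0]              # start in the first state with certainty
P = [[0.9, 0.1],              # an arbitrary two-state transition matrix
     [0.5, 0.5]]
print(distribution_after(pi0, P, 10))   # converges toward the stationary distribution
```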
Examples of Markov Processes
To make it clearer what we are talking about, let's look at two illustrative examples.
1. Market diagnostics. Suppose the market never rises two days in a row. If the market is bullish today, then the next day, with equal probabilities, there will be either a flat or a drop in prices. If the market is flat (or prices are falling), then with equal probability it either stays the same the next day or changes; if it changes, then half of the time there will be a rise in prices the next day.
Under these conditions it is convenient to represent the market as a Markov chain with three states, S = {flat (F), bullish (B), bearish (M)}, and the transition matrix that follows from the conditions above (rows and columns in the order F, B, M):

        F      B      M
  F    1/2    1/4    1/4
  B    1/2     0     1/2
  M    1/4    1/4    1/2
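A small Python sketch of this chain (state order F, B, M, as in the matrix above); the simulation also checks the defining property that the market never rises two days in a row. The function and variable names are, of course, only illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
states = ["F", "B", "M"]                 # flat, bullish, bearish
P = np.array([[0.50, 0.25, 0.25],        # from flat
              [0.50, 0.00, 0.50],        # from bullish: never bullish twice in a row
              [0.25, 0.25, 0.50]])       # from bearish

def simulate_market(start=0, n_days=10_000):
    path, s = [start], start
    for _ in range(n_days):
        s = rng.choice(3, p=P[s])        # next state drawn from row s of P
        path.append(int(s))
    return path

path = simulate_market()
# the chain never produces two bullish days in a row
assert not any(a == 1 and b == 1 for a, b in zip(path, path[1:]))
```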
2. A finite random walk of the asset price. The exchange rate begins a random walk inside the price corridor between support (S) and resistance (R). The initial price level lies somewhere in the middle between S and R. The rate tends to grow, i.e. to reach the resistance level. Let us label the distance in points from S to R with the integers from 0 to n; the initial price level corresponds to a number i located between 0 and n, and this is the initial state.
At each step the market moves up (toward state n) with probability p or down (toward state 0) with probability q = 1 - p.
States 0 and n are absorbing. Assume that p is neither 0 nor 1. In this case the transition matrix has ones for the absorbing states (p_00 = p_nn = 1), while every interior state i (0 < i < n) has p_(i,i+1) = p and p_(i,i-1) = q, with all other entries equal to zero.
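For this chain, the probability of being absorbed at the resistance level before the support level is the classical "gambler's ruin" result; a short sketch (the parameter values in the example call are purely illustrative):

```python
def prob_hit_resistance(i, n, p):
    """Probability that the walk started at state i reaches n before 0 (up-probability p)."""
    q = 1.0 - p
    if abs(p - q) < 1e-12:                 # symmetric walk, p = q = 1/2
        return i / n
    r = q / p
    return (1.0 - r ** i) / (1.0 - r ** n)

print(prob_hit_resistance(i=5, n=10, p=0.55))   # slight upward drift: about 0.73
```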
Now let's move on to consider an example in the foreign exchange market.
Example of Markov chains formed on US Nonfarm payrolls data
Since, as we mentioned above, the initial probabilities can be understood as the probabilities of one or another possible "start", the elementary Markov pricing process (the elementary Markov chain) can be viewed as an "instantaneous" market response to an external impulse (the release of important news), with the direction of the asset's price move set by the difference between the released figure and the one that was expected.
For example, on April 1, 2005, a significant economic indicator, Nonfarm Payrolls, was released. The number of new jobs in the U.S. for March was +110,000 (the forecast was +225,000; the previous value was +262,000).
Fig. 1. Euro/dollar chart on a 2-minute sweep: the market reaction to the release of the US NFP indicator on 01.04.2005.
Figure 1 shows the natural reaction of the euro/dollar market to such "bad" statistics. The market-makers here play with two numbers: the one released at 16:30 Moscow time and the one that was expected. Because the market had already discounted +225,000 new jobs "on expectations" before the statistics came out, right after the NFP figure was released (the number of new payroll jobs in the USA grew by only +110,000 in March 2005) it became clear that the US dollar was overvalued by the market. The market-makers were the first to understand this and to react: within a minute the EUR/USD rate rose by more than 80 points. Then the market-makers "ran into" sell orders from the bears, which stopped the rise of the euro. Subsequently, for the next 20-30 minutes, even a slight rise of the euro was met with bear attacks, which may indicate (to a zeroth approximation) a strong structuring of the group consciousness of the players in the euro/dollar market in favor of selling.
Fig. 2. Statistics (over the last 10 years) of price jumps in the USD/CHF market at the moment of the US NFP release (on 1-, 2-, and 3-minute sweeps), as well as 5 minutes and 20 minutes later.
Figure 2 shows statistics (over the past 10 years) of price jumps in the USD/CHF market at the moment of the US NFP release (on 1-, 2-, and 3-minute sweeps), as well as five and twenty minutes later. It can be seen that at the moment of the news release (16:30 Moscow time) the frequency distribution of jumps as a function of their magnitude is far from Gaussian. Then, literally within five (!) minutes, the frequency rises sharply near the "zero" jump. And after another 20 minutes the distribution already resembles a Gaussian curve. Visualizing online how these statistical curves deform as the Markov chains develop and fade in our market allows us to measure the duration of such a chain: in the market under study, after the release of the US NFP news the process fades in about 25-30 minutes.
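The author's online measurement program is not reproduced here, but a minimal sketch of the idea can be given: collect price jumps over many past release dates at several lags after 16:30 and watch the excess kurtosis fall toward the Gaussian value of zero. The data below is a synthetic stand-in; with real minute bars the function synthetic_release_paths would be replaced by a data loader.

```python
import numpy as np

rng = np.random.default_rng(3)

def synthetic_release_paths(n_events=200, n_minutes=40):
    """Stand-in for minute closes starting at 16:30 Moscow time on past release days."""
    noise = rng.standard_normal((n_events, n_minutes)) * 0.0003
    paths = np.cumsum(noise, axis=1)
    shock = rng.standard_t(df=3, size=n_events) * 0.003     # fat-tailed jump at the release
    paths[:, :3] += shock[:, None]                          # the shock hits the first minutes
    return paths

def excess_kurtosis(x):
    z = (x - x.mean()) / x.std()
    return float((z ** 4).mean() - 3.0)                     # ~ 0 for a Gaussian

paths = synthetic_release_paths()
for lag in (0, 5, 20):
    jumps = paths[:, lag + 3] - paths[:, lag]               # 3-minute jumps starting at `lag`
    print(f"lag {lag:2d} min: std={jumps.std():.5f}, excess kurtosis={excess_kurtosis(jumps):.2f}")
```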
Of course, not every trader is able to program the measurement of such fractal statistics online and to detect the end of the finite Markov process: such a program is complex and demands computers with a large amount of RAM. A way out, however, can be found by analyzing the dynamics of the Bollinger Bands trend indicator.
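A hedged sketch of that workaround: compute the Bollinger Band width (upper band minus lower band, i.e. 2k rolling standard deviations) minute by minute and flag the bar at which the width contracts back toward its pre-release level, taking that as the moment the post-news Markov chain has faded. The period of 20, k = 2 and the tolerance factor are conventional or illustrative values, not parameters given in the article.

```python
import numpy as np

def bollinger_width(closes, period=20, k=2.0):
    """Width of the Bollinger Bands (upper minus lower) for each bar."""
    closes = np.asarray(closes, dtype=float)
    width = np.full(closes.shape, np.nan)
    for t in range(period - 1, len(closes)):
        window = closes[t - period + 1 : t + 1]
        width[t] = 2.0 * k * window.std()
    return width

def fade_bar(width, release_index, quiet_level, tolerance=1.25):
    """First bar after the release where the band width is back near its quiet level."""
    for t in range(release_index, len(width)):
        if not np.isnan(width[t]) and width[t] <= tolerance * quiet_level:
            return t
    return None
```

In live use, quiet_level would be the average band width over the minutes preceding the release, so that fade_bar returns the minute at which the post-release expansion has died down.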
A more detailed study of the frequency of events as a function of the size of the price jump at 16:30 Moscow time (the moment of the US NFP release; see the first curves in Fig. 2) on different time sweeps also allows us to predict clustering effects in the formation of the Markov chain, which in turn is a good help in exploring the direction of the post-Markovian pricing process in our market, when the market essentially reverts to memory (a long-memory process).
After the Markovian pricing processes are complete, memory returns to the market, so the subsequent market dynamics is largely determined by the group consciousness of market participants. If this consciousness is significantly structured, strong intermediate trends are possible, accompanied by turbulence in the price of the studied asset. Their modeling is quite successfully achieved with a jump-diffusion model, but that is the topic of a separate publication.