
How the Law of Large Numbers shapes trading.

Written by Agustin Baldovino Pacce | Aug 21, 2024 1:06:42 PM

Continuing our series on why it is so difficult for individual traders to consistently beat the market over time, today we will explore how the Law of Large Numbers conspires against that goal from a mathematical point of view. In our last article we talked about the psychological aspects of trading and how those biases affect performance. But there is more to it: not only does our psyche work against the goal of consistently beating the market, the math does not help either. This is where the Law of Large Numbers (LLN) comes into play.

The LLN is a powerful concept that underpins many statistical methods and real-world applications. It assures us that with enough data, the sample average will provide a reliable estimate of the population mean.

There are two main forms of the LLN: the Weak Law of Large Numbers (WLLN) and the Strong Law of Large Numbers (SLLN).

  1. Weak Law of Large Numbers (WLLN)

The WLLN states that for a sequence of independent and identically distributed (i.i.d.) random variables, the sample average converges in probability to the expected value as the sample size grows.

Mathematically, let ( X_1, X_2, \ldots, X_n ) be i.i.d. random variables with expected value ( \mu ) and variance ( \sigma^2 ). The sample average ( \overline{X}_n ) is given by:

\[ \overline{X}_n = \frac{1}{n} \sum_{i=1}^{n} X_i \]

The WLLN states that for any ( \epsilon > 0 ):

\[ \lim_{n \to \infty} P\left( \left| \overline{X}_n - \mu \right| > \epsilon \right) = 0 \]

This means that as ( n ) increases, the probability that the sample average ( \overline{X}_n ) deviates from the expected value ( \mu ) by more than ( \epsilon ) approaches zero.
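
To make this concrete, here is a minimal simulation sketch (our own illustration in Python with NumPy, not part of the formal statement): for i.i.d. Uniform(0, 1) draws, so ( \mu = 0.5 ), it estimates the probability that the sample average deviates from ( \mu ) by more than ( \epsilon = 0.05 ), and that probability shrinks toward zero as ( n ) grows.

```python
import numpy as np

# Monte Carlo check of the WLLN for i.i.d. Uniform(0, 1) variables (mu = 0.5).
rng = np.random.default_rng(seed=42)
mu, eps, trials = 0.5, 0.05, 2_000

for n in (10, 100, 1_000, 5_000):
    # 'trials' independent sample averages, each built from n draws
    sample_means = rng.uniform(0.0, 1.0, size=(trials, n)).mean(axis=1)
    prob_deviation = np.mean(np.abs(sample_means - mu) > eps)
    print(f"n={n:>5}: estimated P(|X_bar - mu| > {eps}) = {prob_deviation:.4f}")
```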

  2. Strong Law of Large Numbers (SLLN)

The SLLN is a stronger version of the LLN, stating that the sample average almost surely converges to the expected value as the sample size grows.

Mathematically, using the same notation as above, the SLLN states:

\[ P\left( \lim_{n \to \infty} \overline{X}_n = \mu \right) = 1 \]

This means that with probability 1, the sample average ( \overline{X}_n ) will converge to the expected value ( \mu ) as ( n ) approaches infinity.

The LLN can be understood intuitively by considering the idea that the more observations you have, the more the average of these observations will reflect the true average of the population. For example, if you flip a fair coin many times, the proportion of heads will get closer to 0.5 as the number of flips increases.
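
The coin-flip intuition is easy to verify with a few lines of code. The following is a minimal sketch (again our own illustration, assuming NumPy is available) that tracks the running proportion of heads along a single sequence of 100,000 simulated fair-coin flips:

```python
import numpy as np

# One long sequence of fair-coin flips; track the running proportion of heads.
rng = np.random.default_rng(seed=0)
flips = rng.integers(0, 2, size=100_000)      # 1 = heads, 0 = tails
running_prop = np.cumsum(flips) / np.arange(1, flips.size + 1)

for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"after {n:>6} flips: proportion of heads = {running_prop[n - 1]:.4f}")
```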

Consider rolling a fair six-sided die. The expected value ( \mu ) of a single roll is:

\[ \mu = \frac{1 + 2 + 3 + 4 + 5 + 6}{6} = 3.5 \]

If you roll the die ( n ) times and calculate the average of the outcomes, the LLN states that this average will get closer to 3.5 as ( n ) increases.
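
The same experiment takes only a few lines to simulate; the sketch below (our own illustration) compares the sample average with the expected value 3.5 for increasing numbers of rolls:

```python
import numpy as np

# Roll a fair six-sided die n times and compare the sample average with 3.5.
rng = np.random.default_rng(seed=1)

for n in (10, 100, 1_000, 10_000, 100_000):
    rolls = rng.integers(1, 7, size=n)        # faces 1 through 6
    print(f"n={n:>6}: average roll = {rolls.mean():.4f} (expected value: 3.5)")
```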

When it comes to trading, the LLN has several implications for the challenge of consistently beating the market: market efficiency, the balancing effect of diversification, the difficulty of managing large portfolios, and the unpredictability of short-term movements. While it is possible to achieve above-average returns in the short term, the LLN suggests that over the long term, most traders' and investors' returns will converge to the market average.

The stock market is generally efficient, meaning that stock prices reflect all available information. As more traders participate and more trades are executed, the market price of a stock tends to converge to its true value. This efficiency makes it difficult for any single trader to consistently find undervalued or overvalued stocks to exploit for profit.

The Efficient Market Hypothesis (EMH) posits that financial markets are “informationally efficient,” meaning that asset prices reflect all available information at any given time. There are three forms of EMH:

Weak Form: Prices reflect all past trading information.

Semi-Strong Form: Prices reflect all publicly available information.

Strong Form: Prices reflect all information, both public and private.

Consider a stock with a true value ( \mu ). Let ( P_t ) be the price at time ( t ), and assume that ( P_t ) is influenced by new information ( I_t ) and random noise ( \epsilon_t ):

\[ P_t = \mu + I_t + \epsilon_t \]

In an efficient market, ( I_t ) is quickly incorporated into the price, and ( \epsilon_t ) represents random fluctuations. According to the LLN, as the number of time periods ( t ) increases, the average price ( \overline{P}_t ) will converge to ( \mu ):

\[ \lim_{t \to \infty} \overline{P}_t = \mu \]

This convergence supports the EMH: over time, prices reflect the true value of the stock and the impact of random noise diminishes, which provides a mathematical foundation for the theory of market efficiency.

The diminishing of the random noise ( \epsilon ) can be explained intuitively by the fact that with more and more observations, the random ups and downs (the noise terms ( \epsilon )) tend to cancel each other out. The larger the number of observations, the closer the sample average gets to the true value ( \mu ), because the influence of any single random fluctuation becomes negligible.

Mathematically, this can be expressed as:

\[ \overline{P}_t = \frac{1}{t} \sum_{i=1}^{t} P_i \]

where, once the information has been incorporated into the true value, each ( P_i ) can be expressed as:

\[ P_i = \mu + \epsilon_i \]

If we substitute ( P_i ) into the ( \overline{P}_t ) formula:

\[ \overline{P}_t = \frac{1}{t} \sum_{i=1}^{t} (\mu + \epsilon_i) = \mu + \frac{1}{t} \sum_{i=1}^{t} \epsilon_i \]

The term ( \frac{1}{t} \sum_{i=1}^{t} \epsilon_i ) represents the average of the random noise. According to the LLN, as ( t ) increases, this average converges to the expected value of the noise, which is typically zero if the noise is centered around zero (i.e., ( E[\epsilon_i] = 0 )):

\[ \frac{1}{t} \sum_{i=1}^{t} \epsilon_i \to 0 \quad \text{as} \quad t \to \infty \]

As ( t ) increases, the impact of the random noise ( \epsilon_i ) on the sample average ( \overline{P}_t ) diminishes. This is because the term ( \frac{1}{t} ) effectively scales down the sum of the noise terms, making their average contribution smaller and smaller.
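
This noise-cancellation argument can also be checked numerically. Below is a minimal sketch (our own illustration; the true value ( \mu = 100 ) and the noise scale are assumed for the example, not taken from market data) that simulates ( P_i = \mu + \epsilon_i ) and shows the average price approaching ( \mu ) while the averaged noise approaches zero:

```python
import numpy as np

# Simulate P_i = mu + eps_i with zero-mean Gaussian noise and show that the
# average price approaches mu while the averaged noise approaches zero.
rng = np.random.default_rng(seed=7)
mu, sigma = 100.0, 5.0                        # assumed true value and noise scale

for t in (10, 100, 1_000, 10_000):
    eps = rng.normal(0.0, sigma, size=t)      # E[eps_i] = 0
    prices = mu + eps
    print(f"t={t:>6}: average price = {prices.mean():9.4f}, "
          f"average noise = {eps.mean():+.4f}")
```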

This is very interesting because it opens the door to another important concept that works hand in hand with the LLN: the Central Limit Theorem (CLT), as the two are closely related. We will dive deeper into this in our next article, but for now we can say that while the LLN tells us that the sample average converges to the population mean as ( n ) increases, the CLT takes us a step further by describing the shape of the distribution of the sample average: as ( n ) increases, the distribution of the suitably standardized sample average becomes approximately standard normal, ( N(0, 1) ). But more on these implications next time.

Moving on to large institutional investors, the LLN implies that achieving high returns becomes increasingly difficult as the size of their portfolio grows. This is because they need to invest large sums of money, which often means buying stocks that are already widely held and efficiently priced. The larger the portfolio, the harder it is to find enough mispriced stocks to generate above-average returns.

Along the same lines, the diversification of large portfolios brings another challenge: the LLN suggests that increasing the number of stocks in a portfolio makes its performance mirror the average market return more closely. This is because the individual variations in stock performance tend to cancel each other out, leading to a return that approximates the market average. While diversification reduces risk, it also means that extraordinary gains from a few stocks are balanced out by losses in others (see the simulation sketch after this passage).

In the short term, stock prices are influenced by a myriad of factors, including market sentiment, economic data, and geopolitical events. These factors introduce a lot of noise, making it difficult to predict short-term movements accurately. Over the long term, however, the LLN suggests that the average return of a large number of trades will converge to the market average, making it hard to consistently outperform.
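
Here is a minimal sketch of that diversification effect (our own illustration; the 8% mean and 20% standard deviation of annual returns, and the i.i.d. assumption, are chosen purely for the example, not estimated from real data). As the number of stocks ( N ) grows, the spread of simulated portfolio returns collapses around the average:

```python
import numpy as np

# Simulated equal-weighted portfolios of N stocks with i.i.d. annual returns.
# As N grows, portfolio returns cluster tightly around the mean return.
rng = np.random.default_rng(seed=3)
mean_ret, std_ret, n_portfolios = 0.08, 0.20, 10_000   # assumed, for illustration

for n_stocks in (1, 10, 100, 1_000):
    returns = rng.normal(mean_ret, std_ret,
                         size=(n_portfolios, n_stocks)).mean(axis=1)
    print(f"N={n_stocks:>5} stocks: mean return = {returns.mean():.4f}, "
          f"spread (std) = {returns.std():.4f}")
```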

As we can see, there are many reasons why it is so difficult to consistently outperform the market. We will keep digging: in our next article we will discuss, as mentioned above, how the LLN can be integrated with the CLT. Further ahead, in coming installments, we will start outlining ways that could nevertheless skew the probabilities in our favor, on solid theoretical ground.