Matrix Notation – A Quick Introduction

Vectors and Matrices

Vector and matrix notation is a convenient and succinct way to represent dense data structures and complex operations. Its application is called matrix algebra, which should be viewed simply as an extension of traditional algebra rather than an alternative to it. Matrix notation is used across a range of scientific and mathematical branches such as linear algebra, multivariate analysis, economics and machine learning. Its forte is its ability to abstract individual variables and constants into larger array structures, which lets us avoid the mess of juggling a host of indices on individual variables. The following is a (very) brief introduction to some of the more useful parts of matrix notation.

Data Structure

A vector \(\vec{v}\in \mathbb{R}^{n}\) represents an ordered list of individual variables \(v_i\in \mathbb{R}\). The index \(i\in \{1, \cdots, n\}\) denotes where a particular variable fits within a vector. By default the variables are considered to be listed vertically.

\begin{equation}
\label{eq:vector}
\vec{v} =
\begin{bmatrix}
v_{1} \\
\vdots \\
v_{n} \end{bmatrix}
\end{equation}

A vector with this configuration is called a column vector. The orientation of a vector matters when performing operations on it. The following illustrates the transpose operation \(\mathrm{T}\), which lays \(\vec{v}\) out horizontally as a row vector.

\begin{equation}
\label{eq:transposedVector}
\vec{v}^\mathrm{T} =
\left[ \begin{array}{ccc}
v_{1} & \cdots & v_{n}
\end{array} \right]
\end{equation}

A neat way to represent a column vector is thus as follows.

\begin{equation}
\vec{v}=[v_1 \dots v_n]^\mathrm{T}
\end{equation}

An extension of vector notation is the more expressive matrix notation. A matrix \(A\in \mathbb{R}^{m\times n}\) is a rectangular array, or table, of individual variables \(a_{ij}\in \mathbb{R}\). The first index \(i\) identifies a row of the matrix, whereas the second index \(j\) identifies a column. One interpretation is that a matrix is composed of multiple vectors representing either its rows or its columns, as the two decompositions below illustrate.

\begin{equation}
\label{eq:matrix}
\begin{array}{ccl}
A
&=&
\begin{bmatrix}
a_{11} & \cdots & a_{1n} \\
\vdots & \ddots & \vdots \\
a_{m1} & \cdots & a_{mn}
\end{bmatrix}
=
\begin{bmatrix}
\vec{a}_{1}^\mathrm{T} \\
\vdots \\
\vec{a}_{m}^\mathrm{T}
\end{bmatrix}
,\quad\\
&&
\vec{a}_{i}^\mathrm{T}
=
\begin{bmatrix}
a_{i1} & \cdots & a_{in}
\end{bmatrix}
,\quad i \in \{1, \cdots, m\}
\end{array}
\end{equation}

\begin{equation}
\label{eq:alternativeMatrix}
D
=
\begin{bmatrix}
d_{11} & \cdots & d_{1n} \\
\vdots & \ddots & \vdots \\
d_{m1} & \cdots & d_{mn} \end{bmatrix}
=
\begin{bmatrix}
\vec{d}_{1} & \cdots & \vec{d}_{n}
\end{bmatrix}
,\quad
\vec{d}_{j} =
\begin{bmatrix}
d_{1j} \\
\vdots \\
d_{mj} \end{bmatrix}
\end{equation}

Conversely, a vector can be interpreted as a matrix with only one column or row, hence the terms column vector and row vector.

Arithmetic Operations

There are multiple ways to multiply vectors and matrices. As in ordinary algebraic notation, writing two vectors next to each other implies a multiplication; for vectors this implicit operation is the dot product.

\begin{equation}
\label{eq:dotProduct}
\vec{v} \cdot \vec{u} = \vec{v}^\mathrm{T}\vec{u} =
\begin{bmatrix}
v_{1} & \cdots & v_{n}
\end{bmatrix}
\begin{bmatrix}
u_{1} \\
\vdots \\
u_{n} \end{bmatrix}
= v_1u_1 + \cdots + v_nu_n
\end{equation}

The vectors must be of equal dimension; each ordered pair of entries is multiplied and the products are summed.
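As a concrete example with made-up numbers, let \(\vec{v} = [1\;\; 2\;\; 3]^\mathrm{T}\) and \(\vec{u} = [4\;\; 5\;\; 6]^\mathrm{T}\); then

\begin{equation}
\vec{v}\cdot\vec{u} = 1\cdot 4 + 2\cdot 5 + 3\cdot 6 = 4 + 10 + 18 = 32
\end{equation}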

General matrix multiplication is an extension, or generalization, of the dot product. Each row vector of the first matrix \(A\in \mathbb{R}^{m\times n}\) is combined, via the dot product, with every column vector of the second matrix \(B\in \mathbb{R}^{n\times k}\).

\begin{equation}
\label{eq:matrixMult}
\begin{array}{ccl}
AB & = &
\begin{bmatrix}
a_{11} & \cdots & a_{1n} \\
\vdots & \ddots & \vdots \\
a_{m1} & \cdots & a_{mn}
\end{bmatrix}
\begin{bmatrix}
b_{11} & \cdots & b_{1k} \\
\vdots & \ddots & \vdots \\
b_{n1} & \cdots & b_{nk}
\end{bmatrix}\\
& = &
\begin{bmatrix}
\vec{a}_{1}^\mathrm{T}\vec{b}_{1} & \cdots & \vec{a}_{1}^\mathrm{T}\vec{b}_{k}\\
\vdots & \ddots & \vdots \\
\vec{a}_{m}^\mathrm{T}\vec{b}_{1} & \cdots & \vec{a}_{m}^\mathrm{T}\vec{b}_{k}
\end{bmatrix}
\end{array}
\end{equation}

Notice how the inner dimensions \(n\) of the matrices must agree. That is, each row vector of matrix \(A\) must have the same dimension as the column vectors of matrix \(B\). The resulting matrix \(AB\) is thus of dimension \(m\times k\), where each entry is defined as follows.

\begin{equation}
\label{eq:matrixEntryMult}
\begin{array}{c}
\vec{a}_{i}^\mathrm{T}\vec{b}_{j}
=
\begin{bmatrix}
a_{i1} & \cdots & a_{in}
\end{bmatrix}
\begin{bmatrix}
b_{1j} \\
\vdots \\
b_{nj}
\end{bmatrix}
=
a_{i1}b_{1j} + \cdots + a_{in}b_{nj},\\
i\in \{1, \cdots, m\},\quad j\in \{1, \cdots, k\},
\end{array}
\end{equation}

A special case of matrix multiplication occurs when \(k=1\), i.e., the multiplication of a matrix \(A\in \mathbb{R}^{m\times n}\) by a vector \(\vec{v}\in \mathbb{R}^{n}\). The dot product of \(\vec{v}\) with each row vector of \(A\) produces the vector \(A\vec{v}\in\mathbb{R}^{m}\).

\begin{equation}
\label{eq:matrixMultVsVector}
A\vec{v} =
\begin{bmatrix}
a_{11} & \cdots & a_{1n} \\
\vdots & \ddots & \vdots \\
a_{m1} & \cdots & a_{mn}
\end{bmatrix}
\begin{bmatrix}
v_{1} \\
\vdots \\
v_{n}
\end{bmatrix}
=
\begin{bmatrix}
\vec{a}_{1}^\mathrm{T}\vec{v}\\
\vdots \\
\vec{a}_{m}^\mathrm{T}\vec{v}
\end{bmatrix}
=
\begin{bmatrix}
a_{11}v_1 + \cdots + a_{1n}v_n\\
\vdots \\
a_{m1}v_1 + \cdots + a_{mn}v_n
\end{bmatrix}
\end{equation}
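None of the code in this post comes from the original text; the following is a minimal NumPy sketch (NumPy being my own choice of library) showing how the dot product, the matrix product and the matrix-vector product look in practice:

\begin{verbatim}
import numpy as np

v = np.array([1.0, 2.0, 3.0])          # column vector (stored as a 1-D array)
u = np.array([4.0, 5.0, 6.0])

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])        # A is 2x3, i.e. m = 2, n = 3
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])             # B is 3x2, inner dimensions agree (n = 3)

print(v @ u)      # dot product v^T u -> 32.0
print(A @ B)      # matrix product, shape (2, 2)
print(A @ v)      # matrix-vector product, shape (2,)
\end{verbatim}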

In the field of computer science a second, arguably more intuitive, matrix and vector multiplication is often used. The entrywise product, also known as the Hadamard product, applies ordinary algebraic multiplication to every corresponding pair of entries of two matrices or vectors of identical dimensions.

\begin{equation}
\label{eq:entrywiseProduct}
\begin{array}{ccl}
A\circ D &=&
\begin{bmatrix}
a_{11} & \cdots & a_{1n} \\
\vdots & \ddots & \vdots \\
a_{m1} & \cdots & a_{mn}
\end{bmatrix}
\circ
\begin{bmatrix}
d_{11} & \cdots & d_{1n} \\
\vdots & \ddots & \vdots \\
d_{m1} & \cdots & d_{mn}
\end{bmatrix}\\
&=&
\begin{bmatrix}
a_{11}d_{11} & \cdots & a_{1n}d_{1n} \\
\vdots & \ddots & \vdots \\
a_{m1}d_{m1} & \cdots & a_{mn}d_{mn}
\end{bmatrix}
\end{array}
\end{equation}

An exception to the dimensionality requirements discussed above is the multiplication of a vector or matrix by a constant, or weight. In this case the weight is simply multiplied into every entry.

\begin{equation}
\label{eq:constantVectorProduct}
c\vec{v} =
c
\begin{bmatrix}
v_{1} \\
\vdots \\
v_{m}
\end{bmatrix}
=
\begin{bmatrix}
cv_{1} \\
\vdots \\
cv_{m}
\end{bmatrix}
\end{equation}

Unsurprisingly, addition and subtraction also behave entrywise as in the vector case below.

\begin{equation}
\label{eq:additionAndSubtractionVector}
\vec{v} \pm \vec{u} =
\begin{bmatrix}
v_{1} \\
\vdots \\
v_{m}
\end{bmatrix}
\pm
\begin{bmatrix}
u_{1} \\
\vdots \\
u_{m}
\end{bmatrix}
=
\begin{bmatrix}
v_{1}\pm u_{1} \\
\vdots \\
v_{m}\pm u_{m}
\end{bmatrix}
\end{equation}

As with the entrywise product, the dimensions must agree for addition and subtraction.
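As a small illustration (again my own NumPy sketch, not part of the original text), the entrywise product, scalar multiplication and entrywise addition map directly onto NumPy's elementwise operators:

\begin{verbatim}
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
D = np.array([[5.0, 6.0],
              [7.0, 8.0]])

print(A * D)        # Hadamard (entrywise) product: [[5, 12], [21, 32]]
print(2.0 * A)      # scalar multiplication: every entry doubled
print(A + D)        # entrywise addition; shapes must agree
\end{verbatim}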

Vector Norms

An important class of vector and matrix operations is the norm, denoted by some variation of \(||\cdot||\). Both vectors and matrices have corresponding norm concepts; however, only vector norms are needed here. Many types of norms exist, but they all share the common trait of measuring the length, or size, of a vector. The most notable norm is the Euclidean norm.

\begin{equation}
\label{eq:euclNorm}
||\vec{v}||_2 = \sqrt{\vec{v}^\mathrm{T}\vec{v}} = \sqrt{v_1^2 + \cdots + v_n^2}
\end{equation}

This is often used to calculate the Euclidean distance between two points.

\begin{equation}
\label{eq:euclDist}
d(\vec{v},\vec{u})= ||\vec{v}-\vec{u}||_2 = \sqrt{(\vec{v}-\vec{u})^\mathrm{T}(\vec{v}-\vec{u})} = \sqrt{\sum_{i=1}^n{(v_i-u_i)^2}}
\end{equation}

A generalization of the Euclidean norm is the p-norm.

\begin{equation}
\label{eq:pNorm}
||\vec{v}||_p = \left(\sum_{i=1}^n{|v_i|^p}\right)^\frac{1}{p}
\end{equation}

Besides the Euclidean norm, the most important p-norms are the Manhattan norm with \(p=1\) and the infinity norm with \(p=\infty\).

\begin{equation}
\label{eq:1Norm}
||\vec{v}||_1 = \left(\sum_{i=1}^n{|v_i|^1}\right)^\frac{1}{1} = \sum_{i=1}^n{|v_i|}
\end{equation}

\begin{equation}
\label{eq:inftyNorm}
||\vec{v}||_\infty = \max\{|v_1|, \cdots, |v_n|\}
\end{equation}
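For experimentation, NumPy exposes these norms through np.linalg.norm; a minimal sketch of my own, not part of the original text:

\begin{verbatim}
import numpy as np

v = np.array([3.0, -4.0])
u = np.array([0.0,  0.0])

print(np.linalg.norm(v, 2))        # Euclidean norm: sqrt(9 + 16) = 5.0
print(np.linalg.norm(v, 1))        # Manhattan norm: |3| + |-4| = 7.0
print(np.linalg.norm(v, np.inf))   # infinity norm: max(|3|, |-4|) = 4.0
print(np.linalg.norm(v - u, 2))    # Euclidean distance between v and u
\end{verbatim}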

Linear and Quadratic Equations

Matrix notation allows for succinct expression of linear and quadratic systems of equations. The set of linear equations \(y_i = a_{i1}x_{1} + \cdots + a_{in}x_{n} + b_i = 0,\; i\in \{1, \cdots, m\}\) can be written as follows.

\begin{equation}
\label{eq:linearEquations}
\vec{y} = A\vec{x} + \vec{b} = \vec{0}
\end{equation}

The quadratic equation \(y = \sum_{i,j=1}^n{a_{ij}x_ix_j}+b=0\) can be written as follows.

\begin{equation}
\label{eq:quadraticEquation}
y = \vec{x}^\mathrm{T}A\vec{x} + b = 0
\end{equation}
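To see why the matrix form captures the double sum, expand it for \(n=2\):

\begin{equation}
\vec{x}^\mathrm{T}A\vec{x} =
\begin{bmatrix}
x_{1} & x_{2}
\end{bmatrix}
\begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{bmatrix}
\begin{bmatrix}
x_{1} \\
x_{2}
\end{bmatrix}
= a_{11}x_1^2 + a_{12}x_1x_2 + a_{21}x_2x_1 + a_{22}x_2^2
\end{equation}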

Miscellaneous Structures and Operations

Sometimes it is useful to have a way to represent a vector of identical entries. The null vector \(\vec{0}\) and the one vector \(\vec{1}\) are vectors whose entries are all \(0\) and \(1\), respectively. Usually the dimension of such a vector is implied by the context. One application of the one vector is summing the entries of a vector: \(\vec{1}^\mathrm{T}\vec{x} = x_1 + \cdots + x_m\).

\begin{equation}
\label{eq:nullOneVector}
\vec{0} =
\begin{bmatrix}
0 \\
\vdots \\
0
\end{bmatrix},
\quad\vec{1} =
\begin{bmatrix}
1 \\
\vdots \\
1
\end{bmatrix}
\end{equation}

The \(\mathrm{diag}(\cdot)\) operator transforms a vector \(\vec{v}\in\mathbb{R}^{n}\) into a diagonal matrix \(V\in\mathbb{R}^{n\times n}\) by putting each entry of the vector into the diagonal of the matrix while keeping other entries zero.

\begin{equation}
\label{eq:diagVectorToMatrix}
\mathrm{diag}(\vec{v}) = \mathrm{diag}
\begin{bmatrix}
v_1 \\
\vdots \\
v_n
\end{bmatrix}
=
\begin{bmatrix}
v_{1} & \cdots & 0 \\
\vdots & \ddots & \vdots \\
0 & \cdots & v_{n}
\end{bmatrix}
=
V
\end{equation}

An often used matrix concept is the identity matrix, defined as \(I_n = \mathrm{diag}(\vec{1})\) with \(\vec{1}\in\mathbb{R}^{n}\). It has the property that multiplying by it has no effect, e.g., \(AI_n=A\), and it can be viewed as a cousin of \(1\) in ordinary arithmetic.
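In NumPy terms (an illustrative choice on my part), \(\mathrm{diag}(\cdot)\) and the identity matrix correspond to np.diag and np.eye:

\begin{verbatim}
import numpy as np

v = np.array([1.0, 2.0, 3.0])
V = np.diag(v)        # 3x3 matrix with v on the diagonal, zeros elsewhere
I = np.eye(3)         # identity matrix, equal to np.diag(np.ones(3))

A = np.arange(9.0).reshape(3, 3)
print(np.allclose(A @ I, A))   # True: multiplying by the identity has no effect
\end{verbatim}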

Finally, augmentation of vectors and matrices can be needed when preparing data for machine learning and optimization software. Augmentation simply means that we concatenate two objects of compatible dimensions. For instance, a vector \(\vec{v} = [v_1, \cdots, v_n]^\mathrm{T}\) may be augmented by a new variable \(v_{n+1}\), becoming \(\vec{v}' = [v_1, \cdots, v_{n+1}]^\mathrm{T}\). Similarly a matrix \(A\in\mathbb{R}^{m\times n}\) may be augmented by another matrix, say \(E\in\mathbb{R}^{m\times k}\), as follows.

\begin{equation}
\label{eq:matrixAugmentation}
\begin{bmatrix}
A & E
\end{bmatrix}
=
\begin{bmatrix}
a_{11} & \cdots & a_{1n} & e_{11} & \cdots & e_{1k} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
a_{m1} & \cdots & a_{mn} & e_{m1} & \cdots & e_{mk}
\end{bmatrix}
\end{equation}

Another example where we augment the matrix with a vector could be as follows.

\begin{equation}
\label{eq:matrixVectorAugmentation}
\begin{bmatrix}
A \\
\vec{v}^\mathrm{T}
\end{bmatrix}
=
\begin{bmatrix}
a_{11} & \cdots & a_{1n} \\
\vdots & \ddots & \vdots\\
a_{m1} & \cdots & a_{mn}\\
v_{1} & \cdots & v_{n}
\end{bmatrix}
\end{equation}
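In code, augmentation is just concatenation along an axis; a minimal NumPy sketch of my own, with made-up shapes:

\begin{verbatim}
import numpy as np

A = np.ones((2, 3))            # 2x3 matrix of ones
E = np.zeros((2, 2))           # 2x2 matrix of zeros
v = np.array([7.0, 8.0, 9.0])

print(np.hstack([A, E]))       # [A E]: shape (2, 5), row counts must agree
print(np.vstack([A, v]))       # stack v^T as an extra row: shape (3, 3)
\end{verbatim}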

Modern Portfolio Theory

Modern portfolio theory (MPT), introduced by Harry Markowitz in 1952, is a theory that attempts to compose portfolios of assets by striking a balance between return and risk. In basic terms, one can maximize the portfolio's expected return given a certain risk, or equivalently minimize risk for a given level of return.

One of the main ideas of MPT is that a portfolio of assets should be composed not based on the performance of individual assets, but rather by taking a holistic view of asset performance. This means that the internal risk-return dynamics of a portfolio must be taken into consideration when evaluating it. The classic mathematical model of MPT uses mean return as the measure of expected return and return variance as the risk measure.

Mean return and variance are regarded as the two primary return and risk measures of individual assets or collections of assets. These measures were used by Markowitz when he originally presented modern portfolio theory. To calculate them we start with asset prices. Given \(T\in\mathbb{N}_1\) time periods and \(n\in\mathbb{N}_1\) assets we can define the price time series of all assets in question as a matrix \(\mathbf{P}\in\mathbb{R}^{(T+1)\times n}_{\geq{0}}\) consisting of price entries \(p_{ti}\in\mathbb{R}_{\geq{0}}\), one for each asset \(i\in\{1,\cdots,n\}\) at time period \(t\in\{0,\cdots,T\}\). The row vectors of \(\mathbf{P}\) can be viewed as price snapshots in time, or datapoints in a dataset, concepts that will be covered in section \ref{dataStructure}. From the price entries we can define the temporal (rate of) return matrix \(\mathbf{R}\in\mathbb{R}^{T\times n}\) with entries \(r_{ti}\in\mathbb{R}\).

\begin{equation}
r_{ti} = \frac{p_{ti} - p_{(t-1)i}}{p_{(t-1)i}}
\end{equation}
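A minimal NumPy sketch (my own illustration with made-up prices, not part of the original text) computes the whole return matrix at once:

\begin{verbatim}
import numpy as np

# Made-up price matrix: T+1 = 4 price snapshots (rows) for n = 2 assets (columns).
P = np.array([[100.0, 50.0],
              [105.0, 49.0],
              [103.0, 51.0],
              [110.0, 52.0]])

# r_ti = (p_ti - p_(t-1)i) / p_(t-1)i, giving a T x n return matrix.
R = (P[1:] - P[:-1]) / P[:-1]
print(R.shape)   # (3, 2)
\end{verbatim}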

Mean Return

The \(r_{ti}\) entries can be collected into \(n\) return column vectors \(\vec{r}_{i}\in\mathbb{R}^{T}\), each representing the return time series of an asset \(i\). This makes it possible to define the mean return of a single asset.

\begin{equation}
\bar{r}_i = \frac{1}{T}\vec{r}_{i}^\mathrm{T}\vec{1}
\end{equation}

These can be collected into a mean return vector \(\vec{\bar{r}} = [\bar{r}_1\; \cdots\; \bar{r}_n]^\mathrm{T}\in\mathbb{R}^{n}\) for all assets. A more succinct way to define the mean return vector is via matrix notation.

\begin{equation}
\vec{\bar{r}} = \frac{1}{T}R^\mathrm{T}\vec{1}
\end{equation}
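A quick NumPy check (my own illustration with made-up returns) confirms that the matrix expression is just the column-wise mean:

\begin{verbatim}
import numpy as np

R = np.array([[ 0.050, -0.020],     # made-up T x n return matrix (T = 3, n = 2)
              [-0.019,  0.041],
              [ 0.068,  0.020]])
T = R.shape[0]

rbar = (1.0 / T) * R.T @ np.ones(T)        # (1/T) R^T 1
print(np.allclose(rbar, R.mean(axis=0)))   # True: same as the column-wise mean
\end{verbatim}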

Return Variance

For any two return time series vectors \(\vec{r}_i\) and \(\vec{r}_j\), with means \(\bar{r}_i\) and \(\bar{r}_j\), we can compute their covariance.

\begin{equation}
\operatorname{cov}(\vec{r}_{i},\vec{r}_{j}) = \frac{1}{T-1}(\vec{r}_{i}-\bar{r}_i\vec{1})^\mathrm{T}(\vec{r}_{j}-\bar{r}_j\vec{1})
\label{eq:covariance}

Using this, all pairwise covariances can be collected in a symmetric covariance matrix \(\mathbf{\Sigma}\) as defined by (\ref{eq:covarianceMatrix}).

\begin{equation}
\mathbf{\Sigma} =
\left[ \begin{array}{ccc}
\operatorname{cov}(\vec{r}_{1},\vec{r}_{1}) & \cdots & \operatorname{cov}(\vec{r}_{1},\vec{r}_{n})\\
\vdots & \ddots & \vdots\\
\operatorname{cov}(\vec{r}_{n},\vec{r}_{1}) & \cdots & \operatorname{cov}(\vec{r}_{n},\vec{r}_{n})
\end{array} \right]
\label{eq:covarianceMatrix}
\end{equation}

Alternatively, it is possible to show that the covariance matrix can be computed directly from the return matrix.

\begin{equation}
\mathbf{\Sigma} =
\frac{1}{T-1}(R-\frac{1}{T}\vec{1}\vec{1}^\mathrm{T}R)^\mathrm{T}(R-\frac{1}{T}\vec{1}\vec{1}^\mathrm{T}R)
\label{eq:fancyCovarianceMatrix}
\end{equation}
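The centered expression can be sanity-checked against NumPy's built-in estimator, which also uses the \(1/(T-1)\) factor; a small sketch of my own with randomly generated returns:

\begin{verbatim}
import numpy as np

R = np.random.default_rng(0).normal(size=(250, 4))   # made-up T x n return matrix
T = R.shape[0]
one = np.ones((T, 1))

Rc = R - (1.0 / T) * one @ one.T @ R                 # subtract each column's mean
Sigma = (1.0 / (T - 1)) * Rc.T @ Rc

print(np.allclose(Sigma, np.cov(R, rowvar=False)))   # True
\end{verbatim}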

Expected Return and Risk of a Portfolio

Before we can state the various versions of the MPT optimization problem we need to review how to measure the portfolio's expected return and risk as functions of mean return and covariance. Let \(\vec{w}\in[0,1]^{n}\), with the constraint \(\vec{1}^\mathrm{T}\vec{w} = 1\), be the distribution over \(n\) assets determining how much is invested in each asset; the return \(r_{\vec{w}}\) of such a portfolio distribution can then be computed.

\begin{equation}
r_{\vec{w}} = \vec{\bar{r}}^\mathrm{T}\vec{w}
\label{eq:portfolioReturn}
\end{equation}

The variance of the portfolio is computed by weighting the covariance matrix with \(\vec{w}\) on both sides.

\begin{equation}
\sigma_{\vec{w}}^2 = \vec{w}^\mathrm{T}\mathbf{\Sigma}\vec{w}
\label{eq:portfolioVar}
\end{equation}
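As a concrete example with made-up numbers (not from the original text), take two assets with mean returns \(\vec{\bar{r}} = [0.10\;\; 0.05]^\mathrm{T}\), the covariance matrix shown below, and an equal-weight portfolio \(\vec{w} = [0.5\;\; 0.5]^\mathrm{T}\); then

\begin{equation}
r_{\vec{w}} = 0.5\cdot 0.10 + 0.5\cdot 0.05 = 0.075,
\qquad
\sigma_{\vec{w}}^2 =
\begin{bmatrix}
0.5 & 0.5
\end{bmatrix}
\begin{bmatrix}
0.04 & 0.01 \\
0.01 & 0.02
\end{bmatrix}
\begin{bmatrix}
0.5 \\
0.5
\end{bmatrix}
= 0.02
\end{equation}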

Expressing MPT as a Mathematical Model

There are different ways to express MPT as a mathematical optimization model, each with subtle differences. What we want is to maximize \(r_{\vec{w}}\) or minimize \(\sigma_{\vec{w}}^2\) by adjusting the portfolio distribution \(\vec{w}\). However, since these are two competing objectives we cannot optimize both at the same time. What we can do is fix one at a desired level and optimize the other. The first formulation constrains risk and maximizes return.

\begin{equation}
\begin{array}{rlcl}
\max_{\vec{w}} & r_{\vec{w}} & = & \vec{\bar{r}}^\mathrm{T}\vec{w} \\
\textrm{s.t.}& \vec{w}^\mathrm{T}\mathbf{\Sigma}\vec{w} & \leq & \sigma_p^2 \\
&\vec{1}^\mathrm{T}\vec{w} & = & 1\\
&\vec{w} & \geq & 0\\
\end{array}
\label{eq:mptMax}
\end{equation}

The alternative is to constrain return and minimize risk.

\begin{equation}
\begin{array}{rlcl}
\min_{\vec{w}} & \sigma_{\vec{w}}^2 & = & \vec{w}^\mathrm{T}\mathbf{\Sigma}\vec{w} \\
\textrm{s.t.}&\vec{\bar{r}}^\mathrm{T}\vec{w} & \geq & r_p \\
&\vec{1}^\mathrm{T}\vec{w} & = & 1\\
&\vec{w} & \geq & 0\\
\end{array}
\label{eq:mptMin}
\end{equation}

Notice the lower bound constraints \(\vec{w} \geq 0\) on the portfolio. Removing them would allow the optimization to short stocks; however, that behavior is outside the scope of this thesis. Both optimization problems are quadratic, i.e., convex, and can be solved with standard techniques such as Lagrange multipliers.

Applying a Model of Risk Aversion

An alternative to fixing one of the performance measures is to apply a utility function \(U(\vec{w})\) that maps the two-dimensional performance measures down to a single dimension. There are many ways to do this; the main constraint is that the function must conform to rationality. (A simple definition of rationality says that rational agents want more rather than less of a good. This is of course a gross simplification of rational choice theory, but it will suffice since that theory is outside the scope of this thesis.)

Portfolio Utility and Domination

The figure illustrates how different portfolios in the two-dimensional risk-return space can be mapped to a one-dimensional utility, and that this mapping has some internal requirements. From the perspective of portfolio \(A\), we can qualitatively see that it stochastically dominates portfolio \(C\), but without a utility function we cannot say anything about \(A\)'s ordering compared to \(B\) and \(D\). Likewise, portfolio \(A\) is stochastically dominated by portfolio \(E\).

With ordinal utility we can thus say that some portfolios stochastically dominate others; however, for some portfolio pairs only a utility function can settle the preference. For \(A\) in the example above, the ordinal utility says that \(E > A > C\), but nothing about \(B\) and \(D\) relative to \(A\). A utility function removes any ambiguity, and in this case states that \(E > B > A > D > C\), which is consistent with the ordinal ordering.

A widely used utility function is a weighted linear combination of the performance measures. Only one weight is needed; we will use a risk aversion factor \(\rho\) that weights the risk component, so that higher risk aversion penalizes risk more heavily in the utility.

\begin{equation}
U(\vec{w};\rho) = U\left(r_{\vec{w}}, \sigma_{\vec{w}}^2;\rho\right) = \vec{\bar{r}}^\mathrm{T}\vec{w} - \rho\vec{w}^\mathrm{T}\mathbf{\Sigma}\vec{w}
\label{eq:utilityFunction}
\end{equation}

The resulting optimization problem is also quadratic and solvable in reasonable time. Using the utility function above as the objective function we get the following optimization problem.

\begin{equation}
\begin{array}{rlcl}
\max_{\vec{w}}& U(\vec{w}) & = & \vec{\bar{r}}^\mathrm{T}\vec{w} - \rho\vec{w}^\mathrm{T}\mathbf{\Sigma}\vec{w} \\
\textrm{s.t.}&\vec{1}^\mathrm{T}\vec{w} & = & 1\\
&\vec{w} & \geq & 0\\
\end{array}
\label{eq:mptMaxRiskAverse}
\end{equation}
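Neither formulation is tied to a particular solver; as one possible sketch (my own illustration with made-up data, using SciPy's SLSQP method), both the return-constrained minimum-variance problem and the risk-aversion formulation can be handed to a generic constrained optimizer:

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Made-up inputs: mean return vector and covariance matrix for n = 3 assets.
rbar = np.array([0.10, 0.07, 0.03])
Sigma = np.array([[0.040, 0.006, 0.002],
                  [0.006, 0.020, 0.001],
                  [0.002, 0.001, 0.010]])
n = len(rbar)
w0 = np.full(n, 1.0 / n)                  # start from the equal-weight portfolio
bounds = [(0.0, 1.0)] * n                 # no shorting: w >= 0
budget = {"type": "eq", "fun": lambda w: w.sum() - 1.0}

# (1) Minimize risk subject to a target return r_p.
r_p = 0.06
res_min = minimize(lambda w: w @ Sigma @ w, w0, method="SLSQP",
                   bounds=bounds,
                   constraints=[budget,
                                {"type": "ineq", "fun": lambda w: rbar @ w - r_p}])

# (2) Maximize the risk-averse utility rbar^T w - rho * w^T Sigma w.
rho = 3.0
res_util = minimize(lambda w: -(rbar @ w - rho * w @ Sigma @ w), w0, method="SLSQP",
                    bounds=bounds, constraints=[budget])

print(res_min.x, res_util.x)
\end{verbatim}

The risk-constrained, return-maximizing variant follows the same pattern with the objective and the quadratic constraint swapped.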

The Efficient Frontier

In the optimization problems above the user must choose a parameter \(\sigma_p^2\), \(r_p\) or \(\rho\) that reflects their preference. A way of analyzing the effect of changing the parameter is to look at the efficient frontier of the problems. The term efficient frontier was coined by Harry Markowitz\cite{Markowitz1952}; it can be defined as the set of optimal portfolios \(\vec{w}\) obtained over all parameter settings that yield feasible solutions. It can be visualized as the upper half of a hyperbola in the mean-variance space traced out by the optimal portfolios.

Efficient Frontier

Two efficient frontiers: the hyperbolic frontier does not include a risk-free asset, while the linear one does.

Efficient portfolios are the portfolios with the best attainable risk-return trade-off for any given parameter setting. This means that it is always possible to select a portfolio on the efficient frontier that stochastically dominates any non-efficient portfolio.

Adding a Risk Free Asset

Introducing a risk-free asset creates a special case where the efficient frontier becomes linear between the risk-free asset and the tangency portfolio. The tangency portfolio is a special portfolio which, when combined with the risk-free asset, can produce risk-return trade-offs that stochastically dominate any other efficient portfolio lacking the risk-free asset. Portfolios with mean return less than that of the tangency portfolio lend money at the risk-free rate, while portfolios with larger mean return borrow at the risk-free rate. In this thesis borrowing and shorting are not allowed.

Dow Efficient Frontiers

The chart shows the efficient frontiers of the Dow Jones Industrial Index using daily sampling from 2000 to 2012. The risk-free asset is a 5-year US Treasury bond.

OBX Efficient Frontiers

The chart shows the efficient frontiers of the 25 most liquid stocks on the Oslo Stock Exchange (OBX) using daily sampling from 2006 to 2012. The risk-free asset is cash in NOK.

Efficient Market Hypothesis

The efficient market hypothesis (EMH) was first developed by Fama in his PhD thesis during the 1960s. The hypothesis asserts that financial markets are informationally efficient; in other words, prices reflect all relevant information. This means that changes in price are reactions to unpredictable events, i.e., price movements are nothing more than random walks. The EMH assumes that all actors in the market act rationally according to the expected utility hypothesis.

There are three major versions of EMH.

  • Weak EMH claims that prices on traded assets already reflect all past publicly available information.
  • Semi-strong EMH claims both that prices reflect all publicly available information and that prices instantly change to reflect new public information.
  • Strong EMH additionally claims that prices instantly reflect even hidden or “insider” information.

The efficient market hypothesis was widely accepted until the 1990s, when behavioral economics started to gain ground. Empirical analysis has repeatedly found faults with the EMH. In spite of documented inefficiencies and ongoing controversy, it is still widely considered a valid starting point, and the EMH laid the foundation for modern portfolio theory.

The Lost Elite of the Third World

Educating the present, developing the future…

The currency of the past, present and future is, and always will be, people. Educated people who are willing, able, and naturally talented are sought after by every firm and organization. With the right people at the wheel, human endeavors have a striking tendency to work themselves out no matter the circumstances. We have an abundance of people in the world; so why do we lack the right people for rapid progress?

Most developed and some developing countries have created infrastructure and educational capabilities that foster development of their gifted two-percentile young. These children are the thinkers, doers and leaders of the future. They are the elite. Every country has them. The best nurture them. The worst despise and fear them. However, many just ignore them.

There is no difference in principle between the potential of a human being in one country and the next. Given the necessary education, environment and opportunity, a naturally gifted child in an underdeveloped country will rise to prosperity and greatness just as easily as a naturally gifted native of the West.

There is always a short supply of these two-percentile people, but many developing countries lack the capability to identify them, let alone foster them. Within developed countries there is extreme competition to attract the two-percentiles, exemplified by the current Indian export of high-tech labor to the West. In short, the developing world is not producing enough brilliant people; they are undersupplied!

Some developing countries, such as India and China, are taking their educational systems seriously, but there is still a huge untapped potential of gifted people in countries that lack the educational infrastructure needed to identify and develop the talented young. In effect, there exists a huge pool of gifted people who, given the opportunity and the necessary investment, could fill the undersupplied high-end labor market of the West.

The Lost Elite School Structure

My proposal is to establish a for-profit business that aims to identify, foster and educate the two-percentile gifted children of underdeveloped countries. The identification process includes a joint venture with a philanthropic organization whose general goal is supplying primary education to all children of underdeveloped countries. This organization will run open schools where any child in the region can get a limited primary education. From these schools the for-profit business selects potential students and offers enrollment in an elite program that fosters their abilities and goes beyond the limited primary education of the philanthropic organization. At legal age the students are presented with both university and job opportunities, provided they enter a binding contract. The contract includes a student loan financing their last year of high school and their university education. Furthermore, the contract binds them, for a stipulated time, to work at one of our partner companies. For each year a graduate of our schools works at a partner company, the for-profit business gets a commission.

The for-profit part of this venture will aim to get initial investment funding through company partners, venture firms, for-profit philanthropic venture firms and the general investment community. The philanthropic part of the venture gets funded by NGOs, economic development organizations such as the WTO, government investments and private philanthropic funds. Eventually money will also be funneled from the for-profit venture to the philanthropic venture as a self-reinforcing mechanism. Additionally, it is not a far-fetched possibility that the partner companies, under their social responsibility umbrella, will choose to give to the philanthropic venture in order to support the whole endeavor.

When all is said about the great economic potential, there are some extremely important factors that must be stressed. The curriculum and fostering program must be specially tailored to provide a moral and social context that helps integration into the global social and business scene. We are talking about creating real people with depth and dignity. Businesses are not looking for automatons, but for creative and intelligent people able to work in teams, to lead and to be led. Therefore our business goals and education goals must reflect this essential part of the labor demand and of human dignity.

Because of returning elite graduates and the funding of primary education, we are helping the poorest people of the world rise to their feet. On a more personal level, we are offering individual children and families the opportunity to break free from the socioeconomic shackles that were arbitrarily imposed on them at birth.

Peak Performance: Falling off the earth

I was reading Nicholas Gurewitch’s comic “The Perry Bible Fellowship” when I stumbled upon this gem of a strip. Being the geek I am, I of course just had to figure out how tall such a mountain would need to be.

Peak Performance by Nicholas Gurewitch

So, here we go. From Wikipedia we can define some stuff: the gravitational constant \(\gamma = 6.67428\cdot10^{-11}\;\mathrm{m^3\,kg^{-1}\,s^{-2}}\), the mass of the Earth \(m_e = 5.9736\cdot10^{24}\;\mathrm{kg}\), the Earth's equatorial radius \(r_e = 6,378,100\;\mathrm{m}\), its equatorial rotational speed \(v_e = 1,674.4\;\mathrm{km/h} \approx 465.1\;\mathrm{m/s}\), and the mean distance to the Moon \(r_{moon} = 384,399,000\;\mathrm{m}\).

Ohh, almost forgot, I'm assuming the climber is at the equator and probably a bunch of other stuff.

The climber (mass \(m_p\), speed \(v_p\), at distance \(r_p\) from the Earth's center) is “weightless” when the centripetal force \(F\) and the gravitational force \(G\) are equal.
\begin{equation}
F=G\to\frac{m_p v_p^2}{r_p} =\gamma \frac{m_e m_p}{r_p^2}\to\frac{v_p^2}{r_p} =\gamma\frac{m_e}{r_p^2}\to v_p^2=\gamma \frac{m_e}{r_p}
\end{equation}

Inserting the rotational speed of the mountain peak, \(v_p = r_p\frac{v_e}{r_e}\), and solving for \(r_p\) yields
\begin{equation}
\left(r_p\frac{v_e}{r_e} \right)^2=\gamma \frac{m_e}{r_p}\to r_p^3=\gamma m_e \left(\frac{r_e}{v_e}\right)^2\to r_p=\sqrt[3]{\gamma m_e\left(\frac{r_e}{v_e}\right)^2}
\end{equation}

Inserting numbers (with \(v_e\) in meters per second) we get
\begin{equation}
r_p=\sqrt[3]{(6.67428\cdot10^{-11}) (5.9736\cdot10^{24})\left(\frac{6,378,100}{465.1}\right)^2}\approx 42,167,401.9\;\text{m}
\end{equation}

Now, finding the mountain height.
\begin{equation}
r_m=r_p-r_e=42,167,401.9-6,378,100=35,789,301.9\;\text{m}
\end{equation}

which is ~10% of the way up to the moon.

\begin{equation}
\frac{r_m}{r_{moon}} =\frac{35,789,301.9}{384,399,000}=0.093\approx10\%
\end{equation}
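For the skeptical, a few lines of Python (my own check, not part of the original post) reproduce the numbers using the constants above:

\begin{verbatim}
# Height of a mountain whose peak is "geostationary": gravity exactly supplies
# the centripetal force required at the peak.
gamma = 6.67428e-11      # gravitational constant [m^3 kg^-1 s^-2]
m_e = 5.9736e24          # mass of the Earth [kg]
r_e = 6_378_100.0        # equatorial radius of the Earth [m]
v_e = 465.1              # equatorial rotational speed [m/s]
r_moon = 384_399_000.0   # mean distance to the Moon [m]

r_p = (gamma * m_e * (r_e / v_e) ** 2) ** (1.0 / 3.0)   # radius of the peak
r_m = r_p - r_e                                         # height of the mountain
print(r_p, r_m, r_m / r_moon)   # ~4.217e7 m, ~3.58e7 m, ~0.093
\end{verbatim}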

Kinda sweet, eh? I guess I can now say Q … E … D. :p