Portfolio optimization

Portfolio optimization is a mathematical model that is applied in the process of investment decision-making across a set of assets or financial instruments. The model was developed by Prof. Harry Markowitz in the 1950s. In developing it, Markowitz was trying to solve the following problem: given a fixed collection of assets, determine the portfolio that combines the assets using an optimal fixed percentage asset allocation given the risk preferences of an individual investor (Prigent 2007). Portfolio optimization rests on a number of assumptions, including that the investment is for a single period, that asset prices follow jointly lognormally distributed random walks, that asset allocation is unconstrained, and that the objective is to maximize the expected utility of wealth at the end of the period. Portfolio optimization can be undertaken using various techniques, the most common being quadratic programming, mixed integer programming, and nonlinear programming. In addition to being applied as an investment decision-making tool, portfolio optimization is also used in divestment and capital allocation decisions (Wachowicz and Horne, 2008).
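As an illustration of the quadratic programming approach mentioned above, the following is a minimal sketch of a mean-variance optimization. The expected returns, covariance matrix, and target return are assumed, illustrative figures, not taken from the text.

```python
# A minimal mean-variance optimization sketch (assumed, illustrative inputs).
# Minimizes portfolio variance subject to a target expected return and full investment.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.10])          # assumed expected annual returns
cov = np.array([[0.040, 0.006, 0.010],
                [0.006, 0.090, 0.012],
                [0.010, 0.012, 0.0625]])   # assumed covariance matrix
target = 0.10                              # assumed target portfolio return

def variance(w):
    return w @ cov @ w                     # portfolio variance w' Σ w

constraints = [
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},    # weights sum to 1
    {"type": "eq", "fun": lambda w: w @ mu - target},  # hit the target return
]
bounds = [(0.0, 1.0)] * len(mu)            # long-only allocation

result = minimize(variance, x0=np.full(len(mu), 1.0 / len(mu)),
                  bounds=bounds, constraints=constraints, method="SLSQP")
print("optimal weights:", result.x.round(4))
```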

Auto-correlation

The term auto-correlation is defined in various ways, depending on the field in which it is applied. In statistics, auto-correlation is a correlation coefficient that describes the correlation of a time series with its own past and future values. In some cases, auto-correlation is also known as “serial correlation” or “lagged correlation”, describing the correlation between elements of a chain of numbers arranged in time (Wachowicz and Horne, 2008). Auto-correlation is usually applied in detecting non-randomness in data, as well as in identifying a suitable time series model when the data are not random. When testing for non-randomness, only the first autocorrelation is required, but when identifying a suitable time series model, autocorrelations at a number of lags must be plotted. Although auto-correlation can be examined using various tools, the correlogram serves as the best tool for this purpose. In finance, autocorrelation is used in determining the future prices of given assets based on the historical prices of such assets (Ehrhardt and Brigham, 2010).
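The sample autocorrelation at a given lag can be computed directly; the sketch below uses an assumed, synthetic return series and plain NumPy.

```python
# Sample autocorrelation of a return series (assumed, synthetic data).
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=500)   # assumed daily returns

def autocorr(x, lag):
    """Sample autocorrelation coefficient at the given lag."""
    x = np.asarray(x, dtype=float)
    xm = x - x.mean()
    return (xm[:-lag] @ xm[lag:]) / (xm @ xm)

# First few autocorrelations, as would appear in a correlogram.
for k in range(1, 6):
    print(f"lag {k}: {autocorr(returns, k):+.4f}")
```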

Serial correlation

As mentioned above, auto-correlation is sometimes known as serial correlation. In this context, serial correlation is described as the relationship that exists between a given variable and itself over different intervals of time. This type of correlation is very common in recurring patterns, especially when the level of a variable affects its level in the future (Wachowicz and Horne, 2008). Within the field of finance, technical analysts apply this type of correlation in examining how the historical price of an asset can be used to predict its future price, and this is the main connection between auto-correlation and serial correlation. Thus, serial correlation is used in the study of how past events forecast future events. In particular, it is helpful in checking whether, and how, a given price movement will lead to a dissimilar price movement later on.

Runs test

The runs test is one of the non-parametric tests. It is described as a statistical tool used to evaluate whether there is an element of randomness in a string of data from a given distribution. In particular, the runs test is applied in determining the occurrence of comparable events that are separated by different or dissimilar events. In statistics, a run is described as a chain of successive points, either below or above the regression curve. To be more precise, a run is a successive chain of points with either negative or positive residuals (Anderson et al. 2011). If the data are randomly distributed, with Ka points above the regression curve and Kb points below it, then the expected number of runs is calculated as follows:

Expected number of runs = 2KaKb / (Ka + Kb) + 1

The runs test is used in a number of areas in finance. The most common area where this statistical tool is used is in the prediction of stock returns. The runs test is usually applied within the random walk theory, one of the theories used in determining future security prices.
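A minimal sketch of the runs test on an assumed return series follows; it compares the observed number of runs above and below the median with the expected count given above.

```python
# Runs test sketch on a return series (assumed, synthetic data).
import numpy as np
from math import sqrt

rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.01, size=250)      # assumed daily returns

signs = returns > np.median(returns)           # above (+) / below (-) the median
ka = int(signs.sum())                          # points above
kb = len(signs) - ka                           # points below
runs = 1 + int(np.count_nonzero(signs[1:] != signs[:-1]))  # observed number of runs

expected = 2 * ka * kb / (ka + kb) + 1         # expected runs under randomness
variance = (2 * ka * kb * (2 * ka * kb - ka - kb)) / ((ka + kb) ** 2 * (ka + kb - 1))
z = (runs - expected) / sqrt(variance)         # approx. standard normal for large samples

print(f"observed runs = {runs}, expected = {expected:.1f}, z = {z:.2f}")
```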

Distribution of returns

In statistics, the term distribution refers to a function that describes how the values of a variable, and the probabilities associated with them, are spread. In finance, the term is used in various areas, the most common being to explain how the trading volume of a market varies over time with little or no price appreciation (Wachowicz and Horne, 2008). Thus, the distribution of returns can be described as how investment returns vary over time, based on various variables. The distribution of returns is important for various trading problems. One of its crucial uses is in risk management, which involves estimating the likelihood of more extreme price changes (Clark and Downing 2010). Notably, there are many statistical distributions, but in finance the most relevant are the lognormal and normal distributions, which are commonly used in the analysis of financial asset prices and returns. When considering the distribution of returns, a number of concepts come into play, including the following:

Mean

This is the average of a given data set. It is calculated by summing all the data points and then dividing the sum by the number of data points. As such, it is simply the measure of central tendency obtained by dividing the total of the set by the size of the set (Clark and Downing 2010).

Mode

The mode is generally described as the number or set of numbers that appears most frequently within a given distribution.

Median

The median is described as the middle number within a distribution arranged in either descending or ascending order. This implies that before finding the median, the data must be arranged in one of these orders. Although it is possible to locate the median by trial and error, there is a simple formula for finding its position: position of the median = ({size of the data set} + 1) / 2 (Clark and Downing 2010).

Standard deviation

Standard deviation is a measure of how a given set of data is dispersed about its mean. Simply put, it is a measure of variation from the expected value, the mean. It is usually calculated as the square root of the variance (Prigent 2007). In the context of finance, standard deviation is applied to the annual rate of return of a given security to estimate the volatility of that security. Thus, it is generally used by investors as a measure of expected volatility.

Standard error of the mean

A standard error is generally described as an approximation of the sampling error that affects a statistic. The standard error of the mean, also known as the standard deviation of the mean, is an approximation of the degree to which the calculated sample mean is expected, by chance, to deviate from the true mean (Clark and Downing 2010).
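A minimal sketch computing the descriptive statistics discussed above (mean, mode, median, standard deviation, and standard error of the mean) for an assumed return series:

```python
# Descriptive statistics of a return series (assumed, synthetic data).
import numpy as np
from statistics import mode

rng = np.random.default_rng(2)
returns = rng.normal(0.001, 0.02, size=252)      # assumed daily returns

mean = returns.mean()                            # sum / number of observations
median = np.median(returns)                      # value at position (n + 1) / 2
most_common = mode(np.round(returns, 3))         # mode of the rounded returns
std_dev = returns.std(ddof=1)                    # square root of the sample variance
std_error = std_dev / np.sqrt(len(returns))      # standard error of the mean

print(f"mean={mean:.5f} median={median:.5f} mode~{most_common:.3f}")
print(f"std dev={std_dev:.5f} std error of mean={std_error:.5f}")
```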

Test for significance

A test of significance is described as a technique for making due allowance for the fluctuations that arise in the process of sampling and that are likely to affect the outcomes of observations and experiments. Before carrying out a test of significance, a hypothesis must first be formulated (Hafner et al. 2008).
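As a minimal sketch, the following one-sample t-test (assumed, synthetic data) checks the hypothesis that the mean daily return is zero.

```python
# One-sample t-test sketch: is the mean daily return different from zero? (assumed data)
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
returns = rng.normal(0.001, 0.02, size=252)     # assumed daily returns

t_stat, p_value = stats.ttest_1samp(returns, popmean=0.0)
print(f"t = {t_stat:.3f}, p-value = {p_value:.3f}")
# A small p-value (e.g. below 0.05) would lead us to reject the null hypothesis.
```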

Coefficient for skew and kurtosis

Skewness is used in measuring the degree and direction of a distribution’s departure from symmetry. Kurtosis, on the other hand, is a measure of the tallness and sharpness of the central peak of a data distribution. A normal distribution has zero skewness, and its excess kurtosis is zero (the raw coefficient of kurtosis equals 3) (Black 2011). The coefficient of skew, also known as Pearson’s coefficient, is described as:

Coefficient of Skew = 3(mean − median) / standard deviation

The coefficient of kurtosis, on the other hand, is obtained by calculating the fourth moment about the mean and dividing the result by the fourth power of the standard deviation (the square of the variance):

Coefficient of Kurtosis = (1/N) ∑ ((Xi − M)/σ)^4, for i = 1 to N
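A minimal sketch computing both coefficients for an assumed return series:

```python
# Pearson's coefficient of skew and the coefficient of kurtosis (assumed data).
import numpy as np

rng = np.random.default_rng(4)
returns = rng.normal(0.0, 0.015, size=500)      # assumed daily returns

mean = returns.mean()
median = np.median(returns)
sigma = returns.std()                           # population standard deviation

skew_pearson = 3 * (mean - median) / sigma                 # 3(mean - median) / std dev
kurtosis = np.mean(((returns - mean) / sigma) ** 4)        # fourth standardized moment

print(f"Pearson skew coefficient: {skew_pearson:.4f}")
print(f"Coefficient of kurtosis:  {kurtosis:.4f}  (about 3 for a normal distribution)")
```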
Jarque-Bera test for normality

This is a test used to establish whether a given set of data has the skewness and kurtosis of a normal distribution, that is, whether the data have zero skewness and zero excess kurtosis. Under the null hypothesis of normality, the Jarque-Bera statistic follows a chi-squared distribution.
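A minimal sketch of the test, using scipy's implementation on assumed, synthetic data:

```python
# Jarque-Bera normality test sketch (assumed, synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
returns = rng.normal(0.0, 0.01, size=1000)      # assumed daily returns

jb_stat, p_value = stats.jarque_bera(returns)
print(f"JB statistic = {jb_stat:.3f}, p-value = {p_value:.3f}")
# Under normality the statistic is approximately chi-squared with 2 degrees of freedom;
# a small p-value would suggest the returns are not normally distributed.
```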

Tangent portfolio and efficient frontier

The term tangent portfolio refers to portfolios of stocks and bonds that are intended for long-term investors. Since most people dislike losing money roughly twice as much as they enjoy making it, tangent portfolios begin by asking how much a person is willing to lose in the worst situation without having to bail out of the market: 15%, 28% or 38%? Once the maximum percentage is selected, the tangent portfolios try to provide the highest rate of return for that amount of risk. Harry Markowitz (1952), on the other hand, defines the efficient frontier as the curve that runs along the top of the achievable (feasible) region. Portfolios on the efficient frontier are optimal in that they offer the maximal expected return for each given level of risk, as well as the minimal risk for each given level of expected return (Greg 2012).
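A minimal sketch of locating the tangency portfolio, the point on the efficient frontier touched by the capital market line, using assumed expected returns, covariances, and risk-free rate:

```python
# Tangency (maximum Sharpe ratio) portfolio sketch (assumed, illustrative inputs).
import numpy as np

mu = np.array([0.08, 0.12, 0.10])              # assumed expected returns
cov = np.array([[0.040, 0.006, 0.010],
                [0.006, 0.090, 0.012],
                [0.010, 0.012, 0.0625]])       # assumed covariance matrix
rf = 0.03                                      # assumed risk-free rate

# Closed-form tangency portfolio (unconstrained): weights proportional to inv(cov) @ (mu - rf)
excess = mu - rf
w = np.linalg.solve(cov, excess)
w /= w.sum()                                   # normalize weights to sum to 1

port_ret = w @ mu
port_vol = np.sqrt(w @ cov @ w)
sharpe = (port_ret - rf) / port_vol

print("tangency weights:", w.round(4))
print(f"return={port_ret:.4f} volatility={port_vol:.4f} Sharpe={sharpe:.3f}")
```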

Sharpe Ratio

The Sharpe ratio was created by Nobel laureate William F. Sharpe to measure risk-adjusted performance. The Sharpe ratio is derived by deducting the risk-free rate from the portfolio’s rate of return and then dividing the result by the standard deviation of the portfolio’s returns. The Modigliani and Modigliani measure is similarly used to characterize the way a portfolio’s return rewards an investor for the amount of risk taken (Sharpe 2009). This means that an investment that took considerably more risk than a benchmark portfolio, but had only a small performance advantage, receives a lower risk-adjusted rating than one that took dramatically less risk relative to the benchmark but earned similar returns. Jensen’s alpha is a term that refers to the difference between the return on the portfolio in excess of the risk-free rate and the return explained by the market model (Kevin 2006). The Jensen measure is based on the CAPM.
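A minimal sketch computing an annualized Sharpe ratio from an assumed daily return series:

```python
# Annualized Sharpe ratio sketch (assumed, synthetic daily returns).
import numpy as np

rng = np.random.default_rng(6)
daily_returns = rng.normal(0.0006, 0.012, size=252)   # assumed daily portfolio returns
rf_annual = 0.03                                       # assumed annual risk-free rate
rf_daily = rf_annual / 252

excess = daily_returns - rf_daily
sharpe_daily = excess.mean() / excess.std(ddof=1)      # (return - rf) / standard deviation
sharpe_annual = sharpe_daily * np.sqrt(252)            # scale to annual terms

print(f"annualized Sharpe ratio: {sharpe_annual:.3f}")
```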

Capital Market Line and Capital Asset Pricing Model

The Capital Market Line (CML) is a line used in the CAPM to indicate the rate of return of efficient portfolios, based on the portfolio’s standard deviation and the risk-free rate of return. It is usually drawn from the point of the risk-free security to a point in the risky security region. The CAPM is a model that defines the connection between risk and expected return. It is generally applied in the pricing of risky assets (Miller et al 2009). According to this model, systematic risk is the only risk taken into consideration by rational investors, since such risk cannot be avoided through diversification. The theory behind this model argues that a security’s expected return is equal to the risk-free rate plus a risk premium, obtained by multiplying the security’s systematic risk (beta) by the market risk premium (Watson and Head 2007).
Mathematically:
ra = rf + Betaa (rm − rf)
Where: ra = expected return on the asset
rf = risk-free rate of return
Betaa = beta of the asset (its systematic risk)
rm = market rate of return
(rm − rf) = market risk premium
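A minimal sketch applying the CAPM formula above with assumed inputs:

```python
# CAPM expected return sketch (assumed, illustrative inputs).
rf = 0.03        # assumed risk-free rate
rm = 0.09        # assumed expected market return
beta = 1.2       # assumed beta of the asset

expected_return = rf + beta * (rm - rf)    # ra = rf + Beta * (rm - rf)
print(f"CAPM expected return: {expected_return:.2%}")   # 10.20% with these inputs
```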

Extensions and limitations of CAPM

CAPM has a number of limitations. To begin with, the model is built on unrealistic assumptions; for instance, risk-free assets do not exist. Secondly, it is challenging to test the validity of the model. Finally, betas vary over time in most cases, contrary to the model’s assumption that they are stable (Abeysinghe 2010).

Reference List

Abeysinghe, R.L. 2010. Limitations of Capital Asset Pricing Model. Retrieved from http://ezinearticles.com/?Limitations-of-Capital-Asset-Pricing-Model&id=3364246

Anderson, R.D., Sweeney, D.J., and Williams, A.T. 2011. Statistics for Business and Economics. London: Cengage Learning.

Black, K. 2011. Business Statistics: For Contemporary Decision Making. New York: John Wiley & Sons.

Clark, J. and Downing, D. 2010. Business Statistics. Dallas: Barron’s Educational Series.

Ehrhardt, C.M. and Brigham, F.E. 2010. Financial Management: Theory and Practice. London: Cengage Learning.

Greg, B. 2012. Behavioral Investment Management: An Efficient Alternative to Modern Portfolio Theory. New York: McGraw-Hill.

Hafner, M.C., Hardle, W., and Franke, J. 2008. Statistics of Financial Markets: An Introduction. New York: Springer.

Kevin, S. 2006. Portfolio Management. New York: PHL Learning Publishers.

Kobold, K. 2009. Interest Rate Future Market: Theoretical Concepts and Empirical Evidence. New York: Walter de Gruyter.

Miller, P.F., Vandome, F.A., and McBrewster, J. 2009. Capital Asset Pricing Model. London: Alphascript Publishing.

Prigent, J.L. 2007. Portfolio Optimization and Performance Analysis. Boston: Chapman and Hall/CRC.

Sharpe, F. 2009. Portfolio Theory and Capital Market. New York: McGraw-Hill.

Wachowicz, M.J. and Horne, C.J. 2008. Fundamentals of Financial Management. London: Prentice Hall.

Watson, D. and Head, A. 2007. Corporate Finance: Principles and Practice, 4th edition. FT Prentice Hall, pp. 222–3.

Xiaobo, L. 2007. Some Problems in Stochastic Portfolio Theory. Kansas City: ProQuest.