
Many investors look for above-market returns when analyzing exchange-traded funds (ETFs) or other types of investments. But the amount of risk being assumed is perhaps even more important than these gross returns. After all, a risky micro-cap security that returns 50% in a given year may be far less attractive than that raw return suggests once the extreme risks involved are considered.
Risk-adjusted returns attempt to account for these risks and provide investors with a clearer picture.
In this article, we’ll take a look at some common ways to measure risk-adjusted returns and how individual investors can leverage them to improve their performance.
Sharpe Ratio

The most commonly used measure of risk-adjusted return is the Sharpe Ratio, which represents the average return in excess of the risk-free rate per unit of total risk (volatility). In effect, the calculation uses the standard deviation of returns as a proxy for total portfolio risk and scales excess returns by that volatility. The measure may be less accurate in cases where returns aren’t normally distributed and should be taken with a grain of salt.
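As a minimal sketch, the Sharpe Ratio can be computed from a series of periodic returns; the monthly returns and 0.2% per-period risk-free rate below are purely hypothetical:

```python
import statistics

def sharpe_ratio(returns, risk_free_rate):
    # Mean excess return divided by the standard deviation of excess returns
    excess = [r - risk_free_rate for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

# Hypothetical monthly returns and a 0.2% monthly risk-free rate
monthly = [0.02, -0.01, 0.03, 0.015, -0.005, 0.025]
print(round(sharpe_ratio(monthly, 0.002), 3))  # → 0.642
```

In practice the ratio is usually annualized; this sketch stays in per-period units for simplicity.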
Treynor Ratio

The Treynor Ratio, also known as the reward-to-volatility ratio, measures the same thing as the Sharpe Ratio but uses the beta coefficient in place of the standard deviation. Beta captures only systematic risk, the portfolio’s sensitivity to the overall market, rather than total risk. These dynamics can be useful for determining whether a portfolio is outperforming the market’s average gains on a risk-adjusted basis.
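A quick sketch of the calculation, using hypothetical annual figures:

```python
def treynor_ratio(portfolio_return, risk_free_rate, beta):
    # Excess return over the risk-free rate per unit of systematic risk (beta)
    return (portfolio_return - risk_free_rate) / beta

# Hypothetical: 10% portfolio return, 3% risk-free rate, beta of 1.25
print(round(treynor_ratio(0.10, 0.03, 1.25), 3))  # → 0.056
```

Note that a higher beta in the denominator shrinks the ratio, penalizing portfolios that take on more market risk for the same return.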
Jensen’s Alpha

Jensen’s Alpha leverages the Capital Asset Pricing Model (CAPM) to determine how much alpha, or excess return, a portfolio is generating relative to the market, given the market’s average return and the portfolio’s beta coefficient. In general, a positive Jensen’s Alpha signals that a fund manager is beating the market. The statistic is commonly used in empirical finance to determine the marginal returns for a given exposure to an investment strategy.
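The CAPM-based calculation can be sketched as follows, again with hypothetical inputs:

```python
def jensens_alpha(portfolio_return, risk_free_rate, beta, market_return):
    # CAPM-expected return for the portfolio's level of beta
    expected = risk_free_rate + beta * (market_return - risk_free_rate)
    # Alpha is the return earned above (or below) that expectation
    return portfolio_return - expected

# Hypothetical: 12% portfolio return, 3% risk-free, beta of 1.1, 9% market return
print(round(jensens_alpha(0.12, 0.03, 1.1, 0.09), 3))  # → 0.024
```

Here the portfolio beat its CAPM-expected return of 9.6% by 2.4 percentage points, a positive alpha.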
Modigliani-Modigliani

The Modigliani-Modigliani Measure, better known as M2, measures portfolio returns adjusted for the portfolio’s risk relative to a benchmark. Essentially, the calculation scales the portfolio, levering or de-levering it, so that it assumes the same amount of risk as the market (as measured by standard deviation). As a result, the adjusted portfolio return can be compared directly to the benchmark or market return. Since it’s expressed in units of percentage return, the measure is often considered to be more useful than the Sharpe Ratio that inspired it, due to its intuitive interpretation. A portfolio with an M2 of 5% is clearly one percentage point better than one with an M2 of 4%, while the real difference between Sharpe Ratio values of -0.5 and 0.5 may not be as obvious.
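One common formulation rescales the portfolio’s Sharpe Ratio to the benchmark’s volatility; a sketch with hypothetical figures:

```python
def m2_measure(portfolio_return, risk_free_rate, portfolio_std, benchmark_std):
    # Sharpe Ratio of the portfolio (excess return per unit of volatility)...
    sharpe = (portfolio_return - risk_free_rate) / portfolio_std
    # ...re-levered to the benchmark's volatility, plus the risk-free rate,
    # yielding a risk-adjusted return in plain percentage units
    return risk_free_rate + sharpe * benchmark_std

# Hypothetical: 11% return, 3% risk-free, 20% portfolio vol, 15% benchmark vol
print(round(m2_measure(0.11, 0.03, 0.20, 0.15), 3))  # → 0.09
```

The 9% result can be compared directly against the benchmark’s own return, which is exactly the intuitive interpretation the measure is prized for.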
Information Ratio

The Information Ratio measures risk-adjusted returns by dividing a portfolio or security’s active return (its return minus a benchmark return) by the standard deviation of that active return, known as tracking error. The ratio is often used to assess fund managers relative to their benchmark indexes or to the overall market as reflected by the S&P 500. The metric differs from the Sharpe Ratio in that it focuses on “active” returns rather than “excess” returns.
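The calculation can be sketched from paired return series; the quarterly figures below are hypothetical:

```python
import statistics

def information_ratio(portfolio_returns, benchmark_returns):
    # Active return = portfolio return minus benchmark return, per period
    active = [p - b for p, b in zip(portfolio_returns, benchmark_returns)]
    # Mean active return divided by tracking error (stdev of active returns)
    return statistics.mean(active) / statistics.stdev(active)

# Hypothetical quarterly returns for a fund and its benchmark
fund = [0.05, 0.02, -0.01, 0.04]
bench = [0.04, 0.01, 0.00, 0.02]
print(round(information_ratio(fund, bench), 3))  # → 0.596
```

A higher ratio indicates the manager beat the benchmark consistently, not just on average.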
The Bottom Line
There are many different ways to calculate risk-adjusted returns, ranging from the Sharpe Ratio to the M2 Measure, but the key takeaway for individual investors is that they shouldn’t look exclusively at returns without considering the risks involved. Investors can often find these ratios – especially the Sharpe Ratio – on various research websites or screeners, making it easier to identify ETFs that are outperforming their peers on a risk-adjusted basis.