
The Case for Active Management, Continued: Epoch’s Investment Process

June 26, 2015


In our recent paper “The Case for Active Management,” we described the difficulties many active managers have had in outperforming their benchmarks, and we attributed those difficulties to insufficient discipline within the investment process. To capture the impact of stock-specific inefficiencies, the manager must:

  1. Understand the forces that create an inefficiency
  2. Capture it by casting a wide net across stocks that are likely to be affected, and 
  3. Properly structure the portfolio to filter out the impact of factors (e.g., size, or industry effects) for which the manager currently has no forecast and which might otherwise swamp the excess return generated by the inefficiency that the manager is trying to capture. 

How does Epoch’s investment process stack up on those three criteria? We thought it would be useful to follow up on our previous paper with details on our process and why we think it is likely to lead to success.

Focus on finance, not accounting

As we have written many times in our white papers over the years, what sets Epoch apart from other investment managers is our focus on finance rather than accounting (for an early example, see “Back to the Future,” from April of 2005). Look at virtually any investment research report written on Wall Street and what you will find is an extensive discussion of things like earnings, book value, and return on equity – all accounting variables. Accounting has its place and accounting statements can provide useful information, as long as you understand the rules that lie behind them. 

And that, to us, is the source of the major inefficiency we are trying to capitalize on. Many investors, we would argue, do not fully understand the rules that drive accounting statements and, as a result, they don’t know how to properly evaluate companies or their managements from the perspective of an investor. We believe there are two questions you should ask when evaluating a business as a potential investor: 1) does the business generate free cash flow, and 2) how does management decide how to allocate that free cash flow?

On the first of those questions, traditional accounting measures obscure rather than clarify the answer. On a host of issues, from the treatment of inventories to the rate at which capital assets are depreciated, accounting rules work to make “earnings” into a figure that can sometimes bear little relationship to a company’s actual success or failure in generating cash. Accrual accounting relies on many subjective decisions about when to recognize both revenues and expenses. A company can generate positive earnings while experiencing negative free cash flow; conversely, it can have negative earnings while generating positive free cash flow. Understanding the difference between these measures of corporate performance is crucial to successful investing. (We wrote about this subject ten years ago in “Mixing Financial Principles with Accounting Standards – A Slippery Slope.” That paper included a reprint of Jack Treynor’s 1993 Financial Analysts Journal article, “Feathered Feast: A Case,” which still serves as an excellent illustration of how accounting measures can be disastrously misleading.)
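
To make the distinction concrete, here is a purely illustrative sketch with hypothetical figures. Free cash flow is approximated with one common simplification (operating cash flow less capital expenditures) rather than any particular firm’s exact definition:

```python
# Illustrative only: hypothetical figures for a single reporting period.
# Free cash flow is approximated as operating cash flow minus capital spending;
# definitions vary in practice.

revenue_recognized = 1_000   # includes 400 of sales booked on credit (an accrual)
cash_collected     = 600     # cash actually received from customers
reported_expenses  = 850     # includes 100 of non-cash depreciation
cash_expenses      = 750     # cash actually paid out for operations
capex              = 200     # cash spent on plant and equipment

earnings = revenue_recognized - reported_expenses       # accrual-based "earnings"
operating_cash_flow = cash_collected - cash_expenses    # cash generated by operations
free_cash_flow = operating_cash_flow - capex            # simplified free cash flow

print(f"Reported earnings: {earnings}")        # +150: the company looks profitable
print(f"Free cash flow:    {free_cash_flow}")  # -350: the company is consuming cash
```

Here the company reports a profit because revenue is recognized when booked and depreciation is a non-cash charge, yet once credit sales and capital spending are taken into account it is consuming cash.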

The second question we mentioned above – how does management decide how to allocate free cash flow? – is one that many investors don’t even bother to consider, but it is just as important as the question of whether a business generates free cash flow to begin with. There are five things that a company can do with its free cash flow: 1) pay cash dividends, 2) buy back stock, 3) pay down debt, 4) make acquisitions, or 5) invest in internal projects. 

What’s important for investors to focus on is not just “what is management doing with the cash?” but “how is management making that decision?” Ideally, what you want to see is management that has a sensible capital allocation policy – i.e., one that recognizes that a business has a cost of capital and should invest only in projects (either internal projects or acquisitions) that will produce a return above that cost of capital. If projects earn a premium over the cost of capital, they add to the value of the business; if they generate a return that is lower than the cost of capital, then investing in them reduces the value of the business. (Imagine borrowing money at a 5% interest rate to buy Treasury bonds yielding 2% and then holding them to maturity. That’s a sure-fire formula for reducing your wealth.)
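
The arithmetic behind that principle can be made explicit. In the stylized single-period sketch below, with all numbers hypothetical, value added is approximated as invested capital multiplied by the spread between the return earned on that capital and the cost of capital:

```python
# Stylized single-period illustration of value creation versus destruction.
# All figures are hypothetical.

def economic_profit(invested_capital: float, roic: float, cost_of_capital: float) -> float:
    """Value added (or destroyed) by investing capital at a given rate of return."""
    return invested_capital * (roic - cost_of_capital)

COST_OF_CAPITAL = 0.08  # assume an 8% cost of capital

# A project earning 12% on $100 of capital adds value; one earning 5% destroys it.
print(economic_profit(100, 0.12, COST_OF_CAPITAL))  # +4.0
print(economic_profit(100, 0.05, COST_OF_CAPITAL))  # -3.0

# The Treasury example from the text: borrow at 5% to earn 2% on the same $100.
print(economic_profit(100, 0.02, 0.05))             # -3.0, a sure way to reduce wealth
```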

The problem for investors who rely on traditional accounting-based metrics, including price/earnings ratios and price/book ratios, is that those metrics tell you essentially nothing about whether a company is allocating capital in a way that creates wealth or destroys it. Yet these are precisely the metrics that many investors continually use to evaluate the attractiveness of a stock. Why? This is where behavioral biases come in. Investors use those metrics because that’s what so many other people use – in other words, it’s easier to follow the herd than to think independently.

We don’t think in terms of traditional valuation metrics – rather, we think in terms of value creation or destruction. To us, an attractive company is one where management is allocating free cash flow sensibly. If they have opportunities to earn a return on invested capital above their cost of capital, they take them. When those opportunities are exhausted, they return the rest of the free cash flow to shareholders in some way (cash dividends, share buybacks, or debt paydowns). Note that it is not an either/or proposition. That is, it will often make sense for companies to reinvest some capital and return some capital to shareholders simultaneously. What matters is whether management properly understands how the capital allocation process should work. (For more on this topic, see our April 2011 paper “Free-Cash-Flow Investing: A Value Strategy.”)

In light of this discussion about how we invest, it is instructive to see how our performance has varied over the years based on what was happening to P/E ratios. We looked at our All Cap Value performance record, which goes back to mid-1994 (it includes performance from previous firms), and calculated the rolling three-year return for every month-end starting in mid-1997. We also calculated the rolling three-year change in both earnings and P/E for the S&P 500 as of each month-end (the strategy is actually benchmarked against the Russell 3000 but we do not have historical P/E ratios for that index). We sorted all the month-end observations into quartiles based on the changes in the S&P 500 P/E and then calculated the average performance within each quartile for both the Russell 3000 and for our All Cap Value composite. The results are shown in the table below.

Remember, the quartiles are based on how much the S&P 500’s P/E was changing, so naturally there is a wide variation (and in perfect descending order) for the average P/E change from one quartile to the next. But notice that the average earnings growth for the S&P 500 does not really change much from one quartile to another. Now look at the columns showing the average returns for our strategy and for the Russell 3000. The market returns correlate very well with the changes in P/E. Given that earnings growth has been very similar across the quartiles, that has to be the case: if earnings growth is essentially the same from one quartile to the next, then the variability in the returns is going to be completely dependent on the variability in the P/E change. Sure enough, the market has done best when P/E has expanded the most, and done worst when P/E has contracted the most. (To be clear, we are not saying that P/E changes drive the level of the market’s returns. As you can see in the bottom row of the table, over the whole period P/E changes played a small role in driving the return; earnings growth was the main driver. What we’re saying is that the variability in the market’s returns is largely driven by the variability in P/E changes.)

But Epoch’s performance, at least in absolute terms, has been mostly unrelated to changes in P/E ratios. Yes, when P/E ratios have expanded the most, we have experienced our best returns in absolute terms (though our worst in relative terms). But in the remaining three quartiles, even as the market return has deteriorated, our absolute return has held quite steady, just as the growth in earnings has been steady across those quartiles. And of course, this means our relative return has improved as P/E change has worsened. This outcome is also evident in more recent periods provided they are long enough to include a complete market cycle. 

The lesson we draw from this analysis is that our returns are more connected to the underlying fundamental success of the companies we own (which earnings, for all their faults, still serve as a proxy for in aggregate), while the market’s returns are more connected to changes in P/E ratios. When P/E expansion has been at an extreme (which, incidentally, it has been of late; ten of the twelve most recent month-end observations as of March 31, 2015 fell into the top quartile for P/E change), it has carried our returns to some extent; we have done well in absolute terms but have lagged in relative terms. But in all of the other environments (ranging from modest P/E expansion to extreme contraction), our returns have been largely impervious to those changes in valuation. We have generated similar returns, on average, in all of those environments, just as earnings growth has shown similar results. We view this as a proof statement that our returns are driven not by traditional valuation but by whether businesses are creating value.
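
For readers interested in the mechanics, the sketch below outlines the general shape of this kind of analysis: compute trailing three-year changes at each month-end, sort the observations into quartiles by the change in the index P/E, and average each measure within each quartile. The column names, data layout, and use of pandas are illustrative assumptions; this is not the actual code or data behind the results described above.

```python
import pandas as pd

# Sketch of the quartile analysis described above. The input DataFrame is assumed
# to hold month-end levels for the strategy, its benchmark, and the S&P 500's
# earnings and P/E; all column names are hypothetical.

def rolling_3yr_change(series: pd.Series) -> pd.Series:
    """Trailing 36-month percentage change for a month-end series."""
    return series.pct_change(periods=36)

def quartile_summary(monthly: pd.DataFrame) -> pd.DataFrame:
    obs = pd.DataFrame({
        "pe_change":        rolling_3yr_change(monthly["sp500_pe"]),
        "earnings_growth":  rolling_3yr_change(monthly["sp500_earnings"]),
        "strategy_return":  rolling_3yr_change(monthly["strategy_index"]),
        "benchmark_return": rolling_3yr_change(monthly["benchmark_index"]),
    }).dropna()

    # Sort the month-end observations into quartiles by the change in the index P/E,
    # then average each measure within each quartile.
    obs["pe_quartile"] = pd.qcut(obs["pe_change"], 4, labels=[1, 2, 3, 4])
    return obs.groupby("pe_quartile", observed=True).mean()
```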

Cast a wide net

Our second criterion for success was that an active manager needs to buy a broad basket of stocks to raise the chances of successfully capturing a market inefficiency. Why do we say this?

Think of buying individual stocks as an exercise in probability. When you buy a stock you never have certainty that the stock will outperform the market. But presumably if you are buying it you think there is a greater than 50% chance that the stock will outperform. Suppose you have some way of identifying stocks that you think are incorrectly priced due to some behavioral bias among investors and you think that each time you identify such a stock there is a 55% chance that the stock will beat the market. Your goal, of course, is to build a portfolio of stocks that is going to outperform. In general (setting aside for the moment the issue of each stock’s magnitude of relative performance), you will outperform if more than 50% of the stocks in your portfolio outperform. In these circumstances, the more of these stocks you can hold (i.e., ones with a 55% chance of outperforming), the more likely it is that your portfolio will outperform. (For an explanation of the math behind this statement, see the footnote at the end of the essay.) 

How do we go about finding a broad array of what we think will be winning stocks at Epoch? We make use of what we call the “Epoch Core Model” and other screening techniques. The purpose of the model is to apply our free cash flow criteria to a broad universe of stocks and rank them by their attractiveness. Specifically, the model favors stocks with the following characteristics:

  1. High free-cash-flow yield
  2. Strong free-cash-flow growth trend
  3. Forward cash-rich and financially flexible (i.e., growing earnings, high dividend yield, low payout ratio, and low financial leverage)
  4. High balance-sheet quality (low level of accruals)
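
As a schematic illustration of how a triage model can rank a broad universe, the sketch below standardizes a handful of characteristics like those listed above and combines them into a composite score. The factor names, weights, and scoring scheme are assumptions made for illustration; this is not the actual Epoch Core Model.

```python
import pandas as pd

# Illustrative ranking sketch, not the actual Epoch Core Model.
# 'universe' is assumed to be a DataFrame indexed by ticker, with one column per
# characteristic. Higher raw values are treated as better, except for accruals,
# where a lower level implies higher balance-sheet quality.

FACTOR_WEIGHTS = {
    "fcf_yield":   1.0,   # high free-cash-flow yield
    "fcf_growth":  1.0,   # strong free-cash-flow growth trend
    "flexibility": 1.0,   # financial flexibility composite
    "accruals":   -1.0,   # low accruals score higher, hence the negative weight
}

def rank_universe(universe: pd.DataFrame) -> pd.Series:
    """Return the universe sorted from most to least attractive composite score."""
    # Standardize each characteristic cross-sectionally (z-scores) so the factors
    # are comparable, then combine them with the illustrative weights.
    zscores = universe[list(FACTOR_WEIGHTS)].apply(lambda col: (col - col.mean()) / col.std())
    composite = sum(weight * zscores[name] for name, weight in FACTOR_WEIGHTS.items())
    return composite.sort_values(ascending=False)
```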

Epoch is not a “quant shop” – we do not build portfolios simply by buying the highest ranked stocks in our model. We believe that the judgment of our analysts and portfolio managers adds tremendous value. But the reality is that the universe of stocks we can potentially invest in is very large, too large for our analysts to be able to investigate each company in depth before reaching a conclusion about whether we should invest in that stock. We need a way to narrow the universe to see which companies most deserve a more detailed research analysis – a kind of triage process.

The Epoch Core Model is our way of performing that triage. It’s a way for us to combine the things that machines do well (processing large amounts of data very quickly) with the things that humans do well (exercising judgment). We believe that when people “race with the machine” (as opposed to racing against the machine), it leads to better results than either people or machines can produce on their own. Technology enables us to figure out how best to focus our research efforts in ways that enable our clients to realize the maximum benefit of our analysts’ insights.


Properly structure the portfolio

It’s not enough to successfully identify stocks that have a good chance of outperforming the market. You have to structure your portfolio in a way that captures that outperformance. In other words, successful portfolio management is not just about which stocks to own, it’s about how much of each stock to own. That may seem like a straightforward proposition, but it can be surprisingly complicated. 

Any individual stock represents a collection of factor exposures. A high free-cash-flow yield, for example, is a factor exposure, and that is one of the factors we are trying to capture at Epoch. We think that high free-cash-flow yield, over time, generates outperformance. But all stocks with high free-cash-flow yields have other kinds of factor exposures too. To start with, they all have industry factor exposures. There will be times when banks, just to take an example, will do well or poorly together because of some development that affects the whole industry. Sometimes that industry effect may outweigh the effect of the free-cash-flow factor. There are many kinds of factor exposures: sensitivity to interest rates, sensitivity to energy prices, sensitivity to the level of the dollar, and so on. Company characteristics like size (i.e., market capitalization) and volatility also constitute factors.

The trick in building a portfolio is to make sure that you have exposure to the factors that you want to have exposure to, while simultaneously minimizing your exposure to factors that you don’t have a view on. We refer to this as managing “aggregation risk,” i.e., the unintentional risks that can arise when you aggregate stocks into a portfolio. If we don’t have an opinion about which way the dollar is likely to go, for example, we don’t want sensitivity to the dollar to unintentionally be our portfolio’s biggest factor exposure. It could turn out that the free-cash-flow yield factor does well, as we expected, and produces 100 basis points of outperformance, but the dollar ends up moving in a way that causes our portfolio to lose 150 basis points in relative performance, more than canceling out the positive impact of our correct view on the free-cash-flow yield factor.

Building a portfolio needs to be an iterative process. If you were relying solely on free-cash-flow yield, you couldn’t simply take the 50 stocks with the highest free-cash-flow yield, throw them together, and call it a portfolio. That might be a first step, but then you need to understand what other factor exposures that portfolio has. It will almost certainly have some large factor exposures (to industries or to macroeconomic factors) that you do not want to have because you are not sure whether those factors are likely to do well or poorly in the short term.

The standard analogy in discussing portfolio construction is to talk about “signal” and “noise.” In our example, free-cash-flow yield is the signal. We are confident that it will lead to outperformance over time. The other factor exposures are the noise. Some of them are likely to do well, and others are likely to do poorly, and we are not sure which are which. So we want to filter out the noise to make sure that the signal can come through and generate outperformance for our portfolio. What we need to do, then, is modify our portfolio so that we still have the exposure to the free-cash-flow yield factor, but eliminate, to the extent we can, significant exposures to other factors.
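
A bare-bones illustration of this kind of check: portfolio-level factor exposures can be computed as the weighted sum of each holding’s factor loadings, and any sizable exposure to a factor on which we hold no view is flagged for adjustment. The data layout, factor names, and threshold below are assumptions made for illustration only.

```python
import pandas as pd

# Illustrative check for unintended factor exposures ("aggregation risk").
# 'loadings' is assumed to be a DataFrame of per-stock factor loadings
# (rows = tickers, columns = factors); 'weights' is a Series of portfolio weights.

def portfolio_exposures(loadings: pd.DataFrame, weights: pd.Series) -> pd.Series:
    """Weighted sum of stock-level loadings gives portfolio-level factor exposures."""
    return loadings.mul(weights, axis=0).sum()

def flag_unintended(loadings: pd.DataFrame, weights: pd.Series,
                    intended: set, limit: float = 0.10) -> pd.Series:
    """Return unintended factor exposures whose magnitude exceeds 'limit'."""
    port = portfolio_exposures(loadings, weights)
    unintended = port.drop(labels=[f for f in intended if f in port.index])
    return unintended[unintended.abs() > limit]

# Example: we want exposure to free-cash-flow yield, but have no view on the
# dollar or energy prices, so any large exposure to those factors gets flagged.
# flag_unintended(loadings, weights, intended={"fcf_yield"})
```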

Apart from aggregation risk, the other form of risk that comes into play in portfolio construction is something we call “efficiency risk.” While aggregation risk has to do with factor exposures, efficiency risk has to do with the proper level of stock diversification. How many stocks should a portfolio hold? The answer depends mainly on two things: First, what is the expected alpha for each stock (factoring in your level of conviction about that alpha)? Of course, since our analysis of stocks includes qualitative judgments, placing a number on expected alpha can sometimes involve a bit of art as well as science. 
Second, what is the impact of adding a given stock to the overall portfolio’s expected alpha and expected risk (both total risk and active risk)? The basic principle here is that any time you can add a stock to the portfolio that will increase the portfolio’s return without adding risk (or alternately, that will lower the portfolio’s risk without detracting from return) you should add that stock. 
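
As a stylized illustration of that principle, the sketch below compares a portfolio’s expected alpha and active risk before and after giving a small weight to a candidate stock; the addition is attractive when it improves one dimension without materially hurting the other. The two-asset risk math and every input are simplifying assumptions.

```python
import math

# Stylized two-asset check of the "add it if it improves the trade-off" principle.
# All inputs are hypothetical; alphas and tracking errors are annualized decimals.

def blend(alpha_port, te_port, alpha_stock, te_stock, corr, w_stock):
    """Expected alpha and tracking error after shifting w_stock into the new stock."""
    w_port = 1.0 - w_stock
    alpha = w_port * alpha_port + w_stock * alpha_stock
    variance = ((w_port * te_port) ** 2 + (w_stock * te_stock) ** 2
                + 2 * w_port * w_stock * corr * te_port * te_stock)
    return alpha, math.sqrt(variance)

# Current portfolio: 2.0% expected alpha, 4.0% tracking error.
# Candidate stock: 1.5% expected alpha, 8.0% tracking error, 0.1 correlation.
before = (0.020, 0.040)
after = blend(0.020, 0.040, 0.015, 0.080, corr=0.1, w_stock=0.03)

print(before)  # (0.020, 0.040)
print(after)   # roughly (0.0199, 0.0391): alpha barely changes while active risk falls
```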

Is there some number of stocks that is always optimal? No. In the previous section of this essay, we said that when you have a process for identifying stocks that are likely to outperform, buying more of those stocks is better than buying fewer of them, because it increases the probability that the total portfolio will outperform. That is true, but there is more to the story. We mentioned a moment ago that your expected alpha for each stock also needs to factor in your level of conviction about the alpha. What do we mean by that? Suppose you had a research process that consisted of screening a large universe of stocks to find those with a particular set of characteristics that you think are desirable. The screen identifies 100 stocks that meet your criteria. You know that historically, 55% of the stocks that pass this screen go on to outperform the market over the next year. But you conduct no further research on any individual company (e.g., meeting with management, studying the company’s business thoroughly, etc.). In this case, your level of confidence in the alpha forecast for any one of those 100 stocks on its own would be relatively low. After all, you expect 45 of the 100 stocks to underperform. Here, your best strategy is to hold a large number of stocks, because you don’t really prefer any one of the stocks over the others. You should keep adding stocks until you find that adding one more will not reduce your risk any further.

Contrast that situation with a process in which you perform in-depth fundamental research on the 100 stocks that pass the screen and you identify the 40 stocks that you think have the highest probability among the 100 to outperform the market. Now, your level of confidence in the alpha forecast for those 40 stocks is much higher than the confidence you had before you did the research, and it’s much higher than your confidence in the other 60 stocks, some of which you may now expect to underperform based on your research. In this case, holding a more concentrated portfolio is going to make sense. Given your widely varying level of alpha expectations, you will find that adding stocks beyond the 40 that you have most confidence in is likely to reduce your expected return by too much relative to any marginal risk reduction you might achieve.

In the end, the answer to the question of how many stocks to hold depends on the process you use to produce an expectation for each stock’s alpha. A process that relies on broad quantitative screening to identify stocks with factor exposures that are expected to generate alpha is likely to produce better results by holding a portfolio with a large number of stocks. A process that relies on more in-depth fundamental research to identify stocks for which you have a higher level of knowledge and conviction is likely to produce better results by holding a more concentrated portfolio that focuses on the stocks where you have that high level of conviction.

Some firms use portfolio optimization software to manage aggregation risk and efficiency risk. We prefer a more interactive process between our quantitative risk management team and our portfolio managers. In fact, we have made a senior member of our risk management team a co-portfolio manager on all of our portfolios, to demonstrate how seriously we take this process. We review all of our portfolios on a regular basis, measuring precisely the factor exposures we have been talking about to make sure that we are only taking on exposure to the factors that we have a view on. We think it is worth noting that Epoch’s portfolios generally have lower volatility and higher Sharpe ratios than their benchmarks, which to us means we are doing something right when it comes to portfolio construction.


Conclusion

We believe the reason so many active managers fail to outperform their benchmarks over the long term is that they are insufficiently disciplined in their approach to these three essential portfolio management issues.

Some managers cannot identify what inefficiency or behavioral bias they are trying to take advantage of; they fall back on explanations like “we look for companies that are trading for less than we think they are worth.” First, who doesn’t? And second, these managers often cannot articulate why a stock might be trading for less than they think it is worth and whether there is some way to profit from that undervaluation.

Other managers do not structure their portfolios well. If you don’t include a wide enough array of stocks that you think are likely to outperform, or if you don’t pay enough attention to what other factor exposures you are including in the portfolio, you are likely to find that your performance is mainly being driven by factors other than the ones you intended. Even if you have correct insights into certain factors, you will find that those insights are being swamped by factors that you either weren’t aware of or failed to properly control for. 

At Epoch, we maintain a highly disciplined investment process based on three key principles. Our analysts examine companies based on finance rather than accounting, which is the primary inefficiency in pricing that we seek to capitalize on. We have the means to triage a large universe of companies so that our analysts can apply their skills and judgment on the most promising pool of potential investments. And finally, we have a robust risk management process that minimizes unintended factor exposures. 


Footnote

In the “Cast a Wide Net” section of this paper, we asserted that if you have identified a number of stocks that each have a 55% chance of outperforming, the chances that your portfolio will outperform increase as you buy more of those stocks. Why is this so? Suppose you bought just one stock. In this case, there are only two possible outcomes – either the stock outperforms (with a probability of 55%) or it underperforms (with a probability of 45%). So you have a 55% chance of winning and a 45% chance of losing. What if you expanded your portfolio to hold 5 stocks? Now, there are many more possible combinations of outcomes in terms of which individual stocks outperform and which underperform, but given the 55% probability of each stock outperforming, we can calculate the probability that the portfolio is a winner – i.e., that your number of outperforming stocks is 3, 4, or 5. Assuming for the purposes of this exercise that the probabilities for the individual stocks are independent of each other, we can conclude that the portfolio has a 59.3% chance of winning.

Now, suppose we expand the portfolio to include 11 stocks. In this case, winning would mean having 6 or more stocks that outperform, and the probability of that happening is 63.3%. If we go further and hold 21 stocks, our chance of winning (i.e., having 11 or more stocks that outperform) rises again, to 67.9%. You can see what’s going on here. Think of it as flipping a weighted coin that comes up heads 55% of the time. The more times you flip, the greater the odds are that you will end up flipping heads more than 50% of the time. To take it to an extreme, if you could flip that coin a million times, the chances that you would flip heads more than 50% of the time would be essentially 100%. So when the odds are in your favor on each individual flip, the way you maximize your chances of winning the game (defined as flipping heads more than 50% of the time) is to flip as many times as you can. Similarly, if you have a way to identify stocks that will outperform 55% of the time, the way to maximize your odds of having a winning portfolio (defined as having more than 50% of your stocks outperform) is to buy as many of those stocks as you can.
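
These figures follow directly from the binomial distribution. Assuming, as above, that each pick outperforms independently with probability p = 0.55, the chance that a majority of n holdings outperform can be reproduced with a short calculation (a minimal sketch):

```python
from math import comb

def prob_majority_outperform(n_stocks: int, p: float = 0.55) -> float:
    """Probability that more than half of n_stocks outperform, assuming each
    outperforms independently with probability p (n_stocks odd here)."""
    threshold = n_stocks // 2 + 1  # e.g., 3 of 5, 6 of 11, 11 of 21
    return sum(comb(n_stocks, k) * p ** k * (1 - p) ** (n_stocks - k)
               for k in range(threshold, n_stocks + 1))

for n in (1, 5, 11, 21):
    print(n, round(prob_majority_outperform(n), 3))
# prints 0.55, 0.593, 0.633, and 0.679, matching the figures above
```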

Of course, in the coin analogy, the weighted coin always has a 55% chance of coming up heads, no matter how many times you flip, so the more times you flip the better. With stocks, there is not going to be an infinite number of stocks that you can identify that have that 55% chance of outperforming. As you broaden your list of stocks, at the margin the probability of outperformance for each new stock starts to regress towards 50%; eventually you will get to the stocks that you think have less than a 50% chance of outperforming, and you certainly don’t want to add those. So there are practical limits on how far you can take this. The point here is that if you have some way of identifying stocks that you think are likely to outperform, you want to hold as many of them as you can until the probability of outperformance for the marginal stock you are adding approaches 50%.
