Utility and a theory of emotion

In my last post I discussed the question of rationality at length. I concluded that, contrary to the prevailing view in behavioral economics, humans do make decisions that they believe to be in their best interests, which in my view is the correct definition of a rational decision. In that post, I first had to define what “best interests” means, a concept we called “utility.” In this post, I want to do two things. First, I want to restate (apologies) the definition of utility and expand on what it means for a person to try to maximize utility. Second, I will use our model of utility to hypothesize a theory of human emotions.

Before we begin, a quick note. I mentioned in my previous article, and I reiterate here, that for the most part my definition of utility is not groundbreaking. However, I believe my view of where emotions come from and how they relate to utility may be unique. At least, I am not aware of anyone who has espoused a similar idea.

What is utility?

To many philosophers and economists, utility is a measure of, or a proxy for, happiness. As we will see shortly, utility and happiness are absolutely related, but they are not the same thing. Utility is a metaphorical basket composed of all the different things evolution, biology and our genes have made us humans desire. I believe that the components of utility can be grouped into three categories: basic life necessities, social desires and entertainment/leisure. I’d go further and say that these three categories are listed in order of importance. In other words, basic life necessities are the strongest contributor to utility, then social desires and then entertainment.

Basic life necessities are things such as water, food and good health. Other things equal, my utility at any given moment is higher if I’m not thirsty, not starving, and not sick. Like many of the Earth’s species, humans have evolved to be social animals. We desire such things as love, friendship, companionship, sex and status. Other things equal, my utility is higher if I love and am loved, have friends and consider myself to be superior (higher status) to my peers. Lastly, we desire all sorts of entertainment or leisure, the third category of utility. Other things equal, my utility is higher if I am entertained, having fun, not bored. And keep in mind entertainment means very different things to different people. To couch potatoes, mindless TV watching. To adventurers, sky diving. To intellectuals, reading articles on EconomicsFAQ.

Before moving on I want to clarify a few things. First, the three categories are not perfectly discrete. There can be overlap. For instance, food is nourishment (basic necessity) but can also be entertainment. Similarly, hanging out with friends or having sex can also contribute to multiple categories: social desires and entertainment. Wearing fancy clothing can provide warmth (basic necessity) and status (social desire).

Moreover, every person, based on their genes, will have somewhat different weightings for these three categories, not to mention all the different human activities that make up these categories. For example, an extrovert will likely favor friendships and human interaction more than an introvert. Someone with a “Type A” personality might favor the “status” component of utility more than a less aggressive person. In addition, each person’s individual weightings will almost certainly vary over time. For instance, “status” seeking probably peaks when seeking a mate and declines as we age.

What does it mean to maximize utility?

So far, we’ve defined utility as best we could. At any given (conscious) moment, each of us has some level of utility. When we say that we humans make decisions in order to “maximize utility,” what we mean precisely is that we make decisions in order to maximize the present value of the sum of our probability weighted future utility over our lifetimes (or longer, if you believe in an afterlife).

By the term “present value,” we mean that an amount of utility today is worth somewhat more than the same amount of utility tomorrow. How much more depends on some “discount rate,” which will also vary from person to person and may vary from moment to moment. Furthermore, we implicitly weight the utilities we expect to experience in the future by their probabilities of occurring. That is, an event that has a higher chance of happening will contribute proportionally more to the present value of my utility.

Going forward, I’m going to use a shortcut for the sake of brevity and readability. When I use the word “utility” I mean the present value of probability weighted future utilities. So when I say “maximize utility” I really mean maximizing the sum of the present value of probability weighted future utilities.
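For readers who like to see the shorthand spelled out, here is a minimal sketch, in Python, of what “the present value of probability weighted future utilities” might look like. The utility numbers, probabilities and discount rate are all made up for illustration; nothing here is meant as a real model of anyone’s preferences.

```python
# Hypothetical sketch: present value of probability-weighted future utility.
# Each future period t contributes p_t * u_t / (1 + r)^t, where p_t is the
# probability that utility u_t is actually experienced and r is a personal
# discount rate (which varies from person to person).

def present_value_utility(outcomes, discount_rate):
    """outcomes: list of (probability, utility) tuples, one per future period."""
    return sum(
        prob * util / (1 + discount_rate) ** t
        for t, (prob, util) in enumerate(outcomes, start=1)
    )

# Two illustrative choices, each a stream of (probability, utility) per year.
safe_choice = [(1.0, 10), (1.0, 10), (1.0, 10)]    # certain, steady utility
risky_choice = [(0.5, 30), (0.5, 30), (0.5, 30)]   # bigger payoff, half as likely

print(present_value_utility(safe_choice, 0.05))    # ~27.2
print(present_value_utility(risky_choice, 0.05))   # ~40.8
```

With these particular (invented) numbers the risky stream wins, but change the probabilities or the discount rate and the ranking can flip, which is exactly the person-to-person variation described above.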

A few points before we move on. You might be skeptical that every time we are faced with a decision, we actually map out the rest of our lives and make some incredibly complex calculations. You’d be right of course, sort of. Our incredibly complex brains have evolved to do this for us. That is, we do this subconsciously. Also, keep in mind that we make only one decision at a time, that the vast majority of decisions we make have negligible effects on our lifetime utility, and that most decisions involve a small number of choices (often just two).

We also have evolved to use rules of thumb (“heuristics”) to help us forecast the future and make decisions. One of the most powerful, I believe, is to favor decisions that maximize our ability to make future decisions. In other words, to keep our options open. Perhaps we humans are naturally optimistic creatures. I might land my dream job! I might win the lottery! I might marry a supermodel!

I find it helpful to think of life as a giant decision tree. Each decision results in our tree dividing into two or more branches. We choose the branch that we expect will result in the highest utility (the largest leaf if you will). But as I said, we tend to favor choices that maximize the size of our tree, the number of branches, even if some of those branches have quite a low probability. We try hard to avoid making decisions that will significantly prune our decision tree. More than anything else, I believe that this accounts for our natural desire for freedom. The more freedom, the larger our decision tree, and the larger our decision tree, the greater number of “high utility” branches.

I want to be very clear about this idea of a tree. Strictly speaking, maximizing the size of your tree is just a heuristic for maximizing utility, and it is not always the correct one. Consider the following. A long time ago you committed a crime. Hanging over your head is the chance of jail time, almost certainly a low utility branch of your tree! But now the statute of limitations for your crime has run out and the threat of prison has been eliminated. In this example, your tree has been pruned, normally something to avoid. But because this was a low utility branch, your lifetime utility is almost certainly higher. Mostly, however, we humans like to maximize our possibilities because some of them will lead to high utility outcomes. Keeping our options open is good, because we can always choose the branch with the highest utility.
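To make the tree metaphor a bit more concrete, here is a toy sketch. The branches and utility numbers are entirely invented, and this is only an illustration of the heuristic, not a serious model: a decision node is scored by its best eventual leaf, and the number of leaves stands in as a crude proxy for “keeping options open.”

```python
# Toy decision tree: a node is either a leaf utility (a number) or a dict of
# named choices. We score a node by the best leaf reachable beneath it, and
# count leaves as a rough proxy for how "open" our options remain.

def best_utility(node):
    if isinstance(node, (int, float)):  # leaf: a terminal utility
        return node
    return max(best_utility(child) for child in node.values())

def leaf_count(node):
    if isinstance(node, (int, float)):
        return 1
    return sum(leaf_count(child) for child in node.values())

tree = {
    "marry": {"happy family": 90, "divorce": 20},
    "single": {"meet someone better": 95, "dating fun": 60, "lonely": 10},
}

print(best_utility(tree))          # 95
print(leaf_count(tree["single"]))  # 3 branches kept open
print(leaf_count(tree["marry"]))   # 2 branches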

Before we move on, I want to make one last very important point. When we make a decision, only the future matters. Absolutely, we take our past life experiences into account to help us make our decisions. And certainly, things we’ve done in the past can result in future utility, for example, the memories of a loved one or the sentimental value of a favored object. But when we try to maximize the present value of future utility, we do not factor the past into our “calculation.” We only look forward. We will return to this crucial idea when we talk about emotions, and also when we discuss winning a lottery.

A few decision making examples

To make the discussion of utility a bit clearer, let’s discuss a few examples of decisions I might make and see how they will (or will not) impact my utility.

Say tomorrow I wake up and it’s time for breakfast. I look into my kitchen cupboard and I find two cereal boxes: Corn Flakes and Rice Krispies. Assume they both have the same nutritional value, cost the same amount of money, and I like them equally. I have a choice to make, but this choice will have a negligible effect on my utility. I say negligible but not zero because perhaps there’s a slightly greater chance of choking on the slightly larger Corn Flake. Or maybe there’s a minuscule chance that the “snap crackle pop” noise of the Rice Krispies will wake up my sleeping child. But point being, I’m not going to give this decision much thought because it is not going to have much of an impact on my utility.

The next day I wake up and again go to the cupboard for breakfast. However on this day I look a little more closely inside and realize that there is a third box of cereal: Lucky Charms. Now I have a slightly harder choice to make, and one that will likely have a bit more impact on my utility. Do I eat one of the healthier, boring cereals (Corn Flakes or Rice Krispies)? Or, do I go for the Lucky Charms, the sugary, marshmallowy, unhealthy one?

Lucky Charms may very well give me more current utility (I’m ignoring the guilt I might also feel, which would lower my utility). But it may also lower my future utility by making me fatter, raising my chances of developing diabetes or heart disease, and perhaps lowering my life expectancy. Here I must make a choice that involves a trade-off: eat the Lucky Charms and have more utility now but less later, or eat healthy and have less utility now and more later. There’s not necessarily a right or wrong answer that works for everyone every morning, but there is a right answer for you on that particular morning: choose the cereal that results in the greatest lifetime (present-valued) utility.
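The breakfast trade-off can be put in numbers, all of them hypothetical: one stream gives more utility now but less later (the health cost), the other is steady. Because of discounting, “now” counts for more than “later,” so which cereal wins can depend on one’s personal discount rate.

```python
# Hypothetical numbers for the breakfast trade-off. Each list is a stream of
# per-period utility; pv() discounts the stream at a personal rate r, with
# the first entry counted as "now" (no discount).

def pv(stream, r):
    return sum(u / (1 + r) ** t for t, u in enumerate(stream))

r = 0.05
lucky_charms = [10, 4, 4]  # tasty now, small health cost later
corn_flakes = [6, 6, 6]    # steady, boring

print(pv(lucky_charms, r))  # ~17.4
print(pv(corn_flakes, r))   # ~17.2
```

With these invented figures the sugary cereal narrowly wins; a more patient eater (a lower discount rate, or a larger health cost in the later periods) would see the ranking reverse.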

Of course in the grand scheme of things, choosing what to eat for breakfast is a pretty small decision, and one that very likely has a tiny effect on my lifetime utility. Let’s go to the other extreme, decisions that might have very large impacts, for example which college to attend or what career to pursue. These are decisions worth obsessing over. Let’s discuss the decision of whether or not to marry your current girlfriend or boyfriend.

The decision of marriage is one of life’s decisions that probably impacts future utility more than just about any other. Before we begin, recall that I said earlier that most decisions we make involve a very small number of choices, often only two. That is the case here. We are not choosing who to marry among hundreds or thousands or millions of eligible bachelors/bachelorettes. We are making a binary (yes or no) decision, to marry or to stay single. Let’s think about how marriage affects my future utility.

If I choose marriage, the upside is love, companionship, children, etc. Sure, I may experience those contributors to utility even without marriage, but the probability is much higher with marriage (perhaps close to 100%, at least for the foreseeable future). On the other hand, choosing marriage substantially “prunes your tree.” That is, you give up (depending on your views of polygamy or adultery) the fun of dating. You give up the chance to meet someone even “better.” You give up all those branches of your tree that might just lead to that supermodel or trophy husband. What to do? Keep your options open? Or prune the tree for the substantially greater probability of all of those social components of utility?

Most of life’s decisions are like the breakfast ones. They have very limited impact on utility. Occasionally however, we face a big one like marriage, a decision that has a huge influence on our utility, and on our emotions.

A theory of emotions

I hope that by now the concept of utility, and how we make decisions to maximize utility, is reasonably clear. Now I am going to turn our attention to the topic of emotions. I believe that emotions are derived from utility, specifically from changes to utility.

Just as we spent some time specifying the concept of utility, we need to properly define the term emotion. This is tricky because, at least in the English language, we tend to associate the word “emotion” with “feelings.” However, the word “feeling” or “to feel” is quite ambiguous. We often say we feel hungry or feel cold or feel loved or feel happy. In my view, neither hunger nor cold nor love is a proper emotion. Of these four “feelings,” only happiness is a true emotion.

Hunger (or being satiated), being cold (or warm) and experiencing love (or loneliness) are all direct contributors to utility. Recall that we grouped utility into three categories: basic needs, social desires and leisure. Eating and keeping warm fall into the basic needs bucket. Love is in the social desire basket. In fact, there are many such social desires that we think of as emotions but that directly contribute to utility. For example, jealousy (when we perceive our own social status as lower in comparison to someone we know) or guilt (when someone we know views our social status as low because of something we did) or schadenfreude (when we view our social status as superior to someone else because of something they did).

On the other hand, happiness is a real emotion because we “feel” it when we experience a change to the present value of our utility. In other words, the definition of an emotion is what we feel when there is a change to utility. I will argue that there are, in fact, only two real emotions: positive (call it “happiness”) and negative (call it “sadness”). All other emotions are just variations of happiness and sadness as we will discuss in a moment. As we mentioned, happiness occurs when our utility increases. Sadness occurs when our utility decreases.

To drive the point home, let me suggest two other ways to contrast feelings like “hunger” and “cold” and “love” with the true emotions, happiness and sadness. If you are verbally inclined, think of the former as some of the “nouns” of utility. True emotions are the “adjectives.” Alternatively, if you are mathematically disposed, think of the true emotions as derivatives of our utility function. That is, they result from the change to utility, not from the components of utility itself.
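The derivative framing can be sketched directly in code: the emotion is read off the change in utility, its sign selecting happiness versus sadness and its magnitude selecting the intensity. The thresholds and labels below are entirely arbitrary, chosen only to illustrate the spectrum.

```python
# Illustrative only: emotion as a function of the *change* in utility.
# The sign picks the emotion family; the magnitude picks its intensity.
# The threshold of 50 is arbitrary.

def emotion(utility_before, utility_after):
    delta = utility_after - utility_before
    if delta == 0:
        return "no emotion"
    if delta > 0:
        return "ecstatic" if delta > 50 else "happy"
    return "devastated" if delta < -50 else "sad"

print(emotion(40, 45))   # happy (find $10 on the ground)
print(emotion(40, 100))  # ecstatic (win the lottery)
print(emotion(40, 35))   # sad (lose $10)
print(emotion(40, -20))  # devastated (lose my life's savings)
```

Note that the inputs are levels of utility but the output depends only on their difference, which is the whole point: hunger and love live in the levels, happiness and sadness live in the change.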

As I said, happiness and sadness represent what we feel when the sum of the present value of future utility increases or decreases, respectively. Of course, we use many more words to describe our emotions. Where do all these other words come from? I believe that there are two types of variations on these two basic emotions. The more obvious first variation relates to the strength of the emotion, or more accurately, the degree to which utility increases or decreases. Emotion is a spectrum. The second variation relates to time period: that is, whether the change to utility is in the past, the present or the future. Let’s discuss each of these in turn.

If my utility increases a modest amount I feel happy. If my utility increases a larger amount I feel thrilled, elated, ecstatic, euphoric. Find $10 on the ground and I am happy. Win the $100 million lottery and I am ecstatic. Both increase my utility but I can do far more with $100 million than I can with $10. Hence, my utility increases substantially more having won the lottery, and thus my positive emotion is much stronger. Similarly, a small decrease in utility results in sadness. A larger decrease results in feelings of despair, devastation, depression. Lose $10 and I am sad. Lose my life’s savings and I am devastated. Both decrease my (present valued) utility, the latter much more than the former.

The second variation to emotion relates to the time period with which I experience the change to utility. I feel a slightly different set of emotions depending on whether the change to utility happens now (the present), whether I am remembering about a change to utility in the past, or most interestingly, whether I anticipate the change to utility in the future.

We’ve already discussed what we feel when the change to utility happens in the present. We feel those variations of happiness and sadness (stronger or weaker) depending on the magnitude of the utility change. However, when we recall changes to utility we feel slightly different positive and negative emotions. When we reminisce about positive changes to utility, we are proud, content, sentimental, perhaps relieved. On the other hand, when we recall a decrease in our utility we feel such emotions as regret or anger (especially anger if we can place blame).

As I said above, I think the most interesting variations of the positive and negative emotions (especially the negative ones) occur when we predict changes to our utility, that is, when they happen in the future. (When changes to utility happen that we do not predict, we feel “surprise,” which naturally can be positive or negative.) When we look forward to or expect a positive change, we feel anticipation or excitement. When we contemplate a decline in future utility, we experience powerful emotions such as stress, anxiety and panic. In fact, these variations of negative emotions seem to have outsized effects on our bodies and our immune systems, something I want to discuss a bit further.

To understand how we anticipate a decline in utility, I think it is helpful to use the model of the decision tree. Recall that we can view the size of our tree as a proxy for our utility (though remember that this is just a proxy; it does not always hold true). The more choices we have going forward, generally speaking, the greater our utility. What happens if we take away choices, when we prune our tree? Generally speaking again, our utility is lower and we feel negative emotion. And what happens when we anticipate or expect or fear the pruning of our tree? We feel stress and anxiety, perhaps even panic.

Think about a time when you had to take a big test, perhaps the SATs. If you score well, your dream college might be in your future. But if you score poorly, there goes Harvard, and with Harvard goes your great career and the billionaire life ahead of you… Now think about your tree. If you do poorly on the test, a substantial (and high utility) section of your tree has been pruned. Anticipating this pruning (technically, anticipating a reduction in the present value of your future utility), you feel a negative emotion, and that emotion is anxiety or stress.

Let’s take an even more extreme example. You are alone in an elevator and it gets stuck. What might go through your head? Will I ever get out? Will anybody save me? Will I be stuck here forever? Will I die in this elevator? All of a sudden, your whole life’s tree has been dramatically pruned, your utility dramatically reduced (until the elevator starts moving again) and you feel the most extreme form of anxiety or stress, panic.

Winning the lottery

Before moving on, I wish to discuss one last point. Psychologists have performed studies of lottery winners and have concluded that after an initial burst of happiness (perhaps lasting several months after winning), lottery winners tend to be no happier (and sometimes even less happy) than they were prior to winning. To economists this seems surprising. Clearly winners are richer, so why shouldn’t their utilities be higher? The problem here is the confusion between happiness and utility.

Using our models of utility and happiness, we can shed light on this apparent paradox. Both psychologists and economists are correct: lottery winners have greater utility and are happier immediately after winning. We of course understand that winners are happier because they have experienced a significant increase in the present value of their future utility. Their “tree” is much larger, and indeed it takes time to think through all the ways the newly won money can be used (that is, how it can contribute to future utility). Naturally, as utility continues to grow, happiness continues too, though with diminishing intensity. After a period of some months, utility ceases to grow because the winner has substantially completed the process of factoring the new wealth into his or her future utility. Since happiness is derived from an increase in utility, absent this increase, there is no happiness. That is why the level of happiness tends to revert to what it was prior to winning. There is no longer a change to utility. In fact, sometimes utility actually decreases, leading to negative emotion (less happiness), as money is squandered or as the winner is made to feel guilty by all sorts of friends and family for not being more generous.
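The lottery story can be captured in a few lines: utility jumps and then plateaus, while happiness, being the change in utility, spikes and then falls back to zero. The monthly utility numbers below are of course invented.

```python
# Hypothetical monthly utility levels for a lottery winner: a jump at the
# win, a tapering climb as the winner absorbs the new wealth, then a plateau.
# Happiness, modeled as the month-over-month change, spikes and then reverts
# to zero even though utility remains high.

utility = [50, 50, 90, 95, 97, 97, 97]
happiness = [after - before for before, after in zip(utility, utility[1:])]

print(happiness)  # [0, 40, 5, 2, 0, 0]
```

The final zeros are the studied effect: the winner is richer and their utility level is permanently higher, yet measured happiness has returned to baseline because nothing is changing anymore.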

Conclusion

So there you have it. Utility is made up of three baskets of goods: basic needs, social desires and leisure. We humans attempt to maximize the present value of the sum of our future utility. That is what drives our decision making. Emotions are derived from changes in our level of utility. When we experience an increase in the present value of utility, we feel positive emotion (happiness). When we experience a decline in our utility, we feel negative emotion (sadness). There are differing strengths of both positive and negative emotions, corresponding to the size of the change in our utility. Finally, we also feel variations of these positive and negative emotions depending on whether the change to utility happens in the present, whether we are reminiscing about it, or whether we are anticipating it.

Thinking about the subjects of utility and emotions brings up a great number of fascinating questions. While each could easily be the subject of its own post, here are a few that I wanted to briefly address. I don’t pretend to have all the answers.

Can utility be measured?

I do not believe that utility is likely to be quantifiable. That is, I don’t think you can put a number on it, specifying, for example, that right now my utility is at 67, whereas before it was at 59 (obviously such a scale is also artificial). However, I concede the possibility that I could be wrong. Perhaps someday the technology will exist (an implantable device?) capable of reading brain waves or measuring levels of certain brain chemicals. Maybe these brain waves or chemicals are indicative of an individual’s utility and could be monitored constantly and in real time. I’m skeptical, but time will tell.

Does everyone have the same utility scale?

Let’s for a moment assume that you could put a number on utility. Does every human have the same scale of utility, the same upper and lower bounds? Say my utility scale goes from 1 to 100. Does that mean yours does too?

I think the utility scales (whether measurable or not) among humans would indeed differ, though only by a relatively small amount. I’d speculate that some people are naturally more positive (happier) and probably have a somewhat higher upper bound to utility. Similarly, some people are naturally less positive or more negative and have a somewhat lower lower bound.

Do animals other than humans have a concept of utility?

There is no question that many animals exhibit the same sorts of emotions as do humans. Happiness, sadness, pain, boredom (technically, pain and boredom are primary factors of utility like hunger or love and not true emotions), stress, anxiety, depression and many other emotions have been observed in animals, and not only in highly intelligent creatures such as great apes or dolphins, but in “lower” species as well. Since I argue that emotions are derived from utility and animals clearly exhibit emotions, I must conclude that animals do possess a concept of utility.

The question then is: do animals try to maximize the present value of their future utility as humans do? I don’t know for sure, but I am inclined to think that they do, at least the more intelligent ones. And perhaps most animals do. Others might say that what differentiates humans from animals is that we can think ahead, that we can anticipate our future and maximize utility. I think there’s no such separation. Like evolution itself, this is a matter of degree, not of absolutes.

How can we explain loss aversion? Why do negative emotions feel stronger than positive emotions?

As I discussed at length in my last post, one of the key insights of psychology and behavioral economics is a concept known as loss aversion, along with the related endowment effect. These two related ideas show that the pain of losing is more powerful than the pleasure of winning. No doubt this is true, but why?

There are two possibilities. The first is that there is a real asymmetry between a gain and a loss. I won’t repeat the details here, but briefly, I may rationally view an object as more valuable once I own it than before I owned it. Hence, losing it reduces my utility more than gaining it increased my utility. Because the change in utility is greater on the downside, so too is the emotional response to that change.

The second possibility is that emotions are just stronger when utility is reduced than when utility is increased. In other words, negative emotions are just more powerful than positive emotions. Theorists have proposed that this is a consequence of evolution and nature. That is, the most negative consequences of life’s activity (death) are greater than the most positive consequences of life. Therefore, evolution (through our genes) has programmed us to feel worse with loss of utility (death being the ultimate loss of utility) than with gains to utility.

Both are plausible explanations for loss aversion and for the apparent asymmetry of emotions, but I favor the first.

Why I am not a utilitarian

I’ve thought a lot about utility and human decision making. This is a topic that I first became interested in during college (more than 20 years ago), though I never took a class in philosophy or psychology. In fact, most of the ideas contained in this article date back to my college days (I’m finally getting around to writing them down!). I firmly believe that human beings rationally make decisions in an attempt to maximize their own utility. I also believe that by understanding utility, we can explain how emotions are derived. I do not, however, consider myself a utilitarian. Let me explain why.

Like everything we do, let’s start with a definition. Of course, this is easier said than done, for utilitarianism means different things to different people. In fact, there are different factions of utilitarianism advocated by various philosophers. I am going to use what I believe to be the most common and most colloquial definition of utilitarianism: a system of morals or ethics in which decisions are made to maximize the sum of total utility. As many philosophers have pointed out, a number of issues arise that are not easily answerable. Here are some of the major ones.

First, whose utility are we maximizing? Fellow American (or pick your country) citizens? All humans currently alive? What about future humans not yet alive? How about animals (recall our discussion above about animals and utility)?

Second, should everyone’s utility count equally? Should a good samaritan count the same as a murderer? An elderly person the same as a child? An Einstein or Beethoven the same as an average Joe or Jane? Are we indifferent to twice the population at half the average utility? How about ten times the population with one tenth the utility?

Third, how can we possibly measure utility? Does everyone have the same scale? Is what is good for me the same as what is good for you? Do we all have the same pleasures and pains?

Fourth, is it at all realistic to sacrifice my own utility to help others, as the philosophy demands? Would this not be, dare I say it, irrational? Must I give away all my money? Is helping to feed a starving child in Africa the equivalent of preventing my neighbor’s kid from being hit by a bus? Is it the equivalent of saving my own child?

For these four sets of reasons I find the philosophy of utilitarianism to be lacking. It is a good philosophy in the sense that it is better than most alternatives when it comes to ethics and morals. But it is too unrealistic and has too many flaws to be workable in practice.

An alternative (libertarian) philosophy

I propose an alternative philosophy. A system of morals and ethics that is more realistic, easier to implement, and based on our biologically evolved decision making system. I don’t pretend that my suggestion is in any way original or without flaws. But I do think it is better, and worth contemplating.

Each individual should attempt to maximize their own utility (as they do now), with one crucial caveat: that the individual’s decision does not impact anyone else’s utility (positively or negatively) without the other party’s consent. In essence, this is the true libertarian philosophy.

The most significant advantage of my proposed philosophy over utilitarianism is that it is implementable. Under utilitarianism, I must be able to measure and predict everyone else’s utility function. This is positively impossible. I can never know what is in someone else’s utility basket (which can change from moment to moment). I can never know the weightings of each utility component (which can change from moment to moment). I can never know the discount rate someone else uses to present value future utility (which can also change from moment to moment). Now multiply what is already impossible by 7 billion people (plus animals!).

Under my philosophy, all I need to know is that my actions do not affect anyone else’s utility, absent their consent. This is not a completely trivial matter, but it is infinitely easier than what is asked of me under utilitarianism. The libertarian philosophy is also far more realistic because it is based on the same utility-maximizing decision making framework we have inherited from evolution. In that sense it is a far more “natural” philosophy.

The main criticism someone would likely level at my philosophy is that it is amoral, selfish and antisocial, rather than moral, selfless and social. I disagree for a number of reasons. Most importantly, because a decision you make should never affect anyone else (again, without consent), you can never hurt them (lower their utility). Strict utilitarians would allow hurting one individual to help two.

Furthermore, a superficial glance at my philosophy would lead one to believe that by maximizing one’s own utility, there is no rationale to help others (the selfish, antisocial critique). However, this ignores the fact that a major component of utility is the basket of social desires. As I’ve said from the outset, we humans being social creatures have evolved to live in societies where we do indeed receive utility from helping fellow human beings. We don’t necessarily need a utilitarian morality nor religion nor a tax incentive to give to charity. Charitable behavior is ingrained in us (obviously, in some more than others). There is also no reason that society cannot further emphasize and encourage charity and goodness.

Utilitarianism is not only a philosophy of individual decision making but also a moral or ethical code for government. Specifically, it is the goal of government to maximize the sum of each person’s individual utility. Just as it is impossible for you or me to calculate each other’s utility, it is equally impossible for government to do so for all of its citizens. This is an argument analogous to the one Friedrich Hayek made in opposition to socialism. Regardless of your view of socialism’s moral or ethical merits, it is simply impossible for government to make all of the economic decisions in an economy with even a trace of efficiency. Similarly, regardless of utilitarian morals or ethics, it is equally impossible for government to make decisions with the goal of the greatest good for the greatest number (maximizing the sum of utility).

So what instead should government do? Consistent with our libertarian philosophy, government should aim to maximize freedom. I fully concede that maximizing freedom is complicated with a great many tradeoffs (the subject of a future post or maybe a book!), but I think it is a lot less complicated than maximizing total utility. Moreover, I would suggest that maximizing freedom will lead to a substantially higher level of societal utility than would trying directly to maximize utility. Recall the heuristic of our decision tree. Having more choice in my life (freedom) is, most of the time, consistent with higher utility. Having our choices reduced is indicative of lower utility (with the accompanying emotions of stress, anxiety and panic).

To be fair, a libertarian philosophy of government does raise many of the same interesting and debatable questions as utilitarianism. Specifically, whose freedom should be maximized: citizens, residents or all humans? I advocate for all humans, but given the jurisdictional limitations of government, the way to achieve this is with open immigration. Do animals count? Yes they should, but I have no idea how. Do future humans count? I’m not certain. Should everyone’s freedom be counted equally? Yes, but not because everyone is equally deserving. Instead because to do otherwise is too complicated and ripe for corruption (simplicity counts in philosophy).

Let me end this article with something I consider very important, and very misunderstood. A libertarian philosophy of government is naturally one that will tend to favor smaller government. It will also tend to favor not doing over doing. However, the proper goal of libertarian philosophy is not, contrary to popular belief, to minimize government per se. The goal is to maximize freedom.

We are not irrational: Nobel Prize edition

Richard Thaler was recently announced as the recipient of the 2017 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel (aka The Nobel Prize) “for his contributions to behavioural economics.” Thaler is essentially the father and most noted proponent of the field of behavioral economics as well as its offshoot, behavioral finance. By most accounts (and my own reading of many of his published works and his memoir, Misbehaving), Thaler is also quite a nice guy – and rather un-arrogant. A refreshing contrast to most economists. But is he deserving of the Nobel?

The main criticism of the work of Thaler (and of other behavioral economists) is that its conclusions are plainly obvious, if not to economists blinded to reality, then to generations of psychologists, marketers and hucksters. And it is true that such work would probably not be quite as agreeable to the Nobel committees for physics, chemistry or medicine.

But leaving aside the question of whether this silly prize should even exist, I’d say Thaler is deserving. In helping to introduce psychology into economics, and in shepherding behavioral economics from its backwater infancy to mainstream acceptance and political prominence, Thaler has done more to influence the field of economics than perhaps any other academic economist in the past few decades. A lifetime achievement award, as it were. Congratulations.

Obvious or not, the key insight of behavioral economics is that we, the people, do not behave the way economists and their models thought we did. Or think we should. In a nutshell, we do not always make decisions that maximize our wealth. And by showing evidence of this fact time and time again using simple experiments, questionnaires and financial data, a Nobel Prize was won. I have no problem with that.

But I do have a problem, and the problem is this. By not conforming to the way economists think we should and by not consistently maximizing our wealth in all our decisions, we, human beings, are labeled “irrational.” And this is a conclusion stated in virtually all articles about behavioral economics, whether in the mainstream media or in economic journals. In fact, as we’ll see shortly, evidence of “limited rationality” was one of the stated reasons the Nobel committee gave to justify Thaler’s prize.

Irrationality is nonsense and the need to “correct” irrationality should not be used to justify government intervention in the economy or in our daily lives.

Rationality and utility

Now we must get slightly technical. What exactly does it mean for an individual to behave rationally? The colloquial definition is something like, “to make decisions using reason.” Let’s first make note of the fact (which we’ll return to later) that rationality implies making a decision. Next, let’s ponder what it means to use reason (i.e. sound judgement, good sense, logical arguments) to make a decision, or to make a “reasonable decision.” I’d say a reasonable decision (or a decision based on reason) is one that I believe or expect will be in my best interests.

Take note again, that I use the term “believe or expect” to indicate that at the time of a decision, my information is incomplete. That is, I don’t know the future. So, rationality does NOT imply a decision that winds up resulting in a good outcome. It solely implies that my intent in making a decision was in my best interests given available information. Equivalently, irrationality does not imply making a decision that winds up being a bad one, provided that I thought it’d be a good decision at the time I made it.

Finally, and most importantly, what do we mean by the words, “in my best interests?” Here we will use a term very familiar to, though often misinterpreted by, economists. And here is where we will really begin to deviate from the economic mainstream. The economic term for “my interests” is “utility.” Correspondingly, the economic term for “in my best interests” is “maximizing my utility.” So what the heck does “utility” represent?

Now I must be less precise because nobody, neither philosophers nor economists, has agreed upon what exactly constitutes utility. Some say it is a measure of happiness or pleasure. Some punt and just say it is whatever it is that I maximize. I think we can shed a bit more light.

I’m going to say that utility is an aggregation of all the stuff that evolution and biology have made us humans desire. These include, first, basic life necessities such as water, food and good health. For example, other things equal, my utility at any given moment is higher if I’m not thirsty, not starving, and not sick. Given that humans are social animals (with strong incentives to reproduce), our utility is also made up of social desires such as love, friendship, companionship, sex and status (status is something to which we will very importantly return again and again). And while not necessarily an exhaustive list of the various components of utility, I’ll propose a third category of all sorts of entertainment (or that which prevents me from being bored).

At any given moment in time (or at least when we are conscious), each of us has some level of utility. The components of utility will clearly vary from person to person and, within a given person, from moment to moment. Even though utility is not necessarily quantifiable, each of us is capable of judging or estimating whether a given action will likely result in greater or lesser utility to us.

When we say that we humans make decisions in order to “maximize utility,” what we precisely mean is that we make decisions in order to maximize the present value of the sum of our probability-weighted future utility over our lifetimes (or longer, if you believe in an afterlife). So there are two final but crucial wrinkles that we need to discuss.

By the term “present value,” we mean that a given amount of utility today is worth somewhat more than the same amount of utility tomorrow. How much more depends on some “discount rate,” which will also vary from person to person and may vary from moment to moment. And lastly, note that we weight future utility by the likelihood of the events occurring that would result in that quantity of utility. Our brains do this implicitly, much the same way a baseball player can calculate the optimal path to run in order to catch a fly ball without knowing the first thing about physics or parabolas.
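To make the structure of this calculation concrete, here is a minimal sketch in Python. The scenarios, utility numbers and `discount_rate` are purely hypothetical, since utility itself is not really quantifiable; the point is only the shape of the present-value sum.

```python
# Hypothetical sketch: present value of probability-weighted future utility.
# Utility is not actually quantifiable; the numbers here are invented purely
# to illustrate the structure of the calculation described in the text.

def present_value_of_utility(scenarios, discount_rate):
    """scenarios: list of (years_from_now, probability, utility) tuples."""
    return sum(
        prob * utility / (1 + discount_rate) ** years
        for years, prob, utility in scenarios
    )

# Two possible futures flowing from one decision: a likely modest payoff
# next year, and an unlikely large payoff five years out.
scenarios = [(1, 0.8, 100.0), (5, 0.2, 500.0)]

pv = present_value_of_utility(scenarios, discount_rate=0.05)
print(round(pv, 2))  # → 154.54
```

A higher discount rate (a more impatient person) shrinks the weight on the distant payoff, which is exactly the person-to-person variation described above.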

Before we move on, I will admit a few things. First, from the standpoint of philosophy (rather than economics), nothing about my idea or loose definition of utility is all that controversial. Second, my definition is essentially what is known as a tautology. That is, a sane adult will only make decisions they believe to be in their best interests. In other words, all decisions made by sane adults are, by definition, rational. Third, up until now, I am mostly talking about definitions and semantics. But that is not really the purpose of this article. Its purpose is to argue that a bad definition should not be used to justify government action. Fourth and finally, the concept of utility is clearly nebulous and extraordinarily difficult, if not impossible, to quantify. Hence, economists don’t use it in their models or when analyzing experiments. And herein lies the problem.

So what do economists use to predict decision making and to pass judgement on whether a particular decision or set of decisions is rational or irrational? They do one of two things. Mostly they use a much more quantifiable metric as a proxy of utility: money (or wealth or income). That is to say, rather than maximize the sum of the present value of probability weighted utility, I should maximize the amount of money that I have. If I don’t consistently maximize my wealth, then I am making bad (irrational) decisions. As we’ll see shortly when we discuss Thaler’s research, it is this error of using money as a proxy for utility that mostly accounts for economists’ misinterpretation of rationality.

Then there is the second methodology that economists use to analyze decision making and to judge (ir)rationality. Based on the work of game theorists, economists sometimes utilize an alternative and non-colloquial definition of rationality. Rather than rationality being defined as a reasonable decision made in my best interest (utility maximizing), economists set forth a set of technical rules that rational decisions must satisfy. For example, decisions must be logically consistent (if I prefer coffee over tea and hot chocolate over coffee, I must always prefer hot chocolate over tea). Decisions must also be time consistent (if today I prefer pizza over a hamburger, I must also prefer pizza over a hamburger tomorrow). Moreover, it is assumed that individuals have a perfect and instantaneous ability to calculate mathematical probabilities. If any such axioms are violated, the individual is judged irrational.
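To make the logical-consistency axiom concrete, here is a minimal sketch of the kind of transitivity check game theorists have in mind. The preference pairs are hypothetical, not drawn from any particular study.

```python
from itertools import permutations

# Hypothetical sketch of the transitivity axiom: given pairwise preferences,
# check that no cycle exists (if A > B and B > C, we must not also have C > A).

def is_transitive(prefers):
    """prefers: set of (winner, loser) pairs meaning 'winner is preferred'."""
    items = {x for pair in prefers for x in pair}
    for a, b, c in permutations(items, 3):
        if (a, b) in prefers and (b, c) in prefers and (c, a) in prefers:
            return False  # a > b > c yet c > a: a cycle, hence "irrational"
    return True

# hot chocolate > coffee and coffee > tea: consistent so far
print(is_transitive({("hot chocolate", "coffee"), ("coffee", "tea")}))  # True

# adding tea > hot chocolate creates a cycle, violating the axiom
print(is_transitive({("hot chocolate", "coffee"), ("coffee", "tea"),
                     ("tea", "hot chocolate")}))                        # False
```

Note that this check says nothing about whether the preferences serve the person's interests; it only tests the formal axiom, which is exactly the gap the text is pointing at.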

There are a number of problems with this approach to rationality. First, as we spoke about when we made our own definition of utility, there is no reason to expect that preferences need be static over time, including the discount rate (how we value utility today versus utility in the future) that we implicitly use to discount future utility. Clearly it is not irrational to decide to have pizza today and a hamburger tomorrow. Nor is it irrational to today forgo dessert (value my future health more than the instant satisfaction of fat and sugar) but tomorrow partake in dessert (value instant satisfaction more than future health).

It is also not at all reasonable to assume that individuals are perfect calculators. Very few people get perfect SAT scores (and some of the questions asked of participants in behavioral economics studies very much resemble SAT questions). Should a less-than-perfect score be viewed as irrational behavior? I think not. Neither miscalculation, mistake, lack of education nor even stupidity should be considered the equivalent of irrationality. Remember, we said that a rational decision is one that I believe to be the best decision, not one that actually is.

Lastly, I want to strongly reiterate that this type of technical definition of rationality is not the colloquial one. I suppose that in a purely academic setting it is sort of okay to misuse common knowledge words, provided that you define your misuse. The problem, however, is that this alternative definition has not been confined to academic journals and seminars. Instead, it has carried over into the mainstream (media) where a now popular and pervasive belief in erroneous irrationality is used to encourage and support government policy to “correct” human irrational behavior in all sorts of markets and sectors of the economy.

Thaler and his Nobel Prize winning research

We’re finally ready to discuss Thaler’s research, and its application, or misapplication, to human rationality. As described by the Nobel committee, Thaler won his prize for his contributions to four components of behavioral economics: 1) limited rationality, 2) lack of self control, 3) social preferences and 4) behavioral finance. Let’s take a look at each in turn and examine whether it is appropriate based on Thaler’s research to conclude that we humans are indeed “irrational.”

Note that all of the quotations below, unless otherwise noted, have been taken from the Nobel Prize Committee’s press release or accompanying background material.

1 (a). Limited rationality: mental accounting

“Thaler developed the theory of mental accounting, explaining how people simplify financial decision-making by creating separate accounts in their minds, focusing on the narrow impact of each individual decision rather than its overall effect.”

One of Thaler’s research topics was, in his words, to try to understand “how do people think about money.” Economists assumed that money is money is money, or in technical terms, “fungible.” Thaler noticed that people often think about money in a very different way, in what he called “mental accounting.” People might separate their savings into different pools of money, either purely mentally or with separate bank accounts or money jars. For example, individuals or families might have a separate pot of money for housing, food, clothing, vacations, long-term savings, etc. In fact, most of us do this in one form or another.

Most economists, Thaler included, view this kind of “mental accounting” as less than fully rational behavior. For example, let’s say my “food money jar” is running low and is insufficient to buy this week’s groceries for my family. Let’s also assume that there is plenty of money in the “vacation money jar” that won’t be needed for a while. Assuming money is money, economists would say the rational thing to do is take money from the vacation jar and use it for food. But this is not what many people do. Instead they might choose to work extra overtime this week, or take the time and sell some stuff on eBay in order to fill up the food jar so as not to take from the vacation jar.

Is this behavior irrational? If rationality means simply maximizing wealth, then clearly the answer is yes. But I don’t believe that is what rationality means. Let’s think about how money impacts my utility. I cannot drink money or eat money, nor does looking at it provide much entertainment. Hence, money does not directly result in utility. It indirectly results in utility in at least three ways.

Most obviously, money allows me to consume goods and services in the future that will result in utility down the road. In other words, having money now increases my future utility, which (present valued) is what I am trying to maximize when making decisions. Second, having money now contributes to my feeling of “status,” which I believe is one of the primary components of utility. That is, feeling wealthy (or wealthier) makes me feel superior (or less inferior) about myself. Third, having money provides a sense of freedom and reduces stress and anxiety. Technically speaking, having money increases my ability to make choices in the future, some of which might lead to higher utility.

Let’s now return to the question of money jars. I don’t take money from the vacation jar to use for groceries because I’d feel badly or guilty doing so. In other words, the status or self-worth component of my utility would be lower if I did. I’d rather preserve my status or self-worth and give up some leisure time to make money some other way (e.g. overtime or eBay). In my view, this is perfectly rational behavior.

But I don’t blame you if you’re not yet convinced. You might be thinking, whoa... wait a minute! I’m making a decision based on “feelings” and not “reason.” Isn’t that the textbook definition of irrationality? No. Consider the following.

Lots of people choose to drive a BMW instead of a Honda or a Chevy even though all three vehicles provide virtually identical utility when it comes to the primary purpose of an automobile: transportation. That is, they all will generally safely get you to the same place at the same time. BMWs of course cost more, so should I be considered irrational to spend extra money to own one? No, because a BMW provides additional utility beyond simple transportation. The owner of a BMW derives utility from the sportier drive, the plusher seats, the better sound system, and most importantly, from the status or superiority that such a luxury brand provides. In other words, they feel better in a BMW.

The same arguments can be made for the decision to live in a 10,000 square foot mansion rather than a small house even though both can equally provide necessary shelter. Or the decision to wear fancy designer clothes rather than last season’s basic hand-me-downs even though both can equally provide necessary warmth and coverage. Or the decision to eat meals at 3-star Michelin restaurants rather than neighborhood diners even though both can equally provide necessary nourishment.

All of these instances of everyday human activity show that people make decisions all the time that choose “feelings” over wealth. Just like with the money jars. And if you are going to argue that all of these kinds of decisions are irrational, then you will wind up maintaining that nearly all decisions that humans make are irrational and that humans should never spend any money at all except for the most basic necessities (or for investments that will provide future basic necessities). Which is also essentially saying that you have zero insight into human decision making. A useless endeavor.

In a similar analysis to the jars of money, Thaler pointed out evidence of mental accounting and concluded “limited rationality” from the observation that many people have both outstanding credit card debt AND money in their savings accounts. Isn’t it irrational to pay high credit card interest rates when you could partially or fully pay off your debt by using your savings? Again, the answer is not necessarily.

Thaler has argued that perhaps the reason for this is self control, or lack thereof. That is, having credit card debt prevents me from spending even more. If I paid off my credit card debts with my savings, I might spend excessively and once again rack up credit card debt. This is a plausible argument and one that, in my opinion, is evidence of rationality, not irrationality. However Thaler would likely argue that the fact that we humans do have issues with self control is in and of itself irrational. We will return to the topic of self control shortly.

I will propose some alternative reasons for why people might carry high interest credit card debt while having money in their savings account. Perhaps it is considered socially acceptable to have credit card balances (something encouraged by credit card companies) but not socially acceptable to have zero money saved. I might feel guilty (lower social status, lower self worth) telling a friend or family member, or even just knowing, that I have a zero savings balance. I do not necessarily feel the same level of guilt having credit card debt. Since I view social status as a primary component of utility, I view this behavior as perfectly rational.

Alternatively, I might rationally view my savings as more valuable than my credit card debt. To an economist this would not make sense since money is money and net worth is net worth. But my credit card balance can be maintained indefinitely provided I pay the minimal interest payments. Once the savings is gone, it’s gone. I may feel that I have more certainty, more flexibility or more of a safety net knowing that I have savings AND that I can continue to maintain a credit card balance.

A final rationale for having credit card debt and savings is that many people don’t realize or don’t understand the high interest rates they are paying to service the debt. To me, this is pretty stupid behavior. But it is not irrational behavior. Remember that to be rational is to “believe or expect” that a decision is in my best interests. Not understanding the amount of interest I must pay is certainly dumb but also means that it cannot be considered an irrational decision. Thaler and other behavioral economists would argue that in this sort of situation government needs to step in (see “nudging” below). If you want to make this argument (I would not), at least be honest. Government is intervening to correct stupidity, not irrationality.
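For a sense of the dollar amounts at stake in the savings-versus-credit-card decision, here is a quick hypothetical calculation. The balances and rates are illustrative only, not drawn from Thaler’s research.

```python
# Hypothetical sketch: the net annual interest cost of holding savings while
# carrying credit card debt, versus paying the debt down. The balances and
# rates below are invented for illustration.

def annual_carry_cost(debt, debt_apr, savings, savings_apy):
    """Net interest cost of keeping both the debt and the savings for a year."""
    return debt * debt_apr - savings * savings_apy

cost = annual_carry_cost(debt=5000, debt_apr=0.20, savings=5000, savings_apy=0.01)
print(cost)  # → 950.0
```

Whether the feelings of certainty, status and safety described above are worth roughly $950 a year is exactly the kind of utility judgment that a pure wealth-maximization model cannot capture.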

A third often cited example of so-called mental accounting and limited rationality is a study performed of New York City taxi drivers. The study showed that drivers tend to target a certain amount of income each day (what Thaler refers to as a “reference point”). If it is a good day and they can make their target income early, they stop working. If it is a bad day they work later until they meet their target. This behavior is viewed by Thaler and others as irrational since drivers drive less on days with high demand and more on days with low demand, contrary to the laws of basic supply and demand learned in Econ 101. But is it?

First, understand that the supply part of “supply and demand” refers to firms, not individuals. The implicit assumption is that firms always maximize profits and, when experiencing high levels of demand, firms will expand their production capacity and new firms will enter the industry until some “equilibrium” is met. However, here we’re dealing with individuals, not firms, and the key insight of this article is that individuals do not necessarily maximize profits (wealth). Moreover, unlike textbook supply and demand circumstances, taxi prices are not allowed to rise (or fall) with changes in demand (unlike Uber, for example, with its surge pricing), nor can industry capacity (more taxis) easily be increased.

So, supply and demand clearly has limited relevance here. But what about the fact that economists call taxi driver behavior irrational because the drivers value money (work) and leisure (non-work) differently on different days, depending on the demand for taxi rides? Or in other words, why aren’t they increasing their own capacity (driving more hours) in response to high demand and lowering their capacity in the face of low demand?

Thaler maintains that the study of taxi drivers shows that they tend to have an income goal (a “reference point”) for each day’s work. I agree. The question is, is there a possible rational explanation for this?  I think so, and once again I refer to the component of utility comprised of status or self worth. On high-demand days, I feel like I got a good deal. I can go home early and enjoy my leisure time, unlike other drivers (or any other workers) who may still be working. On low-demand days, I feel good because I worked harder (longer) and still made my target income. Had I not worked longer, I might feel like a quitter or even a failure. Moreover, I’d argue that for taxi drivers, having a daily income goal (rather than just maximizing income) in and of itself contributes to utility because it makes a stressful and lonely work day more palatable (or less unenjoyable).

Before moving on, I’ll say one more thing about the taxi driver study. Indulge me for I will now make a completely unscientific guess. I will speculate that taxi drivers with spouses exhibit this income goal (reference point) behavior more than those taxi drivers without spouses at home (though they still exhibit it too). If I return home late, I am met with something like, “Why are you late for dinner?” If, on the other hand, I come home with low pay, I am met with an even more serious, “Why didn’t you make more money today?” Either way, I am made to feel guilty (those without spouses make themselves feel guilty, but to a lesser extent). The feeling of guilt is tied to low status/self worth and lower utility. In my view, it is perfectly rational to sacrifice a small amount of income, or work a longer day, to avoid being made (or making myself) feel badly.

The final area of Thaler’s study on the subject of mental accounting that I wish to discuss is the observed fact that individual stock investors are more likely to sell winning stocks and hold on to losing stocks. This is known as the “disposition effect” and is related to some of what we will discuss later on when we turn our attention to behavioral finance. Part of the insight of behavioral economists is that investors treat each stock as its own mental account, rather than try to maximize their entire portfolio or net worth as they assume a rational being should.

I have little doubt that this so-called disposition effect is indeed real. And once again, the behaviorists have successfully shown that we humans are not simply wealth maximizers. But surprise, surprise, they have not shown that we are irrational. As an individual investor, I get utility from owning a stock, and especially from making a winning trade, that goes far beyond the monetary value of the gain. I feel smart, brilliant even! I tell all my buddies at the bar, and strangers at cocktail parties, and my spouse, how great a stock-picker I am! Utility is not money. As we’ve talked about many times, utility includes my feelings of status and self worth. I’ll gladly (and rationally) trade a few bucks in exchange for the world to think I’m the greatest investor since sliced bread. No different from trading a few extra bucks to be seen in a BMW instead of a Chevy.

Of course, if I wait too long to sell the stock, there is a chance its value might go down. I may even lose money! Better to book the gain and be brilliant than risk losing my perceived brilliance. Are my friends going to think I’m more brilliant because I have a 32% gain rather than a 28% gain? Probably not. Better to leave a little money on the table than risk losing all the gain (and all my perceived brilliance). In other words, booking the brilliance provides me more utility than the additional monetary gain. And what happens if I hold on too long and the gain is lost? Now I will feel all sorts of guilt (low self worth) from myself and others for not being smart enough to get out when I should have. As we’ll discuss shortly in the section on loss aversion and the endowment effect, this hurts even more.

Similarly, I have lower utility from a loss beyond its monetary value. As long as I hold on, there’s always the chance of reversing the loss. As soon as I sell, I’m an idiot and my utility is lower. But as long as I hold on, I’m not. The chance to not be an idiot outweighs the monetary loss of a further decline in the stock price. For this reason, honoring sunk costs (another subject of Thaler’s research, though not directly cited by the Nobel committee) can be viewed as perfectly rational behavior.

1 (b). Limited rationality: the endowment effect

“He also showed how aversion to losses can explain why people value the same item more highly when they own it than when they don’t, a phenomenon called the endowment effect.”

In 2002, a psychologist named Daniel Kahneman was awarded the economics Nobel Prize for a set of ideas he (along with the late Amos Tversky) derived about human decision making called prospect theory. The most important component of prospect theory is something called “loss aversion,” the idea that losing something lowers our utility by a greater amount than obtaining the object had raised our utility. In other words, an asymmetry exists whereby losses are more painful than gains are pleasurable.

Richard Thaler was the first economist to apply prospect theory, and specifically loss aversion, to the realm of economics. In fact, he essentially relabeled the idea as the “endowment effect,” noticing that people tend to value things they own more than they valued them before they owned them. Loss aversion and the endowment effect are no doubt real. But are they indications of irrationality, as Kahneman and Thaler and many others would have us believe?

One of Thaler’s most well known pieces of research on the endowment effect is his coffee mug study. Briefly, he gave half of a class of college students free mugs (retail value $6) and allowed them to trade with the other half of the class that did not receive the mugs. As it turned out, very few trades were made and it was observed that the median price at which sellers wanted to sell a mug was about twice as high as what buyers were willing to pay for a mug. In other words, those students who were given mugs valued them twice as much as those who were not given mugs. The endowment effect in action!
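To see how a gap between owners’ and non-owners’ valuations chokes off trading, here is a toy sketch of the market logic. The prices are invented to mirror the reported two-to-one median gap; they are not Thaler’s actual data.

```python
import statistics

# Hypothetical sketch of the mug experiment's market logic. The valuations
# below are invented to mirror the reported pattern (sellers' median asking
# price roughly twice buyers' median offer); they are not Thaler's data.

seller_asks  = [4.00, 5.00, 5.25, 6.00, 7.00]   # mug owners' minimum prices (WTA)
buyer_offers = [2.00, 2.25, 2.75, 3.00, 4.50]   # non-owners' maximum prices (WTP)

print(statistics.median(seller_asks))    # → 5.25
print(statistics.median(buyer_offers))   # → 2.75

# Greedily match the cheapest asks with the highest offers; a trade clears
# only when an offer meets an ask.
asks = sorted(seller_asks)
offers = sorted(buyer_offers, reverse=True)
trades = sum(1 for ask, offer in zip(asks, offers) if offer >= ask)
print(trades)  # → 1
```

With valuations like these, only one of five possible trades clears, which is the "very few trades" pattern the study observed.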

There are at least three reasons why I think it is perfectly rational to value something more when I own it, compared with its value before I owned it. The first and most important reason is that once we own an object, we now gain additional utility from the good memories, sentiments and emotional attachments that the object brings. For instance, every time I drink coffee out of that mug, or even see it sitting on my dorm room shelf, I will derive utility from the memories that I won a free mug from an economics professor! How fortunate! How cool! How many students can say that?

In other words, comparing the mug’s value before I owned it to its value after I own it is not an apples-to-apples comparison. While the physical mug has not changed in any way, its usage has changed, and hence its value. It is no longer simply a receptacle for hot liquids. It is also a receptacle of good, and status-increasing, memories. It is no longer a mug. It is now my mug. It is no longer identical to hundreds of other mugs sold in the campus bookstore. It is unique.

I know what you might be thinking. What a ridiculous argument. Why should an everyday object magically change in value from one moment to the next just because I own it now? To be clear, it is not that the object has changed, it is that the object’s value has changed to me. Consider the following.

Let’s say you are a huge basketball fan. Let’s say you are walking down the street late one evening and you run into Lebron James, and he’s so friendly that he gives you a jersey that he wore in that night’s game. Assume that nobody witnessed the gift and that you have no certificate of authenticity, so you could never sell it to a collector as a game-worn jersey. Would it be worth more to you than the same #23 Cleveland Cavaliers jersey you could buy in any sporting goods store? An economist would say no. In fact, an economist would probably say its value (and hence, utility) as a piece of clothing is actually lower than a brand new shirt, precisely because it has been washed and worn, and therefore has a shorter useful life.

But of course, you, a huge basketball fan, will value it much higher than a new shirt. Wearing it will make you feel special even if strangers have no idea who once wore it. Your friends will be jealous. You can daydream about passing it down to your future kid some day and telling him or her the story of how you obtained it. Point being, you get much more utility from the jersey than any other otherwise identical shirt you could have bought at a store. And because of that, its value is greater to you. And that is totally rational. Same for the coffee mugs.

A second reason why the endowment effect can be considered rational behavior has to do with how much time and effort you spend predicting an object’s value to you before and after you possess it. Before you own something, there is, obviously, a less than 100% chance you will come to own it. It is therefore rational to limit how much time and effort you expend anticipating the object’s use to you. Naturally, the greater the chance of ownership, the more time and effort you are likely to expend. Once you own an object, it now makes sense to expend additional time and effort to analyze the object’s potential usage.

Yes, I know that’s a bit confusing. Let’s use the mugs to make it clearer. Before I own the mug, I may give it a quick thought and say, “That would be a great mug for coffee or tea or hot chocolate.” Once I own the mug, I may upon further thought say, “not only can I use the mug for beverages, but it would also be a great holder for pens or spare change, or be a paperweight, or just look pretty on my desk, or maybe I can re-gift it…” I see more possible uses for the mug, more future utility, and hence more value.

I would argue that this value discrepancy pre-ownership and post-ownership, due to the differential certainty of ownership, is exacerbated when making decisions about objects of small value. Think about it. How much time is it really worth investing in thinking about the uses for a mug before I own it? In my view, this is especially true in academic behavioral economics research where questionnaires or even artificial trading do not involve real decisions, only theoretical ones. We’ll return to the issue of the applicability of research studies very shortly.

The third reason supporting the rationality of the endowment effect relates to the role of “status” in determining one’s utility. This is something we’ve discussed previously and will return to again and again. Thaler’s coffee mug study does not simply involve the respective valuations of the mug by owners (sellers) and potential buyers. It also involves the decision on both sides of whether or not to make a deal.

If I were one of the students fortunate enough to receive a free mug, I’m probably going to be reluctant to sell it for much less than its $6 retail value for fear of being viewed (by other students, or even by my own self) as having made a bad deal. Now, think about the mindset of a potential buyer. I know that the student who received the mug got it for free. Why should I pay full price for an item that my fellow student received for free? I could just as easily buy it for full price from the campus bookstore. Just like with the seller, I don’t want to feel like or be deemed by others a “loser” for making a bad deal. There is also an element of “fairness” involved. Why should I pay for something someone else got for free? We will talk more about fairness when we discuss Thaler’s research on social preferences. However, I think this holds true even if the seller of the mug had to pay for the mug in the first place.

All of us, and economics students especially, are trained to “buy low, sell high.” The perception I have (my status or self-worth) as a good trader (making a good deal or avoiding a bad deal) might be worth more than the few dollars differential of a coffee mug. I might even think that the unstated purpose of the exercise is indeed to measure my trading prowess, further biasing the analysis of mug value.

This is one of the limitations of artificial academic studies (something we will return to very shortly). Decisions about buying and selling mugs do not only capture the value buyers and sellers place on mugs. The study’s results are also affected (biased) by other factors, notably in this instance, how participants view themselves as smart traders vis a vis their fellow classmates. This is similar to our discussion of locking in stock gains and avoiding stock losses so as to be viewed as smart, something that contributes to my utility.

Think again about automobiles. Most of us know that the value of an automobile drops significantly as soon as it leaves the dealer’s lot. The car hasn’t really changed, other than perhaps a handful more miles on it. Why then, would you not buy it from me for anything close to what I paid? Naturally, I probably wouldn’t sell it to you either for anything less than I paid. Kind of like the mugs. The point here is that buy low, sell high is ingrained in most of us. When we violate this, we feel like idiots, which lowers our utility.

Finally, just like we talked about sentimentality which increases the value of the mug because it is my mug, and not just any old mug, there is also sentimentality to how I obtained it. Say for example that I acquired it in a trade from a fellow student for a low price. I got a great deal! Going forward that mug will bring me utility as I will recall the brilliant trade I made with another (i.e. inferior) student. This is distinct from the utility I might get remembering that I got it free from a professor.

The fourth and final rationale I will make with regards to the rationality of the endowment effect relates to the issue of sunk costs. As I mentioned earlier, taking into account sunk costs when making decisions is viewed as irrational behavior by economists. I disagree. Admitting a loss lowers my feeling of status or self-worth and thus my utility. I don’t want to sell the mug once I own it unless I get a very high price, because doing so is an admission that I made a mistake. I would rather trade off a small amount of money than be viewed (or view myself) as making an error. In my view, therefore, sunk costs can be considered a component of rational decision making.

In any case, enough talk of coffee mugs. Let’s move on to death and disease. A second well-known study on the endowment effect is a survey Thaler gave to students in a classroom setting. The following two questions were asked in the survey:

A) Assume you have been exposed to a disease which if contracted leads to a quick and painless death within a week. The probability you have the disease is 0.001. What is the maximum you would be willing to pay for a cure?

B) Suppose volunteers would be needed for research on the above disease. All that would be required is that you expose yourself to a 0.001 chance of contracting the disease. What is the minimum you would require to volunteer for this program? (You would not be allowed to purchase the cure.)

A typical answer given by students (in 1980 dollars) was about $20 for the first question (how much would you pay for a cure) and $10,000 for the second question (how much would you require to be exposed to the disease).

As I’m sure you have noticed, the probabilities of your death are equal in both questions. Either way you have a 0.001 (0.1%) chance of dying. So from a purely mathematical standpoint, valuing death the same, one should give the same answer to questions A and B. Thaler and others concluded that an endowment effect is at work. That is, people are willing to sell health for far more than they will spend to buy health. And of course, they infer that this large discrepancy is strong evidence of irrationality.
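To make the discrepancy concrete, here is a quick sketch of the implied numbers. The $20 and $10,000 figures are the typical answers reported above; the arithmetic simply divides each answer by the 0.001 probability to back out the dollar value each answer implicitly places on one’s own life:

```python
# Typical survey answers from Thaler's study (1980 dollars)
p_death = 0.001        # probability of dying in either question
wtp_cure = 20          # question A: willingness to pay for the cure
wta_exposure = 10_000  # question B: compensation demanded for exposure

# Each answer implies a dollar value placed on one's own life
implied_value_a = wtp_cure / p_death      # about $20,000
implied_value_b = wta_exposure / p_death  # about $10,000,000
discrepancy = wta_exposure / wtp_cure     # a 500x gap between B and A
```

From a purely expected-value standpoint the two implied values should match; the 500x gap between them is what Thaler attributes to the endowment effect.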

I want to hold off on addressing the question of rationality for a moment. We said a moment ago when discussing mugs that there is an element of bias in an academic study of decision making that renders conclusions questionable or even invalid. In the mug case, my skill as a trader and the utility gained from making a good deal might trump the economic value of a mug. Here, the study is much more unrealistic. In fact, this study is so far-fetched that I’d argue its conclusions are essentially meaningless.

Let’s say you were participating in this research study. What might go through your mind as you read the two questions? I know what would go through mine. If I choose A, why can’t I still get the cure if and after I learn I have the disease? If I choose B, why can’t I also get the cure? How do you know the disease will be fatal? How do you know that the disease will kill me in exactly one week? How do you know it will be painless? How do you know the exact probabilities of contracting the disease?

The point I am trying to make is that a survey like this is so unrealistic, so unbound to reality, that deciding between questions A and B has little relation to real-life decisions. The even more important point I want to make is that the choice of A or B has zero effect on my utility, other than perhaps a small impact if I infer the survey is some kind of test of my intelligence (as we saw with mugs). When I take such a survey, I am more likely to think, what is the right answer? Which answer will make me look smart? What is the point of the survey? What I am probably not thinking all that much about are the realistic possibilities of my own death, which is of course the intent of the study.

My criticism of this type of behavioral research study is not unique. Many others before me have shared the view that much of the research in behavioral economics is unrealistic and does not require the test taker to make a true decision. Thus, how can it be used to opine on the question of rationality? Certainly studies have shown that people might be bad at calculating probabilities. But as we’ve said before, a lack of math skill is not the equivalent of irrationality.

Having said all that, for the sake of argument, let’s take the survey and its conclusion at face value, that loss aversion or the endowment effect is absolutely true. The question that follows is why might it still represent a rational decision? The answer, in my opinion, is that I would feel like an idiot, and others would consider me an idiot if I caused my own death. I know what you are thinking. That makes no sense because either way I made a decision that resulted in my own death (either by not paying for the cure or by risking the disease). I don’t think this is exactly true.

To use one of the favorite tools of a behavioral economist, let’s restate or “re-frame” the two choices.

A) You may have a disease. Do nothing and you will probably live.

B) You don’t have a disease. Do something and you might catch it and die.

Do these feel equivalent? Of course, I’ve simplified the statements and left out the probabilities, but the point I am making is that how a question is framed has an enormous impact on most people’s decisions. Behavioral economists would no doubt agree, as framing is one of the most researched areas of decision making. But, what a behavioral economist would conclude is that the way a question is framed should not affect a rational individual’s choice as long as the outcomes are equivalent. If it does affect my choice, I am irrational. As I’m sure you can guess, I disagree.

There are a number of alternative ways I could have framed this choice, but the point I am trying to make is that in the first choice, either I already have the disease or I do not. I am not giving myself the disease. I am only deciding whether it is worth paying for a cure. In the second choice, I do not have the disease. I am making the choice whether to risk being exposed. In other words, I make the choice whether to give myself the disease or not. In B, I kill myself. In A, I don’t kill myself, I just don’t save myself. Yes the outcomes (death) are the same, but from a decision making standpoint they are not at all equivalent.

Why does this matter? Let’s think about what happens to me given both choices if I do get sick and do not have the cure. Either way, my last week on Earth is going to suck. Presumably I am quite upset, facing certain death. But there’s something else. My guilt will be far greater had I chosen to risk exposure (choice B) than had I opted to not pay for the cure (choice A). Think of all those wrenchingly sad goodbye conversations with my loved ones if I chose B. “How could you have risked exposure to a deadly disease for a bit of money?!?!?” Whereas with choice A, it seems perfectly reasonable to not pay for the cure given the very low chance of disease. The point is that since guilt is a key component of status or self-worth, and since status or self-worth is a key component of utility, my utility will be lower in choice B than in choice A. And since my utility will be lower for that last miserable week, it makes perfect, rational sense to require a lot more money to make that choice, exactly what the study showed.

Let’s discuss one final example of the endowment effect. In one of his early research papers, Thaler wrote about a gentleman referred to as “Mr. H.” who mows his own lawn. A neighbor’s son offers to mow Mr. H.’s lawn for $8 (1980 dollars) but Mr. H. continues to mow his own lawn. Mr. H. is then offered $20 to mow his neighbor’s equivalently sized lawn but Mr. H. declines.

On the one hand, Mr. H. is saying that mowing a lawn is worth no more than $8. On the other hand, Mr. H. is saying that mowing an equivalent lawn is worth no less than $20. How can this be? Naturally, Thaler concluded that there is an endowment effect going on, that the price at which a person is willing to buy a good or service can be significantly lower than the price at which they are willing to sell the same good or service. No disagreement here. But what about the issue of rationality?

To understand why Mr. H.’s behavior can be considered perfectly rational we need to think first about the consequences to utility from mowing one’s own lawn versus not mowing one’s own lawn. When I mow my own lawn, there’s a sense of pride and accomplishment in my beautiful lawn-mowing job, which contributes to my status. I also avoid the negative status that stems from the guilt I receive from my spouse’s disappointment in me not mowing the lawn. I might want to demonstrate to my children the responsibility of chores. I also may avoid the guilt that I feel if I shirk my responsibilities as a homeowner. Finally, perhaps there is entertainment value in the actual mowing, being alone and with nature. All of these may contribute to my utility and may be worth far more to me than a small amount of money. Perhaps they are even priceless. That is, my neighbor might offer to mow my lawn for free, but I would still mow it myself.

Next, let’s discuss why I might not want to mow my neighbor’s lawn. I don’t get the same status from it. I don’t care that my neighbor’s lawn is beautiful. I don’t feel the guilt from my spouse or from myself for not doing it. It’s not my job as a homeowner since it’s not my home. Mowing my neighbor’s lawn is just a business transaction. Mowing my own lawn is not. Hence, they are decidedly not equivalent, even if the time and effort required for mowing are. In short, it makes perfect sense to mow one’s own lawn given that I get additional utility from it and it makes perfect sense to not mow my neighbor’s lawn for more money since I don’t get the same utility.

Before we leave the topic of the endowment effect, I want to point out two important conclusions that have been demonstrated by research. First, the endowment effect is much weaker, if it exists at all, for goods that have easily defined and known monetary value. This should make sense since 1) cold hard cash or its equivalent has little sentimental value, 2) we think about the value of money all the time so there should be little difference in our predictions for its use before and after it is obtained, and 3) it is highly unlikely for a trade to be considered good or bad when the value of the item to be traded is obvious to both parties. The second conclusion of behavioral research is that professional traders generally do not exhibit the endowment effect. This also makes much sense since professionals tend not to become sentimentally attached to the objects they trade.

2. Lack of self-control

“Thaler has also shed new light on the old observation that New Year’s resolutions can be hard to keep. He showed how to analyse self-control problems using a planner-doer model, which is similar to the frameworks psychologists and neuroscientists now use to describe the internal tension between long-term planning and short-term doing.”

The second area of study that the Nobel Prize committee cited as a reason for Thaler’s award is his research on the lack of self-control and, in Thaler’s opinion, government’s responsibility to correct it. Here, more than anywhere else in this article, I disagree with Thaler and his followers.

Thaler has stated that one of the things that first got him interested in studying decision making was the somewhat bizarre behavior of his academic colleagues and friends that tended to occur at dinner parties. That curious behavior involved bowls of nuts. Specifically, cashews.

Here I quote from Thaler’s book, Misbehaving:

“Some friends come over for dinner. We are having drinks and waiting for something roasting in the oven to be finished so we can sit down to eat. I bring out a large bowl of cashew nuts for us to nibble on. We eat half the bowl in five minutes, and our appetite is in danger. I remove the bowl and hide it in the kitchen. Everyone is happy.”

Thaler used anecdotes like this, and later, research studies to conclude that human beings lack self control, a conclusion that is surely true. But he also concluded that this lack of self control represents irrational behavior. Dinner guests say they are happier when the cashew bowl is removed. They knew it was ruining their appetite for dinner. Yet, they could have just stopped eating! This certainly seems irrational. Moreover, how could anyone be happier with less choice (no cashew bowl)? This is something known by behavioral economists as the “paradox of choice.”

I am now going to give two very different explanations for why I think that eating cashews should not be considered irrational behavior. I think the first is the stronger argument. As you’ll see, it is also a very different argument than I have used so far in this essay.

Eating the cashews is not an irrational decision because it is not a decision at all. Think about yourself in a similar circumstance. Do you decide you’re going to have another cashew and then eat it? Or does your body just do it without you deciding? Hand goes to bowl. Hand picks up nut. Hand goes to mouth. Repeat. Did you consciously make a decision to pick up a nut and put it in your mouth? No. This kind of action is unlike, for instance, deciding at what price to buy or sell a mug or whether to mow your lawn. Those require thought. Eating cashews from the bowl in front of you does not. Simply put, there is no decision being made.

Recall our definition of rationality: to make a decision that I believe to be in my best interests. Absent a decision, we cannot conclude rationality or irrationality. Eating a cashew in this case is little different from breathing, an involuntary activity of your body. You could also call it an addictive behavior. Either way, it is not a conscious decision, and therefore not an irrational one. It is also not an example of the “paradox of choice.” Because I don’t make a choice when eating the cashews, removing the bowl is not the same as removing a choice.

Thaler also noted that when the cashew bowl is far away (say, at another table on the other side of the room), people do indeed refrain from eating the nuts. They do not get up, walk across the room and grab a nut. That would require a conscious decision, and therefore could be considered an irrational one. But people don’t do this.

That was the first answer for why cashew eating is not irrational. A second possible answer is that eating the cashews actually does increase my utility even if I don’t want to admit it. People might say that they would rather eat a healthy dinner than a bowl of nuts, but perhaps they are lying. They might even be lying to themselves. Why would they lie? Because it is not socially acceptable to ruin one’s appetite by eating unhealthy snacks. It is not considered acceptable to have a dinner of nuts. That is considered by society to be weak and childish. And who wants to be considered weak and childish by one’s peers? Or even one’s self?

Evolution has given us humans a desire to eat fatty and salty foods. Hence, when I eat them, my utility is higher. Similarly, why ever eat ice cream? Surely I can get the equivalent calories in a healthier package. But the fat and sugar of ice cream makes me feel good, it increases my utility. Admit it or not, cashews for dinner might not be the healthiest decision, but there’s no reason it can’t be the rational one.

Take your pick whether you prefer the answer that cashew eating is not a decision or that cashews are better than meatloaf. Both are probably true.

The planner-doer model

“Thaler used his research on self control to propose a model of human behavior he called the planner-doer model.”

Thaler hypothesized that a person has two selves, the planner and the doer. The planner tries to maximize the present value of lifetime utility. The doer is only concerned with current utility.  Naturally there is conflict between the planner and the doer, but sometimes the planner can override the doer if sufficient willpower (some kind of cost) is used.

In my view Thaler’s planner-doer model is Ptolemaic, or maybe Freudian.  It is confusing, unnecessary and wrong. First of all, if I’m deciding between a healthy fruit cup for breakfast or a chocolate doughnut do I really have a devil (doer) on one shoulder and an angel (planner) on the other? How exactly do they duke it out to make a decision? Second, how long does the “doer” have to make a decision until the “planner” kicks in? Is it instantaneous? What exactly is meant by current utility in this context? Isn’t breakfast in the future anyway?

Are chocolate doughnuts always the choice of the doer? Are they always disallowed by the planner? Do they always reduce my long-term utility? What about a chocolate doughnut once per week? Once per month? May I eat one once a year even? When I’m old can I eat one? Age 60? 70? 80? On my deathbed? Ever? What if I plan to eat a doughnut so it’s not an impulse decision? Would that be okay? Yes, I think I’ll plan to eat one for breakfast every day, starting tomorrow. That must be allowed since it’s a decision made by the planner in me, not the doer.

I’m obviously being a bit silly here. But the point I’m trying to make is that when you really think about the planner/doer model, it completely falls apart. There’s no obvious way to differentiate between the two decision makers unless you say something like the “doer” makes me fat, unhealthy and poor and the “planner” keeps me thin, healthy and rich. But that’s not a useful or valid model for an economist or for any other half-intelligent person.

We humans maximize the present value of our (probability weighted) future utility. That’s how we make decisions. How we weight the difference in value between current and future utility is exactly measured by the discount rate we implicitly use. No angels or devils, planners or doers needed. Of course, how we derive our discount rate is a good question, but one that I will not address except for this. Our discount rate can, and will change from time to time, contrary to the assumption of economists. Remember as we’ve stated before, there is nothing irrational about having that fruit cup today and that doughnut tomorrow.
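As a minimal sketch of the decision rule just described, with made-up utility numbers and a hypothetical discount rate (none of these values are measured; they only illustrate the mechanics):

```python
# Decision making as maximizing the present value of
# probability-weighted future utility, using a personal discount rate.
def present_value(utilities, probabilities, discount_rate):
    """Sum p_t * u_t / (1 + r)^t over periods t = 0, 1, 2, ..."""
    return sum(
        p * u / (1 + discount_rate) ** t
        for t, (u, p) in enumerate(zip(utilities, probabilities))
    )

# Doughnut: big utility now, small health costs in later periods.
# Fruit cup: modest utility now, no later costs.
doughnut = present_value([10, -2, -2, -2], [1, 1, 1, 1], discount_rate=0.25)
fruit_cup = present_value([4, 0, 0, 0], [1, 1, 1, 1], discount_rate=0.25)
```

Under these assumed numbers, at a discount rate of 0.25 the doughnut wins, while at a rate of zero the two choices tie. The same person can rationally pick differently on different days simply because the discount rate has changed, with no planner or doer required.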

Before moving on, let me cut Thaler just a bit of slack here. His definition of the “doer” is very vague. But, to the extent that the “doer” is a proxy for our cashew-eating involuntary decision making system of the brain, then I agree. Neuroscience research has indeed shown that there are multiple decision making systems of the brain. At the very least, there is one that involves voluntary (thinking) decisions and one that governs involuntary actions. However, as we stated above, actions made by this second involuntary system should be considered neither rational nor irrational since they do not involve conscious decisions.

Nudging

“Succumbing to short-term temptation is an important reason why our plans to save for old age, or make healthier lifestyle choices, often fail. In his applied work, Thaler demonstrated how nudging – a term he coined – may help people exercise better self-control when saving for a pension, as well as in other contexts.”

Thaler used his planner/doer model to infer that individuals tend not to act in their own best interests. That is, they favor the short-term over the long-term. Thaler co-authored an influential popular book called Nudge and a paper entitled, “Libertarian Paternalism is Not an Oxymoron” where he argued that individuals should have choices made for them by governments (or other entities) in situations where they make decisions believed not to be in their best long-term interests.

The concept of “nudging” has been applied to a number of areas where academics (and government officials) believe people make irrational decisions that favor short-term benefits in lieu of long-term interests. These include healthy eating, smoking cessation, education and organ donation. However, the area that has seen the most research and probably the most real-world implementation is the one Thaler is best known for: retirement savings.

Specifically, Thaler posited that most people undersave for retirement since they do not have the (planner) willpower to override their (doer) urges to spend the money now. Implicitly, Thaler’s view is that this constitutes irrational behavior. Thaler’s research on retirement savings also demonstrated that many people do not participate in voluntary employee or government sponsored retirement programs, and thus miss out on valuable tax deductions and/or employer matching funds. This too he considered irrational.

To compensate for such short-term, irrational thinking, Thaler (and others) suggested that enrollment in retirement funds be made automatic. That is, instead of people having to fill out paperwork and choose to enroll in a savings plan (“opt-in”), they would be automatically enrolled unless they filled out paperwork stating their desire not to enroll (“opt-out”). Further, Thaler argued that funds be automatically invested in some sensible diversified portfolio (a default portfolio) rather than the individual having to choose the investments since individuals tend to pick irrationally. Thaler also suggests that contributions to retirement plans automatically increase as an employee’s salary increases.

Thaler labeled this libertarian paternalism or the more user-friendly, “nudge,” and successfully persuaded many companies and governments in the U.S. and U.K. to adopt such plans. On the surface, libertarian paternalism or nudging feels reasonably benign. It is not coercive because individuals can always opt-out. In Thaler’s view, a fully rational individual should be indifferent between a traditional opt-in retirement plan and a nudging opt-out plan since, either way, they have the ability to make the same choice (to save or not to save).

I, however, find four significant issues with this concept of nudging. First, how does Thaler or the government or anyone else know that my utility is higher if I save more? Second, if government does assume that long-term interests always trump short-term ones, there are an infinite number of situations where nudging could be applied. Where do we stop? Third, how do you prevent special interest groups from co-opting otherwise well-intentioned policies? Fourth, is nudging (libertarian paternalism) really consistent with liberty and freedom, at least as recognized in the U.S.?

1. Is utility really higher?

The most crucial assumption that Thaler and other nudgers make is that favoring the short-term over the long-term is a mistake. That, for example, spending today instead of saving for tomorrow is irrational. Is it really?

There’s no question that other things equal, people prefer to spend money rather than save it. To use the technical term, people have a high discount rate when present valuing their future utility. In Thaler’s view, this discount rate is far too high (“hyperbolic” in his words). Hence, utility today is valued too high, and the value of utility tomorrow (or, say, 30 years from now in retirement) is too low.

Let’s start with something easy. It may indeed be the case that most people do not understand how much money they will need when retired and hence how much they need to save for retirement. They may not understand the concept of compound interest. They may not understand the financial markets at all. Thaler’s view is that stupidity equals irrationality. Said differently, anyone who lacks the necessary education or knowledge or mathematical ability to make the same decision that would be made by a highly educated PhD economist should be considered an irrational being. As you know by now, I do not share this view. A decision should only be considered irrational if it is made knowing that it won’t be in your best interest. Not not knowing.

Second, let’s consider what Thaler would consider an irrationality “no-brainer.” If I don’t contribute to my 401k retirement plan, for example, I lose the tax deduction that the federal government (in the U.S.) grants me. In Thaler’s view, this is money lost and why would any rational human ever choose to lose money? But it’s not that simple. If I put money into a 401k, there are significant limitations and penalties if I want to use the money before I retire. The money is not free for me to spend as it would be if I put the same money in a savings account, or a normal (non-retirement) investment account. So yes, I lose the tax deduction but I retain access to my money. There is a trade-off. It is not necessarily irrational to give up the tax benefit in order to keep my own savings accessible.
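A back-of-the-envelope sketch of that trade-off, where the tax and penalty rates are hypothetical placeholders rather than actual U.S. figures, and investment growth and the deferral of 401k taxes are ignored for simplicity:

```python
income = 1_000        # pre-tax dollars available to set aside
tax_rate = 0.25       # assumed marginal income tax rate (hypothetical)
penalty = 0.10        # assumed early-withdrawal penalty on 401k funds

# 401k: the full $1,000 goes in pre-tax, but withdrawing it early
# costs both the income tax and the penalty.
early_401k_cash = income * (1 - tax_rate - penalty)  # about $650

# Savings account: only $750 goes in after tax, but every dollar
# stays accessible with no strings attached.
savings_cash = income * (1 - tax_rate)               # $750
```

Under these assumptions, a person who expects to need the money before retirement walks away with less from the “no-brainer” 401k, so declining the tax break need not be irrational.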

The next question to ponder is why is it irrational to value the certainty of consumption now a lot more than the uncertainty of consumption some decades down the road? The answer of course is, that it’s not. Will I be alive in 30 years? Don’t know. Will my social security checks be sufficient to meet my financial needs? Maybe. Will I even care about status when I’m old the way I care about status now (remember that most consumption is really for the purpose of increasing our social status or self worth)? No idea.

Now we must get a bit philosophical. Take you, dear reader. I’m going to take 1% of your income and force you to save it for retirement. You can consume less now and you might be able to consume more later, much later. Is the present value of your utility higher now? Well, is it? Are you better off? I have no idea. Maybe it is. Maybe it isn’t. I don’t know. And that’s the key point. Neither does Richard Thaler.

Thaler says we should save, not consume (incidentally, this is the exact opposite behavior encouraged by the Keynesian policies espoused by nearly all mainstream economists). When we reach retirement, is it okay to consume then? Why? Should we save even longer? Just like with breakfast, is it ever okay to eat the chocolate doughnut? Do I ever get to enjoy my savings? Should I wait until I’m too old and feeble? What is the point of wealth if not to spend it? How do you know that spending later is better for me than spending now?

The same arguments we can make about retirement savings we can make about other areas for which nudging has been advocated. Take healthy living, for instance. Thaler would argue that individuals should be nudged to live healthier lives. But how does he really know that a person’s utility is indeed higher giving up soda or junk food or even cigarettes just to potentially live a little longer? Why would anyone assume that maximizing life expectancy is the equivalent of maximizing utility? Clearly evidence points away from this. All humans engage in behavior that reduces life expectancy in exchange for near-term utility. Some just go further than others. Where do you draw the line?

Recall also that addictive behavior, like our cashews, is neither rational nor irrational because it does not represent a decision. As much as I personally find smoking to be abhorrent (and addictive) behavior, I do not recognize it as irrational behavior.

Lastly, I want to address the point that Thaler makes that a fully rational individual should be indifferent between opt-in and opt-out. His view is that either way, an individual can participate or not participate. Therefore, from a utility standpoint, they represent identical choices. I find this argument unpersuasive. First, many people might not know they have the ability to opt out. To Thaler this, in and of itself, is stupid and irrational. To me, only stupid. Second, many will feel pressure not to opt out since big brother (either government or their employer) has made participation the default option. Peer (or big-brother) pressure affects status and self worth, and hence utility, so even though opt-in and opt-out both allow participation or non-participation, their effects on utility are not necessarily equivalent.

2. The slippery slope

There is no question that we humans make choices that favor the short-term over the long-term. If government views this as bad, how far should government go to correct this behavior? Let’s return to retirement savings. Perhaps by default, 5% of my salary should be saved for retirement. Maybe 6% would be better. Or 7%? 10% of my income? Maybe 20%? Where does government draw the line? This is the slippery slope problem. Once you start down the nudging path, where do you stop?

How about healthy living? Clearly many people eat too much dessert and drink too much alcohol. We get fat, we get diabetes, we get liver problems, we don’t live as long as we might have otherwise. Maybe government needs to nudge. Perhaps when I go to a restaurant, it should be illegal for the restaurant to present to me a wine list or dessert menu unless I specifically ask for one. Perhaps there should always be a default order: green salad and grilled chicken. That’s what I get, unless I specify otherwise (maybe in writing, to make it even harder) for the steak and fries.

Maybe grocery stores should be mandated to put unhealthy foods on high, out-of-reach shelves. Maybe they should be in a separate section of the store, a section to which I need to (in writing again!) ask for entrance. Perhaps I should be automatically enrolled in a health club membership. Maybe a personal trainer should automatically stop by my house every day to encourage me to exercise. Maybe they should even have a key to my house so they can get me out of bed in the morning to exercise.

Frankly speaking, these are not unreasonable debates. But the key point I am trying to make here is that if you are going to advocate nudging, how do you decide where to nudge, and how do you decide how much to nudge?

3. Nudging and special interests

Now I am going to talk about an issue that affects all government intervention, not just nudging. That issue is special interests. For every government action, some entities are helped and some are hurt. There are always unintended consequences, and very often (perhaps even 100% of the time), those unintended consequences ultimately dwarf the intended ones. Said differently, when government gets involved, the cure is often (usually) worse than the disease.

Let’s return to retirement savings. If retirement funds increase, who benefits? Where is my 401k money going? To a money manager. To Wall Street. To financial services firms. To the stock market. Nudging retirement accounts has the effect of subsidizing Wall Street and financial markets. Is that really a good thing? Might it not lead to more power for Wall Street and the financial markets? Might it not lead to a greater likelihood of Wall Street bailouts down the road, since government has really made the decision to put my money into Wall Street? Now they can’t let it decline, can they?

Might now the financial industry lobby for even more nudging of retirement savings since these firms benefit? As I wrote earlier, if 5% savings is good, why not 6% or 10% or 20%? And of course, in all of this some industries have to lose. Perhaps traditional local and community savings banks where I would have otherwise put my money to save. Perhaps retail stores or restaurants where I would have otherwise spent my money.

Let’s say government wants to nudge towards healthier eating. Encourage certain foods, discourage others. Some companies gain, others lose. But which foods are even the healthy ones, which ones the unhealthy ones? Frankly, scientists have no idea. Eggs used to be good for us, then they were bad for us, now they are good for us again. Butter is bad, margarine is good. Now margarine is bad, butter is good. Fat kills, so eat carbs. Now, carbs kill so eat fat. And it’s not just food. The entire healthcare system suffers from such uncertainty. So why should government take sides, unless the evidence is absolutely overwhelming (as it is with smoking)?

The real problem is that government involvement is ripe for decisions encouraged by special interests. Big companies with big lobbying budgets at the expense of small businesses without. These special interests almost always trump the best interests of the people. And that assumes that government officials and politicians even have the best interests of the people at heart, something of which I am skeptical.

4. The oxymoron

As I mentioned above, Thaler co-authored a paper called “Libertarian Paternalism is Not an Oxymoron.” I am by no means the first to argue that this title is emphatically wrong. Thaler clearly does not understand what the term libertarianism truly means. The essence of libertarianism is not that I will do something to you unless you say no. The essence of libertarianism is that I will not do something to you unless you want it done.

Allow me some latitude to solidify this argument. Consider the issue of sexual consent. I will have sex with you unless you say no. Or the alternative: I will not have sex with you unless you say yes. At least in the U.S., both societal norms and the legal system have moved towards the latter statement. That is, sex requires affirmative consent. When it comes to the violation of our bodies, most of us clearly seem to prefer it this way. However, Thaler’s idea of libertarian paternalism espouses the former (consent is assumed absent a “no”). Just some food for thought.

Before moving on from nudging, let me say three final things. While I personally would not often advocate nudging by government, there are arguments that can be made in its favor. Government is (for better or worse) collectively the largest health insurer in the U.S. (through Medicare, Medicaid, public employees, veterans, etc.). It is therefore reasonable to argue that nudges in favor of healthy living, and hence lower medical expenditures, are warranted, given the government’s economic stake in our health. Similarly, it is not unreasonable to argue for nudging with decisions that affect children, since the decision-making processes of children’s brains are not yet fully developed. But what I do ask of those who, like Thaler, advocate for nudging is that they not base their arguments on human irrationality. For that is a fallacy.

Secondly, I am in agreement with the behavioral economists that yes, most people make lots of mistakes. They certainly do make decisions that wind up being not in their best interests (though they do not realize this at the time of the decision and thus they are still rational decisions). For the most part, we need to let people make mistakes, not have government correct them. People learn from making mistakes, and that’s how society improves. Obviously there are limitations here. But I would argue government should get involved not simply when the benefits are greater than the costs, but only when the benefits are an order of magnitude greater than the costs. The bar must be set higher. The special interests, the unintended consequences, the inefficiencies of government involvement are just too great in too many circumstances. The cure must never be worse than the disease.

Lastly, I point out that to the extent a case for government involvement in markets or personal lives is overwhelming, government has four different ways in which to act. First it should educate. Only if that education fails should it incentivize, for example through sin taxes (to discourage undesirable behavior) or tax credits (to encourage desirable behavior). Only if incentives fail should it nudge. Notice that nudging is the third option, not the first. And finally, only if nudging fails should government force or coerce behavior.

3. Social preferences

“Thaler’s theoretical and experimental research on fairness has been influential. He showed how consumers’ fairness concerns may stop firms from raising prices in periods of high demand, but not in times of rising costs. Thaler and his colleagues devised the dictator game, an experimental tool that has been used in numerous studies to measure attitudes to fairness in different groups of people around the world.”

The third area of study cited by the Nobel Committee is what they refer to as “social preferences,” which mostly means fairness. That is, people don’t always act selfishly, as naive economists, or at least their models, think they should. Another example of irrational behavior. Not so. Let’s talk about evolution for a moment.

Evolution works at the level of genes and our genes have one primary purpose – to replicate themselves. But genes can’t reproduce on their own. They are dependent on their host (e.g. us humans) to reproduce. Fortunately they have quite an influence on their host, as they provide their carrier with its basic programming. In other words, in order to maximize the chance that a gene reproduces, it programs its host to seek food and avoid danger and attract a mate, among many other things. The host is rewarded. It “feels good.”

As we stated at the very top of this article, it is this genetic programming that influences what constitutes our “utility.” Eating and being healthy and having sex clearly (other things equal) contributes to utility. And like many other animal species, we humans have been programmed to be social. That is, it is a lot easier to obtain food and stay healthy and find that mate if we interact with other members of our species. Long story short, while our genes might be totally selfish (they “care” only about reproducing), us humans cannot be. In order to survive and reproduce and raise our children, and have our children reproduce, we must interact with other humans. And very often engaging in social activities requires making decisions that appear to economists to not be in our best interests. But to someone who actually understands human behavior, these decisions are perfectly rational.

We chase wealth not for wealth itself but to attract a mate, to be “alpha-male” (or “alpha-female”), to feel strong and powerful and superior. We buy fancy cars and live in big houses and wear big jewels to signal our superiority to others the same way a gorilla pounds its chest or a peacock flaunts its feathers. At the end of the day, it is not net worth that contributes to our utility, as economists believe, but self-worth. Net worth is just a component of self-worth.

Of course, we cannot be solely selfish. Or at least most of us cannot. Society wouldn’t survive and most of us (and our genes) wouldn’t reproduce. We cannot chase wealth and power at all costs. If I steal from the grocer, true I may get a free meal. But if I continue to do so, the grocer may wise up. Forbid me from entering his store, one way or another. Now where will my food come from? Worse off will I likely be.

Selflessness also matters. We treat people kindly so they will return the favor. This is the oldest form of insurance there is. And our genes reward us for this. It is in their interest and ours. We feel good about it, and we are rational to do so. We punish the jerks among us (or at least try to) so that they will learn and correct their behavior and if not, leave our community altogether. And again, our genes reward us for doing this. It is in their interest and ours. And we feel good about it and we are rational to do so.

Let’s now take a look at some of Thaler’s research on human social behavior and fairness, beginning with price gouging. We will see how humans behave in a manner inconsistent with economics, but perfectly consistent with what evolution has made our genes, and with how our genes have programmed us.

In the 1980s, Thaler performed a study that showed that the majority (82%) of people found it “unfair” for a hardware store that sells snow shovels to raise the price of a shovel from $15 to $20 the morning after a large snowstorm. Let’s examine two things. First, is it rational behavior for the hardware store owner to raise prices? Second, is it rational for snow shovel consumers to find this behavior “unfair?”

Economics 101 teaches us that prices should rise (other things equal) in circumstances of rising demand. The assumption here is that demand for snow shovels increases after a bad snowstorm. Hence, a simplified understanding of Econ 101 implies that the hardware store has justification for raising the price of shovels. But, as we’ll discuss next, consumers may very well be turned off by this “price gouging” behavior. Their distaste may lead them to avoid shopping at this store for any goods in the future. They may be so incensed that they arrange a boycott of the store. The point being that, from the shop owner’s standpoint, to raise prices is really a question of short-term gain versus potential long-term loss. A business that depends on steady, long-term relationships is probably best served (and rational) not to raise prices. A business that caters to one-time customers (say, tourists), may benefit from price gouging. There’s no right or wrong answer here, except that either decision can be considered “rational” (and long-term profit maximizing) depending on the circumstances. Of course, a socially-minded business owner (and much less likely a big public corporation) may decide that fairness trumps profits regardless. As we’ll see shortly, this too can be considered rational behavior (contrary to the beliefs of economists).

On to the more interesting question. Are consumers of snow shovels rational for finding this price gouging behavior unfair? I can certainly sympathize, for there is a feeling of being cheated. Why should the shop-owner benefit from the dumb luck of a random big snowstorm? Worse, why should the shop-owner benefit extra from my misfortune of having to expend time and money clearing my driveway? As we’ve described before, here’s an example where self worth trumps net worth. If I overpay for the shovel, I’ve been taken advantage of by a fellow human being. Put simply, I feel like a sucker. And I will continue to feel like a sucker every time I walk into that store from now until the end of time. Better to pay $20 to a neighborhood kid to shovel than to give an extra $5 to that greedy hardware store. Now every time I walk into that store, I’ll know, even if they don’t, that I didn’t let them cheat me! I’m happier, I feel better, my utility is higher, and therefore I’ve made an entirely rational decision.

Around this same time as his price gouging study, Thaler and his collaborators invented an experiment that has become known as the dictator game. In this study, students were asked to divide $20 between themselves and a random and anonymous fellow student. They had two choices:

1) Keep $18 and give away $2, or

2) Split the $20 evenly, keeping $10 and giving away $10

Clearly, the selfish, and (to an economist) rational decision is to keep $18. However, as you may have predicted, it turns out that the majority of students (76%) decided to split the $20 evenly, demonstrating that for many people, social considerations are more important than money. Why might this be?

First of all, I would suggest that the study’s assumption of anonymity is a faulty one. As a participant, might I not be questioning that assumption? What if the recipient somehow finds out that it was me, the greedy one, that only gave them $2? What if the teaching assistant finds out, or the professor? Do I want my T.A. or my professor thinking I am a jerk? If he or she thinks poorly of me, might that not affect my grade in this class? As I have mentioned before, this kind of artificiality, ambiguity, unrealism or bias, is why I find many of the conclusions of behavioral economics to be questionable.

But for the sake of discussion, let’s now assume that anonymity is not an issue. Let us assume that there is absolutely, positively, no way for anyone to know who gave $10 and who gave $2 other than the individual who made the decision. Why might it still be rational to be “fair” and not “greedy?”

Here we return to the conclusion that self-worth trumps net-worth. My genes have programmed me to feel good when I treat someone else fairly and to feel guilty when I treat someone else unfairly, even if my actions aren’t known to others. As the study showed, most of the participants gave up $8 to feel good about themselves, and/or to not feel bad about themselves. As we’ve stated many times before, this behavior is little different than spending more money on luxury items to feel good about myself, giving to charity, or even holding a door open for a stranger (which expends some small amount of energy).

Thaler and his colleagues decided to test another aspect of human nature. They extended the dictator game to include a second round with a third player. The third player had the following two choices:

1) Receive $5, give $5 to a (fair) student who had split the original $20 evenly in round 1 and $0 to a (greedy) student who had kept $18 in round 1, or

2) Receive $6, give $0 to a (fair) student who had split evenly in round 1 and $6 to a (greedy) student who had kept $18 in round 1

Just like in the first instance, the “economically rational,” wealth maximizing decision is choice #2, to take $6 over $5. But the majority (74%) of students chose #1, that is, to give up the $1 difference in order to reward Round 1 players who were “fair” and punish Round 1 players who were “greedy.” Are we irrational beings because we are willing to sacrifice $1 to punish a jerk?

I don’t think so. Our body’s social programming has taught us that punishing a jerk is a type of investment. We are training (or at least attempting to train) the jerk to not be a jerk next time. Having fewer jerks in a community is a good thing. Perhaps the punishment now, in an economics class, will prevent the jerk from becoming another Bernie Madoff someday and screwing someone far worse later on. Certainly, these are long odds, but might it be worth $1 now to potentially save myself from getting cheated out of millions? Why not? Moreover, I get satisfaction (we call it schadenfreude) from the punishment. I feel superior, a better person. I have a higher sense of self-worth, and thus, greater utility.
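For concreteness, the payoffs in the two rounds above can be tabulated in a few lines of Python. The structure and labels are mine; the dollar amounts and percentages come from the description of the experiments above.

```python
# Round 1: the dictator divides $20 between self and an anonymous other.
round1 = {
    "greedy": {"self": 18, "other": 2},   # keep $18, give $2
    "fair":   {"self": 10, "other": 10},  # split evenly
}

# Round 2: a third player chooses between two allocations.
round2 = {
    "reward_fair":   {"self": 5, "fair_player": 5, "greedy_player": 0},
    "reward_greedy": {"self": 6, "fair_player": 0, "greedy_player": 6},
}

# The wealth-maximizing choice in each round...
wealth_max_r1 = max(round1, key=lambda c: round1[c]["self"])   # "greedy"
wealth_max_r2 = max(round2, key=lambda c: round2[c]["self"])   # "reward_greedy"

# ...versus what the majority of students actually chose (76% and 74%).
observed_r1, observed_r2 = "fair", "reward_fair"

# The implied dollar "price" of fairness the majority willingly paid.
cost_of_fairness_r1 = round1[wealth_max_r1]["self"] - round1[observed_r1]["self"]  # $8
cost_of_fairness_r2 = round2[wealth_max_r2]["self"] - round2[observed_r2]["self"]  # $1
print(cost_of_fairness_r1, cost_of_fairness_r2)  # 8 1
```

The point the table makes plain: the observed majority choice diverges from the wealth-maximizing one by a small, measurable amount, which is exactly what the participants paid for their sense of self-worth.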

Lastly on the subject of social preferences is a question that is frequently raised by economists who study decision making. Why do people voluntarily leave tips at restaurants they never intend to visit again?

Start with the fact that tips are not really voluntary. Gratuities may have started out that way, as a way to reward good service, but at least in the U.S., they have become a meaningful (sometimes majority) component of the pay of waiters and restaurant staff. That is, they are expected. But of course, there is no legal obligation to provide one, only a moral obligation. Not doing so violates a social contract.

As we’ve discussed, our genes reward us for maintaining the social contract and punish us for violating the social contract. For some, doing the right thing for its own sake feels good. I feel good treating people nicely. Others feel good knowing a stranger, as in the waiter, thinks highly of them. Yet others seek to avoid the guilty feelings that contribute to a sense of low status/low self-worth. I don’t want the waiter to think of me, until the end of time, as a cheapskate. Every time I go into a restaurant I may be reminded of my cheapness. What if I run into the waiter again someday, even if I don’t intend to return to that restaurant? Do I really want to take that risk? I will never get that out of my mind… For most people, all three of these social components of utility contribute to decision making. That’s why we tip.

4. Behavioral finance

“Thaler was one of the founders of the field of behavioural finance, which studies how cognitive limitations influence financial markets.”

The proliferation of computers in the 1980s allowed economists to become number crunchers. As a wise person once said, best to fish where the fish are. Best to number crunch where there are lots of numbers. And where are there lots of numbers? Financial markets. Specifically prices and trading data of common stocks and other liquid financial assets.

As economists turned their attention to financial market data, analyzing decades of stock market data with simple statistical tools, they noticed something. They noticed anomalies. Up until then, the prevailing assumption held by most economists and finance professors was that markets (at least highly liquid ones like the stock market) were perfectly efficient. That is to say, all available information is immediately priced into a security. The only way to outperform the overall market (or a market index) is to 1) take more risk, 2) have non-public information or 3) get lucky.

These anomalies seemed to show that by analyzing certain historical data, investors could in fact outperform the overall market without taking on extra risk. To most, this observation clearly contradicted the view that markets are perfectly efficient. It also led to a dramatic increase in the study of financial markets by academics (analyzing data on a computer in your office is a lot easier than running experiments on college students in your classroom or lab!) and spawned entire new asset classes such as quantitative hedge funds, smart beta funds, ETFs and factor investing.

Thaler co-authored one of the first prominent studies of market anomalies in 1985. Thaler compared stocks that had dropped in value over the prior few years (“losers”) with those that had increased in value over that same time period (“winners”). He found that the loser stocks subsequently outperformed the winner stocks. In other words, investors could generate positive risk-adjusted returns (“alpha”) by buying a portfolio of losers and selling short a portfolio of winners. Similarly, an investor could out-perform the overall market by simply buying a portfolio of losers.
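The mechanics of such a contrarian sort can be sketched in a few lines. The tickers and trailing returns below are invented for illustration; the actual study used several years of real stock market data.

```python
# Rank stocks on trailing multi-year returns, buy the "losers," short the
# "winners" -- a toy version of the overreaction portfolio described above.
# All tickers and return figures are hypothetical.

past_3yr_returns = {
    "A": -0.45, "B": -0.30, "C": -0.10,   # beaten-down stocks
    "D":  0.15, "E":  0.40, "F":  0.80,   # high flyers
}

ranked = sorted(past_3yr_returns, key=past_3yr_returns.get)  # worst to best
n = len(ranked) // 3                       # take the bottom and top thirds
losers, winners = ranked[:n], ranked[-n:]
print(losers, winners)  # ['A', 'B'] ['E', 'F']

# The long-short portfolio: equal-weight long losers, short winners.
# Weights sum to zero, so the position is (nominally) market-neutral.
weights = {**{s: +1 / n for s in losers}, **{s: -1 / n for s in winners}}
```

The claim of the anomaly is simply that, historically, a portfolio weighted this way earned positive risk-adjusted returns going forward.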

Thaler offered up a behavioral explanation for the anomaly he uncovered. Based on earlier published psychological research, he posited that investors must “overreact” to information. In other words, after a company’s stock has declined because of poor financial performance (or some other bad news), investors hold on too long to the view that the stock is a bad one and that the poor performance will continue. Consequently they are too slow in reassessing whether the news and/or the company’s financial performance has improved (or regressed to the mean). Similarly, investors overreact to good news and good financial performance and are too slow to temper their positive views of a stock.

Over the years many academic papers on financial markets have been published and many such anomalies have been uncovered. However, most of these anomalies do not persist. That is, the out-performance tends to disappear after the publication, either because it was spurious to begin with (the product of data-mining or data-snooping) or because the strategy is traded upon and the profits get “arbitraged out” by investors once the anomaly becomes widely known. A prominent example is the so-called “January effect,” whereby stocks were thought to increase in price during the month of January after having fallen in December. This was believed to occur due to investors selling stocks in December in order to capture the tax benefits of capital losses (to offset capital gains). It is pretty much a given that the January effect no longer exists, if it ever even did.

There are, however, market anomalies that have seemed to persist even though they have been widely known for decades. The two most important are the value effect and the momentum effect. The value effect is an extension (and essentially a renaming) of Thaler’s discovery of overreaction discussed above. Recall that Thaler uncovered that stocks that had declined for the prior few years tended to outperform the market and stocks that had increased over that time period tended to underperform. Later research concluded that stocks that are cheap by some metric such as Price-to-Book Value or Price-to-Earnings (called “value” stocks) tend to outperform stocks that are expensive (usually referred to as “growth” or “glamour” stocks).

The second prominent anomaly still thought to be in existence is the momentum effect. Whereas Thaler looked at three years of historical data to determine whether a stock should be considered a winner or a loser, other researchers looked at shorter time frames, say 6-12 months. What they found was quite the opposite of Thaler’s conclusions. Stocks that have outperformed the overall market (i.e. increased) over the prior 6-12 months tend to continue to outperform (increase) over the next 6-12 months. Similarly, stocks that have underperformed the market (i.e. decreased) over the prior 6-12 months tend to continue to underperform.

Researchers labeled this the “momentum effect” and concluded that investors could do very well by buying a basket of short-term winners and shorting a basket of short-term losers. Similar to Thaler, researchers posited a behavioral explanation for their anomaly. Whereas Thaler said that investors overreact to longer-term good and bad news, momentum researchers argued that investors underreact to shorter-term good and bad news. That is, it takes time for good news and good performance (and bad news and bad performance) to become fully appreciated by investors and hence, fully priced in to stocks.

Note that while the value effect and the momentum effect appear at first glance to be contradictory, this is not so. They are measured over different time periods. In fact, many quantitative hedge funds (and later ETFs and other “smart beta” products) have used the combination of these two “factors” as the basis of their investing strategies. Buy a basket of value (long-term cheap) stocks that have exhibited strong momentum (short-term gains) and short a basket of growth (long-term expensive) stocks that have exhibited weak momentum (short-term losses).
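A toy version of such a combined screen might look like the sketch below. The metrics, tickers, and numbers are all hypothetical, and real strategies use far richer data and risk controls; the point is only how the two factors are combined.

```python
# Combine a value rank (cheapness) with a momentum rank (recent strength),
# then go long the best combined score and short the worst.
# All figures below are made up for illustration.

stocks = {
    #        price-to-book   trailing 12-month return
    "X": {"pb": 0.8, "mom":  0.25},  # cheap AND rising: the long profile
    "Y": {"pb": 0.9, "mom": -0.20},  # cheap but falling
    "Z": {"pb": 6.0, "mom":  0.30},  # expensive but rising
    "W": {"pb": 7.0, "mom": -0.15},  # expensive AND falling: the short profile
}

def rank(metric, reverse=False):
    """Return each stock's rank (0 = best) on the given metric."""
    order = sorted(stocks, key=lambda s: stocks[s][metric], reverse=reverse)
    return {s: i for i, s in enumerate(order)}

value_rank = rank("pb")                     # lower P/B is better (cheaper)
momentum_rank = rank("mom", reverse=True)   # higher trailing return is better

combined = {s: value_rank[s] + momentum_rank[s] for s in stocks}
long_candidate = min(combined, key=combined.get)    # best combined score
short_candidate = max(combined, key=combined.get)   # worst combined score
print(long_candidate, short_candidate)  # X W
```

Note the design choice: summing ranks (rather than raw metrics) keeps the two factors on a common scale so neither dominates the screen.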

Why the long digression into quantitative investing factors and hedge fund strategies? As I stated, these two strategies or factors, value and momentum, along with a number of other less prominent strategies or factors seem to prove that market anomalies do exist. Here I agree.

The question then, and one we’ve asked many times in this article, is: does the existence of such market anomalies prove that investors are necessarily irrational in their behavior, or, as the Nobel Prize Committee stated, are they evidence of “cognitive limitations?” Most say yes. I say no.

There are at least five possible explanations for market anomalies. The first is that the anomalies are not real. They are spurious, the result of data mining or data snooping, the product of overeager PhDs desperate for an article to publish or an interview at a quant fund. No doubt many of the anomalies (factors) found over recent years fit this type. But, for factors such as value and momentum it is hard to make this case. These two anomalies have persisted for decades after discovery and have been confirmed in multiple asset classes (not only stocks) and in the financial markets of many different countries.

The second possible explanation is risk. That, for example, value stocks outperform growth stocks because they are inherently riskier (perhaps a greater risk of distress or bankruptcy). But, the argument goes, this implicit risk does not show up in traditional risk metrics such as volatility or Beta. In other words, yes, value outperforms growth, but not on a risk-adjusted basis, if risk were measured properly. Some economists (especially those inclined towards efficient markets) have indeed made this argument in response to the research of Thaler and others.

I am sympathetic to this argument, though actually even more so for the case of the momentum anomaly than for value. I would argue that stocks with strong momentum are far riskier than traditional risk metrics imply. Said differently, what is considered to be “risk” is vastly underpriced. The primary reason for this is the implicit backstop of financial markets by central banks (something we will return to shortly when we discuss financial bubbles). Central banks over many decades have engaged in bailouts of financial markets repeatedly, and ever more strongly. Time and time again they have prevented prices from falling. Intuitively, the stocks that have risen the most (those with the strongest momentum) are likely to decline the most absent the backstop of central banks. Someday, when a financial crisis comes that central banks are unable (or unwilling, but much more likely unable) to curtail, the momentum anomaly will disappear. It will be shown that these momentum stocks were riskier all along.

The third explanation for market anomalies is one that Thaler has also extensively researched, and one for which he is praised by the Nobel Committee. This is something called the “limits of arbitrage.” One set of anomalies that researchers have unearthed occurs when two securities with the same underlying assets have prices that differ. This violates what is known as the “law of one price.”

One of the most famous examples of such an anomaly is one that Thaler discusses at length in his book, Misbehaving: the 3Com/Palm spinoff. Very briefly, 3Com was a tech company during the first dot-com bubble. In 2000, 3Com decided it would spin off a subsidiary, Palm, by initially selling a fraction of its stake in Palm to the public (about 5%). Then, months later, each 3Com shareholder would receive 1.5 shares of Palm stock so that 3Com would divest all of Palm. During those months between the IPO of the 5% of Palm shares and when 3Com divested the rest, the market value of Palm was substantially higher than the market value of 3Com. In other words, the stock market was valuing 3Com’s business excluding Palm at a substantially negative value! Given that the lowest a stock price can be is $0, this made no sense.
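The stub-value arithmetic behind this anomaly is worth making concrete. The prices below are hypothetical (the actual 2000 prices differed), but the logic is the one just described.

```python
# Implied "stub" value of 3Com excluding its Palm stake.
# Prices here are hypothetical, chosen only to illustrate the arithmetic.

palm_price = 95.0     # market price of one Palm share (hypothetical)
com_price = 82.0      # market price of one 3Com share (hypothetical)
palm_per_3com = 1.5   # Palm shares each 3Com shareholder was due to receive

# Each 3Com share contains 1.5 Palm shares plus the rest of 3Com's business.
# So the market's implied value of "the rest" (the stub) is:
stub_value = com_price - palm_per_3com * palm_price
print(stub_value)  # -60.5: the rest of 3Com priced below zero
```

Since a stock's price cannot fall below $0, a negative stub is exactly the law-of-one-price violation the text describes.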

Thaler argues that, in situations such as 3Com/Palm, two things are going on. One, irrational (mostly individual) investors are driving up the price of Palm to irrational levels. Two, something prevents the smart (mostly institutional) money from correcting the mispricing. In the case of 3Com/Palm (and in many similar cases), even though everyone knew about the mispricing (it was widely reported on in the mainstream media), it was virtually impossible to short Palm stock given how few shares were outstanding. Without being able to short the stock, investors could not arbitrage the difference in value between 3Com stock and Palm stock and therefore could not “fix” the mispricing. This is known as a “limit to arbitrage.”

I want to hold off for a moment on the discussion of whether Palm stock was so highly valued because of “irrational behavior” or some other reason. But I do want to make the point that the violation of “the law of one price” is not in and of itself evidence of irrational behavior precisely because of the limits of arbitrage. That is, investors are trying to arbitrage out the value difference. They are trying to fix the mispricing. They are trying to enforce market efficiency. They are quite simply unable to do so because of structural limitations to markets (i.e. the inability to short a stock). We will come back to this issue again when we discuss financial bubbles.

The fourth possible reason for why market anomalies exist is indeed the one favored by behavioral economists like Thaler: irrational investor behavior. Economists tend to assume that most stock market investors are what they call “noise traders.” As Thaler has pointed out, another (less polite) name for them used by some economists is “idiots.” The idea here is that most individual investors do not buy and sell stocks based on fundamental data or rational analysis, but on emotions or animal spirits. I freely admit that being an idiot might qualify as a “cognitive limitation.”

It should not surprise you, however, that I do not concede that such investing behavior is necessarily irrational, for three reasons. First, as long as I believe that I can sell for higher than I bought, I am making a decision that, to me, is rational. I may not have done much, or even any, fundamental analysis. I may not even be capable of doing such analysis. I may not even know what fundamental analysis is. It doesn’t matter. I’m still making a trading decision that I believe to be in my best interests. And recall from our earlier discussion of the momentum effect: momentum usually works. So losing myself to “animal spirits” by buying a high-flying stock, regardless of so-called fundamentals or valuation, has very often been a profitable enterprise. We’ll return to this point.

The second reason why such investor behavior is not necessarily irrational is that maximizing my utility is not necessarily the same thing as maximizing my wealth, or the value of my stock portfolio. Other factors that make up utility must be considered. I won’t say more about this yet, but we will also come back to this idea shortly.

To understand the third reason why economists are confused about irrational investor behavior requires busting one of the most basic myths of all of economics and finance: there is no such thing as fundamental value. All value is relative. The value of a stock, or any other liquid asset, is what someone else is willing to pay for it. Even what is known as fundamental analysis (a discounted cash flow, for instance) requires implicit and explicit estimates of other market (or relative) variables.
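To illustrate, here is a minimal discounted cash flow sketch. The cash flows, discount rate and terminal growth rate are all assumed for illustration; notice that the discount rate is itself a market-derived variable, and small changes to it swing the “fundamental” value enormously:

```python
def dcf_value(cash_flows, discount_rate, terminal_growth):
    """Present value of explicit cash flows plus a Gordon-growth terminal value."""
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(cash_flows, start=1))
    terminal = cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    return pv + terminal / (1 + discount_rate) ** len(cash_flows)

flows = [100, 110, 120]  # hypothetical annual cash flows
value_at_8pct = dcf_value(flows, 0.08, 0.02)   # roughly $1,900
value_at_12pct = dcf_value(flows, 0.12, 0.02)  # roughly $1,130
# A four-point move in the discount rate cuts the "fundamental" value ~40%.
```

The discount rate is typically built from market interest rates and an equity risk premium, so even the most rigorous DCF is ultimately anchored to relative, market-determined inputs.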

I am going to give you what I believe to be a better answer for why market anomalies exist. But before I do that, one comment on the mainstream view of “cognitive limitations” or investor irrationality as an explanation for such anomalies. As I’ve said, prominent anomalies such as the value and momentum effects are believed to have persisted for a long while now. However, they do not always work. That is, there are long (multi-year) periods where one or both of these anomalies do not work. This seems to me to be inconsistent with the thesis of cognitive limitations. Why would investors be cognitively limited in only some years and not others? Our brains work in some years, not others? We do analysis in some years, not others? We are more emotional in some years, not others? I don’t get it.

Now, on to the fifth and final rationale for market anomalies, and as I stated just above, the one I find most compelling. I do believe that anomalies are indeed due in good part to investor behavior, consistent with mainstream theorists. I just don’t believe that this behavior should be viewed as irrational. This is of course consistent with the main theme of this entire article. Most of the studies published by Thaler and other behavioral economists are essentially correct. They rightly show that most decisions made by individuals are based on factors other than purely what will maximize wealth. But for the umpteenth time, this is not irrational.

As we’ve alluded to a number of times, we can segregate investors into two broad groups, institutional investors and retail investors. Institutional investors are generally considered the “smart money” and retail investors, the “dumb money” or the “noise traders” or the “idiots.” In order to explain why investor behavior should be considered rational, I need to talk briefly about the motivations of each of the two investor types.

When we talk about institutional investors, we refer to entities that manage money for some group of investors. These include mutual funds, hedge funds, insurance companies, pension plans, endowments, and others. But at the end of the day, there is always a person (or persons) responsible for making the day-to-day decisions of what stocks (or other assets) to buy and what to sell. These are the portfolio managers and the analysts. Even in the case of quantitative funds, there have to be programmers who code the algorithms. The point here, and it’s a crucial one, is that “institutions” don’t make decisions. People make decisions.

What motivates people? They want to maximize their (present valued) utility of course. So what motivates portfolio managers? First and foremost, they probably want to keep their job. Second, they probably want to get paid a lot of money. Third, they probably want to feel smart (or not feel stupid) compared to their peers. Etc. Obviously, having really high stock market returns is likely to help you keep your job, get paid well and make you look smart. But, going after high returns generally involves taking a lot of risk. This is not the strategy employed by most professional money managers.

Instead, most portfolio managers act to minimize the risk of losing their jobs. And they minimize the risk of having their salaries cut by avoiding the kind of poor performance that gives reason for investors to pull their money out and have their assets under management (AUMs) decline. The end result is that the vast majority of institutional managers aim to hit a benchmark rather than maximize gains, and they become what we call “closet indexers” where their holdings mimic an index such as the S&P 500.

In other words, the primary goal of a portfolio manager is to do what everyone else is doing. That way, you keep your job, keep your AUMs and keep your nice salary. It’s okay to lose money as long as everyone else is also losing money. Similarly, it’s not worth taking high risk to shoot for the moon. Hence it is incredibly difficult to be a contrarian investor. You run too high a risk of losing the patience of investors, losing your AUMs, and losing your job.

This mindset of those who manage institutional money probably explains the momentum anomaly more than anything else. Because of their need to track benchmarks and to not be wrong, institutional money managers exhibit momentum investing probably even more so than “dumb” individual investors. Even professional investors want to look smart by holding a portfolio of winners, or avoid looking stupid holding a portfolio of losers.

For example, it is widely known that many mutual fund managers will buy expensive popular stocks toward the end of the fund’s fiscal quarter or year, a practice known as “window dressing.” Why buy high? So that investors, who look at the fund’s list of holdings (published quarterly or annually), will see winners and think highly of the brilliant portfolio manager, even if the fund did not participate in the price run-up of that stock. Are mutual fund investors irrational for judging funds this way? Not necessarily, since mutual funds don’t disclose all of their trades.

The vital point here is that economists assume that all market participants are trying to maximize trading gains. To do anything otherwise is irrational. But that’s not how the “smart money” works at all.

Now let’s talk about the other big class of market participants, retail investors. Retail investors are regular people, middle class or wealthy, who invest their own money in the markets. Typically they buy stocks directly in the stock market or purchase shares of mutual funds. Yes, they are generally (though not always) less sophisticated than institutional investors (“idiots” remember?). But like the portfolio managers at institutions, they buy and sell securities for reasons other than purely maximizing performance. They try to maximize utility, not simply wealth.

To many, stock market investing (really speculating) is also entertainment, not simply a way to save. Like going to Vegas, I pay for the entertainment. But unlike Vegas, where whether I win or lose is mostly based on pure chance, with investing, it is at least perceived to be based on skill and smarts (whether it really is, is another story). So, I get utility from the entertainment of picking stocks. And I also get utility from the status effect of picking winning stocks.

Let’s now try to explain the value anomaly. Recall that over time, value stocks have often outperformed high growth/high glamour stocks. In my view, this is because investors, especially retail investors, get utility from owning such stocks. This utility more than compensates them for the modest return they give up by not owning boring out-of-favor stocks. It is fun to own the stock of Disney or Apple or Google or Facebook. These are stocks I can talk about at cocktail parties and water-coolers. Anybody want to talk about the insurance company or electric utility stocks that I own?

Even the smart money exhibits this behavior. They go to parties too, and industry conferences. More fun to talk about the high-glamour winners my fund owns than the boring losers. Plus, money managers sometimes get invited to corporate events hosted by the companies whose stock they own, or analyze. Which companies are likely to have more events and better events? The highly valued popular companies, or the struggling, perhaps cash-poor unpopular ones? Finally, the glamour stocks are more likely to be part of indices such as the S&P 500 which most funds track. Stocks that do poorly tend to get kicked out.

Let’s return to the 3Com/Palm situation. We explained that because of the inability to short Palm stock, the two securities could not converge to one price. This was the idea of “limits to arbitrage.” We did not, however, discuss why the price of Palm was so high to begin with. Should it be attributed to irrational investor behavior? No. Palm was one of the most glamorous of all the glamour stocks back at the absolute height of the dot com bubble. Importantly, there was a very limited number of shares outstanding (remember that initially, 3Com sold only about 5% of Palm to the public).

In fact, a small number of shares outstanding was generally true for many of the technology stocks back then. Given that lots of people wanted to talk about owning these tech stocks at cocktail parties, there was high demand, and thus high valuations. This was a fun time. Investing in tech stocks was also a hobby. There was utility to be gained beyond just the monetary amount of the trading gains. And remember that it is not irrational to think I can buy high and sell higher. That was normal. And rational. We’ll return to this point shortly when we talk about financial bubbles.

Financial bubbles and irrationality

Before leaving the topic of behavioral finance, I want to address one final misconception believed by virtually all behavioral economists, not to mention financial journalists: that the existence of financial bubbles proves the irrationality of investors. While financial bubbles are not really a topic of research by Richard Thaler, they have been a major research topic of another behavioral economist, Robert Shiller, who shared the Nobel Prize in 2013.

There is no question that financial bubbles exist. To list a few, there was the famous Dutch tulip bubble in the 1600s, the stock market bubble in the 1920s that preceded the Great Depression, Japan’s stock market and real estate bubbles in the 1980s. More recently, the late 1990s dot com bubble, the mid 2000s real estate bubble and today’s bubble in nearly every financial asset. Most economists admit that financial bubbles do happen (they have no idea why) but they deny that bubbles can be identified until after they have burst. I find this view ludicrous. The whole world knew we were in a tech stock bubble in 1999. Many wrote about a real estate bubble in 2007. And many more hold the view that virtually all financial assets are in a central-bank fueled bubble today. I do believe, however, that the timing of when a bubble will burst is impossible to predict.

As I said above, the mainstream economics view is that the existence of financial bubbles is proof of irrational behavior on the part of investors. I beg to differ. For at least four reasons, the existence of financial bubbles is absolutely not evidence of irrationality.

1. Market participation is NOT optional

In 2007, at the peak of the credit boom that shortly thereafter became the 2008-2009 financial crisis and the Great Recession, Chuck Prince, CEO of Citigroup, famously said, “When the music stops, in terms of liquidity, things will be complicated. But as long as the music is playing, you’ve got to get up and dance.” What he meant was that as a major financial institution (one of the largest in the world), Citigroup had to take the same kind of risks, and go after the same businesses, as its competitors. If it didn’t, its revenue and profits would lag its peers and investors and Wall Street analysts would call for Chuck Prince’s head. Though widely criticized for his statement, Prince was right. Citigroup had to dance.

We have already talked about the mindset of institutional money managers. It is okay to be wrong as long as everyone else is wrong. It is not okay to under-perform one’s peers, or under-perform one’s benchmark. That is to say, if the market is going up and everyone else is taking the ride, as a portfolio manager, you need to take the ride too. You cannot sit in cash or other less risky assets. You play the momentum game or you lose your job.

Even more obvious is the fact that certain institutions are essentially legally obligated to take risks. Pension funds and insurance companies, for instance, being highly regulated, must earn investment returns high enough to meet future liabilities. But with safer assets at historically low (5,000 year lows, that is) rates of return thanks to the monetary policies of the world’s central banks, these institutions have absolutely no choice but to buy riskier assets such as equities or high yield bonds. This is even true for individual investors saving for retirement. While not legally obligated to take risk to meet return targets like pensions, individuals too cannot afford to invest safely in today’s environment. Earning 0 to 1% in savings accounts or CDs just doesn’t make the retirement math work. They too have no choice but to take more risk.

The point I am trying to make is that in nearly all cases, market participation is not optional. Even if you think valuation metrics are high, or asset prices are expensive, you still have to be invested in those assets. And there is nothing irrational about trying to keep your job, keep your salary, keep your retirement funds growing.

2. Buy high, sell higher usually works

We’ve already talked a lot about how momentum investing, buy high, sell higher, usually works. For at least the past three decades, this has generally been true for nearly all financial assets including stocks, bonds and real estate. While past performance is no guarantee of future success, as they say, that long track record is an argument that momentum investing constitutes positively rational decision making.

If I see my neighbor make a fortune flipping houses, why shouldn’t I do the same? I probably consider myself to be smarter, and perhaps more highly educated. What can my neighbor do that I can’t? Sure, the music might stop some day, but probably not tomorrow. And by the time it does, I’ll be rich too. To you and me, this might seem like irresponsible behavior, but is it irrational? I don’t think so, especially since it usually works. And recall that when deriving utility, money is mostly just a proxy for status. Why not take the risk if my neighbor is taking the risk? If he or she gets rich and I don’t, I’ll regret it. My social status, and hence my utility, will suffer.

3. Central banks always come to the rescue

As I’ve alluded to a number of times, one vital lesson that the last several decades has taught the world is that central banks always come to the rescue of the financial system and financial markets. Even more so than just low interest rates or printing money, it is this backstop and the promise of bailouts that encourages (subsidizes) risky behavior. And this is exactly the primary cause of financial bubbles. Why not take risk if there is little downside? Take the risk and make the millions (or billions). If things go bad, the government and the central bank will clean up the mess. What’s so irrational about that?

4. Contrarian limits to arbitrage

Earlier we discussed how limits to arbitrage can lead to violations of the “law of one price” and to what economists wrongly assume to constitute irrationality in financial markets. On a much larger scale, the same limits to arbitrage can prevent financial bubbles from being naturally curtailed, or stopped from forming in the first place. Recall that everyone in the world knew that the price of Palm stock was too high relative to its parent 3Com. However, nobody could effectuate the arbitrage because Palm shares were impossible to short.

Similarly, as I’ve mentioned before, plenty of participants in financial markets have recognized bubbles well before they have ultimately popped. But not knowing when exactly the bubble will pop (as I also said earlier, a forecast I believe to be impossible) prevents them from shorting the market and “correcting” the bubble. In simpler terms, it is virtually impossible to be a contrarian investor in today’s market environment (and in the market environments of the past few decades).

Being contrarian risks margin calls. I can’t hold my shorts if the market continues to go up. Being contrarian risks underperforming the market. I lose my AUMs as impatient investors pull out. I risk looking like an idiot compared to all the other brilliant managers playing the momentum game and closet indexing. Bottom line is that I may know the market is in bubble territory and will some day correct, but unless I know the timing of that correction (which I cannot), it is far too risky to bet against the market, and to “fight the Fed.” No less than 3Com/Palm, this should be considered a limit to arbitrage, and it is our fourth reason why financial bubbles are not evidence of irrational behavior.

That market participation is mandatory, that momentum investing works, that central banks subsidize risk, and that being a contrarian investor is virtually impossible: together these four reasons explain much of what causes frothy markets and financial bubbles. But as we’ve seen, all four stem from rational actors in financial markets making what to each actor are perfectly rational decisions.

In 1996, Federal Reserve chairman Alan Greenspan made his famous “irrational exuberance” speech, commenting on the seemingly high valuation of common stocks (they were to go much higher, rising until 2000). Greenspan might have been correct about the “exuberance” part but he was wrong about it being “irrational.” Market participants were acting rationally, maximizing their own utilities. Investors were simply reacting to the incentives laid out, most importantly, and mostly unknowingly, by the unwise risk subsidizing policies of the Federal Reserve.

Conclusion

Until the ascendance of behavioral economics, mainstream economists held the naive and erroneous belief that human beings always set out to maximize their income or wealth. And so it was said that “homo economicus” was a rational creature. But then came Richard Thaler and others who showed that this belief was indeed naive and erroneous. The models were wrong. Human beings are not always, or even typically, wealth maximizers.  We are instead, well…human beings, shaped by eons of evolution and biology. And in helping to show this, Thaler and the community of behavioral economists no doubt deserve some credit.

But the behavioralists made the same two mistakes as those economists they attempted to supplant. First, they forgot to question their own assumptions. They maintained the same non-colloquial definition of rationality, and kept the same proxies for utility they inherited, without a bit of thought as to whether they made sense, or had any basis in reality. Like the economists who came before them, they demonstrated that they too have very little understanding of what factors truly drive human decision making.

Then their work escaped the chambers of academia and became popular. Their studies were easy to understand, common sensical and not too mathy. Fun even. “Irrationality” made great headlines and journalists and the mainstream media ate it up. Bestsellers were published. TED talks were given. Politicians began to listen. The cookie jars of government started to open up to the behavioral economists. Money and power! Power and money!

And naturally, with the money and the power comes the arrogance. And this is their second mistake. The mistake that since the beginning of time has been made by those smart, but not wise. The mistake that till the end of time will be made by those smart, but not wise. Humans are irrational, but we the enlightened ones have the fix! Nudge! Libertarian Paternalism! Economists to the rescue! Government to the rescue! The people must be saved from themselves!

Causes of income inequality: the short version

I recently published a very (very) long post on the subject of income inequality (which I encourage you to read!). Given its length, I thought it might be helpful to readers to publish a shorter, Cliffs Notes version. Here goes.

The dramatic increase in income inequality over the past several decades is one of the most important issues facing the U.S. and the world today. Rising income inequality has led to the election of President Trump, the Brexit outcome in the U.K. and the growing popularity of populist, isolationist, fascist and socialist leaders around the world. It has also resulted in a backlash against capitalism. And while the topic of income inequality is increasingly at the forefront of both mainstream media coverage and economic study, I believe that neither economists nor journalists have correctly identified its underlying causes.

However, before we can get to those causes, we need to recognize that there are two separate trends contributing to rising income inequality. The first type of income inequality is what we’ll refer to as the decline of the middle class. This is primarily a phenomenon in the U.S. and Western Europe (globally, the middle class has grown enormously over recent decades). The second type of income inequality is the rise of the wealthy and the super-wealthy, which is truly a global phenomenon. These two aspects of income inequality share many of the same underlying causes, but their stories differ and we will discuss each of them in turn.

What do we mean by the decline of the middle class? For starters, real (inflation adjusted) wages for many workers have declined or stagnated. The middle class’s share of national wealth has also shrunk, while middle class indebtedness has risen sharply. Workers also face far more job insecurity than ever before. And while the headline unemployment number in the U.S. is very low (currently under 5%), this statistic reflects neither the high number of able-bodied people out of the workforce, nor the magnitude of underemployment, further exacerbated by the so-called “gig” or “sharing” economy. Finally, and most frightening, these trends are affecting young people most dramatically, leading many to debt, to despair and to drug addiction.

In my view, the decline of the middle class has been caused by a combination of three trends: globalization plus regulation plus monetary policy induced financialization. Here’s our story. Beginning in the 1980s and 1990s, China, along with many other countries, joined the global economy. With abundant low-skilled labor, they put pressure on manufacturing wages in the U.S. and other developed countries. Due to regulations, unions and legacy retiree compensation, manufacturing companies were not able to lower their costs of labor. Instead, jobs were outsourced, off-shored and lost while companies went bankrupt and entire industries disappeared.

Meanwhile, in its naive belief that all inflation is monetary, and seeing no apparent inflation due to the effects of global trade, the Federal Reserve printed money, kept interest rates low, engaged in repeated bank and financial market bailouts and created a three decade long financial bubble. The middle class’s cost of living went up instead of down. Real estate, education and healthcare became unaffordable. The only way for consumers not to suffer in the near-term was to take on debt, debt that can never be repaid. More jobs were lost as technology disruption and automation was subsidized. Wall Street grew at the expense of Main Street. Monopolistic crony capitalism came to rule the economy. And last but not least, long-term productivity and the future prospects of the middle class were even further mortgaged as retirement savings, pensions and insurance policies were bled dry.

As damaging as a declining middle class is to society, the shocking rise of the wealthy and super wealthy is even worse. This trend has the same three causes that explained the decline of the middle class: globalization, regulation and monetary policy fueled financialization. However, here the story is different. The rise of the wealthy is primarily a result of monetary policy and financialization. It was then exacerbated by regulation and exacerbated even more by globalization.

Cheap money, subsidized capital and low interest rates led to rising prices for all financial assets which greatly benefited the wealthy who naturally own these assets. Loose monetary policy also led to a wage/price inflationary spiral for the wealthy as prices of luxury goods and services accelerated leading to higher wages for high-skilled (i.e. 1%) workers, not to mention pricing out the middle class from cities like New York and San Francisco.

Monetary policy also led directly to the dramatic growth of Wall Street and financialization of the economy. As banks and financial markets were subsidized, Wall Street grew, and grew, and grew. Increasing revenue brought increasing profits and increasing profits brought increasing bonuses for investment bankers, traders, institutional salespeople, private bankers, quants and others. Hedge funds minted billionaires even while their performance lagged safer investments. Tech startups also created billionaires as cheap capital subsidized growth and valuations, fostering a winner-take-all mentality and unprecedented consolidation and monopoly power in the technology industry.

CEOs of public companies went from earning 30 times the average worker to more than 300 times thanks to monetary policy, government regulation, tax policies and crony capitalism that together favored and subsidized stock options, short-termism, growth, financial engineering and consolidation. Finally, globalization exacerbated all of these unfortunate trends as central banks throughout the world executed the same easy money playbook. Cheap money flowed across borders, asset bubbles sprung up everywhere and corruption allowed the wealthy to amass not just more wealth, but ever greater political power.

Can we solve the problem of income inequality? In theory the answer is yes. In practice the answer is probably not, as nothing that I am about to suggest is realistic given today’s toxic and corrupt political system. There is absolutely no realization among the economics profession, the mainstream media or the political community of the disastrous consequences of “modern” central banking. Nor is there any reason to believe that those in power who have benefited so much from decades of easy money and crony capitalism will change their viewpoint.

But let’s try anyway. The first thing we must do is to normalize interest rates, or better yet, get central banks out of the business of managing the economy and out of the business of bailing out the financial sector. We must make it clear that risk will no longer be subsidized and financial firms will no longer be bailed out.

If we do this, money will dry up and Wall Street will shrink significantly, along with bonuses and financial services jobs. So too will the technology sector contract as tech valuations plummet. 1% cities like New York and San Francisco will once again become affordable for doctors and lawyers and teachers and police officers. The era of the hedge fund billionaire and the tech mogul will be at an end. Jobs that serve no social purpose will disappear, like most (but not all) investment bankers, consultants, hedge fund analysts and private equity professionals. Technology disruption and automation will be slowed. Companies that don’t make money will disappear. Companies that actually make money will thrive and will relearn how to invest in their own employees. Smart people will once again become doctors and scientists and engineers and teachers.

Meanwhile, while the financial system goes through a reset, as almost happened (and arguably should have happened) in 2009, we must also reduce the massive regulations that hinder hiring, that put a floor on compensation, and that support and subsidize big business, big labor and crony capitalism. We must un-monopolize our monopolistic and failing education system. Ditto for our healthcare system. We must find a way to upgrade our infrastructure and to restructure the promised pensions and retirement costs that will ultimately bankrupt our governments.

What must we not do? We must not give in to the populists, socialists, fascists and isolationists of the far left and the far right. We must not abandon capitalism. We must not give up on global trade, if for no other reason than abandoning trade will lead to war (though there are many other good reasons in favor of trade). We must not turn our backs on immigration for it is the only way to achieve significant economic growth, to afford our (hopefully shrinking) welfare state and to bring back manufacturing. We should not try to fix income inequality through re-distributive taxes, or through more regulation or more unionism. Each of these will make things worse, not better.

In summary, we must let the free market work. Let companies succeed that deserve to succeed. Let companies fail that deserve to fail. Let people work who want to work. While it may take a generation or more to re-orient our economy, it is the only way to increase productivity, to revive the middle class, and to preserve our way of life.

What are the real causes of increasing income inequality?

Few topics have gotten more economic press in recent years than income inequality. The issue of income inequality was the driving factor in last summer’s shocking (or not so shocking) Brexit vote and last fall’s shocking (or not so shocking) election of Donald Trump. Income inequality is also responsible for the rising popularity of socialists like Bernie Sanders and the growing power of populist leaders around the world.

Ever since Thomas Piketty’s Capital in the Twenty-First Century was published in 2013, trying to explain the rise in income inequality has been on the forefront of economists’ minds. Yet, mainstream economics has offered no comprehensive or satisfactory explanation for why income inequality has increased, nor has it offered comprehensive or satisfactory remedies.

I believe the underlying causes of increased income inequality are relatively clear and relatively simple. And the remedies too are clear and simple. Of course, as with most of the world’s economic woes, implementing these remedies is practically and politically impossible.

But first things first. Has income inequality even increased, and if so, in what ways? The short answers are yes, no and yes. Let me explain.

Has income inequality really increased?

Economists typically measure income inequality with a single statistic called the Gini coefficient. Loosely, the Gini coefficient captures, in a number between 0 and 1, how evenly income is distributed within an economy. A coefficient of 0 means that everyone’s income is equal (perfect equality). A coefficient of 1 implies perfect inequality, with all income earned by a single person. And yes, the Gini coefficient has increased in most countries, including the U.S., over the past few decades (in the U.S. from approximately 0.35 in 1970 to at least 0.45 today).
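Since the Gini coefficient only gets a brief mention, here is a minimal sketch of how it is actually computed, using the standard formula over sorted incomes. The income figures are purely illustrative:

```python
def gini(incomes):
    """Gini coefficient via the sorted-incomes formula (0 = perfect equality)."""
    incomes = sorted(incomes)
    n = len(incomes)
    total = sum(incomes)
    # G = sum_i (2i - n - 1) * x_i / (n * sum x_i), with i = 1..n over sorted incomes
    cum = 0
    for i, x in enumerate(incomes):
        cum += (2 * (i + 1) - n - 1) * x
    return cum / (n * total)

# Everyone earns the same -> perfect equality
print(gini([50, 50, 50, 50]))   # 0.0
# One person earns everything -> approaches 1 as the population grows
print(gini([0, 0, 0, 100]))     # 0.75
```

Note that with a small population the maximum value is (n − 1)/n rather than exactly 1, which is why four people yield 0.75 rather than 1 in the second example.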

And now that I’ve briefly mentioned it, forget about the Gini coefficient. It’s not very helpful in understanding the root causes of increased income inequality.

As Piketty correctly describes in his book, there are two aspects of income inequality that are happening simultaneously. The first is the decline of middle class income and wealth. Let’s call this Type 1 income inequality. This trend is occurring most prominently in the U.S., but also in Western Europe and other developed countries. The second aspect of increased income inequality is the dramatic rise in the income earned and wealth owned by the rich, e.g. the 1%, more so the 0.1%, and even more so the 0.01%. We will refer to this as Type 2 income inequality. And this second trend is truly global.

Now let me explain why I said the answer to whether income inequality has indeed been rising is yes, no, yes. As I said above, the decline of the middle class (Type 1 income inequality) is mostly a developed world, and most prominently a U.S., phenomenon. When measured globally, the middle class has increased dramatically over the past 30 years, thanks to the equally dramatic rise in economic freedoms in formerly socialist economies including China, India, Southeast Asia, Eastern Europe and elsewhere. China itself is responsible for adding perhaps 600 million people to the ranks of the global middle class.

In short, Type 1 income inequality has increased in the U.S. but decreased globally. On the other hand, the rise of the wealthy (Type 2 income inequality) is manifested everywhere. Hence the “yes, no, yes” answer. These two aspects of income inequality, the decline of the middle class and the rise of the super-wealthy, are not unrelated, and they share many of the same underlying causes. However, their stories do differ and we will discuss each of them in turn. But before we do that, let’s discuss something else that is important. Does rising income inequality even matter?

Does rising income inequality matter?

Yes, it does. Very much. But in a different manner than most economists believe. The rising inequality that the U.S. and the world are currently experiencing is a SYMPTOM of a poorly operating economic system. Income inequality matters in the sense that it is telling us that economic incentives are screwed up. Income inequality in and of itself is NOT, as mainstream economics believes, a CAUSE of economic dysfunction, poor GDP growth, or financial crises. This confusion of cause and effect is perhaps more than anything else the reason why mainstream economics is so wrong about…well…pretty much everything.

In short, rising income inequality is telling us that the world is messed up but it does not, by itself, cause economic woes. On the other hand, it can, and often does cause political woes, as history teaches. When the masses see the quality of their livelihoods reduced whilst they see the rich get richer, political unrest often follows. And sometimes revolution. We’re seeing warning signs of that in the U.S. right now, in the success of what would normally be fringe political candidates like Trump and Sanders, as mentioned above, and in Europe with Brexit.

So, while increasing inequality does not directly cause economic upheaval (remember, it is a symptom not a cause), it certainly can, and likely will cause political upheaval. And political upheaval tends to be very, very bad for the economy, not to mention to the theretofore rich and powerful (guillotines, anyone?).

At this point, we’re almost ready to return to the promise of this article – the root causes of inequality. But if you will, please indulge me for another moment, for there is one more topic we need to discuss. Is there an ideal level of inequality?

Is there an ideal level of inequality?

I don’t know that there is an ideal level of inequality but there probably is a natural level of inequality that would occur in a properly working free market. Pretty much everyone understands that fair or not, some people have more skills, abilities, intelligence, wisdom, work ethic, opportunities, risk appetite and (perhaps most importantly) luck than others. Whether this is fair or not and whether these attributes are innate or not are interesting questions, but immaterial to our present focus. These are the undeniable facts of life here on Earth, and probably everywhere else in the universe.

Equally true is that in a free market where everyone is able to rent out their skills and abilities to the highest bidder, the disparity of skills and abilities will naturally lead to some disparity of income and wealth. Very few among us would disagree that a LeBron James should earn more than the average basketball player, that a Brad Pitt should earn more than the average actor, that a Bill Gates should earn more than the average businessperson.

Why is that? Because most people recognize that, whether fair or not, whether through luck or not, these three individuals 1) have more skill in their domains than their average peers, and 2) that these skills have helped increase societal utility and economic growth. LeBron James through the sales of basketball tickets and basketball jerseys and television viewership, Brad Pitt through the sale of movie tickets and Bill Gates through the productivity enhancements due to Microsoft software (PowerPoint excluded).

Is it fair that LeBron James is a better basketball player than me? Not a relevant question here. Is it fair that because LeBron James is a better basketball player than me that he deserves to be paid more for playing basketball than me? A good question. Given that playing basketball (at a very high level) is economically valued in our society for its entertainment value, the answer is plainly yes.

So clearly, in a free market, some level of income inequality is both natural and desirable. But how much inequality? How much more should LeBron James be paid than the average basketball player? How much more should LeBron James be paid than the average postal worker, or teacher or bartender? How much more should Bill Gates be worth than the average CEO or entrepreneur or computer programmer? This is a much more difficult question, and one that I will not attempt to answer here directly. Sorry.

Conceptually however, we can state that in a free market, one’s income should be directly related to the economic value of one’s labor. This is no more than the basic principle of supply and demand. If my labor is worth more to someone (i.e. to help generate business profits), they will pay me more, or I will take my labor elsewhere (I will be “in demand”). If my labor is no longer helping to generate profits at my current level of income, I will be paid less or replaced by someone more productive. Long story short, in a properly functioning free market, my (and everyone else’s) income should approximate my (and respectively, their) societal productivity.

Type 1 income inequality: the decline of the middle class

Now, finally, we’re ready to discuss the symptoms and root causes of the rise in income inequality. Let’s start with the first type, the decline of the middle class in the United States.

When we talk about the decline of the middle class, especially in the U.S., what do we really mean? Economically we mean several things. In the direct context of income inequality, we mean that middle class real wages have declined. Real wages represent the amount you are paid adjusted for inflation. Real wages are not the amount (ignoring taxes) that is paid to you by your employer. Those are called nominal wages.

To be clear, when economists talk about middle class wages having declined they don’t actually mean that the amount of money that employees receive (on average) has declined. What they mean is that wages have declined taking inflation into account. For example, wages might have increased 5% over a period of time, but if inflation has been 10% over that same period, then we say that real wages have actually declined. Equivalently, we could say that the average (lower and middle class) worker’s purchasing power has declined. That is, the amount of goods and services that they are able to purchase with their wages has gone down.
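The arithmetic behind real wages is simple enough to sketch. The 5% and 10% figures below are the hypothetical ones from the example above:

```python
def real_wage_change(nominal_growth, inflation):
    """Percentage change in real wages, given nominal wage growth and inflation.

    Note: the common shorthand (nominal_growth - inflation) is only an
    approximation; the exact formula deflates by the price level.
    """
    return (1 + nominal_growth) / (1 + inflation) - 1

# Nominal wages up 5% while prices rose 10%:
# purchasing power falls by roughly 4.5%, even though paychecks grew.
print(real_wage_change(0.05, 0.10))
```

This is why a worker can receive a bigger paycheck every year and still be able to buy less with it.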

Have middle class real wages declined over the past few decades, at least in the U.S.? Actually no, they have risen. But, they have more or less stagnated. Wages have risen slower than the overall economy and much less than mainstream economists think they should have, given overall labor productivity. In other words, given that upper class wages have increased significantly (as we will discuss when we get to Type 2 income inequality), the middle class accounts for a shrinking piece of the economic pie, one of the reasons for the so-called decline of the middle class.

As an aside, however, the picture is a little muddier if you look at total compensation, which would include the present value of promised benefits such as retirement income (e.g. social security and/or pensions) and especially healthcare. But let’s ignore that wrinkle.

Now, when we speak of the decline of the middle class in the context of income inequality, there are a few other economic factors that come into play that make the picture look even worse than just the income data. For one, the middle (and lower) classes have much less savings and are much more highly indebted than ever before. In other words, the middle class’s portion of national wealth has declined even more so than its portion of income. We could just as easily, and perhaps more accurately, be writing about “wealth inequality.”

Second, there is much less job security than in decades past, so whatever income workers are receiving is more volatile. Lifetime employment is clearly a thing of the past. Companies, especially those publicly traded or private equity owned, are quicker to lay off, to restructure and to outsource. Moreover, as established companies are continuously “disrupted” by cheap-money financed and venture capital backed upstarts, they face constant pressure to cut labor costs. Not to mention the high failure rate of those upstarts naturally leads to employment volatility.

Third, while the headline unemployment number in the U.S. is very low (currently under 5%), this statistic does not reflect the high number of able-bodied people out of the workforce, nor the magnitude of underemployment. This is exacerbated by the so-called “gig” or “sharing” economy. People are working fewer hours and at lower skilled jobs than they desire. When one is reasonably highly educated, working 30 hours a week at a Wal-Mart or driving for Uber is not the stuff of which the American dream is made.

Fourth, and scariest, the aforementioned factors of stagnating real wages, high indebtedness and job volatility affect young workers more than anyone else. Young people are graduating college with enormous debt loads and poor job prospects. Plus, the retirement and healthcare benefits generously provided to their parents are less likely to be there for them. Politically, masses of unemployed young people with few economic prospects are the stuff of which revolutions are made.

In summary, while middle class real wages haven’t exactly declined, it is very true to say that real wages have stagnated, that the middle class has received less of the total economic pie, and that young people face even more dire prospects than other demographics.

The real causes of middle class decline

That the middle class in the U.S. is “not winning” is denied by few, and as I stated at the top of this article, has been the centerpiece of both the populist Trump and populist Sanders presidential campaigns (regrettably, if you were a Hillary Clinton fan, she did not take the baton from Sanders on this issue). Trump blamed immigration, free trade, bad deals, Mexico and China. Sanders, during his campaign, blamed free trade, unfettered capitalism, lack of union power, and greedy Wall Street. Mainstream economists tend to blame some combination of globalization, technology and the declining power of labor movements.

As we shall see, while there are kernels of truth in both the Trump and Sanders viewpoints (as well as that of mainstream economists) the root causes are different and very much misunderstood. In a nutshell, the real cause of U.S. middle class plight over the past few decades is an aggregation of three enormously increasing trends: globalization plus regulation plus monetary policy fueled financialization.

Causes of middle class decline Part 1: Globalization

Remember at the top of this article, when we said that while the middle class in the U.S. has declined, the middle class in China and other fast growing countries has exploded? Our story starts here.

Starting in the late 1980s and early 1990s and accelerating thereafter, somewhere between 1 and 2 billion people entered the global workforce. What do we mean by the global workforce? We mean employees of companies making tradable goods and services, or those manufacturing products (and services) which can be sold globally. Where did they come from? As we’ve already stated, primarily from China and Southeast Asia, from India and from Eastern Europe (after the fall of the Berlin Wall). Especially in Asia, and especially at first, these were primarily relatively low skilled jobs.

Aha! So Trump is right? China stole our jobs? Not exactly. Globalization, or more specifically global trade, absolutely has winners and losers. Clearly, the Chinese middle class are better off. And very likely, former U.S. textile workers are worse off. Simplistically, there are a lot more Chinese that have been elevated from abject poverty to middle class than former U.S. textile workers that have lost jobs. In fact, far more Chinese have become middle class over the past two decades than the entire U.S. population. This is thanks to the workings of capitalism and is why Type 1 (middle class) income inequality has enormously decreased GLOBALLY. Humanity as a whole has clearly won.

Of course, that U.S. workers lose jobs or see their real wages decline for the benefit of the Chinese is unpalatable to U.S. presidential candidates and to U.S. voters. But this is not the whole story of globalization. While globalization has had a negative impact on some U.S. workers, it has had a positive impact on others. Who are those others and what are the benefits?

The Chinese like to watch Hollywood movies. They buy Boeing airplanes. They use Microsoft software. They consume Texas beef. Globalization has benefited those workers in export oriented industries, since the market for those products (exports) is obviously far larger globally than it is only domestically.

The second benefit of globalization to the U.S. dwarfs the first. Companies import goods rather than produce them domestically because they are cheaper. ALL consumers benefit from goods that are less expensive to produce, and therefore less expensive to purchase, whether these goods be T-shirts or iPhones. That imported goods are cheaper if wages are sufficiently lower is obvious, but is also a key part of our story explaining the decline of the middle class. We will return to it shortly.

Globalization’s third benefit, hardest to quantify, yet probably most important, is a reduction in the risk of war. Simply put, countries that trade with each other, tend not to go to war with each other. It is safe to say that there are few in the U.S. who would be better off under the circumstances of war with China, or any other (presumably former) trading partner. Regrettably, this is a point that President Trump and his administration seem not to appreciate.

Alas, we’ve digressed a bit. Let’s return to the connection between globalization and middle class decline. Think back to Econ 101, to supply and demand. When there is a big increase to the supply of something, what should happen to its price? Its price should go down. So what should have happened to global low skilled wages if there was a significant increase in the supply of low skilled labor? Wages should have declined. And of course that is what happened globally. Abundant Chinese (for example) labor put pressure on low-skilled U.S. wages.

Aha again! So, globalization caused middle class wages in the U.S. to decline. That explains the impact on middle class income inequality. And this is indeed the narrative told by the likes of Trump and Sanders and others. End of story, right? Not so fast.

Causes of middle class decline Part 2: Globalization plus Regulation

Yes, U.S. manufacturing wages should have fallen. Supply and demand dictates as much. But as we said earlier, they did not fall (on a nominal or real basis). Why not? As economists like to say, wages were “sticky.” Or in other words, there were frictions to the market that prevented wages (more accurately, total compensation) from falling. What were these frictions? There’s a whole bunch. Minimum wages, unions, inflexible work rules and especially the fixed costs of long-term retirement benefits and pensions. Plus disincentives to work (and accept lower wages) due to government provided welfare and unemployment benefits. There’s also one additional very important friction that we will come back to later (no spoilers!).

Said differently, government regulation and unions made it more expensive to hire workers. Naturally, businesses responded in several ways. They went out of business, they moved production overseas, they outsourced production and they switched their production inputs from regulated and expensive labor to unregulated and less expensive capital (machines and robots). All of which cost untold domestic jobs.

To make matters even worse, regulation (and tax policy) nearly always favors large companies over small companies, for the simple reason that large companies can afford lobbyists who write the regulations, whereas small companies cannot. Hence, regulations tend to protect large incumbents from competition, rather than the consumers or general populace whom they purport to “protect.” The end result is the kind of pervasive and monopolistic “crony capitalism” that dominates the U.S. economy. Moreover, by favoring large companies, regulation and tax policy subsidizes outsourcing and offshoring, since large companies can much more easily deal with such complexities than can small companies. Finally, large (especially public) companies are much more apt to lay off and restructure than are small private companies and less likely to make long-term investments (including investment in their employees, so-called “human capital”).

Let’s again return to Econ 101. Regulation, unions, welfare benefits and the like effectively set a floor on wages. And what happens when you set a floor on the price of a good but the true market price is below the price floor? Instead of the price falling to the market price and the quantity adjusting accordingly, the price is not allowed to fall. Instead the quantity falls. This is exactly what happened to middle class jobs.
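The Econ 101 point about price floors can be made concrete with a toy linear market. The demand curve, supply curve and floor level below are invented purely for illustration:

```python
# Toy linear market: demand Q = 100 - 2P, supply Q = 3P (illustrative numbers only)
def market_outcome(floor=None):
    """Price and quantity traded, with an optional binding price floor."""
    # Market-clearing price: 100 - 2P = 3P  ->  P = 20, Q = 60
    p_clear = 100 / 5
    if floor is None or floor <= p_clear:
        p = p_clear               # floor absent or non-binding: market clears
    else:
        p = floor                 # floor binds: price held above market level
    q_demanded = 100 - 2 * p
    q_supplied = 3 * p
    # Trades are limited by the short side of the market (demand, when floored)
    return p, min(q_demanded, q_supplied)

print(market_outcome())           # (20.0, 60.0): free-market price and quantity
print(market_outcome(floor=30))   # (30, 40): price can't fall, so quantity does
```

Substitute “wage” for price and “jobs” for quantity and this is the middle class story in miniature: when compensation is held above the market-clearing level, employment, not the wage, does the adjusting.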

Because of wage rigidity (more accurately, total compensation rigidity), and other (very significant) regulatory costs, the quantity of domestic jobs fell. The U.S. lost not just a few manufacturing jobs, but entire industries, like apparel, textiles, furniture and steel. And decimated were cities like Detroit and entire regions like the rust belt.

Instead of manufacturing workers earning, say 20% less in order for a factory to be cost competitive with China, the entire factory was closed or exported and the workers were laid off. Now those same manufacturing workers are either long-term unemployed or earning, perhaps 60-80% less working in the service sector or the “gig” economy. What were once steady and long-term careers have been replaced by short-term, unstable and much lower paying jobs.

Before we go any further, let’s quickly review. Competition from overseas led to pressure on U.S. manufacturing wages. But since wages weren’t allowed to adjust downwards, many jobs were lost instead. What should have happened in a free-er market? We should have seen somewhat lower (manufacturing) wages but significantly more (and better) jobs.

If you are perceptive, you might be wondering the following: wouldn’t even lower wages than we have now have increased income inequality even more? No. For two primary reasons.

First, unemployment went up more than it should have. Some of the workers never found new jobs and became part of the welfare state. Others found jobs, for example in the service sector that, as we mentioned above, tends to pay far lower wages than manufacturing jobs. So while manufacturing wages should have fallen some amount, those workers would have been much better off than not working at all, or having their wages fall much more working in the service sector.

To understand the second reason, we need to turn our attention to monetary policy and the resultant unprecedented growth of the financial sector.

Causes of middle class decline Part 3: Globalization plus Regulation plus Financialization

Back again to basic economics. Whether due to importing cheaper Chinese (and other countries’) goods because of cheaper Chinese labor (as primarily happened) or due to cheaper domestic goods because of cheaper domestic labor from the threat of Chinese imports (as should have happened much more), or both, domestic prices should have declined. In economic-speak, the U.S. was, and should have been, importing deflation.

Of course, the prices of many goods did decline, most notably of course, those imported from low-wage countries and most of the stuff sold at Walmart. However, the combination of lower priced goods and wage pressures should have caused the overall price level of the economy to decline. It did not. In fact over the past decades price levels as measured for example by the CPI, have increased somewhere around 2% per year. In other words, we had inflation when we should have had deflation. Why?

The answer is the Federal Reserve. Economists and officials of the Federal Reserve in their infinite un-wisdom believed (and continue to believe) in three myths: 1) that deflation is always bad, 2) that positive inflation (i.e. 2%) is necessary for a healthy economy and 3) that inflation is always a “monetary phenomenon.” In light of these erroneous beliefs, the Federal Reserve implemented what is known as “inflation targeting.” Simply put, they kept interest rates low and printed money in order to engineer a 2% (more or less) rate of inflation.

The model that most simply encapsulates the central bank’s thinking is known as the “Phillips curve,” which posits that there is an inverse relationship between unemployment and inflation. In essence, central bankers believe that inflation is caused by what is known as a wage/price spiral. When the economy is at full employment (everybody who wants a job has a job), additional monetary stimulus will lead to workers demanding higher wages, which will cause employers to raise prices, which is inflationary. They also believe the natural corollary, that as long as the economy is not at full employment, monetary stimulus cannot lead to inflation.

Here’s the rub. How do we know if the economy is at full employment? Does that mean 6% unemployment (keep in mind there are always people who are changing jobs for various reasons so 0% unemployment is not practical)? 5% unemployment? 4% unemployment? Should we count people who are long-term unemployed and too frustrated to be looking for a job? And how about workers that are employed but underemployed, either working fewer hours than they’d like or for lower wages than their skills would indicate?

The short answer is that there is no way to know what level of unemployment represents “full employment.” (This theoretical level is known as the “NAIRU” or “non-accelerating inflation rate of unemployment.”) In other words, central bankers cannot simply stimulate the economy until it reaches full employment, because full employment is unknowable. Instead, central bankers (in their minds) do the next best thing. They assume that as long as there is no significant inflation, the economy must NOT be at full employment (there is, as they say, “slack” in the economy). Hence, they believe central bankers are free and justified to pursue loose monetary policy.

This is exactly what has been happening over the past three decades or so. The Federal Reserve saw low inflation and therefore assumed that: 1) the economy was not at full employment and 2) loose monetary policy was both warranted and essentially cost-less.

As we’ve stated a number of times, due to globalization and the resultant competition from low-wage countries, U.S. wages should have declined and prices should have declined. Deflation should have been the correct outcome. However, the Federal Reserve, believing that no significant observable inflation meant less than full employment, stimulated the economy by keeping interest rates low and printing money, in an attempt to engineer positive (i.e. 2%) inflation.

The result of this Federal Reserve policy has been disastrous in a large number of ways, but let’s focus for now, on its impact on the middle class. In at least seven ways, Fed policy over the past few decades along with the resultant growth of the financial sector (what we’ll call “financialization”) has helped to crush the middle class.

1. Real wage decline and loss of purchasing power

Instead of falling wages and falling prices (as should have happened), workers experienced falling wages and rising prices. In other words, instead of no change to real wages and purchasing power (assuming wages and prices declined the same amount) or even an increase in real wages and purchasing power (if prices fell more than wages), workers were hit with the double whammy of falling wages and rising prices, thus a decline of real wages and a substantial loss of purchasing power.

To be fair, I have no idea if the wage decline would have been greater or less than the price decline, but I am certain that either way, the impact would have been far better than the loss of purchasing power that occurred in the Fed created inflationary environment. My guess is that significantly more people would have experienced increasing purchasing power since everyone in the middle class could benefit from declining consumer goods prices but not everyone faced competition from foreign labor.
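A quick purchasing-power calculation makes the double whammy concrete. All of the percentages below are hypothetical, since (as noted above) no one knows exactly how far wages and prices would have fallen in the counterfactual:

```python
def purchasing_power_change(wage_change, price_change):
    """Change in what a wage can buy, given wage and price-level changes."""
    return (1 + wage_change) / (1 + price_change) - 1

# Counterfactual: wages and prices both fall 10% -> purchasing power unchanged
print(purchasing_power_change(-0.10, -0.10))   # 0.0

# What the Fed engineered instead (hypothetical numbers):
# wages fall 5% while prices rise 2% -> purchasing power drops about 6.9%
print(purchasing_power_change(-0.05, 0.02))
```

The point is that it is the gap between wage changes and price changes that matters, and inflation targeting widened that gap for anyone whose wages were under pressure.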

Before we move on, recall that when discussing why wages did not fall in light of global wage pressures, we mentioned that there were frictions that prevented wage adjustments (such as minimum wages, regulations, unions and long-term fixed benefits). At the time, I mentioned an additional reason to which we would return later. That time is now. Another important reason why wages were not flexible is because of the Fed-induced rising cost of living.

Workers would have been much more apt to accept lower wages if the overall price level had also been allowed to decline, as it should have. But the mismatch between wages and prices helped lead to what economists call long-term or “structural” unemployment. In a world of increasing cost of living, workers would rather be unemployed and hold out for a job that pays more than accept a job that pays less. Unfortunately, that higher paying job rarely comes along and unemployment tends to persist.

2. Inflation in non-tradable goods

The decline in real wages might have been the most obvious injury to the middle class caused by Fed monetary policy but it was by no means the only one. We’ve already stated that the price of many tradable goods (i.e. consumer goods) declined given globalization yet the overall price index (i.e. CPI) increased. So what did increase in price? The answer is non-tradable goods (that is, goods and services that cannot be manufactured overseas).

Three of these non-tradable goods are most prominent: real estate, healthcare and higher education. These three categories of spending have experienced inflation over the past few decades far, far in excess of 2% per annum. And in fact, all three are likely under-represented in the CPI. When the central bank prints money (or equivalently, keeps interest rates low to encourage banks to print money), the money must go somewhere. And some of that somewhere has clearly been real estate, healthcare and higher education.

Remember that the decline of the middle class is not simply a story of falling wages. It is also a story of despair, hopelessness and a fear that future generations will be worse off than previous generations. That fear is not only reasonable but playing out now as young people, thanks to the Federal Reserve, face not only poor job prospects but unaffordable housing, unaffordable education and unaffordable healthcare.

3. Subsidies to consumer debt

One of the most obvious (and intended) results of low interest rates is to encourage (and subsidize) debt. Naturally, the past few decades of loose monetary policy have seen an explosion of debt, including consumer debt. To be clear, there is nothing wrong with debt, per se. But there is something wrong with too much debt. What is too much debt? Too much debt is when a significant portion of debt cannot be repaid. In other words, when a lot of debt becomes “bad debt” we’ve had too much debt.

We saw exactly this happen in 2008 and 2009 as the consumer real estate debt market cratered. This debt market implosion was the proximate (though not root) cause of the subsequent financial crisis. Clearly, too much debt can (or will) cause an asset bubble, and when the bubble bursts a financial crisis and recession can (or will) follow.

In addition to creating an asset bubble and its subsequent financial crisis, too much consumer debt has had other deleterious effects, specifically on the middle class. Most notably, the middle class is – wait for it, Captain Obvious – over-indebted. In order to maintain their standard of living in light of falling purchasing power, many people borrowed in order to consume. Many others were forced to borrow to cover the rising cost of medical expenses and rising cost of college education. Recall again that when we talk about the decline of the middle class, we’re not just discussing straight income inequality. We’re also factoring in the sense of hopelessness that many (especially among the young) feel, brought upon in large part by debt that can never, and will never, be repaid.

The substantial growth of consumer debt also reflects a transfer of consumption from the future to the present. By subsidizing consumption over savings (which is what both low interest rates and inflation do), economic output is pulled forward and there is under-investment. So not only are consumers over-indebted, but future productivity is likely to be lower, making it even harder for the middle class to escape their debts. It is no wonder that, for the first time in U.S. history, the current young working generation will almost certainly be worse off than their parents.

4. Technology “disruption” and automation

In addition to subsidizing consumer debt, Fed monetary policy has had the effect of subsidizing high growth/high risk companies (something I wrote about in more depth here) and enabling them to “disrupt” lower growth/lower risk companies. The primary stated purpose of low interest rates is to spur investment that would not otherwise be made, in the hopes of increasing employment and wages. However, since at least the mid-1990s, much, if not most, of this “extra” investment has been made in the technology sector.

This might sound good, since we are all trained to believe that technology is the future, and that technological innovation is the key driver of productivity growth and long-term economic growth. Unfortunately, because of extraordinarily loose monetary policy, this has not been the case. The vast majority of investment in technology has been wasteful and unproductive, and has destroyed rather than created middle class jobs.

The example I like to use is Amazon, the e-tailing behemoth. Amazon is a company that, thanks to virtually unlimited amounts of cheap capital courtesy of the Fed, has been able to under-price and decimate traditional brick-and-mortar retailers. And yet after 20 years of being in business, and despite its scale, despite its smart employees and brilliant CEO, despite its amazing technology, despite its sales tax advantage and despite its low cost of capital, it has never shown an ability to make a profit from its retail operations. Simply put, Amazon is a company that absent Fed policy would not exist. And absent Amazon, there would be hundreds of thousands more retail jobs in the U.S.

Of course it is not just in the retail sector where middle class jobs have been “disrupted” by easy-money-fueled technology companies. Virtually no industry has been spared. Not only do the disruptees tend to employ significantly more workers than the disruptors (as in the case of traditional brick-and-mortar retail vis-à-vis Amazon), but naturally companies facing ongoing threats of disruption are reluctant to hire and disincentivized to invest in the future.

In addition, this reluctance of companies to hire combined with the enormous regulatory costs of employing full-time workers (for example, providing healthcare insurance) has led to more and more workers being independent contractors and to the so-called “gig” economy. Proponents of the “gig” economy cite flexibility as a benefit to the working masses, but this is nonsense. A good job is one that is stable and offers opportunity for advancement in both compensation and responsibility. Being an independent contractor offers neither.

The uncertainty of employment, generally lower salaries, nonexistent benefits and lack of opportunities for advancement are clearly damaging to the middle class worker. But to add insult to injury, long-term productivity suffers too. Employers have zero incentive to invest in the development and continuing education of their non-employees, and those non-employees have zero incentive to invest in their non-employers. All in all, a lose-lose for the economy and for the middle class.

Moreover, the same trends of cheap money and high regulatory costs that have resulted in today’s “gig” economy have also caused the enormous trend towards job automation. Whereas originally automation was a phenomenon affecting low-skilled jobs, more and more it is impacting high-skilled jobs. Contrary to the view of many technophiles in Silicon Valley (and elsewhere), humans are not obsolete. They have just been made too expensive by government regulation intended to help them. Like virtually all government action, it is the unintended consequence that dominates. In today’s world that means we are penalizing companies that employ humans and subsidizing companies that “employ” robots.

We see more evidence of the enormous amount of cheap money in the technology sector when we look at how the business model of venture capital has changed. Prior to the era of cheap money, investors in startups would actually perform due diligence on a company’s business model and path to profitability. Moreover, founders were required to have actual experience in their industry and with the products they were going to sell. Investment exits, whether by IPO or acquisition, were almost always predicated on actual profits and cash flow generation.

Today’s model of venture capital is very different. The idea of investing in viable businesses has morphed into the mentality of buying a lottery ticket. Venture capital firms put money into, say, 15 companies with the full expectation that 14 of those will be absolute failures and represent total investment losses. Only one of the 15 investments needs to be successful. Which one? Who knows. And what do we even mean by the term successful? Not profits, nor revenue, nor even a business model, but simply an exorbitant valuation and a liquidity event or exit.
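The arithmetic behind this lottery-ticket model can be sketched quickly. The check size and target fund return below are hypothetical assumptions chosen for illustration:

```python
# Back-of-the-envelope on the 15-company portfolio described above.
# Check size and target fund multiple are illustrative assumptions.
n_companies = 15
check_size = 2_000_000                    # assumed investment per company
fund_size = n_companies * check_size      # $30M deployed in total

target_multiple = 3                       # assumed return the fund must deliver
# If 14 of the 15 go to zero, the lone winner must return the whole target:
required_exit_value = target_multiple * fund_size
winner_multiple = required_exit_value / check_size

print(f"The one winner must turn ${check_size/1e6:.0f}M into "
      f"${required_exit_value/1e6:.0f}M, a {winner_multiple:.0f}x return")
```

Under these assumptions, the single winner must return 45x, which is why only an exorbitant valuation and an exit, rather than a viable business, can make the model work.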

And what of all the so-called “investment” that went into the 14 losers? Can we consider that productive economic investment? Of course not. These companies shut down. Websites turned off. Code deleted. Employees moved on. They leave no productive legacy. Yet, economically it is even worse than a waste of resources. For in their short existences they’ve likely disrupted and done irreparable damage to real companies with productive businesses.

Finally, I would argue that the enormous subsidies to technology companies over recent decades, which have fueled the rise of the internet, have actually made our economy less productive rather than more. And remember that ultimately it is productivity that results in rising wages. Economic data seem to agree with me. More than three decades since the birth of the personal computer and two decades since the commercial adoption and widespread usage of the internet, we have yet to see rising productivity. Perhaps it is too soon to tell whether or not the internet will lead to productivity enhancements as other new technologies have. But so far, the story is not good.

Moreover, other measures of our societal health seem worse-off, our future mortgaged just like it is when we take on too much debt. For “free” can have a cost, and that cost can be huge.

We can now be entertained 24/7 yet we seem less happy, and more distracted. We can connect with anyone in the world at virtually zero cost yet we seem more lonely, and less able to communicate. We have instant and free access to high quality information yet we are far more susceptible to “fake news” and far more politically polarized than ever before. We have shunned relationships for transactions. We have lost virtually all privacy. We have allowed high quality (and especially local) journalism to be decimated, itself the fourth branch of government, and government’s most important check and balance. We have granted an enormous mouthpiece to fringe politicians, to populists and racists, to conspiracy theorists and last but not least, to terrorists.

5. Growth of Wall Street and the financialization of the economy

As much as monetary policy has spurred the growth of the technology sector, it has even more so been the cause of the growth of the financial sector and Wall Street. This is a topic to which we will return when we discuss the second type of income inequality, the explosive growth of the wealthy and the wealth of the wealthy. For now, however, let’s discuss how Wall Street’s rise has helped drive the middle class’s fall.

But before we do that, let’s make sure we understand the nature and purpose of the financial sector (I’m using “Wall Street” and the “financial sector” interchangeably). The financial sector includes banks, insurance companies, pensions, asset managers such as mutual fund companies, hedge funds, venture capital funds and private equity funds, and exchanges where stocks, bonds, commodities and other financial assets can be traded.

Broadly speaking, the purpose (and economic value) of the financial system is twofold. The primary purpose is to efficiently match up the extra money that people, businesses and governments have (what we call “savings”) with those people, businesses and governments that need extra money for good purposes (what we call “investment”). The financial system’s (specifically, the banking system’s) secondary main purpose is to regulate the supply of money.

For a long time, the government has not trusted the banking system with regulating the money supply for fear of bank panics and their subsequent deleterious effects on the economy. This sentiment was most pronounced after the Great Depression, when a slew of new banking regulations (not to mention a weakening of the importance of a gold standard) transferred significant control of the money supply from private sector banks to the Federal Reserve and to the government. Over the past three decades, the Federal Reserve’s influence on the money supply has grown enormously after the final withdrawal from a gold standard and the rise of the monetarist school of economics (essentially, the belief that the Federal Reserve can and should manipulate the money supply and/or interest rates to avoid recessions).

As we’ve mentioned a number of times, the disastrous reliance on the Federal Reserve and on banking regulation to manipulate the value of money has led to a world of far too much money, far too low interest rates, and enormous subsidies to risk. The natural upshot of this is a financial sector that has grown far, far beyond its economic value. A financial sector that, having been neutered of its second purpose of managing the money supply, does a miserable job at its first purpose, allocating savings to productive uses.

We’ve already discussed three unfortunate trends that go hand-in-hand with the growth of Wall Street: the explosion of consumer debt, the funding of the unproductive (or worse) tech industry and the creation of financial asset bubbles and their subsequent crises. In at least four additional ways, the growth of the financial sector has hurt the middle class. First, by subsidizing speculation and short-termism. Second, by encouraging growth over all other considerations. Third, by vastly increasing systemic risk and decreasing economic stability. And fourth, by providing huge incentives for employment in the financial services sector at the expense of jobs in more productive industries.

Short-term focus

To reiterate, the primary function of finance is to efficiently allocate capital to productive uses. And since most productive uses of capital take a while to become productive, inherently, financial markets should have a long-term focus. However, by massively subsidizing risk, central banking policy has led to a paradigm where the primary function of Wall Street is speculation, and the emphasis, by far, is on short-term results.

We can see this very clearly in the stock market, the primary function of which is supposed to be to allocate money, in an IPO for example, to growing companies that need capital. Instead, the stock market has become dominated by computer-driven trading of already-issued shares held for microseconds (bringing new meaning to short-term results!), and trading volume has exploded. For what good purpose? Not the benefit of “liquidity” espoused by mainstream economists, which merely encourages short-termism. No, such activity represents an enormous waste of resources.

The emphasis on short-term results does more damage to the economy than just wasting resources. The rise of activist hedge funds leads public companies to focus almost exclusively on quarterly financial results and on financial engineering at the expense of long-term investment. Similarly, the business model of private equity, through leveraged buyouts, leads to excess cost-cutting, more financial engineering, under-investment and frequent bankruptcies. Hedge fund and private equity moguls make billions while middle class workers lose their jobs (a topic we will return to when we discuss Type 2 income inequality).

Moreover, by subsidizing Wall Street and public equity markets, the Federal Reserve, along with government regulation, has fostered a compensation system that is inherently biased towards short-term incentives. CEOs paid with stock options naturally focus on short-term stock performance rather than long-term investment. So do Wall Streeters, who receive the bulk of their compensation in the form of annual bonuses. In the old days, when Wall Street firms were private partnerships (as they should be), there was an incentive for investment bankers to focus on long-term client relationships rather than one-off transactions, as well as an incentive to limit risk, since partners’ compensation was illiquid and partners faced capital calls if things went bad. But thanks to the Fed, it became irresistible for firms to go public, and a culture of enormous risk taking took shape.

“I’ll be gone, you’ll be gone,” as the saying goes. In other words, I get the bonus. Someone else takes the risk.

Growth at all costs

In addition to the focus on short-term results, the Federal Reserve’s massive subsidy to risk has also led to a massive emphasis on growth at all costs, since subsidizing risk is essentially equivalent to subsidizing growth. We’ve already seen how this works with tech startups and the venture capital industry, where revenue, profitability and cash flow mean nothing. All that counts is growth. But this is true as well in the public markets.

Companies that don’t grow see their stock prices punished by the market. CEOs of companies that don’t grow see their jobs go to someone else. If you can’t grow organically, then grow via acquisition. Why focus on hard things like long-term research and development when you can do easy things like an M&A deal? As CEO I might not be around when the R&D pays off years or decades in the future, but I’ll be around to benefit when the deal happens and I’m a higher paid CEO of a bigger company (and if not, I’ll have received my golden parachute).

These incentives promote highly questionable (read: stupid) M&A transactions that not only destroy shareholder value (as the majority of M&A transactions do), but destroy jobs due to “redundancies.” Moreover, excess cash is not returned to investors, as it should be, so that it can be invested more efficiently elsewhere. No, it is kept on the balance sheet, for to return cash is an admission that the CEO has no way to grow. And as we stated above, no CEO wants to make that admission.

Systemic risk

The third negative impact of financialization on the middle class has been to vastly increase the systemic risk of the financial system, with a resulting loss of economic stability. Such financial disruptions have enormous impact on the economy. We experienced such a crisis in 2008-2009 and its aftermath, as bankruptcies, layoffs, defaults and recession hit the middle class. We will surely see this happen again. The Federal Reserve believes that it has been making the world safer by backstopping financial markets, mitigating downturns and smoothing out economic activity. On the contrary, the Fed’s efforts have led to an enormous increase in underlying risk.

Increased systemic risk in the economy is manifested in a large number of ways. Massive financialization has fundamentally changed the basic business model of banking from one based on relationships to one based on transactions. This change (which has also happened throughout the economy) is not for the better, contrary to the view of mainstream economists. For hundreds of years, the basic business of banks was simple and unchanging. Banks knew their customers, performed due diligence and kept their loans on their books. In short, they behaved “prudently.” The fear of bank runs, not regulators, led banks to limit risk and keep adequate capital.

Today, incentives are vastly different. Regulation of banks is dramatically higher, yet so is the banking system’s underlying risk, as well as its leverage. Banks sell off their loans rather than keep them, booking immediate revenue and profit, and obviating the need to perform proper diligence. That debt then gets securitized, “sliced and diced,” often backstopped by the federal government in the process, and then sold to institutions that have no idea what they are buying. Rinse and repeat. The net effect is substantially more debt, especially consumer debt, than the economy can support, as we so painfully witnessed during the last decade’s financial crisis.

The financialization of the economy and the securitization of debt markets have increased systemic risk in other ways as well. Large companies began funding their day-to-day operations with extremely short-term (overnight) debt which they roll over daily, rather than with much longer term loans or lines of credit. Being dependent on debt markets each and every day to make payroll and pay vendors introduces enormous risk into the economy, as again we witnessed when debt markets seized up during the financial crisis.

Finally, the perceived reduction of risk due to lower interest rates and the implied backstop of central banking activity coupled with an overriding emphasis on growth of financial institutions led to the explosion of the derivatives market. In fact, the derivatives market dwarfs the underlying debt markets perhaps by an order of magnitude or more. The size of the derivatives market and its inherent counterparty risks turns a bank’s balance sheet into an irrelevant joke and increases the risk of bank (and financial system) insolvency to astronomical levels.

The lure of Wall Street

The next of the seven damaging impacts of the growth of Wall Street that we’ll discuss is the incentive for people to be employed in the financial services sector rather than put to more productive use. The best and brightest among us are encouraged by enormous compensation to become bankers and traders and hedge fund managers and venture capitalists, instead of becoming engineers and scientists and doctors and teachers. Yesterday’s Einstein changed the world as a physicist. Today’s Einsteins become hedge fund quants, seduced by money. The benefit to society? Zero. At best.

The bottom line is that when smart people are put to unproductive uses, only they benefit. But when smart people are put to productive uses everyone benefits. And since long-term GDP growth depends on productivity growth, and the long-term health of the middle class depends on GDP growth, until the best and brightest return to productive uses, the middle class is screwed.

In short, Wall Street has become a parasite. Its size vastly exceeds its value. It destroys economic productivity in the short-term by sucking resources that would be better allocated to Main Street. And it kills economic growth in the long-term by mis-allocating capital, leading to over-investment in short-term and unproductive uses of money and under-investment in long-term and productive uses.

Crony capitalism

Fed monetary policy has also led to the very unfortunate rise of crony capitalism. How, you might ask? By fostering a focus on short-term results and growth, by subsidizing public equity markets, by encouraging financial engineering and M&A activity, and by subsidizing valuations, and therefore the value of employee stock compensation, and consequently the quality of employees.

Of course, as companies get larger they have more money to lobby government (they become “special interests”) for various subsidies and tax incentives and protections (regulations, patent protection, tariffs and trade protection) for their businesses. These subsidies and protections work to keep out new entrants, and limit the competition from typically smaller competitors. Now you’ve got a positive feedback loop as these subsidies and protections from the Fed, from Wall Street, and from government make the company larger and more able to lobby for more subsidies and protections.

It is the essence of crony capitalism that companies become larger, industries roll up and become more concentrated and less competitive, and governments work hand-in-hand with these large companies. A revolving door develops between government and big business. Small businesses are at an enormous disadvantage and suffer. They face disadvantages in funding due to their higher cost of capital, if they can get funding at all. Regulations (sponsored by the larger companies) act as barriers. Licensing requirements multiply, as does paperwork. Small businesses also face a disadvantage in hiring because they cannot offer overvalued stock options.

We are experiencing this throughout the economy in virtually every industry. Local banks disappear as they cannot afford the regulatory costs that large financial institutions can. Small doctors’ offices disappear as they cannot afford the paperwork that large hospital networks can. Mom and pop retailers and restaurants disappear as they cannot afford the rent increases that large chains can. Small businesses struggle to survive, or go out of business. The lucky ones perhaps get bought out by the large companies. Moreover, many businesses cannot get started (outside of the tech industry) as they cannot get funding.

The middle class suffers too, for a number of reasons. As we’ve mentioned previously, large public companies are more likely to outsource and offshore jobs. Moreover, large public companies are less likely to invest in their employees. Innovation tends to fall, hurting productivity. Competition falls, raising prices. We wind up with monopolies, which are the true enemy of capitalism. Remember that the foundation of capitalism and rising standards of living is innovation. And most innovation comes from new businesses. Innovation is not occurring outside of the technology sector, and within the tech sector, as we’ve discussed, it is mostly unproductive.

Special interests are an unavoidable byproduct of democracy. They always have been, and always will be. For there is an inherent risk/reward asymmetry between the entity that benefits from the special interest and the majority of the population that is hurt by it. There will always be substantially more incentive for an entity to deploy resources to fight for a large payout than there is for the general population, each member of which would gain only a proportionally small payout by fighting against it. However, in recent decades, the power of special interests, the power of big business and the level of crony capitalism have become unprecedented. More than any other reason, for this we can thank the Federal Reserve, whose policies have favored large businesses and monopolies.

Mortgaging the future

As much as monetary policy has been damaging to the middle class over the past three decades, its impact on the future will be even worse. Unfortunately, the damage has already been done.

We’ve already mentioned the fact that low interest rates have subsidized consumer debt, causing the middle class to consume when they should have saved, and shifting consumption forward. The result is an indebted middle class that will have to consume less in the future. We have also discussed how much of the easy-money-fueled investment made over recent decades was unproductive and wasteful. We’ve talked about how Wall Street and the Federal Reserve allowed crony capitalism and monopolistic behavior to flourish at the expense of innovation and future productivity. We’ve discussed how short-term focused companies are under-investing in their employees, under-investing in basic research and development and pursuing financial engineering, acquisitions and share buybacks instead of long-term innovation. Add the fact that our monopolistic education system is failing to train the middle class. Put all these trends together and the inevitable result is lower economic growth in the future.

But there’s one additional impact of monetary policy that we have not yet discussed. And this, unfortunately, will have an even greater impact on the middle class in the future. What little savings the middle class does have will likely be gone.

Government and the Federal Reserve have already done the middle class an enormous disservice by encouraging people to use their houses (the primary form of middle class savings) as piggybanks during an enormous real estate bubble. Now Fed monetary policy decimates middle class savings even further through exceptionally low interest rates. These low interest rates slowly destroy the basic business model of the middle class’s other two primary forms of savings, pensions and insurance policies.

Pensions and insurance companies, needing to match assets with future liabilities, cannot do so with safe investments because of low interest rates and low returns. So they must resort to much riskier investments. Sooner or later, the gap between assets and liabilities widens too much, and/or asset valuations plummet (which eventually they must), and the middle class suffers without the financial assets they assumed (and government promised) were safe.
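A toy present-value calculation shows the squeeze. The liability size, horizon and rates below are illustrative assumptions only:

```python
# Sketch of the asset/liability squeeze: a pension owes a fixed $1M in 30 years.
# How much must it hold today to fund that promise with "safe" assets?
# Liability, horizon, and both rates are illustrative assumptions.
liability = 1_000_000
years = 30

def assets_required_today(safe_rate: float) -> float:
    """Present value of the future liability discounted at the safe rate."""
    return liability / (1 + safe_rate) ** years

# A historically "normal" safe rate vs. a repressed one:
print(f"at 6%: ${assets_required_today(0.06):,.0f}")   # roughly $174,000
print(f"at 2%: ${assets_required_today(0.02):,.0f}")   # roughly $552,000
```

Under these assumptions, cutting the safe rate from 6% to 2% roughly triples the cost of funding the same promise, which is exactly the gap that pushes funds toward riskier assets.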

Meanwhile, public pensions are in far worse shape: benefits were so over-promised to unionized public workers that it is inevitable that nearly all local and state governments will sooner or later become insolvent. Of course, before that mess happens, another mess will ensue as basic services (police, fire, education, infrastructure maintenance, etc.) are cut to try to stave off the inevitable but politically impossible need to reduce pensions. This process has already happened in a handful of municipalities around the U.S. but will become prevalent at some point. The middle class will suffer more than just reduced pensions as crime rises and infrastructure crumbles.

Recap of the causes of middle class decline

This has been a long argument, so before we leave the topic of the decline of the middle class, let’s quickly recap our story. Starting in the 1980s and 1990s, China, along with many other countries, joined the global economy. With abundant low-skilled labor, they put pressure on manufacturing wages in the U.S. and other developed countries. Due to regulations, unions and legacy retiree compensation, manufacturing companies were not able to lower their costs of labor as economics 101 would dictate. Instead, jobs were outsourced, off-shored and lost while companies went bankrupt and entire industries disappeared.

Meanwhile, in its naive belief that all inflation is monetary, and seeing no apparent inflation due to the effects of global trade, the Federal Reserve printed money, creating a 30+ year financial bubble and laying waste to the middle class. The cost of living went up instead of down. Real estate, education and healthcare became unaffordable. The only way for consumers not to suffer in the near-term was to take on debt, debt that can never be repaid. More jobs were lost as technology disruption and automation were subsidized. Wall Street grew at the expense of Main Street. Monopolistic crony capitalism came to rule the economy. And last but not least, the future prospects of the middle class were mortgaged even further as retirement savings, pensions and insurance policies were bled dry.

To comprehend the causes of the decline of the middle class, we require little more than an understanding of basic supply and demand, something covered in the first week or two of economics 101. And yet, this all happened not just in plain sight of, but was for the most part instigated and directed by the economics profession. Baffling, don’t you think?

So that’s our first story in our explanations of the dramatic rise in income inequality. Now onto story number two. What has caused the dramatic rise in incomes and wealth worldwide for the 1% and even more so for the .1% and even more so for the .01%?

Type 2 income inequality: the rise of the 1%

As damaging as a declining middle class is to society, the shocking rise of the wealthy and super wealthy is much worse. For one, it is a truly global phenomenon. But more importantly, it is the fuel of revolution. History teaches that the masses don’t revolt simply because they are struggling and experiencing a decline in their standard of living (Type 1 income inequality). They revolt when they are struggling YET AT THE SAME TIME their political and business leaders, the so-called elite, are prospering wildly. And the upper class is truly prospering, both in terms of wealth and in the political power that wealth brings. And this, more than anything else going on today, is causing a backlash against capitalism, leading to the growing popularity of populist, fascist and socialist leaders.

Just like we did for Type 1 income inequality, we must now ask ourselves why. Why has the level of income and wealth been increasing so drastically for top earners? And why has the level of income and wealth been increasing even more astronomically for the top earners among the top earners?

This insidious trend has three main causes. And in fact, they are the same three causes that explained the decline of the middle class: globalization, regulation and monetary policy fueled financialization. However, here the story is different. The explanation of middle class decline was a combination of the three trends, and all three are required. The rise of the wealthy is primarily a result of monetary policy and financialization. This is the great cause. It was then exacerbated by regulation and exacerbated even more by globalization.

Asset valuations

Let’s start our story with the most obvious result of easy monetary policy, higher asset valuations. As we learn in Finance 101, lowering base interest rates (other things equal) reduces the cost of capital of all financial assets, and raises the valuation of all financial assets. What do we mean by the term “financial assets?” Stocks, bonds, private companies, real estate and anything else that has scarcity, has at least a minimal amount of liquidity and can be used as a store of value, including art, fine wine and any other collectibles.
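The Finance 101 mechanics can be shown with the simplest valuation formula, a perpetuity; the cash flow and rates below are illustrative assumptions:

```python
# A perpetuity: an asset paying a fixed cash flow forever is worth CF / r.
# The cash flow and discount rates are illustrative assumptions.
def perpetuity_value(cash_flow: float, discount_rate: float) -> float:
    """Value of a perpetual annual cash flow at the given discount rate."""
    return cash_flow / discount_rate

annual_cash_flow = 100.0   # assumed, in dollars per year

print(perpetuity_value(annual_cash_flow, 0.08))   # 8% world: 1250.0
print(perpetuity_value(annual_cash_flow, 0.02))   # 2% world: 5000.0
```

Nothing about the asset changed; lowering the discount rate from 8% to 2% quadrupled its price, which is how easy money inflates every asset class at once.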

Equally obvious, raising the value of such assets benefits those who own those assets by increasing their wealth. And most obviously, it is the wealthy who own most of the world’s private wealth. Clearly, increasing the value of financial assets makes the wealthy even wealthier, and this is our first example of how central banking activity has led to Type 2 income inequality.

Wage/price spiral

The second most obvious impact of loose monetary policy on Type 2 income inequality comes out of the Economics 101 textbook, yet goes unrecognized by those teaching economics. This is the wage/price spiral whereby wages (and the number of jobs) increase so more workers can afford to pay more money for goods and services. At the same time, providers of those goods and services experience rising demand and thus raise prices. Workers seeing higher prices demand even higher wages, which leads to even higher prices, which we call “inflation.”

The middle class has not experienced such a wage/price spiral, and the price index that more or less tracks the middle class cost of living, the consumer price index (CPI), has not risen past the Federal Reserve’s target of 2% per year. However, the wealthy as a group have experienced such a wage/price spiral and such inflation, something that does not show up in the CPI and seems to be completely outside the field of view of economists at the central bank.

For example, as Wall Street compensation (both per person and the number of jobs) has increased dramatically over recent decades (which we will discuss shortly), New York real estate prices have also dramatically climbed. As have the prices for other goods and services sold in New York, luxury restaurants, for example. We see the same trends, for instance, in San Francisco where the total compensation of technology workers has increased equally if not even more dramatically, as has real estate, restaurant and other prices. We see the same trend occurring throughout the world in major metropolitan cities dominated by finance or technology or some other industry that has experienced dramatic (and inflationary) growth such as oil.

That this wage/price spiral is occurring only for the already high income segment of society clearly exacerbates income and wealth inequality. This is also why luxury good inflation is much higher than regular good (i.e. middle class) inflation. Whereas the CPI has averaged about a 2% per annum increase, I would surmise that inflation of luxury goods and inflation in major metropolitan areas has been closer to 6 or even 8 percent per year. And the higher up the luxury good food chain we go, the higher the inflation rate. In other words, the inflation rate for millionaires is high. The inflation rate for billionaires is even higher. The price of a BMW? Up. A collectable Ferrari? Up more. The price of a two bedroom apartment in New York City? Up. An Upper East Side townhouse? Up more. The price of a bottle of non-vintage champagne? Up. 1982 Chateau Petrus? Up more. You get the idea.
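To see how much those differing inflation rates compound to over time, here is a quick sketch (the 7% luxury figure is simply an assumed midpoint of the 6-8% surmise above):

```python
# Compounding gap between ~2% CPI inflation and the surmised luxury-good rate.
# The 7% luxury rate is an assumed midpoint of the 6-8% guess in the text.
years = 30
cpi_rate = 0.02        # roughly the Fed's target
luxury_rate = 0.07     # assumed midpoint of the 6-8% surmise

cpi_multiple = (1 + cpi_rate) ** years
luxury_multiple = (1 + luxury_rate) ** years

print(f"CPI basket: {cpi_multiple:.1f}x over {years} years")        # ~1.8x
print(f"Luxury basket: {luxury_multiple:.1f}x over {years} years")  # ~7.6x
```

Over three decades, the assumed gap compounds into a luxury basket more than four times as expensive relative to the middle class basket, which is why the divergence matters so much for measured inequality.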

Growth of Wall Street

So far we’ve discussed how monetary policy has fueled the rise in the prices of assets owned by the wealthy and the inflationary wage/price spiral of the upper class. Next, I want to discuss how monetary policy has led to the rise of specific segments of the 1%. Let’s start with what is probably the largest new segment of the 1% to emerge over the past three decades of easy monetary policy: Wall Street and the financial sector. Of course, bankers have always been among the highest paid workers in the economy. But over the course of the last 30 years, the number of highly paid finance workers expanded dramatically, as did the level of compensation given to those workers.

Decades of Fed monetary policy, with its below-market interest rates and repeated financial bailouts, acted as a direct subsidy to banks and other financial institutions, encouraging them to grow and to take on more risk. This was further bolstered by a regulatory environment (spearheaded by the Federal Reserve as well as other regulatory agencies) that has favored large financial institutions over small ones because of the erroneous yet widespread view that big banks are less risky.

With ballooning balance sheets thanks to easy money, Wall Street grew, and grew, and grew. Increasing revenue brought increasing profits and increasing profits brought increasing bonuses. Moreover, every time the industry took on too much risk and got in trouble, the Federal Reserve bailed it out. The lesson learned? Take on even more risk. Use more leverage. 

Financial institutions grew virtually all aspects of their business and invented new businesses as well. Low interest rates, high asset valuations, an expansive risk appetite and tunnel vision for growth resulted in dramatically increased activity in mergers and acquisitions, securities trading, structured products, asset management and other revenue-producing areas of large banks. The prime beneficiaries? The likes of investment bankers, traders, quants, institutional salespeople and private bankers.

Finally, it is worth noting that the massive growth of financial institutions and compensation would never have occurred had those financial institutions remained privately held partnerships, as they had been for centuries. Having one’s own money on the line naturally limits the amount of risk one is willing to take. However, the enormous subsidy to financial markets provided by the Federal Reserve and the regulatory environment favoring large institutions made it irresistible for banks to go public. So not only did banking partners become massively rich and liquid overnight, directly feeding income inequality, but the prudence developed over hundreds of years went out the window.

Hedge funds

While the growth of Wall Street, more than any other industry, resulted in the drastic expansion of the income and wealth of the 1%, a certain subset of Wall Street is probably most responsible for the increase of the wealth of the 0.1% and 0.01% – hedge funds.

Hedge funds are essentially unregulated pools of money that can be invested with few, if any, restrictions. Most notably, they differ from mutual funds in that they use leverage and can short stocks. Because what they can invest in is essentially unregulated, and because these investments can be risky, the government limits who can put money into hedge funds. And as we’ll see, just who the investors in hedge funds are is the key to understanding how hedge fund managers became so wealthy.

Hedge funds have been around since the late 1940s but until the past few decades represented a tiny subset of asset management. Originally, hedge funds were designed to be investment vehicles only for wealthy individuals. However, beginning in the late 1980s and especially the 1990s, hedge funds began attracting money from a vastly different type of investor: not the wealthy individual but the institutional investor, i.e. pension funds, insurance companies and endowments.

Historically, these kinds of institutions invested in relatively safe investments, most notably bonds, in order to ensure they had the future income to meet anticipated liabilities (i.e. pension payouts and insurance claims). However, because of easy monetary policy and the low interest rates that go along with easy monetary policy, the return on relatively safe bonds became insufficient to meet future liabilities. In order to make the returns necessary to remain solvent, these institutions had no choice but to take on more risk. They began to invest in riskier (i.e. high yield) bonds, stocks and so-called alternative investments such as hedge funds.

Given the enormous amount of money controlled by institutions such as insurance companies and pensions, along with low interest rates and the decades long Fed-fueled bull market, it is no wonder that the hedge fund industry grew wildly. But we have not yet explained why the hedge fund industry minted so many billionaires and contributed to the rise of the super-wealthy. To understand this, we must look to the hedge fund fee structure.

Over the past few decades, other types of investment vehicles have also seen dramatic increases in assets under management. For example, there are many mutual funds that have grown to manage billions of investors’ dollars. And while mutual fund managers are certainly well paid, and are card-carrying members of the 1%, they are not billionaires. That is because mutual fund fees are a very small percentage of total assets under management, typically (and often significantly) less than 1%. Hedge funds, on the other hand, have a very different fee structure.

As the hedge fund industry grew, the typical fee structure for hedge funds became more or less standardized at what is known as 2 and 20. That is to say, 2% annually of assets under management and 20% of investment profits. A handful of high profile funds even charged substantially higher fees. To illustrate the difference in fees, let’s consider a mutual fund and a hedge fund that both had $10 billion of assets under management and produced an investment return of 20%. If the mutual fund charged a reasonable 1% fee, it would be entitled to $100 million annually (1% of $10 billion). On the other hand, the hedge fund charging 2 and 20 would earn $600 million (2% of $10 billion plus 20% of the $2 billion of profit). Whereas mutual funds tend to be part of large organizations with substantial overhead, hedge funds tend to be small, with only a handful of partners sharing the profits. It is easy to see how the fee structure of hedge funds has made some hedge fund managers very, very rich. 
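The fee arithmetic in this example is easy to verify. Here is a short sketch using the paragraph’s own illustrative numbers ($10 billion under management, a 20% return); the function names are mine, and real funds add wrinkles such as hurdle rates and high-water marks that are ignored here:

```python
def mutual_fund_fee(aum, mgmt_rate=0.01):
    """Flat management fee on assets under management."""
    return aum * mgmt_rate

def hedge_fund_fee(aum, ret, mgmt_rate=0.02, perf_rate=0.20):
    """The standard '2 and 20': a management fee plus a cut of gains."""
    profit = max(aum * ret, 0.0)  # performance fee applies only to gains
    return aum * mgmt_rate + perf_rate * profit

aum, ret = 10e9, 0.20  # $10 billion under management, 20% return

print(mutual_fund_fee(aum))      # 100000000.0  ($100 million)
print(hedge_fund_fee(aum, ret))  # 600000000.0  ($600 million)
```

Note that in a losing year the hedge fund still collects its 2% management fee ($200 million on the same fund), which is itself double the entire mutual fund fee.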

Of course, if hedge fund fees are so high, and if hedge funds are minting billionaires, one would expect their investment performance to be stellar. Alas, this is not the case. While a handful of hedge funds have indeed performed well over time (whether due to luck or skill or both), most have not. In fact, the vast majority of hedge funds have under-performed a broad stock index such as the S&P 500, especially on a risk-adjusted basis.

The obvious question remains. If hedge fund performance has been weak, and if hedge funds as an asset class add little economic value, how have hedge fund managers been able to get away with charging their exorbitant fees? To answer this we must return to the question of who the primary investors in hedge funds are and what their incentives are.

Recall that the original investors in hedge funds were wealthy individuals. Note also that the investors in mutual funds are individuals, wealthy and middle class. In either instance, individuals have strong incentives to monitor performance and limit fees if performance does not justify those high fees. In other words, people tend to watch their own money.

The managers of institutional money have very different incentives. Remember that the institutions with large pools of money are pensions, insurance companies and endowments. Most pension funds are state run (the California Public Employees’ Retirement System, “CalPERS,” being the largest in the U.S.), most endowments are either state run or not-for-profit (universities, for example) and most insurance companies are highly regulated public companies (though they used to be owned by their policyholders – another casualty of central bank induced financialization).

There are few incentives for institutional money managers to maximize performance, minimize fees or limit risk. Instead, the incentive is first and foremost to keep one’s job, and the way to keep one’s job is to do what every other institution is doing: invest in the same asset classes and the same funds, use the same consultants and benchmarks. Moreover, as public, not-for-profit or highly regulated employees, these managers’ compensation is typically limited and not tied to investment performance. Instead, fund managers are “paid” by hedge funds in “soft dollars,” or perks such as dinners, shows and golf outings. Why complain about fees if you’re being entertained with fancy dinners, shows and more? The larger the hedge fund, the more dollars and clout it has to entertain institutional fund managers, so the rich get richer.

Long story short, most hedge funds have poor performance, yet hedge fund founders and partners make millions each year, if not billions, because they “take care” of their clients, the institutional managers. Not only does this feed income and wealth inequality, but hedge funds are effectively and legally skimming money from middle class pensions and insurance policies. From the standpoint of their value to the economy and to society, it is hard to imagine a less deserving member of the wealthy than the hedge fund billionaire.

Before we leave the discussion of hedge funds, it is worth noting that other alternative asset classes, for example private equity and venture capital, exhibit the same trends as hedge funds. They too have institutional investors. They too charge high fees (also often 2 and 20). They too suffer poor performance, especially risk-adjusted performance as an asset class. They too have minted many a billionaire because of the misaligned incentives of their investor clients.

Tech moguls

If the finance sector has been the largest beneficiary of decades of easy monetary policy, the technology sector is surely second, and not too far behind. As we discussed when explaining the decline of the middle class, monetary policy, by subsidizing growth and risk, has had enormous impact on the tech sector and on the valuations of technology companies. And just as the financial sector has vastly expanded the wealth of the 1% and created a significant number of billionaires, so too has the technology sector.

In nearly every sector of the economy, easy money and the Federal Reserve’s subsidy to risk and to growth coupled with the crony capitalist regulatory advantage for large companies has led to industry consolidation. In no industry, however, has this been as blatant as in the technology sector. We are conditioned to think of technology as the most competitive of sectors given the low barriers to entry, the endless parade of startups and the very notion that technology is “disrupting” established industries. And in some superficial ways this is true.

However, when we dig a bit deeper we realize that no industry is as concentrated and no supposedly unregulated industry has as many companies with as much monopolistic power as technology. Intel for computer chips. Microsoft for software. Google for search. Apple for phones. Amazon for internet retail and cloud computing. Facebook for social media. And newer near-monopolies such as Uber and Airbnb. 

How did this happen to what was, and is supposed to be, a highly competitive and unregulated industry?

An economist would probably say this concentration is a result of economies of scale, either because the cost of business is reduced as revenue grows, or because of so-called network effects that result in significant concentration. No doubt that certain sectors of the tech industry do exhibit economies of scale. For example, the high gross margin nature of the software business (software that is expensive to create and then virtually free to replicate) exhibits increasing marginal returns. And businesses such as eBay do exhibit network effects whereby more sellers attract more buyers and more buyers attract more sellers.

However, there is no way that traditional economies of scale can justify the level of concentration in technology. Something else is going on. And that something else is money. Too much money. Too much money has created a winner-take-all game in technology. To some extent this has happened in every industry as nearly every industry has experienced consolidation over the past few decades. However, this has happened far more in technology because of the industry’s generally low startup/growth costs and because the Federal Reserve’s easy money subsidy is greater for high growth/high risk companies, leading to extremely high valuations.

We can see this trend clearly when we examine the business model of startups and venture capital. For startups, the goal of course is to raise money. In the old days, raising money required significant experience in the same industry or same type of business. You needed to convince lenders, bankers and investors that you had the wherewithal to create and run a profitable business. Today relevant experience means little, and the ability to build a profitable business means even less. What matters is your access to money. Go to the right schools (Stanford), work (briefly) at the right companies, be from the right (wealthy) family. It’s all about pedigree, not experience. A breeding ground for inequality and certainly not what we think of as the American way.

If the first goal of startups is to raise money, the second is to grow. That, of course, requires more fundraising and more money. And raising money requires a high valuation. And it also requires more employees. And to attract those high quality employees we need to give stock options. And to make those stock options attractive we need a high valuation. And to keep our high valuation we need to grow. And so on. We’ve got a cycle with which only a very small number of companies can keep up. Most cannot. And in the end, the winner takes all. Not because of a better business or higher profits or economies of scale. But because of money.

The third and final goal is to exit, through an IPO or, more likely, sale to one of the larger tech monopolies, fostering even more industry concentration. Exit with a high valuation. With a billion dollar valuation, as a so-called “unicorn.” The winner-take-all, lottery-like startup model jibes with the winner-take-all, lottery-like venture capital model. Throw gobs of money at 15 companies with the full expectation that 14 will fail and with the hope and prayer that 1 will succeed. As long as that single success has an exit or liquidity event at a large enough valuation, the venture model works. And with one huge winner and many losers we’ve just described one of the textbook definitions of income inequality.
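The arithmetic behind that 1-in-15 portfolio shows just how big the single winner must be. A sketch with hypothetical round numbers (the $10 million check size and the 3x fund target are my own illustrative assumptions, not data on any actual fund):

```python
# Hypothetical venture portfolio following the 1-in-15 sketch above.
n_companies = 15
check_size = 10e6                     # assumed $10M invested per company
fund_size = n_companies * check_size  # $150M deployed in total

# If 14 investments go to zero, the lone winner carries the fund.
breakeven_multiple = fund_size / check_size   # 15.0x just to return capital
target_multiple = 3 * fund_size / check_size  # 45.0x for a 3x gross fund

print(breakeven_multiple, target_multiple)  # 15.0 45.0
```

A $10 million check therefore has to grow into a $450 million stake just for the fund to triple, which is why the model only works with exits at enormous valuations, and why one outsized winner sits atop fourteen zeros.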

In the end, we wind up with the Googles and Apples of the world, the Bezoses and the Zuckerbergs. The easy money that fosters the “winner-take-all” mentality, that encourages industry concentration, that subsidizes company valuations and stock prices, ultimately creates the tech millionaires and billionaires that are a large part of the income and wealth inequality story. And just as financialization and the growth of Wall Street creates a wage/price inflationary spiral and exacerbates income inequality in financial centers such as New York, so too does the growth of tech valuations and tech monopolies create the same inflation and inequality in places like San Francisco and Silicon Valley and Seattle.

Finally, it is important to point out that while the Federal Reserve is the primary factor responsible for technology sector concentration and the income and wealth inequality stemming from the tech sector (not to mention the asset bubbles in the technology sector), it is not the sole cause. Federal government policy and regulation play a role in the consolidation of the tech industry too, by fostering monopolies and crony capitalism, and by treating the internet and related technology as a “public good” which it should not be (net neutrality legislation, for example), thereby directly subsidizing technology companies and picking winners and losers.

For example, in the U.S. we’ve got a ridiculous patent system that favors large companies who can afford expensive lawyers and lawsuits at the expense of smaller businesses who cannot. Similarly, companies with very deep pockets can fight, lobby against or even ignore and violate local laws and regulation in order to grow their businesses, as companies such as Uber and Airbnb have done. And once such companies get to a certain scale and size, they can then spend marketing money to affect public opinion and lobby to change the laws to their benefit. Smaller competitors cannot afford to take the same risks, play by the same rules, hire the same lobbyists or spend on marketing to change public opinion.

Public company CEOs

So far in explaining the rise of the wealthy, we’ve talked about Wall Street, we’ve talked about hedge funds and we’ve talked about technologists. The next group to discuss is the public company CEO. In the 1970s, public company CEOs made less than 30 times the average worker’s salary. Today, CEOs of public companies earn upwards of 300 times the salary of the average worker. Yet corporate profitability is not any higher, nor is there any significant correlation between long-term company performance and CEO compensation.

So why have CEO salaries skyrocketed? The answers to this question will be familiar from our previous discussions. Combine perverse monetary policy with some government regulation and you get some messed up economic incentives and enormous income inequality.

To understand why CEO compensation has exploded we must first understand how CEOs are compensated. Historically, the bulk, if not all of CEO compensation was paid in cash. However, beginning in the 1980s more and more of a CEO’s compensation has come in the form of stock and stock options. And it is now the stock component that represents the majority of the average CEO’s pay package. And it is the stock component that is mostly responsible for the dramatic increase in total CEO compensation.

The idea behind compensating executives in stock rather than cash is an obvious one: to align management’s interests with those of shareholders. When stockholders do well, the CEO does well. In practice, however, this compensation structure has done the opposite. It has served to enrich CEOs at the expense of shareholders. Given the vast riches a CEO can earn if the company’s stock price rises, and through golden parachutes upon a liquidity event, CEOs have an enormous incentive to take risk and to grow at all costs, especially through mergers and acquisitions. CEOs are also incentivized to focus on short-term results by managing earnings, and to appease activist investors by cutting costs, buying back stock and engaging in other forms of financial engineering. The short-term nature of equity markets also discourages CEOs from making long-term investments in capital, in basic research and development, and in the training and development of employees.

The next question to ask is how CEO compensation is the fault of monetary policy. The answer, mainly, is that low interest rates and subsidized risk led to a 30+ year stock market bubble which directly inflated the value of those stock options, and hence CEO compensation. Moreover, ever higher stock prices naturally led CEOs to demand that more and more of their compensation be paid in stock and stock options rather than in cash, creating what is essentially a momentum-based feedback loop. Owning more stock led CEOs to take on more risk, do more M&A and focus more on growth in order to keep stock prices high. High stock prices led to more CEO demand for stock and stock option compensation. And so forth.

High stock valuations also had the ability to mask poor performance since, as they say “a rising tide lifted all boats.” In other words, many CEOs benefited enormously from an overall bull market and from multiple expansion rather than from higher profits or better company performance. In addition, the Fed’s subsidy to equity markets, to growth and to M&A activity led to larger companies, and hence larger CEO pay packages since it is easier to justify higher compensation for running larger companies. Lastly, just as we discussed with hedge funds, low interest rates forced pensions and insurance into riskier investments like stocks, which further increased stock prices.

In addition to monetary policy, regulation and tax policy also play a significant role in explaining the dramatic rise of CEO compensation. SEC regulation stipulates that CEO (and other executives’) compensation packages be publicly disclosed. This leads CEOs and the compensation consultants they hire to benchmark compensation toward the high range of competitors, an ever escalating game of leapfrog. Moreover, until recently accounting rules allowed companies to ignore the expense of stock options (but not cash salaries) in calculating profits. The advantageous tax treatment of capital gains versus ordinary income also served to encourage compensation in stock, as did regulations that limited a company’s ability to deduct non-performance-based compensation from taxable income. Lastly, as we’ve noted many times, crony capitalist regulation favors larger companies over small ones and encourages M&A activity, and hence stock compensation.
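This leapfrog dynamic is easy to simulate. In the toy model below (all numbers arbitrary), each board resets its CEO’s pay every year to the 75th percentile of the prior year’s disclosed peer pay, plus some board-specific variation. Pay ratchets steadily upward even though company performance never enters the loop:

```python
import random
random.seed(0)  # reproducible illustration

pays = [1.0] * 50  # 50 CEOs, identical starting pay (arbitrary units)

def p75(xs):
    """75th percentile (nearest-rank) of a list of pay figures."""
    return sorted(xs)[int(len(xs) * 0.75)]

for year in range(20):
    benchmark = p75(pays)  # every board targets the high range of peers
    # each board pays the benchmark, give or take idiosyncratic variation
    pays = [benchmark * random.uniform(0.95, 1.15) for _ in pays]

print(f"median pay after 20 rounds: {sorted(pays)[len(pays) // 2]:.1f}x")
```

Because every board targets the 75th percentile rather than the median, the whole distribution drifts upward by roughly 10% a year in this sketch, with no performance input at all.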

Before leaving the topic of CEO compensation, we would be remiss not to ask ourselves the following question. Why haven’t investors put a stop to exorbitant CEO compensation? For the same reason investors haven’t put an end to exorbitant hedge fund fees. That is to say, there is little incentive for owners of stock to do so. Just as with hedge funds, most owners of stock are either institutions (insurance companies, pension funds and endowments) or mutual funds. Neither institutional fund managers nor mutual fund managers are managing their own money, and therefore they have few incentives to go against the grain and fight the battle to bring executive pay under control.

The global 1%

We have talked about how monetary policy, and to a smaller extent regulation, have fueled the rise of Type 2 income inequality. We have not yet mentioned the third contributor to this unfortunate trend: globalization. Unlike with the decline of the middle class, where globalization set off our story, here globalization is more the icing on Marie Antoinette’s cake. And whereas in the story of the middle class globalization took the form of tradeable (and mostly manufactured) goods and services, here we’re talking about the international flow of money.

In this article, when we’ve discussed central banks, we’ve primarily focused on the U.S. Federal Reserve. However, over the past few decades, all of the major central banks of the world have used the same Keynesian and monetarist models and executed the same easy-money playbook. So, for the past few decades it hasn’t been just the Fed printing trillions of dollars and subsidizing U.S. banks and U.S. financial markets. It has been all of the major central banks of the world, including the European, Japanese, Chinese, British and Swiss central banks, doing the same thing, propping up banks and financial markets all over the world.

And over the past few decades, with only a few exceptions (China, for instance), capital has been more or less free to flow across borders to seek the highest return. Take, for example, the “carry trade” which Japan made famous. Since the Japanese were the first to try zero interest rate policies, in the 1990s, in response to the crash of their real estate and stock market bubbles and the ensuing depression, it became profitable for investors to borrow very cheaply in Japanese yen and then invest that money in higher yielding projects, perhaps in emerging markets or in U.S. real estate.
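The mechanics of the carry trade are simple to sketch. The rates below are made-up round numbers, and the sketch ignores currency risk, which is the trade’s real danger (if the yen appreciates, the cheap loan becomes expensive to repay):

```python
# Stylized yen carry trade with illustrative, non-historical rates.
position = 100e6         # hypothetical $100M position funded in yen
jpy_borrow_rate = 0.005  # near-zero Japanese funding cost
usd_yield = 0.05         # higher-yielding U.S. investment

interest_earned = position * usd_yield     # $5,000,000 per year
funding_cost = position * jpy_borrow_rate  # $500,000 per year
annual_carry = interest_earned - funding_cost

print(annual_carry)  # 4500000.0 per year on the rate spread alone
```

Multiply that spread by leverage and by the trillions of dollars of money sloshing around the globe, and it becomes clear why cheap money printed in one country so readily ends up invested in another.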

Recall from our discussion of the decline of the middle class that the Fed believes (erroneously) that it can control inflation through monetary policy. Recall also that one of the key implicit and completely unrealistic assumptions the Fed makes is that the money printed by the Fed (or by banks spurred on by the Fed’s low interest rates) stays in the U.S. That is, that the new money remains within the U.S. economy and helps create U.S. jobs.

However, as we said when discussing the carry trade just a moment ago, in a global economy, this is not at all the case. In the international system we have, money can easily flow to wherever it might have the highest returns. So money printed in the U.S. by the Fed or by U.S. banks might not spur U.S. employment but may very well spur investment outside the U.S. Similarly, money printed by the Japanese central bank, given lack of good investment opportunities inside Japan, can easily flow into the U.S. spurring economic activity, employment and ultimately inflation in the U.S.

In either case, in a world of massive central bank money printing AND free capital flows (whether U.S. money flowing overseas or overseas money flowing into the U.S.), the link between monetary policy and domestic economic activity, employment and inflation is severed. Regrettably, the Federal Reserve and the other central banks of the world seem not to understand this key point.

Now let’s bring the discussion back to income inequality and the rise of the wealthy. In essence, everything we’ve said about how monetary policy feeds the rise of the wealthy in the U.S. is amplified even further when we add global central banking and the rise of the international financial sector.

We already mentioned how easy monetary policy from the Federal Reserve led to increased asset prices, which benefited the wealthy, who own far more financial assets than the middle or lower classes. We’ve also discussed how loose monetary policy was a direct subsidy to Wall Street and the financial sector. Very simply, since all the world’s central banks have been subsidizing banks and financial markets, since nearly all large banks have global operations, and since money flows easily across borders, these effects on asset prices and the world’s financial sectors were dramatically magnified.

We also discussed how enormous amounts of money have led to high inflation for “wealthy” goods. Naturally, free flowing global capital further accelerates this trend. It’s not just domestic hedge funders or tech moguls that compete to own New York real estate or fine wines or modern art or sports teams. So do Russian oligarchs. And Chinese billionaires. And Arab oil princes. Moreover, this inflationary trend is significantly exacerbated by the need for wealthy people from less free, less democratic countries with authoritarian governments to get money out of their own countries, even if it means paying inflated prices for assets.

Recall that when we discussed CEO compensation, we noted how CEOs benchmark their pay packages to the highest paid comparable CEOs. Globalization has led to even larger companies through international M&A activity and international consolidation. This has raised executive compensation in at least two ways. First, rather than benchmarking pay only to domestic companies, CEOs can now compare their pay packages to those of international and multinational competitors. Second, since executive pay is correlated with company size, today’s larger multinational companies have led to correspondingly larger executive pay. It is worth noting that the rise of CEO compensation used to be primarily a U.S. phenomenon. However, in recent years, thanks to global monetary policy, the rest of the world has adopted such “best practices” in compensation. CEOs of the world unite!

Another contributor to global Type 2 income inequality is what we’ve referred to as crony capitalism, at least in developed countries like the U.S. More accurately and less politely, we can call this “corruption,” especially in less developed countries. Whereas the subsidies to large and public companies in the U.S. have led to vastly inflated CEO salaries, in developing countries corruption has caused the rise of oligarchs who were allowed and/or encouraged to essentially steal the assets of their country. This trend was further exacerbated by a decades long commodities boom, fueled by easy money, that contributed even more to the rise of billionaires in emerging markets (since emerging market economies tend to be heavily commodity focused), not to mention the dramatic wealth of oil producing countries and their ruling families.

Finally, it is worth mentioning that vast wealth buys not just real estate, wine, art and sports teams, but also politicians and political power. And with political power comes a vicious cycle that enhances income inequality. The wealthy benefit from crony capitalism and corruption and gain even more political power. They can then put in place laws and regulations that beget more wealth and more political power. All to the detriment of the lower and middle classes. This trend is most obvious in less developed and less democratic countries but is also quite powerful in the U.S., where the wealthy have gained enormous political power to influence elections through lobbying and through organizations such as Super PACs.

Of course, it is not new that money corrupts democracy and that people can buy elections. This has always been the case. But what is new is how much money is out there and how easy and politically acceptable this type of corruption has become. Just as the rise of the wealthy is a truly global phenomenon, so too is their political influence.

The free market and the 1%

As I stated at the top of this section, the dramatic rise of the wealthy and the wealthy’s power is a primary reason why capitalism is under siege. CEOs get paid a hundred or even a thousand times the salary of an average worker for making value-destroying acquisitions and under-investing in the future. 25 year old college dropouts become billionaires overnight for creating products with zero revenue and zero business model. Hedge fund managers take home nine or ten digit compensation each and every year for speculating with the public’s money and under-performing an index.

Popular disgust is justified. But placing the blame on capitalism is not. What we are witnessing is not capitalism running wild. What we are witnessing has nothing to do with a free market. No, what we are witnessing is a world afloat in too much money thanks to the money printing and risk subsidizing policies of central banks. What we are witnessing are the perverse incentives and crony capitalism that naturally spring from an enormous government sector that allows gains to be privatized but socializes losses.

Finally, to those who believe that the rise of the super-wealthy is an inevitable outcome of capitalism, I point to 2008 when the fundamental forces of the free market were desperately trying to reassert themselves. After decades of easy money and government support, Wall Street was imploding under its own leverage. Global banks were failing. Wall Street jobs were disappearing. Valuations were plummeting. Money was being extinguished. In short, the exorbitant wealth of the wealthy was diminishing, and diminishing drastically.

Scary? Yes. Painful? Yes. But allowed to continue, the world would have eventually returned to one of much greater equality, higher productivity and stronger economic growth. This was the economic “reset” the world desperately needed. Of course, as you know, this natural correction was not allowed to continue.

Like every other time over the past three decades when financial markets hiccuped from too much money and too much risk, the market was not allowed to correct itself. Understandably but short-sightedly, fearing both economic depression and the rapid loss of power and wealth, the central banks of the world led by the Federal Reserve, and the central governments of the world led by the United States, printed massive amounts of money, lowered interest rates drastically, and bailed out the financial system. Once again, they doubled down (in an exponential sort of way) on the causes of the financial crisis, leading us to the next and even larger financial crisis, and further exacerbating income inequality.

What can we do to reduce income inequality?

Income inequality is a problem. It will lead to political unrest. It will lead to more politicians of populist persuasion. It will lead to isolationism. It will lead to protectionism and trade wars. It will lead to cold wars and hot wars. Sooner or later, as it gets worse and worse, it will lead to revolution, even in democratic countries like the U.S. What can we do?

The largest contributor to Type 2 income inequality, and a very significant contributor to Type 1 income inequality, is the Federal Reserve and its monetary policy. The first thing we must do is normalize interest rates, or better yet, get central banks out of the business of managing the economy and out of the business of bailing out the financial sector. We must make it clear that risk will no longer be subsidized and financial firms will no longer be bailed out. Of course, this message will not sink in right away. The financial sector will have to learn the hard way through failure and bankruptcy. It will be painful for them, and for all of us. But it must happen. And in fact, as we just noted, it almost happened in 2008, but we didn't let it.

If we stop subsidizing risk and let interest rates be set by market forces, asset prices will fall immediately and substantially. This too will be painful, but it will also have an immediate and substantial impact on income and wealth inequality. As money dries up, Wall Street will shrink significantly, along with bonuses and financial services jobs. So too will the technology sector contract as tech valuations plummet. 1% cities like New York and San Francisco, with their outrageous real estate prices will suffer, but will once again become affordable for doctors and lawyers and teachers and police officers. The era of the hedge fund billionaire and the tech mogul will be at an end.

Jobs that serve no social purpose will disappear, like most (but not all) investment bankers, consultants, hedge fund analysts and private equity professionals. Speculation will never disappear for it is perhaps the world’s second oldest profession and yes, necessary. But it will go back to being the small and unrespected backwater that it has been for most of recorded history. On the flip side, banking will once again be respected and be boring, and that will be good.

Technology disruption and automation will be slowed, though never stopped. Companies that don’t make money will disappear, as they should. Companies that actually make money will thrive and will relearn how to invest in their own employees. Smart people will once again become doctors and scientists and engineers and teachers. Productivity, and the entire middle class will benefit.

Meanwhile, while the financial system goes through a reset, we must also reduce the massive regulations that hinder hiring, that put a floor on compensation, and that support and subsidize big business, big labor and crony capitalism. No, we do not want to return to the days of child labor and toxic rivers. But a consenting adult who is willing to work must be allowed to, at any wage, and in any industry. It will take time, but manufacturing will return. Quicker too if we allow immigration, as we should.

Ending the experiment with socialized monetary policy and reducing regulation is most of the battle for reducing income inequality, helping the middle class and returning our economy to healthy growth. Of course there are other things that must be fixed too. Our monopolistic education system is appalling. It is wasteful, inefficient and poor in results. Ditto for our healthcare system. Our infrastructure is falling apart. Our governments at all levels are bankrupt with pensions and retirement costs that they will never be able to afford.

Simply put, we must let the free market work. Let companies succeed that deserve to succeed. Let companies fail that deserve to fail. Let people work who want to work. We need to relearn the enlightened lessons we learned a long time ago. It is the only way to preserve our way of life. This will take time, maybe a generation, and there will be pain in the process. For more than 30 years, we’ve become addicted to cheap and abundant money and financial bailouts courtesy of central banks. And since the New Deal in the 1930s, we have also grown dependent on increased government regulation. Only a return to market forces will allow the middle class to be revived.

What must we not do?

We must not give in to the populists, socialists, fascists and isolationists of the far left and the far right. We must not abandon capitalism. We must not give up on global trade, if for no other reason than abandoning trade will lead to war (though there are many other good reasons in favor of trade). We must not turn our backs on immigration for it is the only way to achieve significant economic growth, to afford our (hopefully shrinking) welfare state and to bring back manufacturing. Plus it is moral. We should not try to fix income inequality through re-distributive taxes, or through more regulation or more unionism. Each of these will make things worse, not better.

Appendix: The 7 incorrect arguments for income inequality

Before we leave this very long post (and vitally important topic), I want to briefly (I promise) list the various explanations that mainstream economists and politicians have provided for increasing income inequality, and discuss why they are wrong.

1. Globalization

Yes, globalization is part of the story behind increasing income inequality. Yes, globalization has winners and losers (though the winners vastly outnumber the losers). Yes, the world (and the U.S. in particular) likely experienced more global trade than it should have. However, it was not globalization itself that hurt the middle class. It was the failure to allow the free market to adjust to the effects of globalization that hurt the middle class.

Manufacturing wages should have fallen, but they were not allowed to due to government regulations and unions. Prices should have fallen but they were not permitted to by the Federal Reserve. Tax policies, cheap money and regulation all favored and subsidized big business which had the means and incentives to outsource and offshore. In short, government and the central bank ensured that the losers of globalization were not a few steel or auto or garment workers, but the entire middle class.

2. Skills and automation

Yes, high-skilled workers have fared much better than low-skilled workers. Most of the U.S. jobs lost to globalization over the past few decades were low-skilled jobs, whereas high-skilled jobs were mostly unscathed. For a while, mainstream economists tried to explain increasing income inequality by maintaining that the modern “post-industrial” or “service” economy favored high-skilled workers, and that low-skilled workers would have to adjust (more education and training) or be left behind. However, today even very high-skilled workers seem to be at risk of having their jobs automated away.

The implicit assumption that economists are making when blaming income inequality on skill discrepancies is that today’s post-industrial or service economy is somehow natural or evolutionary, the next stage of capitalism. I believe this is nonsense. The makeup of today’s economy (in the U.S., for example) is simply the outcome of government and central banking policies that for decades have favored capital at the expense of labor. It is too expensive to hire workers and at the same time, cheap money subsidizes technology and automation. Put another way, there are massive incentives to invest in software and robots and massive disincentives to invest in people.

Moreover, many of those high-skilled jobs that have been created have been in finance and technology, those two vastly unproductive sectors of which we’ve discussed previously. And of course, it doesn’t help low-skilled workers that the K-12 education system stinks in the U.S. due to it being a government run monopoly, nor that the price of college education has skyrocketed and become unaffordable due to Fed money printing. Long story short, it is no doubt true that the gap between low-skilled wages and high-skilled wages has grown. But there is nothing natural about this. It is due to the perverse policies of governments and central banks.

3. Decline of unions

Yes, the number of private sector union jobs has declined significantly over the past several decades, as has the power of unions. Yes, unions fight for their own workers, which typically results in rising wages and compensation. Yes, over the past few decades, the decline in the number of unionized workers has been correlated with the rise of income inequality. As we know, however, correlation is not causation. In fact, the relationship between the two trends is exactly the opposite of what most economists, not to mention left-leaning politicians, believe.

As we discussed previously, unionization was one of the factors that helped cause middle class decline. Like any other special interest, unions served the few and hurt the many. By protecting the wages and compensation of a small number of workers, they helped make entire industries uncompetitive. Had unions been even more powerful or protected, the U.S. would have lost even more jobs. Just look to the automobile industry, where manufacturing thrives in “right-to-work” (anti-union) states like South Carolina and has been decimated in pro-union states like Michigan.

It is crucial to get the message across that to be anti-union is NOT to be anti-worker. Quite the opposite. Being pro-worker (and pro-middle class) requires one to advocate against unionization, against stronger labor laws and against minimum wages. To paraphrase Star Trek’s Mr. Spock, the good of the many must outweigh the good of the few.

4. Mercantilist China

Yes, China has been mercantilist. Yes, China has intervened in currency markets to keep the value of its currency from rising too fast, and thus aided its export oriented industries. Yes, China has subsidized many of its domestic industries with government directed financing and made it difficult for foreign companies to do business in China. Yes, the Chinese government has been reluctant to enforce international copyright laws and has helped Chinese businesses engage in blatant intellectual property theft.

If all this is true, why isn't Chinese mercantilism the cause of increasing income inequality? For the exact same reason that globalization isn't the root cause. U.S. policy was just as complicit as China's. By not allowing domestic wages and prices to fall, and by subsidizing consumer debt, government and monetary policy encouraged Chinese imports as much as a weak Chinese currency did. As we've mentioned previously, U.S. consumers benefit from cheap Chinese goods for a while, but as jobs and the middle class disappear, the country suffers over the long term.

Before we move on, keep in mind a few more things about China and Chinese mercantilism. First, as I've mentioned previously, by moving from a socialist to a quasi-market based economy over the last 30 years, China has lifted more people out of poverty than any country in the history of the world. Second, keep in mind that the U.S. (and other Western countries) employ many of the same protectionist policies, such as import tariffs and export subsidies, as does China. Third, the U.S. in its very early days employed many of the same mercantilist tactics against Britain, such as state-sponsored intellectual property theft of British technology and protective tariffs, as China does today with the U.S. Fourth, understand that the Federal Reserve's low interest rate policy is as much a manipulator of exchange rates as is Chinese monetary policy (other things equal, low rates weaken the Dollar).

Finally, for those of you with schadenfreude on your mind, know that China has it coming. By suppressing its currency to aid exports, China has effectively been engaging in massive vendor financing (the Chinese government has to buy U.S. bonds). Sooner or later (either through higher U.S. interest rates or default) the value of these bonds will be reduced, leaving China with a substantial loss on its holdings. In addition, China has crammed probably three generations of economic growth into one, with unprecedented financial stimulus and an enormous debt bubble. Corruption and crony capitalism are rampant and the government is increasingly totalitarian. Pollution is horrific and demographics are terrible. The Chinese bubble will burst and it will be painful. It's just a matter of time.

5. Greedy Wall Street

Yes, Wall Street, or more accurately, Wall Streeters, are the epitome of greed (I tell MBA students all the time that there is only one reason to go into a finance career – for the money). Yes, Wall Street has become parasitic to Main Street, to the overall economy and to the middle class. And yes, Wall Street is the conduit through which the enormous amounts of money that contributed to income inequality and the rise of the 1% have flowed.

But Wall Street is not to blame. It is simply, and legally, taking advantage of the rules and incentives set up by government and by the central bank. Those rules and incentives are perverse, absurd and detrimental, and the fault must lie with those that make and enforce the rules and incentives, not those that follow them for their own gain.

For decades now, the Federal Reserve has subsidized risk through low interest rates and through the implicit (e.g. the “Greenspan put”) and repeatedly explicit bailouts of the stock market, banks and the financial system. Time after time, the financial system has taken on too much risk and has been bailed out by the central bank. Each time this happens, the lesson is reinforced that risky behavior has no cost, that government will always come to the rescue. Profits are privatized and costs are socialized. Next time, take even more risk.

But it’s not just the Federal Reserve. Federal (and to a smaller extent state) government plays a role too in making the rules that distort incentives. Among other things, government has institutionalized a “too big to fail” banking policy, backstopped risky consumer debt (i.e. mortgages and student loans), socialized risk through FDIC insurance and directly invested enormous sums of money, through pensions, in dubious asset classes including the stock market, private equity and hedge funds.

The upshot of constant and ever-increasing subsidized risk is one financial bubble after the next (each one larger than the previous), a financial sector vastly larger than its social role justifies, and the massive income inequality we've discussed. Vilify Wall Street all you want, but understand that Wall Street would have, and should have, failed at least a half-dozen times over the past 30 years, if not for the bailouts, subsidies and upside-down incentives created by the Federal Reserve and the federal government.

Lastly, there are many out there who do blame government for the failings of Wall Street. Not because of the bailouts but because of a lack of regulation. To this I say two things. First, the free market is a far better regulator than government, IF it is allowed to work. Second, complex regulation (and financial regulation is certainly complex) CANNOT be effective when the regulators are paid a fraction of what is paid to those they are regulating.

6. Capitalism

Yes, capitalism inevitably leads to some level of income inequality. As we mentioned at the top of this article, people have different skills and experiences. Some are smarter, some work harder, some take more risk, some are luckier. Moreover, capitalism has never started at time zero. That is, in any country where capitalism has been introduced, there was already at least some legacy inequality, whether in business, in wealth, in land, in education, or in power.

Life is unfair and will always be unfair. But capitalism is the least unfair of any economic system ever tried, and I would argue, the least unfair of any economic system that will ever be tried. Capitalism cannot guarantee equality of results, nor would we want it to, but it is the only economic system that leads to anywhere near equality of opportunity. It is the only economic system that has raised masses of people out of poverty and the only economic system that has increased average life expectancies for those masses.

Unfortunately, capitalism is under siege by both the political right and the political left, by the mainstream media and even by mainstream economists. More than any other reason, it is under siege because of increasing income inequality. However, in criticizing capitalism, the mistake that is being made is to assume that Wall Street represents capitalism, that big business represents capitalism, that giant technology companies represent capitalism. Nothing could be further from the truth.

The United States is less a free market (less capitalistic) than it has ever been in its history. While the trend towards more socialism (i.e. the welfare state) and less capitalism has been going on since at least the New Deal in the 1930s, the past three decades have seen an enormous increase in both the size of government and in government regulation.

We’ve privatized gains and socialized losses in real estate, finance, banking and healthcare. We’ve subsidized big businesses and giant, monopolistic technology companies, creating the essence of crony capitalism. We’ve rewarded greed and punished responsibility. We’ve screwed up incentives so badly that it has become cheaper for money-losing, job-destroying companies to raise money than it has for money-making, job-creating ones.

But more than anything else, we’ve allowed our central bank to play God. We’ve let a central agency determine interest rates, the single most important set of prices in the economy. Think about it slowly. A central agency setting key prices. In a nutshell, more than any other, this is the reason we are living in less and less a free market economy and more and more a socialist one.

Interest rates, you see, are the lever that determines the trade-off between consumption and savings, the trade-off between the present and the future. After 30 plus years of low interest rates, we’ve mortgaged the future. We’ve consumed instead of saved. We’ve transacted instead of built relationships. We’ve gone for the quick profit at the expense of long-term investment.

Capitalism only works if you allow it to work. We don’t. Instead we let government set prices, subsidize monopolies, reward failure, and punish savers. Simply put, and contrary to popular belief, today’s U.S. economy is nothing at all like a free market. Nor is any other economy in the world. Consequently, the current level of income inequality (especially at the wealthy end) is vastly greater than it would be, and should be, in a free market, and vastly greater than what is politically healthy.

Don’t blame capitalism for income inequality. Blame the lack of capitalism.

Before moving on, two final points. First, inheritance. There are historical figures like Marx, and contemporary figures like Piketty, who have a fundamental distaste for inheritance, viewing it as unfair and as perpetuating inequality. Such social commentators tend to favor a steep, if not 100%, inheritance tax. No doubt one can make the case that inheritance is unfair. No doubt one can make the case that inheritance perpetuates inequality.

However, inheritance is necessary to both a democracy and a free society. This is something that wise students of the Enlightenment, such as the U.S.’s Founding Fathers, understood, and the unwise do not. As we discussed a moment ago regarding the importance of interest rates, the fundamental trade-off of life, and of society, is that between now and later, the short run and the long run. Spend or save. Consume or invest.

In a democracy, there are crushing forces that favor the short-term at the expense of the long-term, most notably the desire of politicians to be elected and re-elected. The most vital, and often only constituent of the long-run is the owner of assets, assets that can be passed down to heirs. And most important of those assets are not cash or stocks or land but businesses. Here is another sad example of how government and monetary policy has mortgaged the future. By favoring and subsidizing public companies at the expense of private ones, especially in the most reputationally oriented industries such as banking and insurance, we’ve lost one of the few vital proponents of the long-term view and one of the crucial “checks and balances” on government.

Finally, socialism. There are idealists or utopians who believe that all men and women should be equal, that there should be no such thing as income inequality. As a method of achieving their dream they turn to the idea of socialism. Such people seem never to have read history. Each and every time socialism has been tried, it has failed. Whether in the former U.S.S.R. or Eastern Europe under the Iron Curtain, whether in Mao’s China or North Korea or Cuba or most recently in Venezuela, the outcome is the same. For the masses: misery, famine, starvation, corruption, low life expectancy, fear, repression, limited freedom. For the small political class: power and luxury. In short, the most extreme form of inequality one can imagine. Ironic.

How any sane and educated person can believe socialism to be desirable is beyond my comprehension.

7. Income inequality is justified by productivity

This last erroneous explanation for the rise in income inequality is different from the others. Whereas proponents of the first six explanations view increasing income inequality as both damaging and unjustified (as I do), those who accept this line of reasoning view today’s level of income inequality as a natural, desirable and justified consequence of free market activity. They view the decline of middle class wages as a result of declining productivity, and the increase of upper class wages as a result of increased productivity.

The argument goes something like this. Middle class wages have declined because middle class workers do not have the technical skills, creativity or work ethic to compete in a modern, technologically advanced, global economy. Upper class wages have increased because upper class workers do have the technical skills, creativity and work ethic to compete in that economy. Moreover, the economies of scale of both a technological and global economy make high-skilled workers even more economically productive. In short, middle class productivity has declined so middle class wages have declined, and upper class productivity has increased so upper class wages have increased.

This is wrong, and believers of such a narrative have as flawed an understanding of the free market as do those on the political left. While both skill disparity and globalization play a role in explaining both types of income inequality (as we’ve seen), changes in productivity cannot explain the increases in inequality. This is especially true of the Type 2 (upper income) income inequality trend. For instance, there is absolutely no way to justify the enormous increase in the ratio of CEO pay to average worker pay over the past few decades using any sensible metric of CEO productivity. Moreover, among public companies, there is no correlation between CEO compensation and company performance. Finally, there is no evidence that low-skilled workers have suddenly become less productive.

In short, it is erroneous to credit (or blame, depending on your biases), the free market with causing increased income inequality of either type over the past few decades. It is also an error to view this increase in income inequality as either natural or desirable.

Is quantitative easing (QE) money printing?

An economics student with whom I’ve been corresponding recently relayed to me that his economics professor stated that the Federal Reserve’s policy of quantitative easing (QE) did not equate to printing money. This question causes a lot of confusion between economists and non-economists so it seemed like a good topic for a quick post.

So, should QE be considered money printing?

At first glance it should. In fact, if you Google the term “quantitative easing,” Google provides the following definition, “the introduction of new money into the money supply by a central bank.” Similarly, Wikipedia starts off its entry on quantitative easing with, “Quantitative easing (QE) is a monetary policy in which a central bank creates new electronic money in order to buy government bonds or other financial assets to stimulate the economy…”

Before we go further, let’s understand the basic mechanics of quantitative easing. In a nutshell, the Federal Reserve (or any other central bank) purchases long-term bonds from banks and other financial institutions using newly created money. Now, when we use the term “newly created money” we do NOT mean that the Fed prints a whole bunch of brand new Ben Franklins ($100 bills, for those of you reading this outside the U.S.). Instead, the Federal Reserve makes an electronic entry in its computer system, indicating brand new money.

Let’s say that the Fed purchases $1 million of bonds. On its balance sheet, the Fed records a liability of $1 million reflecting the brand new (electronic) money that it created and records an asset of $1 million reflecting the bonds it just bought. The Fed’s balance sheet is now larger by $1 million than it was prior to the purchase.

Now let’s look at the balance sheet of the bank that sold the bonds. On the asset side, the bank now holds $1 million of additional cash (in the form of reserves) courtesy of the Fed’s new money. Also on the asset side, it has decreased its holdings of bonds by $1 million. Hence, there is no change to the net value of the bank’s assets (and no change to its liabilities).
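The before-and-after ledger entries above can be sketched in a few lines of code. This is a toy illustration with hypothetical figures, not actual Fed accounting:

```python
# Toy balance sheets: the Fed buys $1 million of bonds from a commercial bank.
fed = {"assets": {"bonds": 0}, "liabilities": {"new_reserves": 0}}
bank = {"assets": {"bonds": 1_000_000, "reserves": 0}}

def qe_purchase(fed, bank, amount):
    """The Fed creates new electronic money (a liability) and swaps it for bonds."""
    fed["liabilities"]["new_reserves"] += amount  # brand new electronic money
    fed["assets"]["bonds"] += amount              # bonds now sit on the Fed's books
    bank["assets"]["bonds"] -= amount             # the bank gives up its bonds...
    bank["assets"]["reserves"] += amount          # ...and holds new reserves instead

qe_purchase(fed, bank, 1_000_000)

fed_total = sum(fed["assets"].values())    # the Fed's balance sheet grew by $1 million
bank_total = sum(bank["assets"].values())  # the bank's total assets are unchanged
print(fed_total, bank_total)  # 1000000 1000000
```

The bank's balance sheet changes only in composition (bonds out, reserves in), while the Fed's grows by the full purchase amount, which is the system-wide increase at the heart of the semantic dispute.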

The simple answer to whether quantitative easing is printing money is clearly yes, since the Fed creates new money in order to purchase bonds. As we’ve seen, the Federal Reserve’s balance sheet increases by $1 million. The commercial bank’s balance sheet doesn’t change (the makeup changes but not the amount). So the net effect to the entire system is a $1 million increase. However, to economists, the story does not quite end here.

Economists argue that while the Fed is technically creating new money, it is not actually increasing the money supply. And here we find that the question of whether or not QE is money printing is really a semantic argument rather than a true economic argument. In essence, it boils down to whether “printing money” is the same thing as “expanding the money supply.”

The argument that economists make is that what the Federal Reserve is really doing is an asset swap. The Fed is merely swapping newly created money for bonds. The key point economists are trying to make is that once the Fed owns those bonds, they are not really part of the economy and should not be counted as part of the money supply. In essence what economists are saying is that ONLY the private sector banking system, by making a new loan, can expand the true money supply. The central bank cannot expand the money supply by creating electronic money.

Is this true? First, it depends on your definition of the “money supply.” There is no one agreed upon definition of the money supply, and in fact economists have several different measures of it (e.g. M1, M2, M3). Ultimately, you must pose the question, “what is money?” The true meaning of money is equally important, complex and misunderstood, and something I do not dare tackle here, but hope to in a future (and much longer) post. For now, I reiterate what I said earlier in this post: whether quantitative easing is or is not money printing is really a semantic discussion, and not an important one.

There are much more important, and economic (rather than semantic) questions to discuss regarding QE, which we will turn our attention to shortly. However, before moving on, I want to add one point to the discussion. I would argue that when the Federal Reserve purchased bonds from the private sector financial system, it paid MORE than fair market value, given its massive size and status. Hence, by overpaying (compared to what others would have paid), QE was not exactly a one-to-one asset swap.

For example (and I’m making up numbers here), if the true market value of a bond was $80 and the Fed paid $100, then the $20 difference should indeed be considered “money printing” even if you take the position that QE is, otherwise, an asset swap. To be fair, not being a bond trader, I have no idea by what magnitude the Fed overpaid, but I would bet that they did, especially for the mortgage backed securities (MBS) that the Fed purchased (the Fed purchased both U.S. Treasuries and MBS in its three rounds of QE).
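The arithmetic of that made-up example is trivial but worth making explicit: under the asset-swap view, only the overpayment, not the full purchase price, counts as new money.

```python
# Hypothetical numbers from the text: a bond worth $80 that the Fed buys for $100.
fair_value = 80
price_paid = 100

asset_swap_portion = fair_value          # exchanged one-for-one, no net new money
money_printed = price_paid - fair_value  # the overpayment: arguably new money

print(money_printed)  # 20
```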

There’s an additional argument that some economists and commentators make with regard to QE not being money printing. They claim that even though the Federal Reserve expanded its balance sheet by trillions of dollars under QE (and ZIRP), because there was no meaningful inflation (not to mention poor GDP growth and poor employment figures), Fed policy cannot and should not be considered “printing money.” I’ve written previously (here) a post on why the Fed’s extraordinarily loose monetary policy hasn’t led to inflation. Briefly, however, I believe this line of argument is faulty for two reasons.

First, it is erroneous to equate expanding the money supply with inflation. This fallacy is, in my opinion, one of the reasons why the Fed (and other central banks) have caused so much damage to the world’s economies. I have written, and will continue to write, about this topic elsewhere, but simply put, measures of inflation such as the CPI and the expansion of the Fed’s balance sheet are two different things. It is even less true to say that the money supply can be equated with economic growth or employment.

Second, and much more important, is the counterfactual. How do we know what would have happened in the absence of QE? We know that the U.S. economy (and many others) was facing enormous deflationary pressure after the financial crisis. This is precisely why the Fed, along with other central banks, resorted to unprecedented and extraordinary policy. It is certainly possible, and perhaps likely, that absent QE, the economy would have experienced substantially lower GDP, lower employment (higher unemployment) and even lower inflation (or deflation).

If true, then QE was certainly inflationary even if the CPI was ONLY 2%. In short, to state that a massive amount of QE cannot be considered inflationary (or be considered printing money) simply because the economy did not overheat is bad science and holds no water since we have no idea what would have happened without QE.

As I wrote above, I believe that the question of whether quantitative easing is technically “money printing” or not, is a semantic one and of little importance. What is important are the following three questions: 1) what was the purpose of quantitative easing, 2) did it work and 3) was it justified.

What was the purpose of quantitative easing?

At the highest level, the goal of quantitative easing was to help the economy. First, to help prevent the economy from falling into a depression and to help stave off deflation (both real fears for central bankers after the financial crisis). Second, to help expand the economy faster, to reduce unemployment, grow wages and increase inflation.

Specifically, as we’ve already discussed, the Federal Reserve was buying long-term bonds (Treasuries and MBS) in order to reduce long-term interest rates. This was a new policy because historically the Federal Reserve only purchased short-term debt in order to target short-term interest rates. However, given that targeted short-term interest rates were already at zero because of the central bank’s zero-interest rate policy or “ZIRP,” the Fed decided to target long-term rates.

The stated goal of lower long-term interest rates was to encourage borrowing. For example (in theory), lower long-term rates should lead to lower mortgage rates, which should induce more people to buy new homes. Similarly, lower rates should make it cheaper and easier for businesses to borrow to build new factories or open new stores. The end result (again in theory) being more economic activity, higher GDP, more jobs and lower unemployment rates.

There were at least two other goals of QE, both significantly less talked about by central bankers, for what should be obvious reasons. One was to help members of the banking sector “repair” their balance sheets, the idea being that an unhealthy bank is unlikely to lend, while a bank with a healthy balance sheet is more likely to lend and thus aid the economy. That’s the polite way of looking at it. The more cynical viewpoint is that QE represented further bailouts to the banking sector. (This is one of the reasons why, as I discussed above, I suspect the Fed was paying above-market prices, especially for MBS, whose value should otherwise have been marked down.)

The second less discussed goal was to directly increase the price of financial assets. This idea is known in econ-speak as the “wealth effect,” an idea in which the Federal Reserve seems to believe. Other things equal, lower interest rates raise the value of all financial assets. The wealth effect posits that individuals whose financial assets have risen in value (and therefore have more wealth) will consume more. More consumption (other things equal) naturally leads to a faster growing economy, more investment and less unemployment.
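The “other things equal” link between lower rates and higher asset prices is just discounting: an asset’s value is the present value of its future cash flows, so a lower discount rate mechanically raises the price. A minimal sketch with hypothetical bond numbers:

```python
# Price of a bond as the present value of its cash flows.
# Hypothetical numbers: a 10-year bond paying a $50 annual coupon
# on $1,000 face value, discounted at two different rates.

def bond_price(face, coupon, rate, years):
    """Present value of the annual coupons plus the face value at maturity."""
    pv_coupons = sum(coupon / (1 + rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + rate) ** years
    return pv_coupons + pv_face

# At a 5% discount rate the bond trades at par (~$1,000).
# Cut the rate to 2% and the identical cash flows are worth ~27% more.
p_high = bond_price(1000, 50, 0.05, 10)
p_low = bond_price(1000, 50, 0.02, 10)
print(round(p_high, 2), round(p_low, 2))  # 1000.0 1269.48
```

This repricing of unchanged cash flows is the channel the wealth effect relies on: holders of the assets feel richer and, the theory goes, consume more.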

Regardless of which of the Fed’s motives you believe were more important, in all cases the intent of QE was to spur lending in order to grow the economy faster (or prevent it from shrinking). And, as I stated above, every loan that the banking sector makes is considered an expansion of the money supply. So (and this is a key point), whether or not the act of quantitative easing by the central bank is technically “printing money,” the INTENT of QE is clearly to expand the money supply.

Did quantitative easing work?

Remember, the purpose of quantitative easing was to lower long-term interest rates in order to induce more lending, more investment, more jobs and more economic growth. Did it work? Well, after trillions of dollars of bonds bought by central banks, the answer is…wait for it…we have no idea. On the one hand, the U.S. economy (same story for other economies that experienced QE) did not experience very strong GDP, employment or wage growth, nor did headline inflation (CPI) reach the Fed’s target of 2%. On the other hand, economic growth, though weak, was at least positive, the unemployment rate did decline significantly and the economy avoided the Fed’s worst fears of deflation.

Economics is not a science primarily for the following reason: economists cannot run experiments. That is to say, economists cannot rerun the financial crisis hundreds or thousands of times with and without quantitative easing to determine whether or not QE was effective. And as I stated above, we have no idea what the economy would have looked like in the absence of QE.

Personally, I would guess the following: quantitative easing was somewhat effective in preventing the worst effects of the post-financial-crisis deflationary pressures and had some effect on increasing GDP and reducing unemployment. I would further surmise that QE was nowhere near as effective as the Federal Reserve (and other central banks) had hoped.

Was quantitative easing justified?

Up until now, pretty much everything I’ve written would be considered mainstream economics. Here, in this final section, I will deviate. The question that remains, and the most important one in my mind, is SHOULD the Fed have engaged in quantitative easing?

It has become standard economics since Keynes in the 1930s to focus exclusively on the short-term results of economic policy and to ignore the long-term effects. I’ve already stated that I believe that QE aided the economy in the short-term by increasing GDP and lowering unemployment. To mainstream economists, that is good enough. Not to me. The question that needs to be asked (but never is), is whether the long-term negative effects of quantitative easing outweigh the short-term benefits. I believe they do, and massively.

Quantitative easing (along with ZIRP) has re-inflated an asset bubble that has been trying to burst for 30 years and delayed the inevitable restructuring that must occur. It has bailed out banks and rewarded bankers that deserve, and need, to fail. It has subsidized “investment” in non-productive financial activity such as M&A, stock buybacks and parasitic technology companies, all of which destroy jobs, at the expense of long-term, job-creating real investment. It has been the primary cause of increased income inequality, which, if not reversed, will destroy liberty (and lives) throughout the world.

In the near-term some of us might be better off because of quantitative easing. In the long term, we are all worse off.

Why is the world so messed up?

Here’s another post inspired by the Trump election.  What the hell is going on in the world?  Why are people so angry? Why are the Brexits and the Trumps of the world winning elections? Why are extremists of the right wing and the left wing, the populists, the isolationists, the politicians of anti-immigrant and anti-trade persuasion, the fascists and the socialists gaining power all across the western world?

It feels like in recent years the world has taken a big step backwards.  The short-lived optimism brought on by the end of the Cold War has been replaced by fears of global terrorism and the anxiety brought on by power-hungry dictators and empowered rivals such as Russia and China. Meanwhile, belief in a prosperous “age of moderation” was shattered by the global financial crisis and by the indisputable evidence of surging income inequality.

Many smart people have tried to explain the various factors causing our world-wide angst. Capitalism. Wall Street. Globalization. Trade. Technology. Immigration. Terrorism. Some get parts of it right.  Some get none of it right. But few correctly see the larger picture, that is, the fundamental trends underpinning these trying times. We can do better.

I believe that the world is experiencing forces brought on by a combination of two global trends: 1) massive financialization brought on by short-sighted monetary policy, and 2) the growth of big government and its evil twin, crony capitalism.  Together (and they do go together), these two decades-long trends have depressed productivity and economic growth, subsidized job loss due to technological disruption and excess international trade, and sown the seeds for global terrorism.

No institutions have done more damage to the global economy over the past several decades than the world’s central banks.  No idea has done more damage to the global economy over the past several decades than the belief that a centralized government agency can, and should, dictate the economy’s interest rates.  Led by the U.S.’s Federal Reserve, this monetary policy experiment has led to a world in which money is in massive over-supply, risk is massively under-priced and the financial sector has grown to become a massive drain on productivity.

Low interest rates are supposed to encourage investment.  Financial bailouts are supposed to prevent disastrous depressions.  Perhaps a short period of monetary stimulus and a once-in-a-blue-moon bailout might not do too much economic damage.  But 30+ years of easy money and near-continuous bailouts of banks and the financial system have created such economic distortions that to categorize the U.S. economy as anything near a free market would be utterly wrong.

Of course, Wall Street is not the only entity in town that has grown substantially larger. Growth of federal governments has been almost as devastating to global economies. Marx thought that it was capitalism that was unstable and would inevitably collapse.  He was wrong. Regrettably, it is democratic government that seems ultimately unstable and prone to collapse by slowly, but inevitably strangling the economy.

Democracy’s fundamental flaw is that it is biased towards its own growth.  Growth of the government workforce, growth of regulation, growth of taxes, growth of disincentives, growth of monopoly.  The flip side?  Lack of productivity, lack of efficiency, lack of employment, lack of competitiveness, lack of growth, lack of freedom.  What began as more or less a free market, becomes, through the growth of government and the cradle-to-grave welfare state, a system of crony capitalism, less and less distinguishable from socialism.

Decades of easy monetary policy combined with the growth of big government have, among other things:

  • Encouraged speculation and short-term financial results at the expense of long-term productive investment in infrastructure, research and development and human capital.
  • Subsidized consumption at the expense of savings, fostering a culture of indebtedness and instant gratification and exacerbating worldwide trade imbalances.
  • Subsidized investment in vastly unproductive uses, creating serial asset bubbles in the process.  Nowhere is this more evident than in the technology industry, where money-losing companies funded with massive amounts of inexpensive capital that employ few disrupt profitable companies that employ many. This is not creative destruction, as some would claim.  This is subsidized economic suicide.
  • Subsidized large, publicly traded and monopolistic companies at the expense of small, privately-held and entrepreneurial companies because of easy access to capital markets, crony capitalism and an emphasis on financial engineering, M&A and private equity activity.
  • Caused enormous inflation in non-tradable goods such as healthcare, higher education and real estate.  Is it any wonder why the middle class is drowning in debt?  Is it surprising that young people can’t afford to pay for college, can’t afford healthcare and can’t afford to buy a house?
  • Destroyed the centuries-old business model of local, relationship-based banking and is in the process of destroying pensions, retirement savings and the insurance industry.  Collectively, these are the cornerstones of a capitalist economy.
  • Directly enriched the wealthy by funneling money through and to Wall Street and inflating financial assets, creating an enormous bifurcation of “haves” and “have-nots.”
  • Encouraged an entire generation of the best and brightest to become investment bankers, traders, venture capitalists and consultants, rather than scientists, engineers, doctors, and teachers.
  • Allowed governments (the U.S. in particular) to finance naive, adventurous wars in the Middle East without the sacrifice of higher taxes, and thus without sufficient contemplation from the citizenry.  Further, easy money and big government have subsidized a military-industrial complex lobbying for arms sales, arms subsidies, arms grants and general armament of questionable groups, not to mention all sorts of military involvement and war. Needless to say, the predictable result has been anarchy, terrorism (often facilitated with our own weapons), an untold number of deaths, and the largest migrant crisis since World War II.
  • Fueled a worldwide energy and commodities boom that enabled petro-dollar dictators like Vladimir Putin and Hugo Chavez to stay in power, and countries like Iran and Saudi Arabia to sponsor and finance global terrorism and religious extremism.
  • Subsidized internet and communications technologies that have led to a less-informed global citizenry, the decimation of more-or-less non-partisan media coverage in favor of the consumption and belief in “fake news” and conspiracy theories, as well as aiding in the planning and recruitment of terrorists.  Oh, and few if any productivity increases.
  • Destroyed entire manufacturing sectors because of regulation, tax policies, protected unionism, and the short-sighted policies of refusing to allow wages to fall.  The result being outsourcing, offshoring and global trade far beyond what would likely occur under a true global free market, and significant unemployment.
  • Completely divorced the healthcare industry from competitive forces, resulting in the worst of all worlds, the privatization of profits and the socialization of costs (just as the government did with the financial services industries).  The inevitable results being skyrocketing healthcare costs, a less healthy populace and monopolization within the entire healthcare vertical.
  • Created a bloated, wasteful and monopolistic education system that favors teachers, administrators and bureaucrats at the expense of students.  The result of which is an education system that neither produces the “good citizens” necessary for democratic government nor the job skills necessary for a competitive economy.
  • Fostered a culture of dependency, blame, over-sensitivity and selfishness rather than self sufficiency, responsibility and community.

The ramifications of poor economic growth and the slow-motion implosion of the welfare state

The upshot of decades of absurd and counterproductive monetary policy and an ever-growing government? Economies especially prone to speculative bubbles and financial crises. Economic growth and productivity far below potential. A bleeding and resentful middle class.  Easily financed and poorly planned wars with the terror and chaos that follows.  And income inequality the likes of which the world has probably not experienced since before industrialization.

But it gets worse. Combine poor economic performance with the enormous welfare state and you get a downward spiral difficult, perhaps impossible to break.

First and foremost, poor economies hurt those at the bottom of the food chain, most notably young people.  With job prospects few or nonexistent, young people delay or completely avoid forming households and having children.  You wind up with an aging population with fewer and fewer workers paying into the Ponzi-like welfare system and an ever greater number of aging retirees taking money out. This is playing out all over Western Europe, but even more obviously in Japan, a country in its third decade of economic depression.  (It is mainstream economics to blame Japan’s weak economy on its demographic challenges and aging population.  However, this gets cause and effect exactly wrong.  It is Japan’s weak economy and poor job prospects that cause its demographic challenges and aging population.)

Further, what happens when masses of unemployed and underemployed young people with poor prospects and little hope are further and further removed from productive society?  They turn to drugs (witness the opioid epidemic in the U.S.), crime, and in some cases terrorism.

Moreover, a stagnant or shrinking economic pie causes everyone within society to adopt a zero-sum mentality.  That is, whatever government benefits you get mean less for me. The result is a bifurcation of the populace into two groups: those within the system who are currently benefiting from the crony capitalist welfare state, and those outside it trying to get in.  Most notably, who’s in the “out” group?  The young and the immigrants. Naturally, this bifurcation leads to resentment and anti-immigration bias. It leads to a two-tiered society.  It leads to an unassimilated underclass, as has occurred in many Western European countries.

So now you’ve got a slow death cycle.  The economy is weak and jobs are scarce. The young are unemployed. Immigrants are shunned.  The population ages and more and more money flows to entitlements, to pensions, to retirees, to healthcare.  Meanwhile local services, education, infrastructure and other forms of investment are cut.  More money to unproductive uses, less money to productive uses.  So the economy becomes even weaker, and the cycle continues. Yet the elite blame capitalism and ask for even more government.  Sooner or later, crisis ensues. Pensions can’t be paid. Local governments go bankrupt. Then state governments. Then federal governments.  The implosion of the welfare state.  It is occurring in Western Europe.  Though less apparent and more slowly, it is occurring in the United States too.

The way forward:  optimism or pessimism?

As I’ve mentioned several times, the twin maladies of easy money and big government have led to a stagnating world economy, financial bubbles in nearly every asset class, excesses of trade and technology, unprecedented income inequality, global terrorism and anti-immigrant and anti-trade sentiment throughout the world.  Is there anything we can do? And are there any reasons to be optimistic?

First, we need to end the era of easy money.  We need to stop subsidizing financial markets. We need to let banks and investors fail if they deserve to fail. We need to allow market forces to set prices, whether of financial assets or labor, and allow those prices to decline. We need to let our economy reorient itself from its short-term and transactional focus back to one based on long-term investment and long-term relationships.

We cannot continue to subsidize large corporations at the expense of small ones, just because large companies have the money to lobby. We must find a way to reduce pensions at the state and local level. We must return healthcare to a market system and recognize that one way or another healthcare consumption must shrink.  We need to limit the power of the federal government, return power to local governments and reduce regulations that favor monopoly.

We must not turn our backs on global trade, but recognize and acknowledge two truths.  Yes, trade will always have negative effects on a small portion of the population (while having less obvious, but more significant positive effects on a larger portion of the population).  And yes, there has been an excess of outsourcing, offshoring and foreign trade over recent years.  But this is due to the prevalence of easy money and crony capitalism, not because of free market forces.

Similarly, we must recognize that while entrepreneurship is fundamental to a strong functioning and growing economy, the vast majority of recent entrepreneurship, specifically from the technology sector, has been wasteful at best, and extraordinarily damaging at worst.  Only an end to stimulative monetary policy will fix this.

Finally, we must encourage not discourage immigration. Immigration is morally correct, is good foreign policy and is economically beneficial. Immigrants must be viewed as assets, which they are, not liabilities. And given aging populations and poor economic growth, population growth through significant immigration is the only chance to delay the inevitable implosion of the welfare state for another generation.

Are any of these things realistic given today’s toxic and corrupt political system? Not a chance. There is absolutely no realization whatsoever among the economics profession, the mainstream media or the political community of the disastrous consequences of “modern” central banking. Nor is there any reason to believe that those in power who have benefited so much from decades of easy money will change their viewpoint.

Similarly, there is no political will to accept the near-term pain required of weaning the economy off of monetary stimulus and letting the economy restructure as needed. There is no political will to cut pensions. No political will to view healthcare as a consumer good, not an entitlement. No political will to end crony capitalism, to end the power of special interests.  In short, there is simply no incentive for politicians to favor a long-term outlook. And herein lies the paradox of democratic government:  it works until it grows too big to work.

So what happens next?  Perhaps the world stumbles on for a while. Populists continue to come to power. The rich stay rich, the powerful stay powerful and the poor stay poor. Trade suffers, immigrants are shunned.  Economic growth is weak. Capitalism continues to be viewed as the problem, big government as the solution. Maybe another financial crisis that we can inflate our way out of. Maybe another financial crisis that we can’t. Sooner or later the music stops.

About 100 years ago, the world sleepwalked into World War I. Today the world sleepwalks into the next global disaster. Regrettably, I see few reasons to be optimistic.

A quick note on the election of Donald Trump

Like many who live in one of the elitist bastions of the United States (New York City, in my case), I am disappointed and dismayed by the election of such a simple-minded, volatile and enormously unqualified man as president. But two incredibly important points need to be made.

First, about half of what Trump said during the election cycle was absolutely correct, most notably that Washington is corrupt, needs to change and needs to shrink. The other half (the “Wall”, Mexicans, immigration, women, Muslims, etc) is frightening. Which half will come to pass over the next four years? Who knows. That uncertainty is also very frightening.

The second important point, and an undeniable message of the election, is that Trump, and many of his populist brethren around the world, are speaking to a vast proportion of the populace that feels left behind by the so-called modern economy.  And they are right.

But what is even scarier to me than a Trump presidency is the backlash, not from college campus protestors, but from those on the left pondering how this all happened. They will blame the rise of Trumpism on the failures of capitalism and free markets.  The prominence of ultra-liberals and socialists like Elizabeth Warren and Bernie Sanders will grow.  And just as the power of the Republican moderates has been neutered by the far right, the Democratic moderates risk being made irrelevant by the far left.

As I’ve written about elsewhere on this site, the middle class is not being failed by free markets.  The middle class is being failed by lack of free markets.  It is being failed by corrupt big government and crony capitalism, by monetary policy that subsidizes Wall Street to the detriment of Main Street, and by regulation and tax policies that favor monopolistic large companies over competitive, and job-creating small ones.  The “modern economy” is not a natural outcome of flawed capitalism but a natural outcome of flawed government.

Finally, I speak directly to the young people and college students reading this.  As much as we must fight the bigotry, racism and exclusionary tendencies of Trump, we must also fight the anti-market and socialist tendencies of those who will oppose Trump.  Neither’s policies will make America great again.  To be honest, I don’t know how to do this.  Maybe it means supporting moderate Democrats and moderate Republicans.  Maybe it means a viable third party.  But we must do something, and we must do something now.

As always, feel free to comment or get in touch.

Why hasn’t the Fed’s loose monetary policy since the financial crisis led to inflation?

Since the financial crisis of 2008, the Federal Reserve has expanded its balance sheet from about $850 billion to about $4.5 trillion. In other words, the Fed has created roughly $3.6 trillion of new money, representing more than 20% of U.S. GDP.
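The arithmetic is worth checking. The GDP denominator here (roughly $18 trillion for the period) is my assumption, not a figure from the text:

```python
# Back-of-the-envelope check of the balance sheet figures above.
# The GDP figure (~$18 trillion) is my own assumption for the period.
balance_before = 0.85   # trillions of dollars, pre-crisis
balance_after = 4.5     # trillions of dollars, post-QE
gdp = 18.0              # trillions of dollars, assumed

new_money = balance_after - balance_before   # ~3.65 trillion created
share_of_gdp = new_money / gdp               # comfortably above 20%
print(round(new_money, 2), round(share_of_gdp * 100, 1))  # 3.65 20.3
```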

Back in 2008 and 2009, lots of smart people assumed that money printing of that magnitude would surely cause serious inflation, if not hyperinflation. And yet, with all that new money, inflation, at least as represented by the Consumer Price Index (CPI), has remained below the Fed’s target of 2%. Why? Why hasn’t the Fed’s extraordinarily loose monetary policy led to significant inflation?

In no particular order, here are seven possible explanations. Which ones are true? All of them.

1. Banks have not lent the money

Of all the reasons for why the Fed’s extraordinary monetary policy hasn’t led to inflation, this one is both the most obvious and the least controversial.

The mechanism of monetary policy is for the Federal Reserve to buy securities (e.g. government debt) from banks with newly created money. This newly created money is then available for banks to lend to businesses and consumers. Since banks historically earned no interest income on this idle money (“excess reserves”), banks should have an incentive to lend the money in order to earn interest income.

The fable told in economics textbooks is that of the money multiplier and the reserve requirement. The story goes that once reserves are created on a bank’s balance sheet, the bank will then lend out all of it except the portion it is legally required to hold. For example, if $100 of new money is created by the Fed and deposited in a bank, and if the reserve requirement is 10%, then the bank will lend out $90. But that’s not the end of the story.

That $90 can then be used to buy machinery or hire workers or build a house, and that $90 will ultimately be deposited back at another bank (or the same bank, it doesn’t matter) by the receiver of the money (the machinery vendor, the worker or the homebuilder). Now the banking system can lend out another 90% of that $90, or $81. This process continues indefinitely ($100 + $90 + $81 + $72.90 + $65.61, etc.) and ultimately $1,000 of money is created, equal to the original amount divided by the reserve requirement (in our example, $100/0.10). In other words, the money multiplier is 10, since 10x the Fed’s original deposit is created.

If this were the way the world really worked, then the $3.6 trillion created by the Fed would have led to something like $36 trillion of new money (the reserve requirement in the U.S. is 10% on most balances). This would equate not to 20% of annual GDP but to 200% of GDP. With that amount of newly created money, it is a near certainty that inflation would have followed.
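The textbook geometric series is easy to verify directly. A minimal sketch of the fable itself, that is, of what the textbooks predict rather than what actually happened:

```python
# Textbook money-multiplier fable: each round, banks lend out everything
# except the required reserve, and the loan is re-deposited somewhere
# in the banking system, allowing another round of lending.

def total_money_created(initial_deposit, reserve_requirement, rounds=1000):
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_requirement)  # amount lent and re-deposited
    return total

# $100 of new reserves at a 10% reserve requirement converges to ~$1,000,
# i.e. a multiplier of 1 / 0.10 = 10.
print(round(total_money_created(100, 0.10), 2))  # 1000.0
```

The series converges to the initial deposit divided by the reserve requirement, which is exactly the $100/0.10 = $1,000 of the worked example.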

Clearly, that hasn’t happened. Massive inflation hasn’t followed because the banks haven’t lent the money. The so-called multiplier effect simply hasn’t occurred. Instead, banks have kept most of the excess reserves on their own balance sheets, to the tune of $2.5 trillion. There are a number of reasons why.

First, the Fed, beginning during the financial crisis, began paying interest to banks on excess reserves. Hence, since banks do earn some income on unused reserves, they have less of an incentive to lend. Second, thanks to extraordinarily low interest rates set by the Fed (the Fed is targeting an interest rate when it buys securities with newly created money), banks earn relatively little interest income on the funds that they do lend. Why earn only a little income on risky lending when you can earn only a little bit less without taking any risk?

Third, banks are still repairing balance sheets that never fully recovered from the financial crisis of 2008. And given the severity of the last crisis, banks now realize that they had better keep more reserves ahead of the next (inevitable) one. Fourth, the Fed and other bank regulators, through various regulations and “stress tests,” have significantly increased the amount of capital banks are required to hold, and curtailed the amount of risk banks can take.

Lastly, and most importantly, banks haven’t lent because they can’t find very many creditworthy borrowers that want to borrow. Consumers, as a group, are still over-indebted. And businesses are facing the twin hurdles of global oversupply and “disruption” from easy-money fueled technology companies. With this onslaught, it is no wonder why most businesses have no appetite to borrow and to invest.

Before we move on, let’s revisit the stated purpose of easy monetary policy: to encourage banks to lend. As I said above, that banks aren’t lending all the money that the Federal Reserve created is common knowledge. And as I also mentioned, the Fed and other government regulators are actively forcing large banks to reduce risk. Seems a bit contradictory, doesn’t it?

This raises the following question: what is the true purpose of easy money, if not to directly stimulate the economy through increased lending? To recapitalize (bail out) banks? To raise asset prices? To simply appear to be doing something to help the economy, so that you’re not blamed for the next downturn?

2. The money has flowed overseas

Textbook economics dictates that the Fed can at least influence, if not control the level of inflation. By lowering interest rates and printing money, the Fed can stimulate lending, which stimulates businesses activity, which results in increased employment, which results in increased aggregate demand, which puts pressure on wages and ultimately prices.

However, as we’ve already discussed, banks aren’t lending as much as the Fed would like to U.S. consumers and U.S. businesses. But perhaps they are lending the money overseas? This is what is known as the “carry trade.” Made famous over the past two decades by trades involving the Japanese yen, it applies equally well to the United States. Simply put: borrow where interest rates are low (e.g. Japan and the U.S.) and then lend the money in emerging market countries where both interest rates and growth prospects are higher. In other words, freshly printed (digitally, that is) Federal Reserve money winds up not in the U.S. as intended, but instead financing Spanish real estate or Brazilian mines or Chinese factories.
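The appeal of the carry trade is plain arithmetic. A hypothetical sketch (the rates and notional are invented for illustration; in reality, exchange-rate moves can erase the spread, which is the trade's real risk):

```python
# Hypothetical carry trade: borrow cheap dollars, lend the money in a
# higher-rate emerging market, and pocket the interest-rate spread.
# Exchange-rate risk, ignored here, is what can wipe the trade out.
borrow_rate = 0.01     # assumed cost of funds in the low-rate country
lend_rate = 0.08       # assumed yield available in the emerging market
notional = 1_000_000   # dollars borrowed

carry = notional * (lend_rate - borrow_rate)  # annual profit if FX is flat
print(round(carry))  # ~$70,000 per year on the spread alone
```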

So perhaps the textbooks are correct that low interest rates and new central bank money will lead to higher GDP and ultimately inflation. But in a world of free-flowing capital, that higher GDP and even inflation can just as easily occur in other countries rather than the country in which the money was created. In other words, in the small, closed economies of the ivory tower, perhaps an all-knowing central authority can indeed control inflation. In the real world, perhaps not.

3. The CPI is understated

Like all government-created economic statistics, the CPI is an enormously complicated statistical measure that very few people understand. To quote Wikipedia, the CPI is “a statistical estimate constructed using the prices of a sample of representative items whose prices are collected periodically.” But what are those representative items? What I buy and what you buy might not be the same. And what prices should be used? Prices in New York? In Detroit? In rural Alaska? And how should we collect these prices? And how often?

Yet those are probably the easy questions. There are much harder ones. What about quality changes? How should new cellphone features be factored into the price index? How about safer cars? Or less legroom when flying? And how about substitution effects? If the price of steak goes up, I might switch to chicken, which is cheaper. Since I no longer buy steak, should the price index only account for chicken? But if I really prefer steak, isn’t that really a reduction in my utility, and hence similar to a quality decline in my food consumption?
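The substitution question can be made concrete with toy numbers (entirely hypothetical). A fixed-basket (Laspeyres) index prices the old basket at new prices; a substitution-adjusted (Paasche-style) index prices the basket actually bought now, and reports lower inflation even though the consumer who switched from steak to chicken may well be worse off:

```python
# Toy example: steak jumps in price, the consumer substitutes chicken.
# All prices and quantities are hypothetical, per month.
old_prices = {"steak": 10.0, "chicken": 5.0}
new_prices = {"steak": 15.0, "chicken": 5.0}
old_basket = {"steak": 4, "chicken": 2}   # what I bought before
new_basket = {"steak": 1, "chicken": 5}   # after substituting chicken

def cost(prices, basket):
    return sum(prices[good] * qty for good, qty in basket.items())

# Fixed-basket (Laspeyres) inflation: price the OLD basket at new prices.
laspeyres = cost(new_prices, old_basket) / cost(old_prices, old_basket)
# Substitution-adjusted (Paasche): price the NEW basket at both price sets.
paasche = cost(new_prices, new_basket) / cost(old_prices, new_basket)
print(round(laspeyres, 2), round(paasche, 2))  # 1.4 1.14
```

The same price change reads as 40% inflation under the fixed basket but only about 14% once substitution is allowed, which is exactly why the treatment of substitution is so contentious.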

Perhaps the most controversial input in the CPI is how it accounts for housing, the single largest expense for the average consumer, and thus the largest component of the CPI. Rather than directly include the change in the price of houses, as it does for other consumer goods, the CPI takes into account something called “owners’ equivalent rent” (OER). This is a measure, based on surveys, of how much monthly rent homeowners believe they could get if they were to rent out their homes.

Leading up to the financial crisis, housing prices were increasing at a rate double that of OER. Many have suggested that the Federal Reserve ignored the obvious signs of the housing bubble because it was focused on inflation indices such as the CPI, which vastly understated the inflationary impact of rising house prices.

The CPI is truly a black box, and I will be the first to admit that I have little understanding of what exactly is contained in that black box, or how that black box is constructed. And to be fair and truthful, you can find plenty of economists who will argue that the CPI overestimates true inflation (mostly due to quality increases) rather than underestimates inflation.

However, one thing is undeniable. As much as I hate to sound like a conspiracy theorist, I would be remiss not to point out that the government has a huge incentive to understate the CPI, for at least two reasons. First, many government entitlement programs, most notably Social Security, are tied to cost of living increases. The lower the official inflation rate, the lower the entitlement payments the government is obligated to make. Second, inflation is a key component of reported economic output (i.e. GDP). The lower the inflation rate (in this case the GDP deflator), the higher the headline real GDP figure. All the better for incumbent politicians.
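Both incentives are easy to put into numbers. A hedged sketch, with entirely hypothetical figures: suppose true inflation is 3% but the reported figure is 2%.

```python
# Hypothetical figures, purely to illustrate the two incentives.
true_inflation = 0.03       # assumed "true" inflation
reported_inflation = 0.02   # assumed reported CPI / GDP deflator

# 1) Entitlements: a $1,500/month Social Security check with a COLA
benefit = 1500.0
cola_paid = benefit * (1 + reported_inflation)  # what the government pays
cola_owed = benefit * (1 + true_inflation)      # what full indexation would pay
print(f"Monthly saving per beneficiary: ${cola_owed - cola_paid:.2f}")  # $15.00

# 2) GDP: real growth = nominal growth deflated by the price index
nominal_gdp_growth = 0.045
real_reported = (1 + nominal_gdp_growth) / (1 + reported_inflation) - 1
real_true = (1 + nominal_gdp_growth) / (1 + true_inflation) - 1
print(f"Reported real growth: {real_reported:.2%}")  # ~2.45%
print(f"'True' real growth:   {real_true:.2%}")      # ~1.46%
```

A single understated percentage point simultaneously shrinks the entitlement bill and flatters the headline growth number, multiplied across tens of millions of beneficiaries and trillions of dollars of output.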

Long story short, given the government’s bias for lower inflation, I would guess that true inflation is perhaps 1-2% higher than the reported CPI number. But whether I am right or not, here’s the most important thing to remember. Measuring inflation is enormously complicated and messy. Even without a bias, this is art, not science. So why should a central bank rely on such a figure, a figure whose margin of error is probably at least a full multiple of itself, to justify printing trillions of dollars? To me, this is unwise, unscientific, undemocratic and bordering on criminal.

4. Asset prices are inflated

Take someone off the street (Main or Wall) and ask them what they think of the Federal Reserve. Assuming they know what the Federal Reserve is, they might say the Fed is doing a good job or a bad job. They might say that the Fed should do more to help the economy or less. They might say that the Fed should lower interest rates further or raise them. But if you ask them, regardless of their economic beliefs, to state one criticism of the Fed, I’d bet most would say the following: the Fed contributes to rising asset prices and to asset bubbles.

Remember the discussion of owners’ equivalent rent from above? Just in case you don’t, we said that the housing component of the CPI reflects an estimate of changes in housing’s rental price rather than its sales price. Why is that? Because renting a home is considered consumption while buying a home is considered investment. The CPI is meant to measure consumption. Not investment. And therefore, not asset prices.

The Fed won’t admit that they create asset bubbles. But they do. Easy money in the 1990s led to the first tech bubble. Easier money in the 2000s led to the worldwide real estate bubble. Even easier money now has led to a bubble in all financial assets. Stocks, bonds, real estate, art, wine, you name it.

Let’s talk finance 101. Mathematically, the flip side of a low interest rate (technically, a low cost of capital) is a high valuation. In fact, the Fed has explicitly stated that they believe in what is known as the “wealth effect.” If your stock portfolio is higher and the value of your house is higher, then you are more likely to spend money, which should help the economy. Whether or not this wealth effect is real is irrelevant here. What is relevant is that higher asset prices are both a mechanical consequence AND a desired outcome of central banking’s loose monetary policy.
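The “flip side” relationship is just discounting arithmetic. A minimal sketch, using an invented perpetual cash flow of $100 per year:

```python
# Value of a perpetual cash flow = cash_flow / discount_rate.
# The $100/year cash flow and the rates are illustrative, not real data.

cash_flow = 100.0

for rate in (0.08, 0.04, 0.02):
    value = cash_flow / rate
    print(f"discount rate {rate:.0%}: asset value ${value:,.0f}")

# 8% -> $1,250; 4% -> $2,500; 2% -> $5,000.
# Halving the rate doubles the price of the SAME cash flow: higher asset
# prices are a mechanical consequence of lower rates, nothing about the
# underlying economics has changed.
```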

While the CPI might not reflect inflation, asset prices do. And the more risky the asset, the more the asset has been inflated. If this sounds scary, it should. Sooner or later, asset bubbles burst. When they do, financial crises tend to follow.

What asset inflation also means is that investors are paying more today for income tomorrow. We see this by looking at metrics like Price/Earnings ratios on stocks (very high) or capitalization rates on real estate assets (very low). These two metrics (essentially inverses of each other, hence the opposite direction) reflect, respectively, the high price paid today for a dollar of earnings from public companies or a dollar of cash flow from real estate investments.
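That the two metrics are essentially inverses is easy to verify. The stock and building below are hypothetical examples, not market data:

```python
# Earnings yield (the inverse of P/E) and a real estate cap rate are the
# same idea: income received per dollar paid today. Figures are invented.

stock_price, eps = 150.0, 5.0           # hypothetical stock
pe_ratio = stock_price / eps            # 30x earnings: historically high
earnings_yield = eps / stock_price      # = 1 / (P/E)

property_price, noi = 2_000_000.0, 80_000.0  # hypothetical building
cap_rate = noi / property_price              # 4%: historically low

print(f"P/E of {pe_ratio:.0f}x -> earnings yield of {earnings_yield:.1%}")
print(f"Cap rate of {cap_rate:.1%}")
# Both say the same thing: a lot paid today for each dollar of income,
# which means a low prospective rate of return.
```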

Said another way, paying a high price now means that future rates of return on financial assets will likely be much lower than they have been in the past. This is great for today’s sellers of assets, who are receiving very high (inflated) prices. Sellers can use that money to consume, theoretically helping the economy (another impact of the wealth effect). However, buyers of financial assets pay more now, and will receive lower cash flows later on, reducing future consumption. So, not only does asset inflation result in those dangerous asset bubbles, it also pulls consumption forward, meaning lower economic growth in the future. Not to mention that it kills the business models of insurance companies and pension funds, which rely on investment income to meet future obligations. Truly a disaster waiting to happen.

Long story short, while the $3.6 trillion the Fed has printed (and the corresponding reduction in interest rates) may not have led to very high CPI figures, it has helped lead to asset inflation. And the riskier the assets, the more inflated they are. This has gotten the world into trouble before. It will do so again.

5. Inflation is much higher for the wealthy

When we were talking about the CPI, I mentioned that there are many assumptions that have to be made in order to construct such an index. For instance, what products to include. Also, who to survey. In reality, the government folks (the Bureau of Labor Statistics or “BLS”) who are responsible for publishing the CPI do construct several different indices based on who they survey. For example, they have separate price indices for urban consumers and rural consumers. They also have a price index for consumption by the elderly.

To my knowledge, none of those price indices show inflation levels that would worry the Fed. However, I believe there is a subset of American (and, for that matter, global) consumers that is experiencing inflation on a level that should concern the central banks of the world. That consumer is the wealthy.

Whereas the CPI has come in under 2% for many years running, I believe that a properly measured index for wealthy consumers would show inflation of somewhere between 6% and 10% annually. In other words, perhaps 3-5 times the CPI. And the richer the consumer, the higher the inflation.
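The compounding gap between the headline figure and my guess is enormous. A quick sketch, taking 2% as the CPI and 8% as a hypothetical mid-point of my 6-10% range:

```python
# How a 2% index and an 8% cost-of-living increase diverge over a decade.
# The 8% figure is my hypothesis, not an official statistic.

cpi_rate, wealthy_rate, years = 0.02, 0.08, 10

cpi_level = (1 + cpi_rate) ** years          # ~1.22: prices up ~22%
wealthy_level = (1 + wealthy_rate) ** years  # ~2.16: prices more than double

print(f"CPI basket after {years} years:      {cpi_level:.2f}x")
print(f"'Wealthy' basket after {years} years: {wealthy_level:.2f}x")
```

Over ten years the official basket rises by roughly a fifth while the hypothetical wealthy basket more than doubles, which is why the two groups describe such different economies.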

What has happened to this segment of consumer is a classic wage/price spiral. In a textbook wage/price spiral, money printing heats up the economy, causing prices to rise. Workers, seeing prices rise, demand higher wages. Higher wages result in businesses raising prices to offset higher wage costs. Higher prices cause workers to demand even higher wages, and so on.
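The textbook spiral can be sketched as a simple feedback loop in which each round’s wage demands chase the last round’s price rise, and prices chase wages in turn. The pass-through rates below are invented purely for illustration:

```python
# A toy wage/price spiral: wages chase prices, prices chase wages.
# The initial shock and pass-through coefficients are invented.

wage_passthrough = 0.8    # workers recoup 80% of the latest price increase
price_passthrough = 0.8   # firms pass 80% of the wage increase into prices

price_infl = 0.05         # initial price jump from freshly printed money
total_price = 1.0         # cumulative price level (1.0 = starting prices)

for round_num in range(1, 7):
    total_price *= 1 + price_infl
    wage_infl = wage_passthrough * price_infl    # wages chase prices
    price_infl = price_passthrough * wage_infl   # prices chase wages
    print(f"round {round_num}: price level {total_price:.3f}")

# With pass-through below 1 each round shrinks (0.8 * 0.8 = 0.64 damping),
# yet the price level still ratchets well beyond the initial 5% shock.
# With pass-through at or above 1, the spiral would never converge.
```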

In today’s world, that is what is happening but the money first flows through Wall Street, and then flows to the owners and operators of high risk assets, such as hedge funds, tech entrepreneurs and public company CEOs. Those “workers” see their cost of living go up (e.g. housing prices in Manhattan or Greenwich, CT) and demand higher compensation and the cycle continues.

I freely admit that I don’t have hard data to back up my contention. However, observation and intuition say that the annual increase in the cost of living in a “1%” city like New York or San Francisco is clearly well beyond 2%. So are the costs of eating in a 4-star restaurant, imbibing a grand cru burgundy, staying in a Four Seasons hotel, paying full fare for an Ivy League education (or buying your kid’s way into Harvard) and many more such worldly pursuits.

To be clear, I’m not advocating that you feel sorry for the rich. Not at all. For incomes of the wealthy have increased correspondingly. Remember, the point of this article is to explain why the Fed’s loose monetary policy hasn’t led to inflation, at least as measured by the CPI. However, in my view, it has led to very high inflation for this subset of consumers. Moreover, the asset inflation that I talked about above plays a large role here too. For who owns most of the financial assets in the world? The wealthy. So just as there is price inflation for the 1%, there is massive wealth inflation.

If this sounds to you like I’m blaming the Federal Reserve for the enormous increase in income inequality over the past few decades, then you are exactly correct. Regardless of intention, good or bad, the money that the Fed and other central banks have printed has, for the most part, not flowed to Main Street. And it has not flowed to the middle class. Instead, it has gone to the wealthy, the super-wealthy, and the uber-wealthy. It has gone to Wall Street and to New York. It has gone to Silicon Valley and to San Francisco. It has gone to hedge funders and investment bankers, to tech entrepreneurs and venture capitalists, to CEOs and star athletes and perhaps worst of all, to politicians or at least ex-politicians.

6. The Fed is chasing its own tail

The next explanation for why the Fed’s easy monetary policy hasn’t led to inflation is what I will call the Fed chasing its own tail. What I mean is that contrary to intention, cheap money actually leads to lower prices rather than higher prices.

Let’s once again review the textbook rationale for easy monetary policy. Printing money and lowering interest rates leads to more lending and borrowing, and therefore to more investment and more spending than would otherwise happen without easy money. The implicit assumption is that the economy is operating under capacity (i.e. there is unemployment) and therefore, that the additional spending and investment expands the economy until it reaches full capacity, at which time inflation should occur.

Other things equal, I agree that low interest rates and easy money lead to more investment and, especially, to riskier investment. However, like everything in the real world, other things are not equal. As I wrote about extensively here, I believe that a substantial portion of the investment fueled by easy money actually retards economic growth, lowers employment and reduces overall prices (though not for the wealthy, as discussed above).

The prime (get it?) example I used in my previous article was of Amazon (see, “Amazon Prime”???). Amazon is a company that absent cheap money would likely not exist since it has no ability to actually make money. Yet, it has “disrupted” traditional retailers, resulting in hundreds of thousands of lost jobs, a multitude of retail bankruptcies and yes, lower prices.

This trend is prevalent throughout the economy with few companies or industries spared from Fed subsidized tech disruption. In other words, easy money does indeed spur some (i.e. tech) investment. But when taking into account the disruptive secondary effects, we find that overall investment, employment and economic activity are actually lower. Prices are lower too, contrary to what the textbook models state. Consumers benefit from lower prices for the time being. But mostly, the benefits accrue to a handful of venture capitalists, tech entrepreneurs and highly skilled developers, all part of the 1%.

7. Deflation is winning

I now present to you one final explanation for why central bank money printing has not led to inflation: it is being offset by deflation.

Let us remember why the Federal Reserve and the other central banks of the world are pursuing extraordinary monetary policy, to the tune of trillions of dollars created and zero or even negative interest rates. They are doing these things in response to the financial crisis of 2008 and the global “Great Recession” that followed. Governments and central banks were, and are, desperate to prevent falling prices. This fear has led to a money experiment never before seen in 5,000 years of recorded history.

We need to ask ourselves why the financial crisis happened in the first place. Obviously many, many books have been written about the causes of 2008. Unfortunately, nearly all of them have been wrong. As briefly as possible, I’ll try to summarize the true causes.

The financial crisis, like all financial crises, was a natural market reaction to an economic bubble fueled by cheap money, subsidized risk and the perverse, short-term incentives that stem from cheap money and subsidized risk. What made this crisis worse than most in recent history was that the market was trying to correct not years of financial mismanagement, but decades.

Erroneously believing that monetary policy could (and should) smooth out the business cycle and prevent recessions, the Federal Reserve has repeatedly printed money, set interest rates below their market rate and bailed out banks and other financial services firms. Each time it has done this, it has led to an even larger asset bubble and further reinforced the message that risk takers will be bailed out. Naturally, each bailout has been larger than the one before. 2008 was very large. Yet the Fed continued, and wildly expanded, its playbook, still believing that the cure for too much money is more money.

Unfortunately, the cure for too much money is not more money. Money needs to be extinguished. Oversupply needs to be reduced. Companies without profitable business models need to disappear. Over-leveraged banks, regardless of size, need to fail. Investors who took stupid risks need to learn painful lessons. This is the only way a free market can work. And this is the only way an economy can grow over the long-term.

Even though the central banks of the world have created trillions of dollars of new money, that new money is still fighting against gravity. That gravity is a massive deflationary current worldwide stemming from decades of easy money. So in one sense, what the Fed is doing is working. Prices are not declining, and are, in fact, not far from the Fed’s target of 2% inflation. Moreover, we are not, at least officially, experiencing “depressionary” conditions.

Why haven’t trillions of new dollars caused inflation? They have, but we don’t notice it in measures such as the CPI because the new money is fighting the gravity of deflation. Sooner or later, however, gravity wins. It always does.

Conclusion – what comes next?

The past eight or so years have seen the central banks of the world print trillions of dollars. We see negative interest rates in parts of Europe and in Japan, something that has never happened before in human history. Now we hear calls from many mainstream economists for central banks to raise their inflation targets even higher, and louder and louder shouts for “helicopter money.”

Yet, inflation remains “stubbornly” below the target set by the Federal Reserve and most of the world’s central banks. Meanwhile, the world’s economies are growing slowly, if at all. And while the stock market and other financial asset prices continue to rise, income inequality continues to worsen, as does the political unrest that income inequality fosters. We see unemployed young people with student loans they will never repay. We see unprecedented numbers of homeless people on our city streets. We see anti-capitalist, socialist and fascist politicians worldwide gaining votes and gaining power.

From this mess there are two conclusions one can draw: either central banks aren’t doing enough or what they are doing isn’t working. Mainstream economists have concluded the former. Print and spend, is what they say. How much to print and spend? Until it works. What if it still doesn’t work? Then print and spend some more.

However, one need only look at Japan to see this folly. More than two decades of extraordinary money printing and extraordinarily low interest rates have done nothing to awaken Japan from a generation-long depression. Meanwhile, Japan’s population ages and shrinks, an effect, not a cause, of a stagnant economy.

I said it earlier and I will say it again. Easy money and subsidized risk cannot solve the problem of an economy ailing from decades of easy money and subsidized risk. But central banks and politicians will continue to try it. Because they have no understanding of what truly ails the economy and know no other way.

So what happens next? With no end in sight to more money printing and continued low interest rates, is inflation ultimately inevitable? Not necessarily. The U.S. and the rest of the developed world can continue limping along with slow growth, low inflation, continued income inequality and worsening demographics. We are all Japan.

Sooner or later, another financial crisis will hit. Perhaps months from now, perhaps years, perhaps even decades. Will the central banks be able to save us again? For sure, they will try. But eventually they will fail. Whether the endgame is massive deflation or hyperinflation is unknowable and probably immaterial. The result will be the same. Crisis. Depression. Ultimately, a financial reset.

It will be very painful. But it is, unfortunately, very necessary. And just maybe, 100 years from now, historians will look back and ask themselves how we could have been so primitive, so unwise, so naive as to think that printing money is a good idea.

Mainstream Economics Myth 3: Insufficient aggregate demand causes recessions

In this series of articles, I use the term “mainstream economics” to describe what I believe to be the consensus views of economists and the ideas taught at most universities and found in most economics textbooks. However, for this post, I want to be a bit more specific about what I mean by mainstream economics. Perhaps more than anything else, what defines a mainstream economist today is the belief in Keynesian economics, or more precisely a Keynesian explanation of economic downturns and a Keynesian solution to economic downturns.

This view has dominated mainstream economic thought since the 1930s with a brief interruption in the “stagflation” days of the 1970s.  Ever since the global financial crisis of 2008-2009, the Keynesian view has effectively monopolized economics.  Certainly in the U.S., and in most of the world, virtually all tenured economics professors, columnists, political advisors and central bankers adhere to the Keynesian religion.

Before we go any further, let me very briefly explain the Keynesian (and mainstream) tale of economic recessions.  First, what do we mean by the term “recession?”  Conceptually, let’s call a recession a widespread (or economy-wide) reduction in economic activity (i.e. GDP) accompanied by high or rising unemployment.

The economy is merrily galloping along at full employment and in equilibrium.  Economic growth is robust and everyone who wants a job has a job.  Then, BOOM, out of the blue comes some unpredictable “shock” to the economy.  This shock causes consumer confidence to decline, which results in consumers spending less money than they should, which results in businesses having to cut investment and lay off workers.  With fewer employed workers, consumers as a whole spend even less money, businesses invest even less and lay off even more, and there is a vicious spiral leading to poor (or negative) economic growth and high unemployment.

In econ-speak, the economy suffers from “insufficient aggregate demand.”  That is to say, consumers are not spending enough money to keep the economy running as it had been prior to the “shock.”  In contrast to a free market view of  a self-correcting economy, due to somewhat mysterious structural reasons, such as sticky wages, the Keynesian economy now gets “stuck” in an “equilibrium” of less than full employment.  Finally, the economy cannot get “unstuck” and back to full employment without the aid of government (fiscal and/or monetary) stimulus.

In the next post in this series, Myth #4, we’ll discuss the Keynesian viewpoint that government (especially through monetary policy) can both prevent recessions and get us out of them.  Here, however, I want to focus on the first part of the story, the Keynesian myth that insufficient aggregate demand is the cause of recession.

Now, let’s return to the mainstream, Keynesian story of recession.  To believe the Keynesian explanation requires four major assumptions, all four of which are unsatisfactory and/or false.  The first assumption is that the “shock” to the economy, that is the proximate cause of recession, is unpredictable. The second assumption is that the post-shock amount of aggregate demand is below the “correct” or so called “equilibrium” level. The third assumption is that the reduction in aggregate demand is due to “confidence” issues.  The fourth and final assumption is that it is demand, rather than supply that is the key driver of the economy (at least in the short-term).

Let’s start with the first assumption, that the “shock” is unpredictable.  In a small or undiversified economy, it is reasonable to say that some unexpected event might cause an economy-wide downturn.  For example, an economy highly dependent on agricultural output might experience recession due to drought.  An economy dependent on a single commodity (oil, for example) might experience recession if there is a decrease in the global price of that commodity.

However, in a large, diversified economy such as the United States, recessions are not caused by some unpredictable, exogenous shock.  They are caused by an unsustainable expansion of money and credit leading to an unsustainable expansion of investment.  When the “unsustainable” becomes realized, you get a recession.  In the old days, much wiser people than today’s economists and politicians understood that what we now call “recessions” were part of a business cycle.  And not for nothing did they call it a “boom-bust” cycle.  You don’t have the bust without the boom.

The second key erroneous assumption made by mainstream economists is that in a recession, the level of aggregate demand drops below what had been the “natural” or “equilibrium” level.  Of course, there is no question that demand drops from previous levels in a recession. However, if you believe that modern recessions always follow credit and investment booms, as I do, then you should understand that the previous (boom) level of aggregate demand was actually higher than it should have been and not some natural or equilibrium level.  Incidentally, whether equilibrium even exists (preview:  it does not) is something we will cover in Myth #9.

The third foundation of the Keynesian view of recession is that aggregate demand is reduced due to issues of “confidence.”  To paraphrase Keynes himself, “animal spirits” of both consumers and businesses in an economic downturn are depressed.  No doubt this is true.  But poor confidence is a cop-out reason for weak economic activity.  For low confidence is a symptom, not a cause of recession.

The fourth and final problem with the Keynesian explanation of recessions relates to the focus on demand rather than supply.  As we’ve stated already, the “boom” part of the economic cycle is a result of an unsustainable expansion of money, credit and investment.  This investment boom results in over-supply, whether it be over-supply of houses or factories or stores or mines or social networking apps.  When the investment bubble bursts, as it inevitably must, this over-supply must be pruned before robust economic growth can once again return.  It is the painful pruning of over-supply, through bankruptcies, layoffs, closures and investment cuts that is the true driver of the recession part of the cycle.  Hence, it is far more intellectually honest to refer to the cause of recession as excess aggregate supply (stemming from the boom) rather than insufficient aggregate demand (stemming from low confidence).

Long story short, I believe that the mainstream or Keynesian explanation for recessions is positively wrong.  Recessions are not caused by some unpredictable shock which leads to a self-reinforcing feedback loop of poor confidence and low aggregate demand.  Instead, recessions are the inevitable result of an economic boom fueled by an expansion of money, credit and investment.  More or less, this is the story espoused by the non-mainstream economists known as the Austrian school, and in future posts, I’ll cover this explanation in much greater detail.

Finally, as we have done and will continue to do in each of the articles in this series of mainstream economic myths, let’s ask ourselves why this topic is of vital importance.  Once again, we answer that what matters is not so much the explanation itself, but the remedies inferred from it.

If recessions are indeed caused by insufficient demand then government can “cure” recessions by artificially creating more demand (i.e. spending) through the use of monetary or fiscal stimulus.  This is exactly the Keynesian prescription and exactly what economists have advised, and governments have implemented, since the Great Depression of the 1930s, and on an unprecedented scale since the financial crisis of 2008-09.

However, if the true cause of recession is the inevitable aftermath of an investment boom fueled by money and credit, then further stimulus is exactly the wrong thing to do.  Stimulus, among many other deleterious things, exacerbates the problem of oversupply, delays the inevitable correction and encourages the kind of risk-taking that caused the boom in the first place.

So as I hope you can see, understanding the root cause of recession is of vital importance to the long-term health of the economy.  And unfortunately, today’s consensus explanation is utterly wrong, and has caused immeasurable damage to the global economy.  We must fight to remedy this.

Mainstream Economics Myth 2: Market failures are common

I freely admit that I have a faith in free markets that few possess.  Yet anyone who believes that markets are perfectly efficient or result in an optimal condition or some kind of utopia is utterly naive.  Such conditions don’t exist in the real world.  However, it is equally wrong to believe that market failures are common, an assumption made by mainstream economists today.

There are two problems with the mainstream view that market failures are common.  The first problem stems from the mainstream definition of a “market failure.”  Economists define a market failure as any time an outcome is less than perfectly efficient.  Perfection is a pretty high bar.  And I’m sure you can also appreciate that equating the word “failure” to anything less than utter perfection is a little bit unfair, demonstrating both mainstream economics’ misunderstanding of markets and its inherent anti-free market bias.

The second problem is to identify a “market failure” in situations where no market actually exists.  As we’ll see in a minute, this is more frequently the case when discussing so-called market failures such as externalities and public goods.

What do economists mean by market failures?  There are a number of broad categories, and I’ll very briefly address the ones that are most common.  First, the claim that individuals are irrational.  As I’ve already discussed in Myth #1, this is viewed as a market failure justifying government action.

A second type of market failure is what is known as information asymmetry.  For example, if I am a used-car salesman and you are in the market to purchase a used car, I clearly have more information about the cars I have to sell than you have about the car you might buy.  That I might be less than scrupulous and sell you a lemon of a car is considered a market failure and justifies, to most economists, government involvement in this transaction through regulation.  Yet, there are plenty of free market solutions that do a much better job than the government of dealing with information asymmetries.  These include reputation, branding and marketing, consumer agencies, consumer reviews, civil/tort law and insurance.

Simply put, information asymmetry is the every day state of the world.  It is impossible (except in silly economic models) for all parties in a transaction to have perfect information.  But as long as a transaction is fully voluntary to all involved parties, information asymmetry does not represent a market failure.

Another commonly assumed market failure is the natural monopoly.  Monopoly is indeed the enemy of free markets.  Yet, in a free market, there is no such thing as a natural monopoly.  Over the long-term (and that long-term need not be very long), the free market will always provide incentives for innovation that will result in substitutes and break a short-term monopoly.  This is true even for industries requiring substantial investment and exhibiting substantial economies of scale, such as transportation (e.g. roads and railways), utilities and telecommunications networks.  The only monopolies that can subsist are those that are government created, government sponsored, government subsidized or government itself.

A fourth category of market failure about which economists are fond of speaking is externalities and public goods.  A prime example of an externality is over-fishing, which is sadly all too common in waters that have no ownership.  Paradoxically, the inefficiencies caused by over-fishing are viewed by economists as a market failure when they should be viewed as a failure of government to create a market.  You cannot have a market failure when there is no ownership of the underlying assets.  Timber is a good example.  Where forests have no private ownership you see over-foresting.  Where forests are owned by private enterprises you do not.

There is no question that externalities such as over-fishing, environmental damage, pollution and global warming are big issues to communities large and small.  But to use these examples to shout “market failure” in the absence of a market is wrong and unfair.

The last type of market failure that I’ll briefly mention is the economy’s failure to recover on its own from an economic downturn.  That the economy does not return to full employment is viewed by mainstream economists as another failure of free markets.  We’ll discuss this issue in greater depth later on, but as a preview, I note that this view incorporates three errors:  1) not understanding the true cause(s) of the downturn, 2) not appreciating that government intervention inhibits the market from recovering and 3) faith in a very silly economic concept called equilibrium.

Before I leave this post, I want to answer the question, “so what.”  Why does it matter whether something represents a market failure or a failure to have a market?  It matters not so much in the classification of economic phenomena, for that is semantic, but in the remedies proposed.

Anytime economists spot what they believe to be (correctly or, more than likely, incorrectly) a less-than-perfect economic outcome, they immediately point to government as the savior, usually in the guise of more regulation.  They rarely ask themselves whether a much better outcome would be to create a market where none existed.  And even in cases where that might not be possible, they infrequently bother to ask, or properly analyze, whether the government “solution” would result in an even worse outcome than the supposed market failure itself.  But that’s a story for another post.