Retract The Lancet’s (and WHO-funded) published study on mask wearing – Criticism of “Physical distancing, face masks, and eye protection to prevent person-to-person transmission of SARS-CoV-2 and COVID-19: a systematic review and meta-analysis”

It has been drilled into our heads by the media and by politicians over the past six months that wearing masks to prevent the spread of Covid is based on “the science.” But is that really true? Or is the so-called science supportive of masks really pseudo-science or junk science?

A little while ago, I started to write a post on the case against masks. It seemed natural to start by examining the scientific support for widespread mask wearing. I began with what seemed to be the most widely cited (at least in the media) pro-mask article, published in the prestigious medical journal The Lancet (June 27, 2020 issue) and funded by the World Health Organization (WHO). This article, entitled “Physical distancing, face masks, and eye protection to prevent person-to-person transmission of SARS-CoV-2 and COVID-19: a systematic review and meta-analysis” and authored by Chu et al., is a meta-analysis of previously published articles on the SARS, MERS, and Covid-19 respiratory viruses.

This study (henceforth referred to as the “Lancet study” or “Lancet meta-study”) concludes that mask wearing, as well as physical distancing and eye protection, in both public and healthcare settings would result in a large reduction in the risk of Covid infection (though the authors judge the certainty of the evidence for both mask wearing and eye protection as “low”).

I read and analyzed each of the 29 studies referenced by the Lancet on the topic of mask wearing (I ignored the studies that focused on physical distancing and eye protection). What I found was shocking. In short, the Lancet meta-study should properly be considered junk science built on junk science that, even if taken at face value, has no relevance to widespread community mask wearing. Based on my own analysis, I believe the Lancet study should be retracted.

Poor quality of the underlying studies

Let’s start by discussing the poor quality of the underlying studies. A number of the studies are non-peer-reviewed and unpublished. Not a single study was based on a randomized controlled trial. All are observational studies based on questionnaires or interviews.

Because of the nature of observational studies, all of these articles suffer from bias. The most obvious type of bias is recall bias. As stated above, these observational studies are based on questionnaires or interviews given in most cases months (and in one study more than a year) after events took place. To quote from one study, “We encountered difficulty in our study with obtaining precise exposure history from subjects, some of whom had tended more than one patient, and all of whom had imperfect recall of an extremely stressful period” (Teleman).

Even more important than recall bias, however, are the psychological biases to which nearly all of us are prone. The first is telling an interviewer what they want to hear. For example, “not only was it difficult for respondents to recall behaviors during specific periods within the previous 2 months, but respondents may have been concerned that results could be used to evaluate their performance” (Ha). It is distinctly possible that healthcare workers who are trained to wear masks might feel pressure to tell interviewers that they wore masks even if they did not. Obviously, this acts to overstate mask wearing. An additional type of bias is projecting whether or not the subject became infected back onto their recollection of their own actions. In other words, healthcare workers who subsequently got sick are more likely to say they did not wear masks (“If I got sick I must have forgotten to wear a mask”), and healthcare workers who did not get sick are more likely to say they did wear masks (“I didn’t get sick so I must have always worn a mask”). Together these biases render questionnaire-based studies like these much less reliable, to the point of uselessness.

In addition to biases, nearly all of these studies suffer from what is known in statistics as multicollinearity: significant correlation between two or more independent variables. Most of these studies claim that mask wearing is protective. However, among healthcare workers there is likely a strong correlation between, say, mask wearing and glove wearing, mask wearing and gown wearing, or mask wearing and hand washing. In these instances, it is impossible to determine whether mask wearing is the protective factor or hand washing. Moreover, it is highly likely that subjects (and especially healthcare workers) who voluntarily wore masks when not required are excessively cautious and take other preventative precautions. Similarly, subjects not wearing masks (whether required to or not) might take fewer precautions and/or more risks when interacting with symptomatic patients. For instance, in one very small study (Kim & Jung), the only one of nine exposed healthcare workers who got sick was a security guard (almost certainly less trained in medical precautions than doctors and nurses).

This problem of multicollinearity is compounded by the fact that most of these 29 studies reflect univariate analyses. That is, they make no attempt to separate the effect of masks from other potentially protective measures (e.g., other PPE, hand washing, face touching) using regression analysis. Lastly, as we will see in the next section, the Lancet meta-study takes only the univariate data from these 29 studies, even for the handful of studies that do perform multivariate analysis.
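The confounding mechanism described above is easy to demonstrate with a small numerical sketch (the counts below are hypothetical, chosen only to illustrate how a correlated behavior can make masks look protective):

```python
# Hypothetical population: infection depends ONLY on hand washing, but
# mask wearing is strongly correlated with hand washing.
#   500 hand-washers (5% infection risk), of whom 400 wear masks
#   500 non-hand-washers (20% infection risk), of whom 100 wear masks

mask_infected = 400 * 0.05 + 100 * 0.20      # 40 infected of 500 maskers
no_mask_infected = 100 * 0.05 + 400 * 0.20   # 85 infected of 500 non-maskers

mask_rate = mask_infected / 500              # 0.08
no_mask_rate = no_mask_infected / 500        # 0.17

# A univariate comparison makes masks look protective (8% vs 17%), even
# though, within each hand-washing stratum, masked and unmasked subjects
# face identical risk (5% or 20%) -- masks do nothing in this population.
print(mask_rate, no_mask_rate)
```

Only a multivariate analysis (or stratification on hand washing) would reveal that the apparent mask effect here is entirely an artifact of the correlation.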

Poor quality of the meta-study

The Lancet meta-study examines the 29 individual studies and, for each study, calculates how many exposed people who wore face masks were infected with SARS/MERS/COVID and compares that figure to how many exposed people who did not wear face masks were infected. As we have discussed, the Lancet study is a textbook example of “garbage in, garbage out.” But it gets worse.
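That per-study comparison reduces to a 2×2 table and a ratio of risks. A minimal sketch of the calculation, using the Lancet’s own figures for the first study (Scales: Face Mask 3/16 vs. No Face Mask 4/15):

```python
def risk_ratio(mask_pos, mask_total, no_mask_pos, no_mask_total):
    """Relative risk of infection, mask group vs. no-mask group."""
    return (mask_pos / mask_total) / (no_mask_pos / no_mask_total)

# Lancet figures for Scales et al.: 3/16 masked vs. 4/15 unmasked infected
rr = risk_ratio(3, 16, 4, 15)
print(rr)  # ~0.70 -- masks look protective, but n = 31 is tiny
```

Every downstream conclusion of the meta-analysis rests on ratios like this one, which is why the data errors catalogued below matter so much.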

The first problem with the meta-study itself is that it is riddled with data errors. Specifically, the authors miscalculated the figures or made one or more errors interpreting the data for at least eight of the studies (Scales, Heinzerling, Reynolds, Seto, Alraddadi, Peck, Burke, Ha) (details below). Four of the studies may have contained data errors, as I was unable to replicate the Lancet’s summary data (Pei, Ki, Kim & Choi, Lau). Six of the studies reflected exceptionally weak, biased, or poor design and should not have been included (Kim & Jung, Nishiyama, Loeb, Wang & Pan, Wu, Tuan). At least four of the studies showed results that were not statistically significant regarding masks (Yin, Heinzerling, Nishiura, Alraddadi). Finally, for two of the studies I was not able to access more than a one-page abstract, so I could not verify the quality of the study or its data (Yin, Park).

Even more importantly, an additional eight of the studies should not have been included in the Lancet meta-study because they did not reflect a true comparison of a Mask Group vs. a No Mask Group (Liu, Wang & Huang, Ho, Teleman, Wilder-Smith, Kim & Choi, Ryu, Peck). Most of these eight studies compared only a full-PPE group with a not-full-PPE group, rather than a mask group with a no-mask group. For instance, a healthcare worker in the not-full-PPE group might still have been wearing a mask but no gown, gloves, or goggles.

The third problem with the meta-study is the various inconsistencies from study to study. In some studies, the Mask Group represents healthcare workers who “always” wore masks, while in other studies it reflects mask wearing “sometimes” or “most of the time.” Correspondingly, the No Mask Group could reflect “never” wearing masks or “sometimes” wearing masks. Another glaring inconsistency from study to study is what counts as a positive case. Some studies count a case as positive only if the subject tested positive by PCR or serology, regardless of whether the exposed subject had symptoms (including fever). Other studies do the opposite and count a case as positive if the subject was exposed to a patient and was symptomatic, regardless of whether or not they were tested for the virus. Moreover, a few studies tested subjects for antibodies weeks or months after the events, and thus almost certainly undercounted cases.

Finally, there is the question of the six studies that had zero positive cases in both the Mask and No Mask groups. Convention says to exclude such studies from a meta-analysis, which is what the Lancet authors do. However, this decision seems questionable given that some of the included studies also had very few positive cases. For instance, one study (Kim & Jung) had only one positive case (out of 9 subjects), and seven other studies had fewer than 10 positive cases (Scales, Park, Heinzerling, Loeb, Ho, Ki, Tuan).
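The exclusion follows from the arithmetic: with zero events in both arms, the estimated effect is 0/0. A common fix for a single zero cell is the Haldane–Anscombe correction (add 0.5 to every cell); a sketch, using the Kim & Jung counts (0/7 masked vs. 1/2 unmasked infected):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for the 2x2 table [[a, b], [c, d]]:
    a = infected w/ mask, b = healthy w/ mask,
    c = infected w/o mask, d = healthy w/o mask.
    Applies the Haldane-Anscombe 0.5 correction if any cell is zero."""
    if 0 in (a, b, c, d):
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    return (a * d) / (b * c)

# Kim & Jung: 0 of 7 masked infected, 1 of 2 unmasked infected
print(odds_ratio(0, 7, 1, 1))  # ~0.07, driven largely by the correction

# With zero events in BOTH arms, the "estimate" is determined entirely by
# the correction, not by the data -- which is why such studies cannot be
# pooled and the Lancet drops them.
```

The same fragility applies to the included studies with a single positive case: one misremembered answer on a questionnaire flips the estimate.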

Following is a table summarizing the 29 studies (studies are listed in the order of the Lancet table on p. 1981, Figure 4). Full details of my analysis of each study appear further below.

1 Scales incorrect data
2 Liu should not include, not mask vs. no mask
3 Pei cannot replicate data
4 Yin cannot verify data
5 Park small study, unclear results, not statistically significant
6 Kim, Jung tiny study, obvious flaws
7 Heinzerling incorrect data, not statistically significant
8 Nishiura not statistically significant
9 Nishiyama weak study, questionnaire long after event
10 Reynolds incorrect data
11 Loeb very small, weak study
12 Wang, Pan weak study
13 Seto incorrect data
14 Wang, Huang should not include, not mask vs. no mask
15 Alraddadi incorrect data, not statistically significant
16 Ho should not include, not mask vs. no mask
17 Teleman should not include, not mask vs. no mask (only N95 vs non-N95)
18 Wilder-Smith should not include, not mask vs. no mask (only N95 vs non-N95), and redundant data with Teleman
19 Ki cannot replicate data
20 Kim, Choi cannot replicate data, should not include, not mask vs. no mask
21 Hall zero positive cases so not included in Lancet summary
22 Ryu zero positive cases so not included in Lancet summary, should not include, not mask vs. no mask (full PPE vs. not full PPE)
23 Park zero positive cases so not included in Lancet summary
24 Peck incorrect data, should not include because not mask vs. no mask, zero positive cases so not included in Lancet summary
25 Burke incorrect data, zero positive cases so not included in Lancet summary
26 Ha incorrect data, zero positive cases so not included in Lancet summary
27 Lau cannot replicate data
28 Wu weak study, high possibility of bias
29 Tuan weak study

Irrelevance of the meta-study to community mask-wearing

We have already established that the Lancet meta-study is weak science based on weak science. But even if it were a quality meta-study of quality studies, its conclusions would still be irrelevant to the matter of the effectiveness of widespread mask wearing among the general public.

Every single one of the 29 studies examines whether the mask wearer or non-mask wearer got sick (or tested positive) after being exposed to symptomatic carriers. Nearly all of these studies (27/29) examined healthcare workers (or in one case, visitors) in healthcare settings (i.e. hospitals). Moreover, the majority of interactions between the study subjects (mask wearers or non-mask wearers) and the infected index patients involved extended contact in close indoor quarters.

However, this study has been used by politicians, health officials, and the media to justify widespread mask wearing by the asymptomatic general public, often outdoors, in order to protect not the wearer of the mask but others (“source control”). Not a single study detailed in the Lancet meta-study discusses whether masks protect the general population from asymptomatic spread. Moreover, nearly all of the subjects of the 29 studies were healthcare workers, trained to wear masks correctly and provided with clean masks, which they presumably did not reuse and disposed of properly. It is simply nonsensical and unscientific to extrapolate from studies of masks protecting the wearer to claims about masks on carriers protecting the general population, which is untrained in proper mask wearing, reuses dirty masks for days or weeks on end, and constantly fiddles with them. Moreover, at least two studies (Wilder-Smith, Peck) note that asymptomatic spread seems to be limited or nonexistent, further weakening the case for widespread mask wearing.

Conclusion

I believe the Lancet meta-study should be retracted. It is riddled with data errors and contains studies that should not have been included. Most of the remaining studies are very small, of exceptionally poor design, or report weak and statistically insignificant results. In summary, the Lancet study shows, at best, weak and circumstantial evidence that masks (most notably, properly fitted N95 masks) may protect healthcare workers exposed to symptomatic coronavirus patients in a healthcare setting (in close quarters with extended contact). But even if the science were valid, this meta-study has no relevance whatsoever to widespread mask wearing by the general public and should not be used to justify mask mandates.

The remainder of this article summarizes my findings for each of the 29 studies pertaining to masks that are listed in the Lancet meta-study (in the same order as the Lancet table on p. 1981, Figure 4):

1. Scales et al. 2003

  • Lancet Assumption:
    • Face Mask Group: 3/16 (positive/total)
    • No Face Mask Group: 4/15 (positive/total)
  • SARS study in Toronto of 31 healthcare workers who had direct exposure to a single symptomatic patient; data via questionnaire
  • Lancet data appears incorrect – should have included 6 total positive cases, not 7 (6 probable, 1 suspected); Corrected data is FM Group: 3/13, No FM Group: 3/18
  • No Mask group includes healthcare workers who “sometimes” wore mask
  • “SARS developed in one healthcare worker despite the fact that the worker wore an N-95 mask, gown and gloves.”
  • “Our study involved a small number of cases, and definitive conclusions cannot be drawn from a report of this size.”
  • This was a small study that showed no statistically significant difference between the mask and no mask groups.
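The lack of significance is easy to verify: a two-sided Fisher’s exact test on the corrected counts (3/13 masked vs. 3/18 unmasked infected) is nowhere near significance. A sketch, implemented directly with `math.comb` to stay self-contained:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum of probabilities of all tables at most as likely as the observed one."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def prob(x):  # hypergeometric probability of x infections in row 1
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# Corrected Scales data: 3 of 13 masked, 3 of 18 unmasked infected
p = fisher_exact_two_sided(3, 10, 3, 15)
print(p)  # ~0.68, far above the 0.05 threshold
```

At n = 31, even a real effect of plausible size would be undetectable.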

2. Liu et al. 2009

  • Lancet Assumption:
    • Face Mask Group: 8/123
    • No Face Mask Group: 43/345
  • SARS study in Beijing among hospital healthcare workers exposed to symptomatic patients; data via questionnaire
  • Cannot replicate Lancet figures
  • Lancet misrepresents the data – seems to have taken the 12-layer group as the mask group and the non-12-layer group as the no-mask group, when the non-12-layer group includes 16-layer, N95, and disposable masks
  • Interestingly, this study showed no statistical significance for the effectiveness of N95 masks versus other types
  • “Another possible bias is that the case group attributed their infection to some high risky performance (e.g. performing intubation) and less efficient protection (wearing only one layer of mask while attending patients), while the control group did the opposite.”
  • Study concludes multilayer masks are helpful, but those people might be obsessively cautious and use other precautions
  • This study, as I understand it, should not have been included because there is no data on NO masks, only on different types of masks

3. Pei et al. 2006

  • Lancet Assumption:
    • Face Mask Group: 11/98
    • No Face Mask Group: 61/115
  • SARS study of healthcare workers in China in hospitals; data via questionnaire
  • Cannot replicate Lancet data
  • Face mask event in Lancet summary represents double 12 layer cotton masks but NOT general cotton masks; if both types used then FM event should be 86/328
  • No data given on no-mask use, so no idea where Lancet got 61/115; it seems implausible in any case, since the article states that “98% of healthcare workers wore masks…”
  • “In multivariate analysis the masks as factor didn’t enter logistic regression model…”

4. Yin et al. 2004

  • Lancet Assumption:
    • Face Mask Group: 46/202
    • No Face Mask Group: 31/55
  • SARS study of healthcare workers in Guangdong, China caring for severe SARS patients; data via questionnaire
  • Cannot find full study in English; only have abstract so cannot verify data

5. Park et al. 2016

  • Lancet Assumption:
    • Face Mask Group: 3/24
    • No Face Mask Group: 2/4
  • MERS study among Korean hospital of healthcare workers and patients who interacted with single symptomatic MERS patient
  • 1 page summary only; no full text so cannot verify quality of study or data
  • Only 1 out of 5 positive cases were confirmed; 4 were probable
  • Unclear if not wearing surgical masks means no mask or means other type of mask
  • Mask results not statistically significant

6. Kim, Jung et al. 2016

  • Lancet Assumption:
    • Face Mask Group: 0/7
    • No Face Mask Group: 1/2
  • MERS study of healthcare workers exposed to a single symptomatic patient in South Korea
  • Single healthcare worker who got sick was security guard, not a doctor or nurse and the study discusses fact that security guard possibly contracted MERS elsewhere
  • This is a tiny “study” of limited relevance

7. Heinzerling et al. 2020

  • Lancet Assumption:
    • Face Mask Group: 0/31
    • No Face Mask Group: 3/6
  • Covid-19 study in California of healthcare workers exposed to a single symptomatic patient; data via interview
  • Lancet completely misinterprets data; correct figures are:
    • Face Mask Group: 0/3
    • No Face Mask Group: 3/34
  • Of 3 positive cases in the no-face mask group, 1 individual wore face masks “most of the time”
  • 121 healthcare workers were exposed and 43 had symptoms (including fever, cough, shortness of breath, or sore throat), but only 3 tested positive with PCR tests
  • Study assumes that 40 w/ symptoms were Covid negative but that seems unlikely especially given February 2020 timeframe
  • No data on the mask use of the 121 exposed (43 with symptoms) as there was no Covid testing for non-symptomatic patients
  • In addition to “recall bias”, and “the low number of cases which limit the ability to detect statistically significant differences,” “additional infections might have occurred among asymptomatic exposed HCP who were not tested…”
  • This study reflects very weak science
  • Mask results not statistically significant

8. Nishiura et al. 2005

  • Lancet Assumption:
    • Face Mask Group: 8/43
    • No Face Mask Group: 17/72
  • SARS study at a Vietnam hospital of healthcare workers and relatives exposed to confirmed cases; data based on a survey conducted 1 year after onset of epidemic
  • Minimal difference in % positive from Face Mask Group vs % positive from No Face Mask Group (19% vs 24%) – not statistically significant
  • “Put simply, the use of masks alone was shown to be insufficient to contain the epidemic.”
  • Significant bias and limitation to the study: “mask usages…is vulnerable to recall bias,” “…the estimates of the protective effect of masks…may include the effects of other concomitant changes…”

9. Nishiyama et al. 2008

  • Lancet Assumption:
    • Face Mask Group: 17/61
    • No Face Mask Group: 14/18
  • SARS study at 3 Vietnam hospitals of people exposed to SARS patients; data via questionnaire survey 7 months after the epidemic for 1 hospital and 14 months for the other 2 hospitals
  • Lancet ignores “sometimes” mask use data
  • Very simplistic study – no discussion of other prevention measures (e.g. gloves, gowns) except handwashing
  • Limited information in “short communication,” not full scientific study

10. Reynolds et al. 2006

  • Lancet Assumption:
    • Face Mask Group: 8/42
    • No Face Mask Group: 14/25
  • SARS study in Vietnam hospital of healthcare workers exposed to single patient; data via questionnaire
  • Study reports two different types of activity: 1) exposed healthcare workers who “talked to or touched index patient without mask” and 2) “came within 1 meter of index patient without mask”
  • Lancet used latter group, which shows somewhat stronger pro-mask results
  • However, if one “touched” a patient, they must have been within 1 meter, so it appears the correct interpretation should have used the other set, which is weaker and shows non-statistically-significant results (data shown for “talked to or touched”):
    • Face Mask Group: 15/51
    • No Face Mask Group: 7/16
  • No analysis of other types of PPE use
  • Significant bias and limitations, including, “small sample size,” inability to assess “duration, or the intensity of potential exposure,” “selection bias favoring enrollment of persons with less opportunity for direct contact with the index patient.”
  • Very simplistic and poorly designed study

11. Loeb et al. 2004

  • Lancet Assumption:
    • Face Mask Group: 3/23
    • No Face Mask Group: 5/9
  • SARS study in Toronto hospital of nurses exposed to symptomatic patients; data via interview
  • 5/9 No Mask Group is “non consistently wearing mask”, not necessarily wearing no masks
  • 2/16 SARS positive individuals always wore N95 mask and 1/4 SARS positive individuals always wore surgical mask
  • “Difference for SARS infection for nurses who consistently wore N95 masks and those who consistently wore surgical masks was not significant.”
  • Small, weak study; for example, the single nurse with the most shifts (by far the most exposure to the index patient) had “inconsistent” use of an N95 mask (and was included in the No Face Mask Group)

12. Wang, Pan et al. 2020

  • Lancet Assumption:
    • Face Mask Group: 0/278
    • No Face Mask Group: 10/215
  • Covid-19 study of healthcare workers in a hospital in Wuhan, China
  • Mask group equals “wore N95 respirators, and disinfected and cleaned their hands frequently”
  • No mask group equals “wore no medical masks, and disinfected and cleaned hands only occasionally”
  • Data does not differentiate between the effects of mask wearing and cleaning hands
  • What is meant by “medical masks” – might healthcare workers have worn non-N95 masks?
  • Another data table shows a strong department effect: the respiratory, ICU, and infectious disease departments had zero positive cases, while hepatobiliary pancreatic surgery, trauma and microsurgery, and urology had all of the positive cases (8/10 in one department: hepatobiliary pancreatic surgery), so the difference might be the type of interaction, not masks
  • “A randomized clinical trial has reported that the N95 respirators vs medical masks resulted in no significant difference in the incidence of laboratory confirmed influenza.”
  • This is a very weak study that should not have been included because it does not clearly define the mask group and no mask group as properly mask vs. no mask

13. Seto et al. 2003

  • Lancet Assumption:
    • Face Mask Group: 0/51
    • No Face Mask Group: 13/203
  • SARS study in Hong Kong hospitals of healthcare workers exposed to symptomatic patients; data via questionnaire
  • Lancet seems to have misinterpreted data
  • 0/51 is for surgical masks only; if we use all masks (including 2 layered paper masks, surgical and N95) then the FM = 2/169 and No FM = 11/85

14. Wang, Huang et al. 2020

  • Lancet Assumption:
    • Face Mask Group: 1/1286
    • No Face Mask Group: 119/4036
  • Covid-19 study of healthcare workers in China in neurosurgery departments in 107 hospitals; data via questionnaire or telephone interviews
  • Lancet completely misinterprets data – conflated masks/no masks with Level 1 (119/4036) vs Level 2 (1/1286) protection
    • Level 1 includes surgical masks: “Level 1 protection: white coat, disposable hat, disposable isolation clothing, disposable gloves and disposable surgical mask (replace them every 4 h or when they are wet or contaminated)”
    • Level 2 includes N95 or higher masks, goggles, gloves, etc.: “Level 2 protection: disposable hat, medical protective mask (N95 or higher standard), goggles (anti-fog) or protective mask (anti-fog), medical protective clothing or white coats covered by medical protective clothing, disposable gloves and disposable shoe covers”
  • This is level 1 vs level 2 study, not mask vs no mask study
  • Proper data based on study’s Table 1 shows Face Mask group had 95 positive cases (out of 120 infected staff) and No Face Mask group had 25 cases (out of 120 infected staff); no data given on mask use for non-infected individuals
  • Study also ignored 300 symptomatic healthcare workers who tested negative for Covid-19
  • Significant limitations to study: “the variables of the study are relatively simple,” “protective measures adopted by the medical staff members were not fixed but changed over time. Therefore, the analysis based on protective measures might be affected by time bias.” “respondents’ descriptions might be inconsistent with the facts, which could affect the reliability of the results,” “some cases had uncertain documentation of the exposure history, and recall bias might exist…“
  • Study should not have been included as not correctly mask vs no-mask

15. Alraddadi et al. 2016

  • Lancet Assumption:
    • Face Mask Group: 6/116
    • No Face Mask Group: 12/101
  • MERS study of healthcare workers in a Saudi Arabian hospital (two cohorts exposed to patients); data via questionnaire
  • Lancet misinterprets data: figures of mask group (6/116) and non-mask group (12/101) is for N95 masks, not all masks!
  • Should have used the data labeled “Covering of nose and mouth with medical mask or N95 respirator,” in which case the data would be:
    • Face Mask Group: 11/151
    • No Face Mask Group: 7/66
  • Not statistically significant if we use correct data
  • Study also does not take into account other PPE (gloves, gown, eye protection)
  • The No Face Mask group “sometimes” wore masks
  • Study ignores symptomatic but negative tested healthcare workers: “most uninfected reported illness”

16. Ho et al. 2004

  • Lancet Assumption:
    • Face Mask Group: 2/62
    • No Face Mask Group: 2/10
  • SARS study of healthcare workers in hospital in Singapore; data via questionnaire
  • Data is for “protected” vs. “unprotected” – no mention of masks specifically, only “full PPE” (likely “N95 masks, gowns and gloves”)
  • Data shows only 4 positive cases and 72 total when there were actually 8 positive and 112 total healthcare workers exposed to symptomatic patients
  • 55 healthcare workers actually were exposed and had some symptoms but only 8 tested positive
  • This study should not be included because not specifically for masks

17. Teleman et al. 2004

  • Lancet Assumption:
    • Face Mask Group: 3/26
    • No Face Mask Group: 33/60
  • SARS study of healthcare workers at a hospital in Singapore; data via telephone interview questionnaire
  • Study only measures if N95 is worn – other group is not necessarily no-masks (likely wore surgical mask)
  • Study should not have been included

18. Wilder-Smith et al. 2005

  • Lancet Assumption:
    • Face Mask Group: 6/27
    • No Face Mask Group: 39/71
  • SARS study of healthcare workers in Singapore hospital; data via telephone interview questionnaire
  • Appears to be same data as previous study (Teleman et al.) – should not include both studies (same Singapore hospital – Tan Tock Seng Hospital)
  • Data is for N95 masks vs no N95 masks, not no masks
  • Should be 80 study participants, not 98
  • Study should be excluded for two reasons: redundant data with previous study (Teleman) and study is not reflective of masks vs no mask
  • “Based on our data in Singapore, transmission from asymptomatic patients appears to play no or only a minor role” (remember, the point of mask mandates is to protect wearer against asymptomatic individuals)

19. Ki et al. 2019

  • Lancet Assumption:
    • Face Mask Group: 0/218
    • No Face Mask Group: 6/230
  • MERS study from hospital in South Korea of hospital healthcare workers and patients exposed to a single symptomatic patient; data via video data and interview
  • Possible bias because patients who are less likely to wear masks than healthcare workers are also less likely to maintain other safe behaviors
  • Hand washing seems more important than masks, especially since 2/11 infected patients had no direct contact with the index patient; masks may also encourage face touching, which regular (non-healthcare-trained) people seem to do with masks on
  • Study gives data on % people who wore surgical masks but no data if infected patients wore or did not wear masks
  • Study data shows 4 positive patients with mask data (Table 2 of study) while Lancet states there are 6 – no idea where Lancet data comes from
  • Cannot replicate Lancet data

20. Kim, Choi et al. 2016

  • Lancet Assumption:
    • Face Mask Group: 1/444
    • No Face Mask Group: 16/308
  • MERS study of healthcare workers in South Korean hospitals with direct contact with MERS patients; data via questionnaire survey
  • Cannot replicate data; study says at least 2 cases wore N95 and were infected (Lancet says only 1)
  • “Appropriate PPE was defined as use of all of the following: (a) N95 respirator or powered air-purifying respirator (PAPR), (b) isolation gown (coverall), (c) goggles or face shield and (d) gloves). If any part of the PPE was missing, it was considered to be exposure without appropriate PPE.”
  • This is a study of full PPE (described above) vs. non-full PPE, not mask vs. no-mask. Hence, study should not be included

21. Hall et al. 2004

  • Lancet Assumption:
    • Face Mask Group: 0/42
    • No Face Mask Group: 0/6
  • MERS study of healthcare workers in one hospital in Saudi Arabia of healthcare workers exposed to a single patient; data via questionnaire
  • Nobody got sick – 0 cases, though some had symptoms and tested negative
  • Typical recall bias, since questionnaire was 4 months after event
  • 87% of healthcare workers wore surgical masks, though not necessarily with 100% compliance
  • 33% of healthcare workers used N95 masks
  • Study not included in Lancet summary data due to zero positive cases in both groups

22. Ryu et al. 2019

  • Lancet Assumption:
    • Face Mask Group: 0/24
    • No Face Mask Group: 0/10
  • MERS study in South Korea of people exposed to MERS patients; data via interview, 7 months after events
  • No differentiation between PPE (gown, N95 mask, glasses, gloves) and only masks
  • 1 person had fever and wore full PPE but wasn’t tested for MERS at the time
  • Face mask group (24 people) is Grade 3 and Grade 4 = Full PPE
  • Non-face mask group (10 people) is Grade 1 and Grade 2 = without full PPE (but could include mask)
  • Significant study limitations: bias as questionnaire was 7 months after event; also study might have “missed some mild or asymptomatic cases,” “serological tests were performed several months post-exposure, pre-existing MERS antibodies may have decreased or disappeared in the interval, potentially leading to underestimation,” “number of participants was relatively small and may not be representative or generalizable.”
  • Study should not be included because Grade 1 and 2 versus Grade 3 and 4 is not mask/no-mask
  • Study not included in Lancet summary data due to zero positive cases in both groups.

23. Park et al. 2004

  • Lancet Assumption
    • Face Mask Group: 0/60
    • No Face Mask Group: 0/45
  • SARS study in United States of healthcare workers exposed to SARS patients in 8 healthcare facilities; data via questionnaire
  • 17 healthcare workers developed symptoms but zero tested positive
  • Study not included in Lancet data due to zero positive cases in both groups

24. Peck et al. 2004

  • Lancet Assumption
    • Face Mask Group: 0/13
    • No Face Mask Group: 0/19
  • SARS study in United States of people exposed to single SARS patient; study comparing individuals exposed pre-diagnosis to the index patient and post-diagnosis; data via questionnaire
  • Of pre-diagnosis contacts, 11/26 contacts had symptoms but all tested negative for SARS; pre-diagnosis contacts included household contacts
  • Cannot replicate Lancet figures
  • Correct data as per study’s Table:
    • Face Mask Group: 0/26
    • No Face Mask Group: 0/30
  • Not mask vs. no-mask but Full PPE (N95 respirator, gown, gloves worn “every interaction”) vs. not-full PPE – study should not be included
  • “To date, no asymptomatic SARS-CoV infection or transmission before onset of symptoms has been definitively documented.”
  • Study not included in Lancet data due to zero positive cases in both groups

25. Burke et al. 2020

  • Lancet Assumption:
    • Face Mask Group: 0/64
    • No Face Mask Group: 0/13
  • Covid-19 study in United States of close contacts of positive cases; data via interview
  • Lancet has incorrect data (the study’s data table shows 76, not 77, total individuals). Correct data should be:
    • Face Mask Group: 0/63
    • No Face Mask Group: 0/13
  • 25/163 healthcare workers had suspected Covid, but these were not apparently among the 76 with interview data
  • Study not included in Lancet data due to zero positive cases in both groups

26. Ha et al. 2004

  • Lancet Assumption
    • Face Mask Group: 0/61
    • No Face Mask Group: 0/1
  • SARS study of healthcare workers in one hospital in Vietnam exposed to SARS patients; data via questionnaire
  • ~23% of healthcare workers had symptoms but zero tested positive for SARS
  • While “all 62 SARS ward workers reported wearing masks during the outbreak,” after the first week of patient care “only 56 reported ‘always’ or ‘usually’ using a mask while in SARS patients’ rooms.” Hence the correct data should be:
    • Face Mask Group: 0/56
    • No Face Mask Group: 0/6
  • Study limitations include, “subject to recall and reporting bias, because not only was it difficult for respondents to recall behaviors during specific periods within the previous 2 months, but respondents may have been concerned that results could be used to evaluate their performance. Estimates of SARS exposures and the frequency of personal protective equipment use among SARS ward workers are therefore probably inflated.”
  • Study not included in Lancet data due to zero positive cases in both groups

27. Lau et al. 2004

  • Lancet Assumption
    • Face Mask Group: 12/89
    • No Face Mask Group: 25/98
  • SARS study of household members exposed to SARS patients in Hong Kong; data via telephone interview/questionnaire
  • Cannot replicate Lancet’s data
  • This study is listed in the Lancet article as a study in a “Non-health-care setting” (meaning, a study of mask-wearing in the community, not healthcare setting). However, this is not correct. While the study analyzes family members of SARS patients (non-healthcare workers), the mask data is of those family members during hospital visits. Therefore, the study should more correctly be listed as a “health-care setting.”
  • Of all the Lancet mask studies, this is the only one that has any data on mask wearing by symptomatic patients, rather than mask wearing by the non-infected. Study only reports during a hospital visit whether neither visitor nor patient was wearing a mask, both were wearing masks, or one was wearing mask (no reporting is made between whether the SARS patient or the visitor is the one wearing a mask).
  • 128 cases with data, 32 visited, 8 both had masks, 7 with one wearing mask, 17 no masks
  • 2121 controls with data, 242 visited, 85 both masks, 76 with one wearing mask, 81 no masks
  • Study limitations: “no way to confirm that the probable secondary infection of household members actually came from the index patient. Nosocomial infections, rather than secondary infections, may also have occurred in some of the household members during hospital visits to the index patient, but it is not possible to distinguish the two scenarios.”  “The case definition of SARS coronavirus was nonspecific…it is possible that some of the cases were in fact pneumonia rather than SARS.”
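For the skeptical reader, the Lau et al. visit counts quoted above are easy to sanity-check: the masked/unmasked subgroups should sum to the stated visit totals. A trivial check (plain Python, using only the figures in the bullets above):

```python
# Sanity check of the Lau et al. hospital-visit mask figures quoted above:
# the three mask subgroups should sum to the stated number of visits.
case_visits = {"both_masked": 8, "one_masked": 7, "no_masks": 17}
control_visits = {"both_masked": 85, "one_masked": 76, "no_masks": 81}

assert sum(case_visits.values()) == 32      # 32 of 128 cases visited the patient
assert sum(control_visits.values()) == 242  # 242 of 2121 controls visited
```

The subgroups do sum correctly; the problem, as noted above, is that the data cannot distinguish which party wore the mask, nor hospital transmission from household transmission.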

28. Wu et al. 2004

  • Lancet Assumption:
    • Face Mask Group: 25/146
    • No Face Mask Group: 69/229
  • SARS study of community cases and control group in Beijing; control group had no close contact with SARS patients; data via questionnaire
  • The no face mask group includes people who “sometimes” wore face masks
  • Study limitations include a low participation rate, recall bias, and self-selection: “those who agreed to participate may have self-selected for unknown reasons that could have biased our findings. For instance, several patients responding to the open-ended comment section mentioned that they were certain their illness was not ‘SARS’”
  • Figures depend on the size of the control group, which is entirely at the discretion of the study’s authors
  • Confirmed cases are defined by symptoms, not serology; many other studies take the opposite approach, counting a case as positive only on a positive test, regardless of symptoms

29. Tuan et al. 2007

  • Lancet Assumption:
    • Face Mask Group: 0/9
    • No Face Mask Group: 7/154
  • SARS study in Vietnam of household and community contacts exposed to SARS patients; data via questionnaire/interview
  • The Face Mask Group is defined as those wearing a mask “sometimes/most times” (not necessarily always) and the No Face Mask Group as those “Never” wearing a mask. This is inconsistent with nearly all other studies in the Lancet analysis
  • Very simplistic univariate analysis
  • “There have been no conclusive reports of transmission occurring from SARS cases in the pre-symptomatic phase and we also found no evidence of transmission occurring prior to onset of symptoms.”

Enough with the pandemic hysteria

The hysteria over Covid-19 is out of control. It never ceases to amaze me how the vast majority of quote-unquote smart people can act in such a sheepish, thoughtless manner. Politicians I get. But it is absurd, and enormously damaging, that the only pushback against excessive, expanding and lengthening lockdowns is coming from the utterly thoughtless extreme right.

Studies around the U.S. and around the world are very consistently showing that the number of people exposed to coronavirus (that is, who test positive for antibodies) vastly exceeds scientists’ and doctors’ initial estimates. And by vastly, I mean by orders of magnitude. The true number of people exposed will wind up being somewhere between 10x and 1000x the number of people who have, to date, tested positive. This has been found in New York City, in Miami, in Massachusetts, in California, in Japan, and elsewhere.

This is incredibly good news, yet the mainstream media and politicians have either completely ignored these findings or have instead spun them as dire. Using the lower and most conservative estimate of 10x means that the actual mortality rate from being exposed is one tenth what the science community first thought, what the mainstream media continues to report, and what the general population continues to believe. In other words, the death rate isn’t the reported 3-5% of Wuhan or the assumed 12% of Northern Italy but, at worst, 0.3%-1.2%.

A more likely exposure rate of 20-50x the number of people who have tested positive leads to an overall mortality rate somewhere between that of the seasonal flu and one to three times more fatal than the flu. I’ll venture to guess that when all is said and done, the overall fatality rate will wind up no more than 0.2%. Perhaps twice the 0.1% of the flu. Deadly yes. Reason to be hysterical, no. And while it is very helpful for science to confirm these numbers, this state of affairs should have been obvious to the well-informed from the beginning, since we knew how few people we were testing. Instead, the media and the politicians both benefited from grasping onto high death figures. And now they won’t let go.
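The arithmetic behind these adjusted figures is simple: if true exposures exceed confirmed cases by some factor, the fatality rate among the exposed shrinks by that same factor. A quick sketch using the article’s own numbers (Wuhan’s reported 3%, Northern Italy’s assumed 12%, and the conservative 10x undercount):

```python
# Adjusting a reported fatality rate for undercounted exposures: if true
# exposures are k times the confirmed cases, the rate among the exposed
# is the reported rate divided by k. Figures are the article's own.
def adjusted_rate(reported_rate, undercount_factor):
    return reported_rate / undercount_factor

low = adjusted_rate(0.03, 10)   # Wuhan's reported 3% with a 10x undercount
high = adjusted_rate(0.12, 10)  # Northern Italy's assumed 12% with 10x
print(f"{low:.1%} to {high:.1%}")  # 0.3% to 1.2%
```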

Beyond the clinical fatality rate is the fact that for the majority of people, exposure to Covid-19 will prove asymptomatic. Even more good news is that children are almost always asymptomatic and have very low viral loads. Children don’t pass this on to adults, something many adults (and especially teachers’ unions) are fearful of. So-called grown-ups pass it on to children, for whom the fatality rate is close to zero, almost certainly lower than in a bad influenza year.

Add to this the fact that deaths due to Covid are as likely, if not more likely, to be overcounted in the U.S. than undercounted. Deaths are being attributed to the virus if the hospital patient or nursing home resident tested positive or showed signs of the virus. However, many such people, especially those in nursing homes, would, statistically, have died anyway of other causes (nursing home residents in the U.S. have a median life expectancy of about 5 months!). In other words, not every Covid-positive or Covid-assumed-to-be-positive death is a death caused by the virus. We also know that, at least in Europe, a milder than normal flu season left more vulnerable people alive in the fall and early spring, leading to more deaths being attributed to coronavirus.

Totally unlike the Spanish flu of 1918-1919, Covid-19 is overwhelmingly affecting the old and the unhealthy, not the young and healthy.

So why, as I mentioned above, is the media reporting this good news (to the extent they are reporting it at all) as dire? Because they believe it means that Covid-19 spreads more rapidly than expected, and use that information to justify even more drastic lockdowns.

It is indeed possible, perhaps likely, that Covid-19 spreads more easily than expected. It almost certainly spreads more easily than the seasonal flu, given that we have vaccines for the flu. But to reinforce the key point explained above, this means the virus is far less deadly than we feared. In addition, it is seeming more and more likely that the virus has been spreading in the U.S. not since March as first thought, but since as early as November or December. It also means we are far closer to herd immunity than we thought (especially in densely populated, and hard-hit, New York City).

I’ll speculate on what would be one more piece of good news. We know that receiving a high viral load makes one much more likely to have serious complications from Covid. In fact, even more so than old age, viral load may be the single highest risk factor, which would explain the stories of otherwise healthy and young healthcare workers becoming very sick and in some cases dying. It is further reasonable to assume that asymptomatic carriers have low viral loads, and that if and when they do spread the virus, they will spread small amounts and recipients will also likely be asymptomatic. In other words, provided society can protect the high-risk population, getting to herd immunity may be far less deadly than scientists believe.

Clearly we need a lot more data. Even more clearly, we need a lot less panic.

The undeniable truth is that the scientific news over the past month has been overwhelmingly positive. Mortality projections are significantly lower. Hard-hit areas like Italy, Spain and New York are all showing vastly improving figures, indicating the worst is both over and much less severe than feared. Yet perception has not kept pace with the data.

Moreover, the stated goal of government-ordered lockdowns was to “flatten the curve” so that the healthcare system would not be overwhelmed. Even in New York, the so-called “epicenter,” there are unused beds and an overcapacity of ventilators (which may in fact have done more harm than good). In most of the country, hospitals are essentially empty, with few Covid cases and even fewer non-Covid cases, as patients with other diseases have been scared off or turned away. Hospitals around the country have resorted to layoffs and furloughs.

Instead of backing off of extreme lockdowns, politicians have “moved the goalposts,” insisting now that we cannot return to any semblance of normal life until we test more or until the “second wave” passes. Essentially, governments are saying we can’t resume life until there is zero risk. This is absurd.

Where I live, in New York City, the lockdown has grown more severe, restrictions on activities have increased nearly daily and the path to opening up seems less clear and more distant. Every day, people appear to be more hysterical over Covid-19 and overwhelmingly willing to imprison themselves, throw tens of millions out of work and risk something far worse than the Great Depression. Why? Let me suggest a few reasons.

Reason #1: the contented

For one, the upper-middle and upper classes are enjoying their staycations. The privileged (not to mention virtually all government employees) are getting paid by their employers to sit at home in their pajama bottoms and do little more than participate (or not) on a couple of daily Zoom calls. In short, without money worries, they have been given an unlimited hall pass from “adulting,” having become perpetual (stay-at-home) Ferris Buellers. Endless, guiltless, state-sponsored sloth. Naturally, they support the government lockdowns and will resist loosening them so they don’t have to go back to real life.

Even more important, those under lockdown are being made to feel like they are contributing to the efforts. They are helping! They are sacrificing! Let’s shame the people going outside and contributing to the little economy that still exists as horrible virus-spreaders not doing their fair share. Let’s laud those binge-watching Netflix as sacrificing and righteous!

On the other hand, the lower and middle classes that live paycheck-to-paycheck not to mention small business owners clearly have a different calculus. They, of course, have far less political power.

Reason #2: the politicians

A second reason is clearly political. Most obviously, there are the typical political mentalities of short-termism, cover-your-ass and hunger for absolute power. Much better politically to have saved a single life for which you can take credit, even if it means having destroyed an economy whose ruin can be blamed on “nature.” We are at war, they say. And war requires extreme sacrifices and extreme measures. We can’t worry about costs and benefits and about collateral damage when we are at war. As a politician, it is always better to do something rather than nothing, even if that something is wrong. And it is always better to err on the side of what your constituents will view as their short-term public safety. It also seems apparent that many politicians, governors especially, are enjoying the massive power they have seized during this pandemic to control the lives of their constituents.

Even more importantly, governors and mayors and county executives, especially those from blue states, are competing to contrast their efforts with the appalling federal response. The governor who can lock down the hardest and proclaim apathy for the economy the loudest becomes the most popular, most highly revered anti-Trump. Cuomo in New York. Murphy in New Jersey. Newsom in California. Whitmer in Michigan. Who can be the great general that leads us into battle against the mighty adversary of the virus? Plus, being typical arrogant politicians, they will never be able to admit they were wrong.

Politically speaking, locking down is the easy part. Even easier when you’ve brainwashed the populace into hysteria and your approval ratings are high. On the other hand, picking up the pieces of what is left of the economy will be much, much harder. With vastly diminished tax revenue and skyrocketing unemployment, budgets will have to be cut along with vital services. Crime and poverty and despair will dramatically rise. Those popularity ratings we mentioned won’t look so good then. I do think politicians in blue states are smart enough to know this and are scared to death of what awaits them, and us, once the lockdowns end. Which is why they will delay lifting lockdowns as long as possible, devastating cities like New York for what could be a generation.

As a side note, one must wonder if extended lockdowns in the rust belt (witness Michigan in particular) could end up costing Biden the presidential election. Democrats risk digging their own grave here as politically astute (and non-astute in every other way) Trump might wind up successful in placing the blame on Democratic leaders for what will almost certainly be a horrific economy come November.

Reason #3: the media

The third reason why lockdowns are continuing despite mounting evidence of abating danger is the media. We are witnessing the ultimate national version of “if it bleeds, it leads.” With nothing else for the sheltered-at-home to do than entertain themselves with television or the internet, the media has a captive audience. The more the scaremongering, the greater the ratings and the higher the readership. I’m not the first to call this “pandemic porn” or “panic porn.” People aren’t attracted to good news and dry statistics; they are attracted by videos of overrun emergency rooms, by images of grieving families, and by stories of otherwise healthy and fit moms and dads suddenly stricken. Lastly, the fact that the media is, like the virus, epicentered in New York City means the rest of the country’s fears are exaggerated.

And yet it isn’t only the profit-oriented media at work in fomenting hysteria. The government is doing this too. Unrelenting reminders from our politicians of how dangerous our situation is. Road signs and street placards reminding us to social distance and “flatten the curve.” Signs on every storefront requiring masks to enter. Even trying to distract oneself and relax by listening to local radio means enduring constant public service announcements (the only form of advertising that remains) reminding one to shelter, and if you can’t shelter then to distance, and if you can’t distance then to cover up.

Reason #4: the education, or lack thereof

The final reason I will give for why lockdowns are popular and thus persisting is the awful state of education in this country. I’m not talking about education for poor people or minorities or in urban areas. I’m talking about education for the privileged and the smart. Bluntly speaking, it sucks. We barely learn history and most of us learn no statistics at all.

Lacking a proper understanding of history means few know of or appreciate what history teaches: societies survive pandemics time and time again, even with horrific loss of life. Societies do not tend to survive economic collapse.

Not understanding statistics (something for which doctors and journalists and politicians are notorious) means most of us have no ability to understand, interpret or form any proper judgment on the kind of scientific studies about which we are now reading. We blindly trust the “experts” even though those experts have been hugely wrong with their models from the beginning of the pandemic. And we vastly overrate our own risk of dying just because we know somebody who knows somebody who died.

How to end the lockdowns

By far, the most important aspect of ending the lockdowns is to reduce the level of hysteria amongst the general population. We need to talk people off the ledge by making them understand the true mortality rates. Politicians and the media have scared the population into believing that Covid-19 is a death sentence. We need to now change the message to one that is far less dire and far more factually correct. For the vast majority of the population, Covid is no more likely to cause death than the flu. For the young, it may even be less likely. Moreover, Covid exposure for the majority of people, including the elderly, will prove asymptomatic.

In order to change public perception, politicians are going to have to admit they were wrong and that they overreacted. Politicians can blame the scientists and say that they trusted the early models that wound up being overly pessimistic. They can maintain the position that they followed the experts and had the safety of the population as their prime objective. They have to say that they now realize that yes, Covid is serious, but nowhere near deadly enough to be worth destroying an economy. For most politicians this will be impossible. But perhaps a handful of brave souls will show true leadership. Naturally, it will prove even harder for the scientists to admit that their models were wrong and that they were the cause of such huge damage. Unlike the politicians, the scientists have nobody but themselves to blame.

Unfortunately, as you may have perceived, we have a bit of a chicken and egg situation. Hysteria won’t fall until politicians change their message and the fear of leaving one’s house subsides. But politicians won’t change their message until support for the lockdowns falls. Support for the lockdowns won’t fall until the fear level is reduced. Ultimately, economic distress and social unrest will force a change of both positions. But clearly, we would all be better off if the change happened sooner.

In addition to reducing hysteria, the federal and state governments need to pass legislation to indemnify businesses, schools and other organizations from the inevitable lawsuits that will be brought if someone catches the virus at that establishment. We cannot have businesses and other entities fearful of lawsuits and scared to open. To be evenhanded, legislation should also ban lawsuits against the governments that forced these lockdowns in the first place, even if they were misguided. In the spirit of moving forward, let’s all agree we are much better off leaving the lawyers out of it.

Now let’s talk about what should open when lockdowns are ended. I firmly believe that the correct answer is everything. Yes, everything. But I recognize that given the widespread and pervasive level of fear, that’s not going to happen. We’re going to have to do this in stages, as other countries and some southern states have started to do, in order to prove to politicians and the general public that it is safe. Start with schools. Schools should reopen immediately. Children are at very low risk from coronavirus and are suffering greatly without education, without human contact and, in many cases, with unlimited screen time. All outdoor spaces should also be opened immediately, including parks and playgrounds, beaches, golf courses and outdoor sports facilities. The odds of catching the virus outside, or at least of inhaling enough of a viral load to become seriously sick, are minuscule. The benefits of exercise, fresh air and sunlight are immeasurable.

All doctors, dentists and other healthcare providers should resume normal operations. Hospitals should also reopen and operate normally for all patients. I don’t believe there is a single hospital, even in New York City, that is, at this date, overburdened with Covid patients. Cancer treatments should resume. Elective surgeries should be re-instated. Children should be receiving vaccines. Adults should be getting their in-person (not tele-medicine) annual physicals.

All non-retail businesses should be allowed to open without restriction. Retail and restaurant establishments should be able to open, albeit with some restrictions on capacity to be phased out over a short period of time. Large gatherings will have to be phased in over time, again with increasing occupancy. To reiterate, the gradual phase-in of reduced lockdowns is not necessarily for health reasons, but to desensitize the hysterical masses and show them that life is indeed safe, and can continue.

Locations with a large number of elderly and high-risk people, most notably nursing homes, should remain severely restricted until the threat has passed. Far more government resources should go to protecting nursing homes and the elderly. This has been an appalling governmental failure to date.

Anyone with coronavirus symptoms should obviously stay home and be self-quarantined. To the extent they have to go outside for healthcare services or essentials, they should be mandated to wear a face covering. As for the asymptomatic, there is very little evidence that masks are effective in slowing the spread of the virus. Anybody who comes into close contact with people should be allowed, though not mandated, to wear a mask. For the rest of the population, masks and other facial coverings should be optional. Even if there is a slight benefit to wearing masks in an indoor setting (and there is unlikely to be any benefit in an outdoor setting), I would argue that the constant reminder of fear that comes from seeing everyone in masks, and the anti-community sentiment that pervasive masking brings, outweigh any potentially small benefit. Plus, there are public places where masks cannot be worn, such as restaurants and hair salons.

Let’s now discuss testing and data. Politicians, having moved the goalposts away from “flattening the curve,” are now insisting that we cannot lift lockdowns until testing is widespread. This is absurd for two reasons. For one, testing at this point is essentially useless: with the virus widespread, a positive test has virtually no value. Second, we are many, many months away (if not longer) from being able to test everyone with symptoms in a timely manner. We can’t (and shouldn’t) keep society locked down long enough to wait for widespread testing. Anybody with symptoms should assume they are positive and self-quarantine.

What government should be focusing on, with regard to data and testing, is twofold. First, antibodies. Let’s understand the true number of people already exposed to the virus so we can calculate the true mortality rates. Let’s also see how close we might be to herd immunity. Note also that we do not need widespread testing to accomplish this: relatively small samples in a given area can be statistically significant. Second, the government should be coordinating hospitals and scientists to understand who is really at high risk and who is not. Why are a very small number of otherwise healthy people getting seriously sick and in some cases dying? Is it the high viral load, for example, of healthcare workers? Or some other hidden underlying factor? And if it is viral load, does it only happen in a healthcare (hospital) setting or because of some super-spreading event? We need this information to know where social distancing is helpful and where it is useless and unnecessary.
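To put a rough number on the claim that relatively small samples can be statistically significant: under the standard normal approximation, the 95% margin of error for an estimated antibody prevalence p from a simple random sample of size n is 1.96·sqrt(p(1−p)/n). A sketch in plain Python; note it assumes an unbiased random sample, which in practice is the hard part:

```python
import math

# 95% margin of error for an estimated prevalence p from a simple random
# sample of size n (normal approximation; assumes an unbiased sample).
def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# If roughly 20% of a city has antibodies, a sample of just 1,000 people
# pins the true rate down to about +/- 2.5 percentage points.
moe = margin_of_error(0.20, 1000)
```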

Anyone who says that the miracle answer is “testing” is either foolish or lying. In order for testing to be effective in containing the virus, we would have to test asymptomatic people each and every day. We obviously do not have the resources to do this, nor would the vast majority of the populace allow it to happen.

Lastly, let’s talk about what else we should absolutely not do. We cannot lockdown the world until we have a vaccine, which is likely 12-18 months away (and possibly longer). One way or another we have to learn to live with the virus.

We should not mandate temperature checks in order to go to school, enter a store, dine at a restaurant or board a plane. Nor should we allow either biometric or cellphone contact tracing. The benefits are minuscule at best, and the costs to freedom and privacy are enormous. We must not let happen what occurred after 9/11, which is to give up our liberties. Unfortunately, we seem to be heading directly toward those same mistakes. Then it was hysteria about terrorism; now it is Covid hysteria. If we make people fearful and stressed out when going about what should be everyday activities, as we do going through TSA checkpoints, we are essentially guaranteeing economic depression. We should not give up freedom for the theater of false security. As Benjamin Franklin famously said (and I am not the first commentator to use this quote): “They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.”

How do we know the lockdowns didn’t work?

We have one last thing to talk about. If you are paying attention, you should be wondering the following: first, how do we know it wasn’t the severe lockdowns that stopped the spread of Covid, and second, how do we know the spread of the virus won’t accelerate once the lockdowns are lifted? There are three sets of answers: 1) we’re starting to know, 2) stopping the spread might not have been the optimal health solution anyway, and 3) it doesn’t matter.

1 – Evidence that lockdowns were ineffective

Let’s be honest here. As of yet, we cannot with certainty prove that the lockdowns were ineffective. However, there is a significant and growing amount of circumstantial evidence indicating this is the case. The most obvious place to start is Sweden, the black sheep of the world.

As you probably know, Sweden was one of the only countries in the world to decide not to lock down, implementing only moderate (and weakly enforced) social distancing. Restaurants, bars, elementary schools, parks and hair salons all remained open. For that, Sweden has endured ridicule from all over the world, with many openly rooting for a high death count. So far the evidence shows that Sweden’s death rate, while higher than those of its neighboring Scandinavian countries, is about average for Europe, lower than France, Italy, the U.K. and Spain. While Sweden’s economy has been hurt, as it is significantly export oriented, the damage is much less than in countries that locked down. Finally, it is obvious that Sweden is much further along the path to herd immunity than nearly all other countries of the world.

Additional evidence that lockdowns were ineffective and unnecessary is that there seems to be no correlation between how early states (and countries) locked down and their death rates. In other words, whether a state or country locked its population down earlier or later seems to have had no impact on fatalities.

We will have much more evidence in the next few weeks as countries in Europe (Switzerland, Austria, Spain and Germany, for example) as well as certain, mostly Southern, U.S. states begin to open up their economies and free their inhabitants to resume life. Early evidence from Germany, and from Georgia in the United States, shows no meaningful increase in new cases since the lockdowns were lifted. We also await the findings of more antibody testing surveys so we can get a better sense of how widespread the virus is, and its true fatality rate.

In addition to these studies, we can utilize what was, at one time, called “common sense.” As we’ve discussed, many more people have been exposed to Covid, and the exposure has been going on for much longer, than first believed. We can draw three conclusions from these facts, the first two of which we have already mentioned. The mortality rate of Covid is much lower than expected, and we are closer to herd immunity. The third conclusion is that the government-ordered lockdowns came too late to stop the spread of the virus. Perhaps they might have been effective in December or January or even February, but they were not effective in March and April.

It is also plausible, as one New York-based ER doctor has concluded, that the decline in Covid infections that we are experiencing (the so-called flattening of the curve) is due not to lockdowns but to the natural progression of the virus. That is to say, the pandemic would diminish in severity regardless of what governments ordered.

In a similar vein, it is reasonable to assume that the death rate would be highest at the beginning of the pandemic (a point we have passed) and would decrease significantly as the virus becomes more widespread. We know that in most countries and in most states, more than half of all deaths occurred in people living in nursing homes or long-term care facilities. We also now know that Covid is widespread in nursing homes. As the virus spreads, there are fewer and fewer nursing home residents left to infect (they are either dead or almost certainly immune). Any further spreading of the virus, including the feared “second wave,” will cause fewer deaths, and hence it is wrong to project total fatalities based on initial mortality rates. The same logic applies to healthcare workers, most of whom we can assume have already been exposed. Lastly, it is also wrong to extrapolate the death rate of a densely populated city such as New York to the rest of the country.

2 – Stopping the spread might have been the wrong thing to do

Next, let’s move on to the second answer about lockdowns: that stopping the spread might not have been the optimal course of action even if the goal is to minimize fatalities. Clearly lockdowns have reduced the spread of Covid to some extent, especially by virtually eliminating travel. However, the question remains whether that strategy will ultimately save lives. It is not at all clear that lockdowns are the best strategy to prevent deaths over the long term, given the demographics of the populations at risk. It is highly plausible that a far better strategy would have been to quarantine and protect the high-risk population, most notably those in nursing homes, while letting the virus spread freely among the majority of the population (especially children and the working population) who are at very low risk.

This strategy of protecting the high-risk and encouraging herd immunity among the low-risk majority has even more merit if we take into account predictions that we will face a second wave of Covid in the fall, at the same time as the seasonal flu, a double whammy for those at risk. Notwithstanding the fact that nearly all Covid predictions of this nature have been overly pessimistic and erroneous, if we take this idea at face value then we must conclude that getting to herd immunity in the spring and summer, when the flu is mostly dormant, is the superior strategy.

3 – Even if the lockdowns were effective in saving lives, the cure was still worse than the disease

The last answer is the most controversial to many people, but the most important. Even if the lockdowns were shown to be effective in saving lives in the near term, the byproducts of these lockdowns are worse than the disease itself. This does not mean that those with this viewpoint (as I have) are insensitive to thousands of people dying. It means exactly the opposite. We believe that the attempt to save lives by locking down the population will, over the longer term, cost far more lives than it saves. If lockdowns (and hysteria) persist, the damage to health from these effects will greatly exceed Covid deaths. These effects are much harder to quantify than a death count, so it is easy for governments to ignore them. But the health of the entire population, including mental health, must be taken into account.

We know already that the impact of the lockdowns on the health of the general population is enormous, yet governments, in their destructive policies, have paid virtually no consideration to these factors. We know that people are dying of non-Covid causes because they have been unable, or too afraid, to seek care at hospitals. We know that hospitals in rural areas have already closed or are at risk of closing because of lack of patients. We know that cancer patients are skipping their treatments and children are missing their scheduled vaccines. We know that there have been suicides due to joblessness and to social distancing. We know that there are rising cases of domestic and child abuse. We know that people are drinking more alcohol at home. We know that most people locked down have virtually stopped exercising and getting fresh air. We know that screen addiction is up, especially among children. We know that stress and feelings of helplessness and loneliness are pervasive.

Those are just some of the direct effects of the lockdowns on health. The economic effects of shutting down the world economy will prove to be far, far worse. A recent, widely reported study forecast that 130 million people worldwide are at risk of starving to death because of the economic disruption. In the United States, 30 million people, nearly one-fifth of the working population, have already filed for unemployment. The world is heading towards a depression that will rival or even exceed the Great Depression of the 1930s. The health effects of years of economic despair are immense.

How about the ramifications of central banks printing trillions of dollars? Or the federal government running record deficits and handing out trillions with little oversight? What about state and local governments with little tax revenue and massive unemployment bills? Will they cut services and see crime dramatically rise? What about pensions? Will we finally see states file for bankruptcy? And how long can any oil-producing country survive with today’s oil prices? Can China placate its population with its first downturn in more than a generation? This is scary stuff. The kind of scary stuff that history shows causes wars and revolutions.

Who knows what other effects economic depression will bring. Depression in Germany in the 1930s led to Hitler’s rise and to world war, with tens of millions of deaths. Could something similar happen again? Not impossible, perhaps not unlikely.

There is one other economic trend worth mentioning: income inequality. We know that the virus itself takes a harder toll on the poor and on minorities for two reasons. First, because these populations tend to be unhealthier, with higher rates of obesity, diabetes, heart disease and other risk factors. Second, because they have less access to quality healthcare services. In the near term there is little the government can do to narrow the gap in Covid-related health outcomes between the more and less fortunate. However, the worst thing government can do is exacerbate the gap between rich and poor with policy, and that is exactly what lockdowns do. Perhaps the second worst thing it can do is to further empower tech giants such as Amazon, Facebook and Google at the expense of brick-and-mortar retail, mom-and-pop businesses and local journalism.

There is a huge discrepancy between those getting paid throughout the lockdowns (mostly the well-off: white-collar workers as well as government employees) and those laid off, furloughed or otherwise not allowed to earn an income. The latter group consists mostly of poorer, blue-collar workers. Governments are literally creating a great schism within society that is bound to cause social unrest. The unforgivable closing of schools exacerbates these inequalities. Private schools can maintain high-quality online learning, and wealthy and highly educated parents can supplement their children’s education. The children in public schools and of less privileged families learn little or nothing. This gap in education will never be made up.

Lastly, I want to talk briefly about New York City, where I live (though most of what I write here applies to other cities as well). New York City is being decimated, not by Covid itself, but by the government lockdowns and by the fear that politicians and the media have instilled. Imagine New York City without restaurants, without museums and art galleries, without concerts and the theater, without retail stores and Christmas windows, without tourists. Contemplate a city made for walking where people are so afraid of each other that they won’t walk on the same sidewalk. Think about social distancing requirements that preclude profitable business and make city life unbearable. And now add a reduction in local services due to massive budget cuts, a rise in crime and a mass exodus of population. What I describe is not just New York City under lockdown but the New York City that will exist for years or even decades after lockdowns are lifted. We are witnessing the death of New York City, the city that I love. Governor Cuomo, who has been lauded by many for his leadership, will go down in history as the man who destroyed New York City.

Conclusion

In absolute numbers, the virus has been, and will continue to be deadly for many thousands of people. This is tragic. Tragic for the dead and tragic for their surviving families. But the tragedy of perpetual lockdown and overblown hysteria will prove to be far greater.

We know now that the virus has been spreading for months longer than scientists first thought. We know now that, at a minimum, between 10 and 50 times the number of people who have tested positive have actually been exposed to the virus, the vast majority of them completely asymptomatic. We know now that the true fatality rate is, at worst, only several times that of the seasonal flu. We know now that of the people dying, over half were in nursing homes and approximately 90% had more than one underlying health risk. We know now that the risk to children is almost exactly zero.
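The arithmetic behind this fatality-rate claim can be made explicit. A minimal sketch, using purely hypothetical figures (the case count, death count and exposure multiplier below are illustrative assumptions, not data from any study):

```python
# Illustrative only: how an exposure multiplier from antibody surveys
# changes the implied infection fatality rate (IFR).
# All figures are hypothetical assumptions, not real data.
confirmed_cases = 1_000_000   # assumed confirmed positive tests
deaths = 50_000               # assumed deaths
multiplier = 10               # assumed true infections per confirmed case

cfr = deaths / confirmed_cases                 # naive case fatality rate
ifr = deaths / (confirmed_cases * multiplier)  # implied infection fatality rate

print(f"CFR: {cfr:.1%}, implied IFR: {ifr:.1%}")  # CFR: 5.0%, implied IFR: 0.5%
```

With an assumed multiplier of 50, the implied rate would fall by a further factor of five. The point is that the denominator (true infections), not the numerator (deaths), drives the headline rate.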

Over the past six weeks, over 30 million Americans have filed for unemployment. The Federal Reserve has printed trillions of dollars. Government deficits have skyrocketed to unprecedented levels. Oil futures prices have plummeted below zero. Meanwhile, more than a hundred million people across the world are at risk of starving to death due to the economic disruption. Domestic violence has increased. Cancer patients are forgoing treatments. Children are missing scheduled vaccines.

The lockdowns were never justified to begin with. They certainly aren’t justified now. But even worse than the damage caused by the lockdowns is the damage caused by creating mass hysteria. By frightening the vast majority of the country’s, indeed the world’s, population into thinking that their lives are severely at risk if they leave their homes, we have essentially guaranteed that any recovery will take years even once the lockdowns are ended. We will have caused many times the number of deaths that Covid did.

If we don’t reverse the hysteria soon, we are almost certain to manufacture another Great Depression. We will see wars fought over oil and food. We will see social unrest and crime. We will see revolutions and civil wars. We will enable strongmen and dictators around the world and here in the U.S. We will give up our privacy and our freedoms in near totality. We will see great cities like New York neutered beyond recognition.

Many people have referred to our current situation as the greatest threat to the world since World War II. They are correct. But let’s be very clear about something. The great threat is not from the natural disaster of the pandemic. No, we are living through the beginnings of a man and woman-made disaster. As a society, we’ve taken a serious but manageable pandemic and through childish overreaction, turned it into something far worse. Unfortunately, we will be feeling these effects certainly for years, likely for decades and possibly for generations.

The cure is worse than the disease

A quick note published on April 19, 2020:

I originally published the following very short piece on March 22 with the intention of expanding it significantly. However, a few days later, President Trump gave a press conference using the same words: “the cure is worse than the disease.” Having not had time to explain my thoughts further, and not wanting to get lumped in with Trump’s line of reasoning (or lack thereof), I pulled the post. Today, however, I reinstate it as it was.

While I have little doubt that lockdowns have significantly reduced the spread of Covid-19, I wholly stand by my view that shutting down the economy and effectively imprisoning the majority of the population will be, over the long term, far worse than the effects of the virus. This was true when the sophisticated “models” showed a worst-case scenario of over 2 million Americans dead, and it is even more true now that the models predict far fewer than 100,000 dead.

I, by no means, belittle the human tragedy brought on by nature. But the human tragedy being brought on by humans is far worse.

The original post (published March 22, 2020):

History teaches us that societies survive pandemics, even with horrifying loss of life. Societies do not survive economic collapse.

I write this as the U.S. economy grinds to a halt, with nearly all states issuing mandatory lockdowns. The financial markets, meanwhile, are melting down, worse than in the 2008–2009 financial crisis. The Federal Reserve has already, in about a week, exceeded its past extraordinary monetary stimulus. Congress is negotiating a stimulus package that dwarfs 2009’s.

And there is no end in sight.

We are past the time where social distancing will be effective. Perhaps this would have worked in December, or January, or even February but it will not work in March or April or May. The virus spreads too robustly and we have no technology to detect asymptomatic carriers in sufficient numbers.

We’ve lost the battle for containment. Unless we allow the economy to restart immediately, we will lose the war for society. Losing this war will prove far more painful than even the worst-case scenario of 2 million Americans dead.

More to come…

The enormous downsides of low interest rates

The Federal Reserve recently lowered its benchmark interest rate. Meanwhile, yields on 10- and 30-year U.S. Treasuries are at or near record lows, as are rates around the world. And according to recent news articles, nearly 25% of government bonds worldwide, some $15 trillion worth, carry negative yields. Moreover, the likelihood of negative rates in the United States seems greater with each passing day, especially given the political pressure being applied by those in power.

I’ve written elsewhere about the immense dangers of easy money and modern central banking policy. However, given today’s downward trend of interest rates, a decade after the 2009 financial crisis, I wanted to write a short post summarizing the major downsides of prolonged and artificially low interest rates. To keep this discussion at a breezy pace, I am not going to go into the wonkish mechanics of monetary policy, nor will I burden you (or me) with any more numbers or statistics. I will simply attempt to make the case that the side effects of low rates are vastly underappreciated, and highly damaging.

However, before we start discussing these downsides, indulge me for a very quick and high level primer on the theoretical rationale for low rates.

The rationale for low rates

Economists and central bankers are trained to believe the following: when the economy is experiencing negative or slow growth, and when unemployment is unacceptably high, it is the central bank’s duty to lower interest rates. Why, you ask? Because companies are more likely to invest in building a new factory or opening a new store when the funds needed to do so are cheaper. And building and operating a new factory or a new store requires additional employees. Thus by making the funds necessary for investment less expensive, central banks expect the unemployment rate to decline. Factories get built that otherwise would not have been built. Stores are opened that otherwise would not have opened. Employees are hired that otherwise would not have been hired.

In a similar way, low interest rates encourage consumers to spend more money. Like businesses, they too have an easier time borrowing to finance such purchases as houses and automobiles. In addition, low interest rates act to discourage saving since the amount earned on savings (the interest income) is reduced. In both instances, increased consumer spending acts to decrease unemployment as additional workers are needed to provide the incremental goods and services being purchased.

The simple rationale for low interest rates that I just described relies on several key assumptions. The most important is that the money that companies borrow because of low interest rates winds up being invested in job-creating ways. The second is that the money is invested in job-creating ways in the domestic economy, and not overseas. The third assumption is that interest rates will remain low only while unemployment is high. Once everyone who wants a job has a job (known as “full employment” or the “natural rate” of employment), cheap money no longer works to goose hiring and economic growth. Instead, low interest rates cause prices to rise (inflation) as companies have to raise wages to hire not the unemployed, but those employed by competitors. The last major assumption is that monetary policy is a short-term solution to a temporary (not a long-term or structural) problem of high unemployment.

Unfortunately, all four of these key assumptions are false, undermining the theoretical rationale for low interest rates. First, the vast majority of the money invested because of low interest rates has been invested in unproductive and job-destroying ways, as we shall shortly see. Second, much of what has been invested in job-creating ways has gone overseas as U.S. companies accelerate the trend of outsourcing and offshoring. Third, the official unemployment rate in the U.S. is at record lows (currently 3.7%), far lower than what economists used to consider full employment even during high-growth eras. Lastly, we’re now going on a decade of historically low interest rates (never before seen in thousands of years of recorded history) with no end in sight. Clearly this is not a temporary solution to a temporary problem.

In short, low interest policy doesn’t do what economic theory says it is supposed to do. But who cares right? No harm, no foul? Wrong. The problem with low interest rates is not that they don’t work. The problem is that the side effects are enormous. Let’s now move on from the discussion of theory to the very practical. What are those terrible side effects? What are the most significant downsides of perpetually low interest rates?

The missing inflation

One of the things most confounding to central bankers and mainstream economists is the lack of inflation after a decade of extraordinarily low interest rates. Recall that economic theory says that once employment is full, further monetary stimulus should cause a rise in inflation. And yet the key measures of inflation, such as the consumer price index (CPI), have rarely exceeded the 2% target set by central bankers. Central bankers have hence concluded that full employment must not yet have been reached, and that further stimulus is both necessary and good.

A while ago, I wrote a lengthy post explaining the conundrum of the missing inflation which I of course encourage you to read. For our purposes here, however, we need to concern ourselves with only one aspect of that explanation. Inflation is hiding in plain sight and it is having a terrible effect on consumers and the overall economy. Why economists are blind to this is one of the unsolved mysteries of the universe.

The missing inflation can be put into three categories. The first is non-tradable goods and services, or more simply, stuff that can’t be made in China and other low-cost producing countries. Prime examples of that stuff? Real estate, education and healthcare. Over the prior several decades of easy monetary policy, and certainly over the past 10 years, the inflation rate on these three giant categories of consumer spending has vastly exceeded the rate of the CPI. And while this inflation hurts nearly everyone, it especially impacts younger workers, who face the trifecta of enormous student loans, unaffordable health insurance and sky-high home prices. We will come back to this point later when we talk about weakening demographic trends.

The second category of the kind of inflation not found in the CPI is financial asset inflation (which also includes real estate). The damage of asset inflation is at least threefold. First, it all but guarantees lower returns going forward, in essence taking consumption, income and wealth that should be in the future, and should belong to today’s younger generations and shifting it forward to today’s older, wealthy generation. This is a generational transfer of wealth that is both unfair and exacerbates income inequality. Second, as we will discuss in further detail, asset inflation affects income inequality even further as the wealthy who hold financial assets see their wealth climb even higher. Third, asset inflation increases instability in the financial system and with it, the inevitable risk of future financial crises and meltdowns.

The third type of inflation that is missing from the CPI is something that typically keeps economists at central banks awake at night. This is the classic wage/price spiral, a cycle of continuously rising wages and rising prices that feed on each other. The difference, however, is that the inflationary wage/price spiral occurring today is not happening to the average consumer represented by the CPI. Instead, this inflation is happening to the highly skilled and to the wealthy.

Evidence for such a wage/price spiral can be clearly seen when examining the salaries of technologists in San Francisco or of financiers in New York, and the things they buy. We’ve experienced a continuous cycle in such cities of rising incomes, rising real estate prices and rising restaurant and luxury hotel prices. You can also see the dramatic inflation in the skyrocketing prices of scarce and luxurious collectibles favored by the wealthy and ultra-wealthy, such as modern art, fine wine and professional sports teams. The higher the prices or the more stratified the goods, the higher the rate of inflation. Not only does this trend exacerbate income inequality, as we will discuss later, but it causes vast distortions to the economy as an inordinate amount of economic resources is devoted to producing luxury goods and services for the few, rather than middle-class goods and services for the many.

The mortgaging of the consumer

We’ve already discussed the idea that one of the primary goals of easy monetary policy is to get consumers to spend more and save less. And not just to spend more and save less out of income, but to incur debt in order to spend more. The impact of both the disincentive to save and the low income earned on the little money that is saved is an overly indebted consumer class with no rainy-day funds, no retirement savings, no equity in their homes, and complete dependence on government funds for retirement. Add to the mix insolvent pensions (discussed next), soon-to-be insolvent Social Security, rising healthcare costs (discussed previously) and bad demographics (discussed later), and you’re left with a ticking time bomb for the consumer middle class.

The implosion of pensions

While low interest rates have set a slowly ticking bomb under consumers, the clock on the pension implosion runs much faster. Pensions have to invest workers’ contributions in order to meet projected retirement payouts in the future. Similarly, insurance companies have to invest insurance premiums to meet their forecasted insurance payouts. Pensions, along with insurance companies, represent by far the largest pools of savings in the economy.

Historically these types of institutional investors invested prudently in relatively safe assets such as government bonds. But as you of course know, returns on relatively safe assets such as government bonds are paltry given the workings of central bankers. The only way for pensions and insurers to even attempt to achieve the returns required to meet future liabilities is to take on more and more risk.
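The return-chasing pressure on pensions is simple discounting arithmetic. A hedged sketch with assumed figures (the payout, horizon and rates below are hypothetical, chosen only to show the sensitivity):

```python
# Hypothetical figures: the lump sum a fund must hold today to meet a
# $1,000,000 payout due in 20 years, at two assumed rates of return.
def present_value(payout: float, annual_return: float, years: int) -> float:
    """Discount a single future payout back to today."""
    return payout / (1 + annual_return) ** years

liability = 1_000_000
for rate in (0.07, 0.02):
    pv = present_value(liability, rate, 20)
    print(f"at {rate:.0%} returns: ${pv:,.0f} needed today")
```

At an assumed 7% return, roughly $258,000 today covers the promise; at 2%, the same promise requires about $673,000. That gap is why low safe yields push funds toward riskier assets, deeper underfunding, or both.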

The obvious problem of taking on more risk is that it is, well, riskier. Pensions have increased their holdings of stocks in lieu of government bonds, of high yield debt in lieu of highly rated debt, and especially of so-called alternative assets such as private equity and hedge funds. When the next financial downturn hits (and in the life of a pension or insurance policy, it certainly will), riskier assets will prove a disastrous investment.

To add insult to injury, many of these pensions, specifically those of state and local governments, are already woefully underfunded, and people are generally living longer. Together, you have all the makings of a crisis that can only result in drastic cuts to basic government services, drastic cuts to retirement income, or, more likely, both. This inevitable outcome will not be pretty.

Insolvent pensions are actually the smaller of the two evils of too much investment in risky assets. The less obvious but more damaging result is that investors such as pensions are subsidizing investments that are directly detrimental to jobs, to the middle class and to the long-term health of the economy. Specifically, they are subsidizing financial engineering, M&A and unproductive technology companies. These are the subjects of the next three sections.

The job destruction of financial engineering

As I mentioned earlier, due to low interest rates, institutional investors have been forced to put money into riskier stocks and highly leveraged or high yield debt. If that money was in turn invested by companies in new factories and stores we would likely conclude that monetary policy is working. But it’s not, and it isn’t. Most of the money is going instead to what we can loosely categorize as financial engineering: stock buybacks, mergers and acquisitions, hedge funds and private equity.

The economic problem with investing in such financial maneuvers is that they nearly always result in fewer jobs, not more. Funds used to buy back stock could have been used for investment in new products or new services, in capital expenditures, in R&D. Acquisitions nearly always result in job cuts in the name of synergy, and often in increased outsourcing and offshoring. Hedge funds, always short-term oriented, and especially the activist variety, put immense pressure on companies to downsize, to under-invest and to return money to shareholders, money that otherwise could have been invested productively. Private equity firms do the same to their portfolio companies, with the greatly increased risk of value- and job-destroying bankruptcies.

The creation of M&A fueled monopolies

As we’ve already discussed, low interest rates lead naturally to inflated equity prices, riskier investments and increased financial activity such as mergers and acquisitions (M&A). We’ve also mentioned how M&A leads to the destruction of jobs. However, there is an even worse consequence of the M&A activity of the cheap-money era. It has led to unprecedented and massive consolidation in nearly all industries, and to the creation of monopolies.

Highly consolidated industries and monopolies always result in some combination of higher prices, less innovation, worse service and (obviously) fewer choices. Facing little competition, monopolies tend to under-invest in product development, in customer service and in basic R&D, and over-invest in political lobbying in order to maintain their market position and keep out new entrants. Moreover, size begets political power and political power begets size, as monopolies lobby for regulation, tax advantages and subsidies. Lastly, industry consolidation is perpetuated as monopolies have the subsidized funds to acquire any and all companies they view as potential competition. While this has happened in virtually all industries, as I stated above, nowhere has it been more prominent than in technology.

Contrary to what is believed in conservative and libertarian circles, it is monopoly, not government that is the true enemy of the free market and the true enemy of freedom.

The parasite of subsidized technology

I’ve talked about how much of what has been invested in the era of low interest rates has gone to financial engineering such as M&A rather than used by companies to build or expand their businesses. It is certainly unfair and untrue to state that there has been no real investment in businesses. This is especially true in the technology sector. However, even more so than in finance, the investment in technology has been generally unproductive, job destroying, and altogether calamitous for the economy.

The parasitic nature of technology, and specifically of the internet, is a topic I wrote about in depth here. We have been trained by both economists and the media to think of technological advancement as a positive for the economy and for the world. In a world of normal interest rates that might have been true. But in today’s world, the impact is decidedly negative. I’ll go as far as to say that the internet might just be the worst thing ever to happen to human beings.

Let’s take the behemoth Amazon, my favorite internet punching bag. Amazon has, in fact, created hundreds of thousands of new jobs. The problem, however, is that Amazon’s retail operations (the vast majority of its revenue) have come at the expense of traditional retailers. For every job Amazon has created, several more have been destroyed, with an outsized negative impact on small, local retailers and local commercial real estate. Job destruction of this nature is pervasive throughout the tech industry, and given tech’s disruption of traditional industries, pervasive throughout the economy.

Naturally, there are those who say that the internet represents creative destruction at its finest; that such job destruction is both productive for the economy and inevitable. Such people are mistaken. The vast majority of tech companies, Amazon included, either lose enormous sums of money with unsustainable business models or, at best, earn far lower margins than the traditional companies they have disrupted (margins that would be totally unacceptable to investors in a world of normal interest rates). Uber, Netflix, Twitter, Tesla and WeWork are just a few more examples of highly disruptive technology companies that have lost, or continue to lose, billions. These companies are completely subsidized by the unceasing flow of cheap money.

As destructive as the technology industry has been to jobs, we are unfortunately just scratching the surface of the damage these companies have done to society as a whole. By making information free, by violating copyright, by failing to police their content as a traditional publisher would, by abusing their monopolistic and political power, technology companies have, among other things, incited hatred and terrorism, undermined elections, destroyed privacy, decimated journalism, undermined truth and fact, increased loneliness, reduced the sense of community, cut attention spans, killed intelligent thought and debate, popularized and brought into power fringe politicians, and caused irreparable damage to democracy and freedom.

Every day another article or academic study appears in one form of media or another about the damage that technology companies and the internet have caused. A consensus seems to be building that both the underlying cause and the solution to society’s internet problem are anti-trust laws. That lax anti-trust enforcement allowed these tech companies to become so powerful and that only strict enforcement can now restrict that power. This consensus is wrong. The real culprit is the subsidy of easy money. These companies would never have become as powerful without the perpetual low interest rates. And the only way to reduce their power is to take away that subsidy.

The cancer of income inequality and the demise of democracies

As I’ve written about in great depth and in summary form, the central banking policy of low interest rates is the single most important cause of the drastic increase in income inequality of recent decades. To best understand this trend, it helps to separate income inequality into two types: the decline of the middle class and the rise of the super wealthy.

The story of middle class decline is as follows. The entrance of China and other low wage manufacturing countries to the global economy caused wage pressure in high wage countries such as the United States. Regulations, unions and legacy pension costs together wouldn’t allow wages to fall to remain competitive. Instead factories, businesses and entire industries went bankrupt and closed. Laid-off workers were forced to take jobs in much lower wage service jobs, as second-class citizen contractors lacking benefits, or more recently, in the gig economy.

Meanwhile, while prices of goods from China (and elsewhere) did decline, the overall price level did not. Why not? Because the central banks kept their foot on the monetary gas pedal: partially because of their illogical fear of deflation, partially due to their naive desire to eradicate the business cycle, and partially because of a series of Wall Street bailouts. The money had to go somewhere.

As we discussed earlier, what prices went up the most? Real estate, healthcare and education. These all became unaffordable to the middle class. Further, low interest rates allowed more borrowing to fund those purchases of Chinese goods, and outsourcing and offshoring expanded due to M&A transactions and hedge fund short-termism. All of these factors accelerated the globalization trend further, with far more impact on U.S. jobs than it otherwise would have had under normal or market interest rates. In short, declining wages, fewer jobs, rising prices, and over-indebtedness created middle class despair.

The second, and even worse, type of income inequality that the entire world has experienced is the wealth accumulation of the rich and the super rich. Here the blame lies even more squarely at the door of the Federal Reserve and the world’s other central banks. Low interest rates subsidize financial engineering such as M&A, along with growth and risky investments such as technology, and have thereby caused the income and wealth of public company CEOs, tech entrepreneurs, venture capitalists, hedge funders, investment bankers, private equity partners, and many others to skyrocket. With risk, and correspondingly growth, subsidized, the entire economy became a “winner take all” game. And of course the inflation in financial asset prices due to low interest rates perpetuates the growth of inequality.

Combine these two trends of a sputtering middle class and the wealthy getting wealthier and more politically powerful and you sow the seeds of populism, socialism, unrest and ultimately revolution.

The strangulation of economic growth

Central bankers, politicians and nearly all voters focus only on short-term results. This is one of the inherent problems of democracy. In this last main section, I want to talk about long-term economic growth. In many ways, this serves as a summary of everything we’ve covered before. We’ve discussed how low interest rates have shifted spending forward by decades, which will result in lower economic activity in the future. We’ve talked about how pensions won’t have the money to pay retirees. We’ve also mentioned how asset inflation sets the stage for future financial crises. But those are not the only, or even the worst, impacts on tomorrow’s economy.

Over the long term, economic growth is essentially a function of two basic factors: productivity growth and population growth. Low interest rates kill both. As we’ve discussed a number of times, subsidizing risk has led to under-investment in basic R&D and long-term product development. It has led to stagnating monopolies and unproductive technologies.

But perhaps most importantly, it has led to under-investment in employees and their skills. With layoffs and downsizing always around the corner, lifetime employment dead, and independent contractors and gig workers flourishing, employers no longer invest in their employees, and employees no longer invest in their employers. Productivity is a difficult concept and misunderstood by many. While technological breakthroughs like the cotton gin, electricity, the automobile or the computer get the headlines, they are rarely what drives productivity growth. What more typically drives productivity growth are the small, incremental improvements made by long-term employees who increase their skills and knowledge and are invested in their jobs and careers. Without such long-term employees, these kinds of productivity improvements wither away.

The second variable of long-term economic growth is population growth, which is itself a function of two factors: childbirth and immigration. And yet again, monetary policy has negatively impacted both. Young people crushed by student loans, sky-high real estate prices, unaffordable healthcare and poor job prospects delay getting married and having kids. Or worse, fail to do so entirely. At the same time, middle class despair together with technology-fueled misinformation and nativist sentiment supports populist-style leaders and erodes support for immigration.

Taken together, population growth is stifled. This is exactly the situation in Japan, where three decades of suppressed interest rates have caused a demographic implosion, in essence, a dying country. The United States and Western Europe are following in Japan’s ever diminishing footsteps.

Conclusion

Well-meaning but unwise central bankers have backed themselves into a corner as the world has become addicted to cheap money. Unaware of the consequences, blind to reality, slaves to erroneous theory, and bullied by political pressure, economists in power have suppressed interest rates, causing immeasurable economic distortions and societal damage.

Continue current policy and yes, you likely delay the inevitable restructuring of the worldwide economy. In the meantime, the middle class continues to be crushed, income inequality keeps rising, productivity stagnates, monopolies flourish, demographics worsen, political power consolidates, truth and fact lose significance, populist and socialist leaders surge, and democracy and freedom crumble.

But assuming we survive all these things – and we may not – sooner or later we will experience a crisis that no central bank can print its way out of. One that dwarfs the financial meltdown of 2009. When, I don’t know. Perhaps months from now, perhaps years, perhaps decades. But for sure it will come.

Normalize interest rates now and the value of financial assets will plummet. Wall Street and Silicon Valley will crater. Unemployment will rise. The global economy will likely go into a depression. But the world that will emerge, and it will emerge, will be more productive, more equal, more fair and more free.

Making the right choice isn’t easy. It may be politically impossible. But the first step must be education. Until central bankers, politicians, the media and voters awaken to the reality that the downsides of low interest rates far, far outweigh the upsides, the U.S. and the world will continue forward on their disastrous course.

Why Affirmative Action is counterproductive

I write this post as the admissions policies of the esteemed Harvard University are on trial in federal court for disfavoring Asian Americans. Regardless of its outcome, this case will certainly not be the last one brought to a court’s attention questioning the legality of Affirmative Action, especially with a conservative-leaning Supreme Court seemingly hostile to such policies. However, I am not writing here to opine on the legality of Affirmative Action, something I am decidedly not qualified to do.

The non-Constitutional scholar, that is the layperson, tends to make passionate arguments for or against Affirmative Action on a basis of morals. The argument in favor tends to make any of three points: 1) Affirmative Action seeks to right past wrongs, such as slavery and discrimination1, 2) Affirmative Action seeks to make up for current inequalities of opportunities between racial (or socioeconomic) groups, such as inferior schools, lack of extracurricular and enrichment programs, less education-focused neighborhoods and weaker family structures2, and 3) Racial diversity is, in and of itself, desirable, especially within a classroom3.

The primary argument against Affirmative Action (and the one being litigated in the Harvard University case) tends to be of the variety, “two wrongs don’t make a right.” That is, discriminating for one group, by definition equates to discriminating against another group. And that, opponents of Affirmative Action contend, is equally unfair.

Having said all that, just as I am not addressing its legality, I am not going to discuss in this short article whether I believe Affirmative Action to be morally right or wrong (though I provide a few thoughts in the footnotes). What I am here to write about is the following. Whether or not the purposes of Affirmative Action are morally and ethically justified, whether it is well-meaning or not, its implementation is counterproductive. Stated more bluntly, Affirmative Action promotes racism rather than mitigating it.

The reason for this, however, is not necessarily the obvious one, that of resentment. To refer to the current Harvard case, I do not believe that Affirmative Action promotes racism because Asian Americans who do not gain admission resent the applicants of racial minorities who are favored. If anything, they resent the (mostly white) members of Harvard’s admissions staff who discriminated against them.

No, the reasons that Affirmative Action policies in education (and professionally) promote racism are twofold. First, by having a lower academic standard for individuals that share something in common (i.e. their skin color or their heritage), schools actively promote the belief that, on average, individuals who share that skin color or heritage are less skilled and less qualified.

Say, for instance, I am in a class of 100 students with two members of a minority group who are, on average, as qualified as the other 98. Yes, I may acknowledge that there are fewer students of that minority than others, but this is likely a passing thought. More importantly, I have every reason to believe, based on my experiences with my peers, that in the general population different groups or races are generally equal, with similar abilities and qualifications.

On the other hand, let’s now suppose I am in a class of 100 students with 10 members of a minority group, of whom two are as qualified as any of the others, but eight are clearly below average. I still may have the passing thought that there are fewer students of that minority than others, but now, based on personal observations, I am likely to conclude that my peers of that minority are less qualified than average. And importantly, I am naturally inclined to extrapolate that conclusion beyond the classroom to the general population, a much more pernicious viewpoint. This is especially likely to be the circumstance if these are my first close experiences with that minority group (as is often the case given how segregated most communities are within the United States).

The second reason that Affirmative Action breeds racism is that by devaluing the meaning of the credentials of an organization, it forces me to consider race when evaluating an individual with that credential. Let’s say I am a hiring manager reviewing resumes from Harvard University students. If Harvard did not employ Affirmative Action4, then I could safely assume that, on average, all students that I am considering for employment are more or less equally qualified. That is, the Harvard degree means the same thing for all Harvard graduates.

But with Affirmative Action, I must consider the fact that students of certain minority groups have lower qualifications. Hence, when reviewing a resume, I am effectively forced to look at the student’s name, his or her club affiliations, etc. to try to determine whether the applicant is of minority group status5, and adjust my resume evaluations accordingly.

In other words, by having separate sets of admissions criteria, Affirmative Action breeds racism by forcing a hiring manager to consider race. And this, of course, makes a hiring manager less likely to hire members of that minority group, at least absent further Affirmative Action within the hiring organization.

To reiterate, Affirmative Action promotes racism because, first, it encourages students both to think about racial differences and to draw pernicious conclusions from those differences. Second, it essentially forces hiring managers to consider race as an employment factor. Affirmative Action is bad policy, not because of its morality (or lack thereof), but because of its outcomes. Affirmative Action inevitably promotes the kinds of stereotypes that its well-meaning proponents are desperately trying to eradicate. A policy designed to reduce and mitigate racism and other forms of bias breeds and strengthens exactly those biases and that racism.


1 You cannot make up for yesterday’s wrongs experienced by thousands or millions by helping a handful of their descendants today. This solves nothing except to make members of the majority feel less guilty.

2 If you want to close the achievement gap between different minority groups and the majority, you must do so at the youngest ages, not in college. As difficult (or impossible) as this may be, this is where efforts should be devoted.

3 Diversity in the classroom is enormously overrated because classroom discussion is enormously overrated. Students should not be in school to listen to their uneducated peers pontificate. They should be in school to learn from their educated teachers. I realize this is probably an unpopular and minority viewpoint, but in my many years of education (as both a student and a teacher) I have found classroom discussion almost always to be a colossal waste of educational time.

4 Schools like Harvard should put an end to all forms of Affirmative Action, including for athletes, the highly wealthy and legacies. Our education system, including higher education, is abysmal. One of the (many) reasons is the amount of resources spent on things other than education, such as athletics.

5 Opponents of Affirmative Action correctly point out that this significantly disadvantages the minority students that are fully qualified and who would have gained admittance regardless.

Is Bitcoin the future of money?

The one question I’ve been asked more than any other over the past few months is what I think of Bitcoin and other cryptocurrencies. Specifically, are they the future of money? The answer is no.

There are a host of reasons why cryptocurrencies like Bitcoin will probably not supersede government fiat money. Lack of security, hackability of accounts and price volatility are foremost. Perhaps these are solvable. However, there are three reasons why Bitcoin definitively cannot and will not ever be considered mainstream money.

1. The environmental impact of Bitcoin (and similar blockchain-based cryptocurrencies) is atrocious because the electricity requirement for Bitcoin mining is enormous. Sooner or later (and this has already begun), municipalities, governments and regulated utilities will prohibit and/or tax cryptocurrency mining due to its environmental impact. Further, private individuals will be shamed out of using such currencies as the environmental impact becomes more widely known.

2. There is an unlimited supply of cryptocurrencies. To be clear, the supply of any single currency, such as Bitcoin, is not unlimited. In fact, for Bitcoin the ultimate supply is capped (in and of itself another problem, since the quantity of money should more or less grow with the economy). But there are literally thousands of such currencies in existence today and untold more that can exist tomorrow. With the underlying technologies public knowledge, there are no barriers to entry to prevent new cryptocurrencies from being created, and thus no limit to the supply of money.

3. The foundation of cryptocurrencies and blockchain technology is that they are decentralized. That is, accounting and transaction records exist not on a single computer (i.e. of a government or a bank) but on many private computers. The purpose, of course, is to keep transactions private and out of the view of government agencies. How long do you think governments would allow this to continue if more and more financial transactions became untraceable, unregulatable and untaxable? Not very long, methinks. Simply put, the governments of the world will not allow cryptocurrencies to become mainstream money.
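As a side note to point 2, Bitcoin’s own cap can be checked with a quick back-of-the-envelope calculation: the block reward began at 50 BTC and halves every 210,000 blocks, so total issuance is a geometric series converging to just under 21 million coins. (This sketch ignores the integer satoshi rounding in the actual protocol, which makes the true cap fractionally lower still.)

```python
# Back-of-the-envelope check of Bitcoin's supply cap: the reward starts at
# 50 BTC and halves every 210,000 blocks. Summing the geometric series over
# the 33 reward eras gives the approximate maximum number of coins.
total_btc = sum(210_000 * (50 / 2 ** halving) for halving in range(33))

# After 33 halvings the reward rounds to zero in practice (satoshis are
# indivisible), so the real cap sits marginally below 21 million.
```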

If not mainstream money, what will cryptocurrencies like Bitcoin be useful for in the future? Same uses as today. Black market activity and speculation. Nothing more, nothing less.

The Last Jedi ruined Star Wars and I am sad

With this post I take a break from my regularly scheduled economics programming to express some sadness. The subject of my sadness is Star Wars, specifically The Last Jedi. Star Wars is one of my few guilty pleasures. I grew up with the original trilogy, and I’m a fan. Not an Ultra Passionate Fan, as Mark Hamill would say, but a fan nonetheless. Like many, I went to see The Last Jedi on opening night. Like some, I left the movie theater disappointed, angry and sad. Admittedly, far more disappointed, angry and sad than a grown man should be after a movie. Alas, that was the power of Star Wars, or perhaps the weakness and immaturity of me.

I’d like to try to explain to all of you why some people (including myself) are so angry about this movie, as I believe I am more or less representative of this demographic. For full disclosure, I am a 42 year old, clean shaven (no neckbeard), married (I do not live in my parents’ basement) white male. You can call me a fanboy or not, I have no idea what that really means.

This is not meant to be a movie review. There are countless of those out there on the internet, some made by folks far more knowledgeable about movies and far more knowledgeable about Star Wars than I. Having said that, let me begin with the following (with minor spoilers). As a standalone modern sci-fi/fantasy/action big budget blockbuster, The Last Jedi was good though certainly not great. It had some major plot issues, underdeveloped characters, a boring second act, misplaced humor and too-obvious political correctness. But it had fewer issues than most other contemporary sci-fi/fantasy/action big budget blockbusters.

On the flip side, it had some terrific scenes, awesome visuals (Superman Leia notwithstanding), great music (mostly legacy melodies) and was far more thoughtful, surprising and original than most other sci-fi/fantasy/action big budget blockbusters. In a nutshell, it makes perfect sense that critics and most audience members are overwhelmingly positive. (As the middle chapter of a trilogy, there are some additional and significant issues with The Last Jedi, but they are not pertinent to this discussion.)

So why the intense hatred? It has nothing to do with Snoke’s lack of a backstory or Rey’s anonymous parents, as some have suggested. It does, however, have to do with fan expectations, as others have surmised. In short, the movie mocks the Star Wars I grew up with. It makes fun of the mythology, it negates everything about the original trilogy (The Force Awakens is equally, if not more guilty here). Our heroes are back where they started, having accomplished nothing. Most importantly, The Last Jedi obliterates the character of Luke Skywalker that I’ve known for 40 years. In doing these things, it mocks, makes fun of, negates and obliterates a small part of my own life. Director Rian Johnson isn’t just saying that the Jedi are fools. Or that Luke had been a fool. Or that simplistic good versus evil is foolish. He is implicitly saying that I have been a fool for being a fan. That hurts.

Now, you can tell me that I’m immature, that I need to grow up, that Star Wars is just a meaningless B-movie/space opera with cheesy dialogue. You would be right. But that doesn’t negate the fact that after seeing The Last Jedi, I do feel an emptiness and a betrayal for having been a Star Wars fan for my entire life.

I need to explain what made Star Wars so special to so many (more special than any other movie or book series in entertainment history). There are three things: 1) the enormous world building/universe, 2) the mythology/religion that is as interesting (and of course as unrealistic) as any mythological stories here on earth and 3) the characters. The first two are obvious but let me explain the characters.

Star Wars created perhaps the greatest villain in movie history (Darth Vader), perhaps the greatest and second-greatest wise men (Yoda and Obi-Wan) and one of the greatest AND most relatable superhero protagonists in Luke. That he is relatable is crucial. All of us were at one point (or perhaps for some readers of this site, will someday be) whiny, impatient, impulsive, bored teenagers who need to grow up. We ALL wanted to be Luke. (As a side note, this is not so for Rey, and it has nothing to do with her being a woman. She is NOT relatable because she is already perfect.) Luke is a landmark and should have been treated as such.

Having said all that, I suspect that the majority of fans with similar views to my own are about my age, and grew up with the original trilogy and not the prequels. The reason is that the prequels (which were generally lousy movies) already began the process of changing (ruining) Star Wars. It hurt the mythology (midichlorians, etc.) but more importantly it damaged two of the three great characters of the original trilogy. It made Yoda (and the Jedi) seem like idiots. And it turned Vader into a whiny brat (or more accurately, turned a whiny brat into Vader for no good or comprehensible reason).

But, and this is important, it did two things right, and this is why we can forgive or ignore or even enjoy the prequels. It did not impact our real hero, Luke. And more crucially, the movies enormously expanded the Star Wars universe (not always, but mostly for the better). And this is why the prequels, as bad as they were, are still an integral part of Star Wars.

The Force Awakens (which was really a reboot, not a sequel) and more importantly The Last Jedi fail to be Star Wars, in my opinion. Or at the very least, they substantially decrease the essence of Star Wars rather than increase it. They make the universe seem small, not large. Most planets are copies of those seen in prior movies. The First Order and Resistance are essentially just smaller versions of the Empire and the Rebellion. Second, the new movies (specifically The Last Jedi as I already mentioned) blatantly destroy the mythology of Star Wars and the Jedi. Third, the characters are weaker, and less relatable copies of the originals. Lastly, Star Wars is meant to be escapism from the real world (even if it has subtle or not so subtle underpinnings of social commentary). The Last Jedi’s self awareness and cynicism were exactly the opposite.

I am sad after having seen The Last Jedi. I don’t think that’s the emotion the director intended for many fans to feel. I do think this movie was as big a failure for the Star Wars universe as it is a commercial success. I will probably see Episode IX out of curiosity, but I have zero anticipation for it, something I never would have thought I would say after a Star Wars movie.

Finally, and for what it’s worth, I don’t place the full blame on Rian Johnson. He was dealt the hand that JJ Abrams left him, and he clearly wanted to go in a different direction and make his own movie. I get that. I do, however, blame Disney/Lucasfilm. They decided to make these movies before they had any stories. They clearly had no plan. For all the hundreds of millions of dollars invested in making these movies, and for the billions of dollars of revenue they generate in movie tickets and merchandise, Disney should have had a good story and a real plan for the trilogy. Most importantly, they should have been respectful of what came before and they never should have let fans of 30 or 40 years down the way they did. There’s simply no excuse for that.

Utility and a theory of emotion

In my last post I discussed at length the question of rationality. I concluded that contrary to the opinion of behavioral economics, humans do make decisions that they believe to be in their best interests, in my view the correct definition of a rational decision. In that post, I first had to define what “best interests” means, a concept we called “utility.” In this post, I want to do two things. First, I want to repeat (apologies) the definition of utility and expand on what it means for a person to try to maximize utility. Second, I will use our model of utility to hypothesize a theory of human emotions.

Before we begin, a quick note. I mentioned in my previous article, and I reiterate here, that for the most part, my definition of utility is not groundbreaking. However, I believe my view of where emotions come from and their relation to utility may be unique. At least, I am not aware of anyone who has espoused a similar idea.

What is utility?

To many philosophers and economists, utility is a measure of, or a proxy for, happiness. As we will see shortly, utility and happiness are absolutely related but they are not the same thing. Utility is a metaphorical basket comprising all the different things that evolution and biology and our genes have made us humans desire. I believe that the components of utility can be grouped into three categories: basic life necessities, social desires and entertainment/leisure. I’d go further and say that these three categories are listed in order of importance. In other words, basic life necessities are the strongest contributor to utility, then social desires and then entertainment.

Basic life necessities are things such as water, food and good health. Other things equal, my utility at any given moment is higher if I’m not thirsty, not starving, and not sick. Like many of the Earth’s species, humans have evolved to be social animals. We desire such things as love, friendship, companionship, sex and status. Other things equal, my utility is higher if I love and am loved, have friends and consider myself to be superior (higher status) to my peers. Lastly, we desire all sorts of entertainment or leisure, the third category of utility. Other things equal, my utility is higher if I am entertained, having fun, not bored. And keep in mind entertainment means very different things to different people. To couch potatoes, mindless TV watching. To adventurers, sky diving. To intellectuals, reading articles on EconomicsFAQ.

Before moving on I want to clarify a few things. First, the three categories are not perfectly discrete. There can be overlap. For instance, food is nourishment (basic necessity) but can also be entertainment. Similarly, hanging out with friends or having sex can also contribute to multiple categories: social desires and entertainment. Wearing fancy clothing can provide warmth (basic necessity) and status (social desire).

Moreover, every person, based on their genes, will have somewhat different weightings for these three categories, not to mention all the different human activities that make up these categories. For example, an extrovert will likely favor friendships and human interaction more than an introvert. Someone with a “Type A” personality might favor the “status” component of utility more than a less aggressive person. In addition, each person’s individual weightings will almost certainly vary over time. For instance, “status” seeking probably peaks when seeking a mate and declines as we age.

What does it mean to maximize utility?

So far, we’ve defined utility as best we could. At any given (conscious) moment in time, each of us has some level of utility. When we say that we humans make decisions in order to “maximize utility,” what we precisely mean is that we make decisions in order to maximize the present value of the sum of our probability-weighted future utility over our lifetimes (or longer, if you believe in an afterlife).

By the term “present value,” we mean that a given amount of utility today is worth somewhat more than the same amount of utility tomorrow. How much more depends on some “discount rate,” which will also vary from person to person and may vary from moment to moment. Furthermore, we implicitly weight the utility we will experience in the future by its probability of occurring. That is, an event that has a higher chance of happening will contribute proportionally more to the present value of my utility.

Going forward, I’m going to use a shortcut for the sake of brevity and readability. When I use the word “utility” I mean the present value of probability weighted future utilities. So when I say “maximize utility” I really mean maximizing the sum of the present value of probability weighted future utilities.
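As a minimal sketch of this definition (the life paths, utilities, probabilities and discount rate below are all invented for illustration), “maximize utility” amounts to choosing the option with the largest discounted, probability-weighted sum:

```python
# Sketch of "maximize the present value of probability-weighted future
# utility." All numbers here are hypothetical.

def present_value_utility(outcomes, discount_rate=0.05):
    """Sum utility * probability / (1 + r)^t over (years_from_now, utility, probability)."""
    return sum(u * p / (1 + discount_rate) ** t for t, u, p in outcomes)

# Two hypothetical life paths, each a list of (years from now, utility, probability).
risky_path = [(1, 10, 0.9), (5, 50, 0.5), (20, 100, 0.1)]
steady_path = [(1, 30, 0.9), (5, 30, 0.8), (20, 30, 0.7)]

# The "rational" choice is simply whichever path scores higher.
chosen = max([risky_path, steady_path], key=present_value_utility)
```

Note that a higher discount rate shrinks the contribution of far-off utility, which is why two people with identical forecasts can still rationally choose differently.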

A few points before we move on. You might be skeptical that every time we are faced with a decision, we actually map out the rest of our lives and make some incredibly complex calculations. You’d be right of course, sort of. Our incredibly complex brains have evolved to do this for us. That is, we do this subconsciously. Also, keep in mind that we make only one decision at a time, that the vast majority of decisions we make have negligible effects on our lifetime utility, and that most decisions involve a small number of choices (often just two).

We also have evolved to use rules of thumb (“heuristics”) to help us forecast the future and make decisions. One of the most powerful, I believe, is to favor decisions that maximize our ability to make future decisions. In other words, to keep our options open. Perhaps we humans are naturally optimistic creatures. I might land my dream job! I might win the lottery! I might marry a supermodel!

I find it helpful to think of life as a giant decision tree. Each decision results in our tree dividing into two or more branches. We choose the branch that we expect will result in the highest utility (the largest leaf if you will). But as I said, we tend to favor choices that maximize the size of our tree, the number of branches, even if some of those branches have quite a low probability. We try hard to avoid making decisions that will significantly prune our decision tree. More than anything else, I believe that this accounts for our natural desire for freedom. The more freedom, the larger our decision tree, and the larger our decision tree, the greater number of “high utility” branches.

I want to be very clear about this idea of a tree. Strictly speaking, maximizing the size of your tree is just a heuristic for maximizing utility, and it is not always the correct one. Consider the following. A long time ago, you committed a crime. Hanging over your head is the chance of jail time, almost certainly a low utility branch of your tree! But now the statute of limitations for your crime has run out and the threat of prison has been eliminated. In this example, your tree has been pruned, normally something to avoid. But because this was a low utility branch, your lifetime utility is almost certainly higher. Mostly, however, we humans like to maximize our possibilities because some of them will lead to high utility outcomes. Keeping our options open is good, because we can always choose the option with the highest utility.
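A toy version of this tree (all branch values invented) treats each choice as a set of (probability, utility) branches; the heuristic of keeping a wide tree usually, though not always, coincides with the true rule of maximizing expected utility:

```python
# Toy decision tree: each choice leads to branches of (probability, utility).
# All values are made up for illustration.

def expected_utility(branches):
    return sum(p * u for p, u in branches)

# One choice keeps many options open, including a low-probability jackpot;
# the other prunes the tree down to a single safe branch.
keep_options_open = [(0.6, 40), (0.3, 60), (0.1, 200)]
safe_and_narrow = [(1.0, 55)]

chosen = max([keep_options_open, safe_and_narrow], key=expected_utility)
# Here the wider tree also has the higher expected utility (62 vs. 55),
# so the heuristic and the true rule agree.
```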

Before we move on, I want to make one last very important point. When we make a decision, only the future matters. Absolutely we take our past life experiences into account to help us make our decisions. And certainly, things we’ve done in the past can result in future utility, for example, the memories of a loved one or the sentimental value of a favored object. But, when we try to maximize the present value of future utility, we do not factor in the past into our “calculation.” We only look forward. We will return to this crucial idea when we talk about emotions, and also regarding winning a lottery.

A few decision making examples

To make the discussion of utility a bit clearer, let’s discuss a few examples of decisions I might make and see how they will (or will not) impact my utility.

Say tomorrow I wake up and it’s time for breakfast. I look into my kitchen cupboard and I find two cereal boxes: Corn Flakes and Rice Krispies. Assume they both have the same nutritional value, cost the same amount of money, and I like them equally. I have a choice to make, but this choice will have a negligible effect on my utility. I say negligible but not zero because perhaps there’s a slightly greater chance of choking on the slightly larger Corn Flake. Or maybe there’s a minuscule chance that the “snap crackle pop” noise of the Rice Krispies will wake up my sleeping child. But point being, I’m not going to give this decision much thought because it is not going to have much of an impact on my utility.

The next day I wake up and again go to the cupboard for breakfast. However on this day I look a little more closely inside and realize that there is a third box of cereal: Lucky Charms. Now I have a slightly harder choice to make, and one that will likely have a bit more impact on my utility. Do I eat one of the healthier, boring cereals (Corn Flakes or Rice Krispies)? Or, do I go for the Lucky Charms, the sugary, marshmallowy, unhealthy one?

Lucky Charms may very well give me more current utility (I’m ignoring the guilt I might also feel, which would lower my utility). But, it may also lower my future utility by making me fatter, raising my chance of developing diabetes or heart disease, perhaps lowering my life expectancy. Here I must make a choice that has a trade-off. Eat the Lucky Charms and have more utility now but less later, or eat healthy and have less utility now and more later. There’s not necessarily a right or wrong answer that works for everyone every morning, but there is a right or wrong answer for you on that particular morning. Choose the cereal that results in the greatest lifetime (present valued) utility.

Of course in the grand scheme of things, choosing what to eat for breakfast is a pretty small decision, and one that very likely has a tiny effect on my lifetime utility. Let’s go to the other extreme, decisions that might have very large impacts, for example which college to attend or what career to pursue. These are decisions worth obsessing over. Let’s discuss the decision of whether or not to marry your current girlfriend or boyfriend.

The decision of marriage is one of life’s decisions that probably impacts future utility more than just about any other. Before we begin, recall that I said earlier that most decisions we make involve a very small number of choices, often only two. That is the case here. We are not choosing who to marry among hundreds or thousands or millions of eligible bachelors/bachelorettes. We are making a binary (yes or no) decision, to marry or to stay single. Let’s think about how marriage affects my future utility.

If I choose marriage, the upside is love, companionship, children, etc. Sure I may experience those contributors to utility even without marriage, but the probability is much higher with marriage (perhaps close to 100% probability, at least for the foreseeable future). On the other hand, choosing marriage substantially “prunes your tree.” That is, you give up (depending on your views of polygamy or adultery) the fun of dating. You give up the chance to meet someone even “better.” You give up all those branches of your tree that might just lead to that supermodel or trophy husband. What to do? Keep your options open? Or prune the tree for the substantially greater probability of all of those social components of utility?

Most of life’s decisions are like the breakfast ones. They have very limited impact on utility. Occasionally however, we face a big one like marriage, a decision that has a huge influence on our utility, and on our emotions.

A theory of emotions

I hope that by now the concept of utility, and how we make decisions to maximize utility, is reasonably clear. Now I am going to turn our attention to the topic of emotions. I believe that emotions are derived from utility, specifically from changes to utility.

Just as we spent some time specifying the concept of utility, we need to properly define the term emotion. This is tricky because at least in the English language, we tend to associate the word “emotion” with “feelings.” However, the word “feeling” or “to feel” is quite ambiguous. We often say we feel hungry or feel hot or feel loved or feel happy. In my view, neither hunger nor cold nor love is a proper emotion. Of these four “feelings,” only happiness is a true emotion.

Hunger (or being satiated), being cold (or warm) and experiencing love (or loneliness) are all direct contributors to utility. Recall that we grouped utility into three categories: basic needs, social desires and leisure. Eating and keeping warm fall into the basic needs bucket. Love is in the social desires bucket. In fact there are many such social desires that we think of as emotions but that directly contribute to utility. For example, jealousy (when we perceive our own social status as lower in comparison to someone we know) or guilt (when someone we know views our social status as low because of something we did) or schadenfreude (when we view our social status as superior to someone else because of something they did).

On the other hand, happiness is a real emotion because we “feel” it when we experience a change to the present value of our utility. In other words, the definition of an emotion is what we feel when there is a change to utility. I will argue that there are, in fact, only two real emotions: positive (call it “happiness”) and negative (call it “sadness”). All other emotions are just variations of happiness and sadness as we will discuss in a moment. As we mentioned, happiness occurs when our utility increases. Sadness occurs when our utility decreases.

To drive the point home, let me suggest two other ways to contrast feelings like “hunger” and “cold” and “love” with the true emotions, happiness and sadness. If you are verbally inclined, think of the former as some of the “nouns” of utility. True emotions are the “adjectives.” Alternatively, if you are mathematically disposed, think of the true emotions as derivatives of our utility function. That is, they result from the change to utility, not from the components of utility itself.

As I said, happiness and sadness represent what we feel when the sum of the present value of future utility increases or decreases, respectively. Of course, we use many more words to describe our emotions. Where do all these other words come from? I believe that there are two types of variations to these two basic emotions. The more obvious first variation relates to the strength of the emotion, or more accurately, the degree to which utility increases or decreases. Emotion is a spectrum. The second variation relates to time period. That is, is the change to utility in the past, the present, or the future? Let’s discuss each of these in turn.

If my utility increases a modest amount I feel happy. If my utility increases a larger amount I feel thrilled, elated, ecstatic, euphoric. Find $10 on the ground and I am happy. Win the $100 million lottery and I am ecstatic. Both increase my utility but I can do far more with $100 million than I can with $10. Hence, my utility increases substantially more having won the lottery, and thus my positive emotion is much stronger. Similarly, a small decrease in utility results in sadness. A larger decrease results in feelings of despair, devastation, depression. Lose $10 and I am sad. Lose my life’s savings and I am devastated. Both decrease my (present valued) utility, the latter much more than the former.
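The relationship just described, emotional strength tracking the size of the utility change (the “derivative” of utility, in the mathematical analogy above), can be sketched as a toy function. This is purely illustrative: utility is not actually measurable, and the function name and numbers below are invented for the example.

```python
# Toy sketch: emotion as the change (discrete derivative) of utility.
# The utility values are purely illustrative; nothing here suggests
# utility can really be put on a numeric scale.

def emotion(previous_utility, current_utility):
    """A positive change yields happiness, a negative change sadness,
    with strength proportional to the size of the change."""
    change = current_utility - previous_utility
    if change > 0:
        return ("happiness", change)
    if change < 0:
        return ("sadness", -change)
    return ("neutral", 0)

# Find $10 on the ground vs. win the lottery: both changes are
# positive, but the magnitudes (and hence the emotions) differ greatly.
mild = emotion(100, 101)       # small increase -> mild happiness
ecstatic = emotion(100, 1000)  # large increase -> much stronger emotion
```

The point of the sketch is only that the sign of the change picks the emotion and its magnitude picks the strength; the components of utility themselves never appear.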

The second variation to emotion relates to the time period with which I experience the change to utility. I feel a slightly different set of emotions depending on whether the change to utility happens now (the present), whether I am remembering about a change to utility in the past, or most interestingly, whether I anticipate the change to utility in the future.

We’ve already discussed what we feel when the change to utility happens in the present. We feel those variations of happiness and sadness (stronger or weaker) depending on the magnitude of the utility change. However, when we recall changes to utility we feel slightly different positive and negative emotions. When we reminisce about positive changes to utility, we are proud, content, sentimental, perhaps relieved. On the other hand, when we recall a decrease in our utility we feel such emotions as regret or anger (especially anger if we can place blame).

As I said above, I think the most interesting variations of the positive and negative emotions (especially the negative ones) occur when we predict changes to our utility. That is, when they happen in the future (when changes to utility happen that we do not predict, we feel “surprise,” which can naturally be positive or negative). When we look forward to or expect a positive change we feel anticipation or excitement. When we contemplate a decline in future utility we experience powerful emotions such as stress, anxiety and panic. In fact these variations of negative emotions seem to have outsized effects on our bodies and our immune systems, something I want to discuss a bit further.

To understand how we anticipate a decline in utility I think it is helpful to use the model of the decision tree. Recall that we can view the size of our tree as a proxy for our utility (though remember also that this is just a proxy, it does not always hold true). The more choices we have going forward (generally speaking) the greater our utility. What happens if we take away choices, when we prune our tree? Generally speaking again, our utility is lower and we feel negative emotion. What happens when we anticipate or expect or fear the pruning of our tree? We feel stress and anxiety, perhaps even panic.

Think about a time when you had to take a big test, perhaps the SATs. If you score well your dream college might be in your future. But, if you score poorly, there goes Harvard, and with Harvard goes your great career and the billionaire life ahead of you… Now think about your tree. If you do poorly on the test, a substantial (and high utility) section of your tree has been pruned. Anticipating this pruning (technically, anticipating a reduction in the present value of your future utility), you feel a negative emotion and that emotion is anxiety or stress.

Let’s take an even more extreme example. You are alone in an elevator and it gets stuck. What might go through your head? Will I ever get out? Will anybody save me? Will I be stuck here forever? Will I die in this elevator? All of a sudden, your whole life’s tree has been dramatically pruned, your utility dramatically reduced (until the elevator starts moving again) and you feel the most extreme form of anxiety or stress, panic.

Winning the lottery

Before moving on, I wish to discuss one last point. Psychologists have performed studies of lottery winners and have concluded that after an initial burst of happiness (perhaps lasting several months after winning), lottery winners tend to be no happier (and sometimes even less happy) than they were prior to winning. To economists this seems surprising. Clearly winners are richer, so why shouldn’t their utilities be higher? The problem here is the confusion between happiness and utility.

Using our models of utility and happiness, we can shed light on this apparent paradox. Both psychologists and economists are correct that lottery winners have greater utility and are happier immediately after winning. We of course understand that winners are happier because they have experienced a significant increase in the present value of their future utility. Their “tree” is much larger, and indeed it takes time to think through all the ways the newly won money can be used (or how it can contribute to future utility). Naturally, as utility continues to grow, happiness continues too, though with reduced intensity. After a period of some months, utility ceases to grow because the winner has substantially completed the process of factoring the new wealth into his or her future utility. Since happiness is derived from an increase in utility, absent this increase, there is no happiness. That is why the level of happiness tends to revert to what it was prior to winning. There is no longer a change to utility. In fact, sometimes utility actually decreases, leading to negative emotion (less happiness) as money is squandered or as the winner is made to feel guilty by all sorts of friends and family for not being more generous.

Conclusion

So there you have it. Utility is made up of three baskets of goods: basic needs, social desires and leisure. We humans attempt to maximize the present value of the sum of our future utility. That is what drives our decision making. Emotions are derived from changes in our level of utility. When we experience an increase in the present value of utility, we feel positive emotion (happiness). When we experience a decline in our utility, we feel negative emotion (sadness). There are differing strengths of both positive and negative emotions which correspond to the size of the change in our utility. Finally, we also feel variations of these positive and negative emotions depending on whether the change to utility happens in the present, whether we are reminiscing about it, or whether we are anticipating it.

Thinking about the subjects of utility and emotions brings up a great number of fascinating questions. While each could easily be the subject of its own post, here are a few that I wanted to briefly address. I don’t pretend to have all the answers.

Can utility be measured?

I do not believe that utility is likely to be quantifiable. That is, I don’t think you can put a number on it, specifying, for example, that right now my utility is at a 67, before it was a 59 (obviously such a scale is also artificial). However, I concede the possibility that I could be wrong. Perhaps someday the technology will exist (an implantable device?) capable of reading brain waves or measuring levels of certain brain chemicals. Maybe these brain waves or chemicals are indicative of an individual’s utility and could be monitored constantly and in real time. I’m skeptical but time will tell.

Does everyone have the same utility scale?

Let’s for a moment assume that you could put a number on utility. Does every human have the same scale of utility, the same upper and lower bounds? Say my utility scale goes from 1 to 100. Does that mean yours does too?

I think the utility scales (whether measurable or not) among humans would indeed differ, though by only a relatively small amount. I’d speculate that some people are naturally more positive (happier) and probably have a somewhat higher upper bound to utility. Similarly, some people are naturally less positive/more negative and have a somewhat lower lower bound.

Do animals other than humans have a concept of utility?

There is no question that many animals exhibit the same sorts of emotions as do humans. Happiness, sadness, pain, boredom (technically, pain and boredom are primary factors of utility like hunger or love and not true emotions), stress, anxiety, depression and many other emotions have been observed in animals, and not only in highly intelligent creatures such as great apes or dolphins, but in “lower” species as well. Since I argue that emotions are derived from utility and animals clearly exhibit emotions, I must conclude that animals do possess a concept of utility.

The question then is, do animals try to maximize the present value of their future utility as humans do? I don’t know for sure, but I am inclined to think that they do, at least the more intelligent ones. And perhaps most animals do. Others might say that what differentiates humans from animals is that we can think ahead, that we can anticipate our future, and maximize utility. I think there’s no such separation. Like evolution itself, this is a matter of degree, not of absolutes.

How can we explain loss aversion? Why do negative emotions feel stronger than positive emotions?

As I discussed at length in my last post, one of the key insights of psychology and behavioral economics is a concept known either as loss aversion or the endowment effect. These two related ideas show that the pain of losing is more powerful than the pleasure of winning. No doubt this is true, but why?

There are two possibilities. The first is that there is a real asymmetry between a gain and a loss. I won’t repeat the details here, but briefly, I may rationally view an object as more valuable once I own it than before I owned it. Hence, losing it reduces my utility more than gaining the object increased my utility. Because the change in utility is greater on the downside, so too is the emotional response to the change in utility.

The second possibility is that emotions are just stronger when utility is reduced than when utility is increased. In other words, negative emotions are just more powerful than positive emotions. Theorists have proposed that this is a consequence of evolution and nature. That is, the most negative consequences of life’s activity (death) are greater than the most positive consequences of life. Therefore, evolution (through our genes) has programmed us to feel worse with loss of utility (death being the ultimate loss of utility) than with gains to utility.

Both are plausible explanations for loss aversion and for the apparent asymmetry of emotions, but I favor the first.

Why I am not a utilitarian

I’ve thought a lot about utility and human decision making. This is a topic that I first became interested in during college (more than 20 years ago) though I never took a class in philosophy or psychology. In fact most of the ideas contained in this article date back to my college days (I’m finally getting around to writing them down!). I firmly believe that human beings rationally make decisions in an attempt to maximize their own utility. I also believe that by understanding utility, we can explain how emotions are derived. I do not, however, consider myself a utilitarian. Let me explain why.

Like everything we do, let’s start with a definition. Of course this is easier said than done, for utilitarianism means different things to different people. In fact, there are different factions of utilitarianism advocated by various philosophers. I am going to use what I believe to be the most common and the most colloquial definition of utilitarianism: a system of morals or ethics where decisions are made to maximize the sum of total utility. As many philosophers have pointed out, there are a number of issues that arise that are not easily answerable. Here are some of the major ones.

First, whose utility are we maximizing? Fellow American (or pick your country) citizens? All humans currently alive? What about future humans not yet alive? How about animals (recall our discussion above about animals and utility)?

Second, should everyone’s utility count equally? Should a good samaritan count the same as a murderer? An elderly person the same as a child? An Einstein or Beethoven the same as an average Joe or Jane? Are we indifferent to twice the population at half the average utility? How about ten times the population with one tenth the utility?

Third, how can we possibly measure utility? Does everyone have the same scale? Is what good for me, the same as is good for you? Do we all have the same pleasures and pains?

Fourth, is it at all realistic to sacrifice my own utility to help others as the philosophy demands? Would this not be, dare I say it, irrational? Must I give away all my money? Is helping to feed a starving child in Africa the equivalent of preventing my neighbor’s kid from being hit by a bus? Is it the equivalent of saving my own child?

For these four sets of reasons I find the philosophy of utilitarianism to be lacking. It is a good philosophy, in the sense that it is better than most alternatives when it comes to ethics and morals. But, it is too unrealistic and has too many flaws to be taken seriously.

An alternative (libertarian) philosophy

I propose an alternative philosophy. A system of morals and ethics that is more realistic, easier to implement, and based on our biologically evolved decision making system. I don’t pretend that my suggestion is in any way original or without flaws. But I do think it is better, and worth contemplating.

Each individual should attempt to maximize their own utility (as they do now), with one crucial caveat: that the individual’s decision does not impact anyone else’s utility (positively or negatively) without the other party’s consent. In essence, this is the true libertarian philosophy.

The most significant advantage of my proposed philosophy over utilitarianism is that it is implementable. Under utilitarianism, I must be able to measure and predict everyone else’s utility function. This is positively impossible. I can never know what is in someone else’s utility basket (which can change from moment to moment). I can never know the weightings of each utility component (which can change from moment to moment). I can never know the discount rate someone else uses to present value future utility (which can also change from moment to moment). Now multiply what is already impossible by 7 billion people (plus animals!).

Under my philosophy, all I need to know is that my actions do not affect anyone else’s utility, absent their consent. This is not a completely trivial matter, but it is infinitely easier than what is asked of me under utilitarianism. The libertarian philosophy is also far more realistic because it is based on the same utility maximizing decision making framework we have inherited from evolution. In that sense it is a far more “natural” philosophy.

The main criticism that someone would likely promulgate about my philosophy is that it is amoral, selfish and antisocial, rather than moral, selfless and social. I disagree for a number of reasons. Most importantly, because a decision you make should never affect anyone else (again, without consent), you can never hurt them (lower their utility). Strict utilitarians would allow hurting one individual to help two.

Furthermore, a superficial glance at my philosophy would lead one to believe that by maximizing one’s own utility, there is no rationale to help others (the selfish, antisocial critique). However, this ignores the fact that a major component of utility is the basket of social desires. As I’ve said from the outset, we humans being social creatures have evolved to live in societies where we do indeed receive utility from helping fellow human beings. We don’t necessarily need a utilitarian morality nor religion nor a tax incentive to give to charity. Charitable behavior is ingrained in us (obviously, in some more than others). There is also no reason that society cannot further emphasize and encourage charity and goodness.

Utilitarianism is not only a philosophy of individual decision making but also a moral or ethical code for government. Specifically, it is the goal of government to maximize the sum of each person’s individual utility. Just as it is impossible for you or me to try to calculate each other’s utility, it is equally impossible for government to do so for all of its citizens. This is an analogous argument to the one Friedrich Hayek made in opposition to socialism. Regardless of your view of socialism’s moral or ethical merits, it is simply impossible for government to make all of the economic decisions in an economy with even a trace of efficiency. Similarly, regardless of utilitarian morals or ethics, it is equally impossible for government to make decisions with the goal of the greatest good for the greatest number (maximizing the sum of utility).

So what instead should government do? Consistent with our libertarian philosophy, government should aim to maximize freedom. I fully concede that maximizing freedom is complicated with a great many tradeoffs (the subject of a future post or maybe a book!), but I think it is a lot less complicated than maximizing total utility. Moreover, I would suggest that maximizing freedom will lead to a substantially higher level of societal utility than would trying directly to maximize utility. Recall the heuristic of our decision tree. Having more choice in my life (freedom) is, most of the time, consistent with higher utility. Having our choices reduced is indicative of lower utility (with the accompanying emotions of stress, anxiety and panic).

To be fair, a libertarian philosophy of government does raise many of the same interesting and debatable questions as utilitarianism. Specifically, whose freedom should be maximized: citizens, residents or all humans? I advocate for all humans, but given the jurisdictional limitations of government, the way to achieve this is with open immigration. Do animals count? Yes they should, but I have no idea how. Do future humans count? I’m not certain. Should everyone’s freedom be counted equally? Yes, but not because everyone is equally deserving. Instead because to do otherwise is too complicated and ripe for corruption (simplicity counts in philosophy).

Let me end this article with something I consider very important, and very misunderstood. A libertarian philosophy of government is naturally one that will tend to favor smaller government. It will also tend to favor not doing over doing. However, the proper goal of libertarian philosophy is not, contrary to popular belief, to minimize government per se. The goal is to maximize freedom.

We are not irrational: Nobel Prize edition

Richard Thaler was recently announced as the recipient of the 2017 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel (aka The Nobel Prize) “for his contributions to behavioural economics.” Thaler is essentially the father and most noted proponent of the field of behavioral economics as well as its offshoot, behavioral finance. By most accounts (and my own reading of many of his published works and his memoir, Misbehaving), Thaler is also quite a nice guy – and rather un-arrogant. A refreshing contrast from most economists. But is he deserving of the Nobel?

The main criticism of the work of Thaler (and of other behavioral economists) is that its conclusions are plainly obvious, if not to economists blinded to reality, then to generations of psychologists, marketers and hucksters. And very true, such work would probably not be quite as agreeable to the Nobel committee for awards in physics, chemistry or medicine.

But leaving aside the question of whether this silly prize should even exist, I’d say Thaler is deserving. In helping to introduce psychology into economics, and in shepherding behavioral economics from its backwater infancy to mainstream acceptance and political prominence, Thaler has done more to influence the field of economics than perhaps any other academic economist in the past few decades. A lifetime achievement award, as it were. Congratulations.

Obvious or not, the key insight of behavioral economics is that we the people do not behave the way economists and their models thought we did. Or think we should. In a nutshell, we do not always make decisions that maximize our wealth. And by showing evidence of this fact time and time again using simple experiments, questionnaires and financial data, a Nobel Prize was won. I have no problem with that.

But I do have a problem, and the problem is this. By not conforming to the way economists think we should and by not consistently maximizing our wealth in all our decisions, we, human beings, are labeled “irrational.” And this is a conclusion stated in virtually all articles about behavioral economics, whether in the mainstream media, or in economic journals. In fact as we’ll see shortly, evidence of “limited rationality” was one of the stated reasons justifying Thaler’s prize by the Nobel committee.

Irrationality is nonsense and the need to “correct” irrationality should not be used to justify government intervention in the economy or in our daily lives.

Rationality and utility

Now we must get slightly technical. What exactly does it mean for an individual to behave rationally? The colloquial definition is something like, “to make decisions using reason.” Let’s first make note of the fact (which we’ll return to later) that rationality implies making a decision. Next, let’s ponder what it means to use reason (i.e. sound judgement, good sense, logical arguments) to make a decision, or to make a “reasonable decision.” I’d say a reasonable decision (or a decision based on reason) is one that I believe or expect will be in my best interests.

Take note again, that I use the term “believe or expect” to indicate that at the time of a decision, my information is incomplete. That is, I don’t know the future. So, rationality does NOT imply a decision that winds up resulting in a good outcome. It solely implies that my intent in making a decision was in my best interests given available information. Equivalently, irrationality does not imply making a decision that winds up being a bad one, provided that I thought it’d be a good decision at the time I made it.

Finally, and most importantly, what do we mean by the words, “in my best interests?” Here we will use a term very familiar to, though often misinterpreted by, economists. And here is where we will really begin to deviate from the economic mainstream. The economic term for “my interests” is “utility.” Correspondingly, the economic term for “in my best interests” is “maximizing my utility.” So what the heck does “utility” represent?

Now I must be less precise because nobody, neither philosophers nor economists, has agreed upon what exactly constitutes utility. Some say it is a measure of happiness or pleasure. Some punt and just say, it is whatever it is that I maximize. I think we can shed a bit more light.

I’m going to say that utility is an aggregation of all the stuff that evolution and biology have made us humans desire. These include, first, basic life necessities such as water, food and good health. For example, other things equal, my utility at any given moment is higher if I’m not thirsty, not starving, and not sick. Given that humans are social animals (with strong incentives to reproduce), our utility is also made up of social desires such as love, friendship, companionship, sex and status (status is something to which we will very importantly return again and again). And while not necessarily an exhaustive list of the various components of utility, I’ll propose a third category of all sorts of entertainment (or that which prevents me from being bored).

At any given moment of time (or at least when we are conscious), each of us has some level of utility. The components of utility will clearly vary from person to person and, within a given person, from moment to moment. Even though utility is not necessarily quantifiable, each of us is capable of judging or estimating whether a given action will likely result in greater or lesser utility to us.

When we say that we humans make decisions in order to “maximize utility” what we precisely mean is that we make decisions in order to maximize the present value of the sum of our probability weighted future utility over our lifetimes (or longer, if you believe in an afterlife). There are two final but crucial wrinkles that we need to discuss.

By the term “present value,” we mean that the same amount of utility today is worth somewhat more than that amount of utility tomorrow. How much more depends on some “discount rate” which will also vary from person to person and may vary from moment to moment. And lastly, note that we weight future utility by the likelihood of the events occurring that would result in that quantity of utility. Our brains do this implicitly, kind of the same way a baseball player can calculate the optimal path to run in order to catch a fly ball without knowing the first thing about physics or parabolas.
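The two wrinkles above (discounting and probability weighting) can be sketched numerically. A minimal sketch under invented assumptions: the scenarios, probabilities, utility numbers and discount rate below are all hypothetical, since real utility is not quantifiable.

```python
# Sketch: present value of the sum of probability-weighted future utility.
# All numbers are invented for illustration.

def present_value_of_utility(scenarios, discount_rate):
    """Sum utility across possible future outcomes, weighting each
    outcome by its probability and discounting it back to the present.

    scenarios: iterable of (years_from_now, probability, utility).
    discount_rate: the personal rate trading off utility now vs. later.
    """
    total = 0.0
    for years, probability, utility in scenarios:
        discounted = utility / (1 + discount_rate) ** years
        total += probability * discounted
    return total

# Two possible futures one year out: a likely modest outcome and an
# unlikely great one (probabilities sum to 1 in this simple case).
scenarios = [
    (1, 0.9, 100.0),
    (1, 0.1, 500.0),
]
pv = present_value_of_utility(scenarios, discount_rate=0.05)
# i.e. (0.9 * 100 + 0.1 * 500) discounted one year at 5%
```

The brain does nothing like this arithmetic explicitly, of course; the sketch just makes the structure of the maximization concrete, the way the parabola makes the outfielder's path concrete.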

Before we move on, I will admit a few things. First, from the standpoint of philosophy (rather than economics), nothing about my idea or loose definition of utility is all that controversial. Second, my definition is essentially what is known as a tautology. That is, a sane adult will only make decisions they believe to be in their best interests. In other words all decisions made by sane adults are by definition, rational. Third, up until now, I am mostly talking about definitions and semantics. But that is not really the purpose of this article. Its purpose is to argue that a bad definition should not be used to justify government action. Fourth and finally, the concept of utility is clearly nebulous and extraordinarily difficult, if not impossible, to quantify. Hence, economists don’t use it in their models or when analyzing experiments. And herein lies the problem.

So what do economists use to predict decision making and to pass judgment on whether a particular decision or set of decisions is rational or irrational? They do one of two things. Mostly they use a much more quantifiable metric as a proxy for utility: money (or wealth or income). That is to say, rather than maximizing the sum of the present value of probability-weighted utility, I should maximize the amount of money that I have. If I don’t consistently maximize my wealth, then I am making bad (irrational) decisions. As we’ll see shortly when we discuss Thaler’s research, it is this error of using money as a proxy for utility that mostly accounts for economists’ misinterpretation of rationality.

Then there is the second methodology that economists use to analyze decision making and to judge (ir)rationality. Based on the work of game theorists, economists sometimes utilize an alternative and non-colloquial definition of rationality. Rather than rationality being defined as a reasonable decision made in my best interest (utility maximizing), economists set forth a set of technical rules that rational decisions must satisfy. For example, decisions must be logically consistent (if I prefer coffee over tea and hot chocolate over coffee, I must always prefer hot chocolate over tea). Decisions must also be time consistent (if today I prefer pizza over a hamburger, I must also prefer pizza over a hamburger tomorrow). Moreover, it is assumed that individuals have a perfect and instantaneous ability to calculate mathematical probabilities. If any such axioms are violated, the individual is judged irrational.
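The logical-consistency (transitivity) axiom described above can be sketched as a simple mechanical check. The preference pairs below reuse the coffee/tea/hot chocolate example; the function itself is an illustration, not anything from the economics literature.

```python
# Checking the transitivity axiom: if A is preferred over B and B over C,
# then C must never be preferred over A (i.e., no preference cycles).

def violates_transitivity(prefers):
    """prefers: set of (x, y) pairs meaning 'x is preferred over y'."""
    for a, b in prefers:
        for b2, c in prefers:
            if b == b2 and (c, a) in prefers:
                return True  # a > b and b > c, yet c > a: a cycle
    return False

# Consistent with the axiom: hot chocolate > coffee > tea, and hot chocolate > tea.
consistent = {("hot chocolate", "coffee"), ("coffee", "tea"), ("hot chocolate", "tea")}
# A preference cycle, which the technical definition would label "irrational."
cyclic = {("coffee", "tea"), ("tea", "hot chocolate"), ("hot chocolate", "coffee")}
```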

There are a number of problems with this approach to rationality. First, as we spoke about when we made our own definition of utility, there is no reason to expect that preferences need be static over time, including the discount rate (how we value utility today versus utility in the future) that we implicitly use to discount future utility. Clearly it is not irrational to decide to have pizza today and a hamburger tomorrow. Nor is it irrational to forgo dessert today (valuing my future health more than the instant satisfaction of fat and sugar) but partake in dessert tomorrow (valuing instant satisfaction more than future health).

It is also not at all reasonable to assume that individuals are perfect calculators. Very few people get perfect SAT scores (some of the questions posed to participants in behavioral economics studies very much resemble SAT questions). Should a less-than-perfect score be viewed as irrational behavior? I think not. Neither miscalculation, mistake, lack of education, nor even stupidity should be considered the equivalent of irrationality. Remember, we said that a rational decision is one that I believe to be in my best interest, not one that actually is.

Lastly, I want to strongly reiterate that this type of technical definition of rationality is not the colloquial one. I suppose that in a purely academic setting it is sort of okay to misuse commonly understood words, provided that you define your misuse. The problem, however, is that this alternative definition has not been confined to academic journals and seminars. Instead, it has carried over into the mainstream (media), where a now popular and pervasive belief in erroneous irrationality is used to encourage and support government policy to “correct” irrational human behavior in all sorts of markets and sectors of the economy.

Thaler and his Nobel Prize winning research

We’re finally ready to discuss Thaler’s research, and its application, or misapplication, to human rationality. As described by the Nobel committee, Thaler won his prize for his contributions to four components of behavioral economics: 1) limited rationality, 2) lack of self control, 3) social preferences and 4) behavioral finance. Let’s take a look at each in turn and examine whether it is appropriate based on Thaler’s research to conclude that we humans are indeed “irrational.”

Note that all of the quotations below, unless otherwise noted, have been taken from the Nobel Prize Committee’s press release or accompanying background material.

1 (a). Limited rationality: mental accounting

“Thaler developed the theory of mental accounting, explaining how people simplify financial decision-making by creating separate accounts in their minds, focusing on the narrow impact of each individual decision rather than its overall effect.”

One of Thaler’s research topics was, in his words, to try to understand “how people think about money.” Economists assumed that money is money is money, or in technical terms, “fungible.” Thaler noticed that people often think about money in a very different way, in what he called “mental accounting.” People might separate their savings into different pools of money, either purely mentally or with separate bank accounts or money jars. For example, individuals or families might have a separate pot of money for housing, food, clothing, vacations, long-term savings, etc. In fact, most of us do this in one form or another.

Most economists, Thaler included, view this kind of “mental accounting” as less than fully rational behavior. For example, let’s say my “food money jar” is running low and is insufficient to buy this week’s groceries for my family. Let’s also assume that there is plenty of money in the “vacation money jar” that won’t be needed for a while. Assuming money is money, economists would say the rational thing to do is take money from the vacation jar and use it for food. But this is not what many people do. Instead they might choose to work extra overtime this week, or take the time and sell some stuff on eBay in order to fill up the food jar so as not to take from the vacation jar.

Is this behavior irrational? If rationality means simply maximizing wealth, then clearly the answer is yes. But I don’t believe that is what rationality means. Let’s think about how money impacts my utility. I cannot drink money or eat money, nor does looking at it provide much entertainment. Hence, money does not directly result in utility. It indirectly results in utility in at least three ways.

Most obviously, money allows me to consume goods and services in the future that will result in utility down the road. In other words, having money now increases my future utility, which (present valued) is what I am trying to maximize when making decisions. Second, having money now contributes to my feeling of “status,” which I believe is one of the primary components of utility. That is, feeling wealthy (or wealthier) makes me feel superior (or less inferior) about myself. Third, having money provides a sense of freedom and reduces stress and anxiety. Technically speaking, having money increases my ability to make choices in the future, some of which might lead to higher utility.

Let’s now return to the question of money jars. I don’t take money from the vacation jar to use for groceries because I’d feel badly or guilty doing so. In other words, the status or self-worth component of my utility would be lower if I did. I’d rather preserve my status or self-worth and give up some leisure time to make money some other way (e.g. overtime or eBay). In my view, this is perfectly rational behavior.

But I don’t blame you if you’re not yet convinced. You might be thinking, whoa… wait a minute! I’m making a decision based on “feelings” and not “reason.” Isn’t that the textbook definition of irrationality? No. Consider the following.

Lots of people choose to drive a BMW instead of a Honda or a Chevy even though all three vehicles provide virtually identical utility when it comes to the primary purpose of an automobile: transportation. That is, they all will generally safely get you to the same place at the same time. BMWs, of course, cost more, so should I be considered irrational for spending extra money to own one? No, because they provide additional utility beyond simple transportation. The owner of a BMW derives utility from the sportier drive, the plusher seats, the better sound system, and most importantly, from the status or superiority that such a luxury brand provides. In other words, they feel better in a BMW.

The same argument can be made for the decision to live in a 10,000 square foot mansion rather than a small house even though both can equally provide necessary shelter. Or the decision to wear fancy designer clothes rather than last season’s basic hand-me-downs even though both can equally provide necessary warmth and coverage. Or the decision to eat meals at 3-star Michelin restaurants rather than neighborhood diners even though both can equally provide necessary nourishment.

All of these instances of everyday human activity show that people make decisions all the time that favor “feelings” over wealth. Just like with the money jars. And if you are going to argue that all of these kinds of decisions are irrational, then you will wind up maintaining that nearly all decisions humans make are irrational and that humans should never spend any money at all except for the most basic necessities (or for investments that will provide future basic necessities). Which is also essentially saying that you have zero insight into human decision making. A useless endeavor.

In a similar analysis to the jars of money, Thaler pointed out evidence of mental accounting and concluded “limited rationality” from the observation that many people have both outstanding credit card debt AND money in their savings accounts. Isn’t it irrational to pay high credit card interest rates when you could partially or fully pay off your debt by using your savings? Again, the answer is not necessarily.

Thaler has argued that perhaps the reason for this is self control, or lack thereof. That is, having credit card debt prevents me from spending even more. If I paid off my credit card debts with my savings, I might spend excessively and once again rack up credit card debt. This is a plausible argument and one that, in my opinion, is evidence of rationality, not irrationality. However Thaler would likely argue that the fact that we humans do have issues with self control is in and of itself irrational. We will return to the topic of self control shortly.

I will propose some alternative reasons why people might carry high-interest credit card debt while having money in their savings accounts. Perhaps it is considered socially acceptable to have credit card balances (something encouraged by credit card companies) but not socially acceptable to have zero money saved. I might feel guilty (lower social status, lower self-worth) telling a friend or family member, or even just knowing, that I have a zero savings balance. I do not necessarily feel the same level of guilt having credit card debt. Since I view social status as a primary component of utility, I view this behavior as perfectly rational.

Alternatively, I might rationally view my savings as more valuable than my credit card debt. To an economist this would not make sense, since money is money and net worth is net worth. But my credit card balance can be maintained indefinitely provided I make the minimum interest payments. Once the savings is gone, it’s gone. I may feel that I have more certainty, more flexibility or more of a safety net knowing that I have savings AND that I can continue to maintain a credit card balance.

A final rationale for having credit card debt and savings is that many people don’t realize or don’t understand the high interest rates they are paying to service the debt. To me, this is pretty stupid behavior. But it is not irrational behavior. Remember that to be rational is to “believe or expect” that a decision is in my best interests. Not understanding the amount of interest I must pay is certainly dumb but also means that it cannot be considered an irrational decision. Thaler and other behavioral economists would argue that in this sort of situation government needs to step in (see “nudging” below). If you want to make this argument (I would not), at least be honest. Government is intervening to correct stupidity, not irrationality.

A third often-cited example of so-called mental accounting and limited rationality is a study of New York City taxi drivers. The study showed that drivers tend to target a certain amount of income each day (what Thaler refers to as a “reference point”). If it is a good day and they make their target income early, they stop working. If it is a bad day, they work later until they meet their target. This behavior is viewed by Thaler and others as irrational since drivers drive less on days with high demand and more on days with low demand, contrary to the laws of basic supply and demand learned in Econ 101. But is it?

First, understand that the supply part of “supply and demand” refers to firms, not individuals. The implicit assumption is that firms always maximize profits, and when experiencing high levels of demand, firms will expand their production capacity and new firms will enter the industry until some “equilibrium” is met. However, here we’re dealing with individuals, not firms, and the key insight of this article is that individuals do not necessarily maximize profits (wealth). Moreover, unlike textbook supply and demand circumstances, taxi prices are not allowed to rise (or fall) with changes in demand (unlike Uber, for example, with its surge pricing), nor can industry capacity (more taxis) easily be increased.

So, supply and demand clearly has limited relevance here. But what about the fact that economists call taxi driver behavior irrational because drivers value money (work) and leisure (non-work) differently on different days, depending on the demand for taxi rides? Or in other words, why aren’t they increasing their own capacity (driving more hours) in response to high demand and lowering their capacity in the face of low demand?

Thaler maintains that the study of taxi drivers shows that they tend to have an income goal (a “reference point”) for each day’s work. I agree. The question is, is there a possible rational explanation for this?  I think so, and once again I refer to the component of utility comprised of status or self worth. On high-demand days, I feel like I got a good deal. I can go home early and enjoy my leisure time, unlike other drivers (or any other workers) who may still be working. On low-demand days, I feel good because I worked harder (longer) and still made my target income. Had I not worked longer, I might feel like a quitter or even a failure. Moreover, I’d argue that for taxi drivers, having a daily income goal (rather than just maximizing income) in and of itself contributes to utility because it makes a stressful and lonely work day more palatable (or less unenjoyable).

Before moving on, I’ll say one more thing about the taxi driver study. Indulge me, for I will now make a completely unscientific guess. I will speculate that taxi drivers with spouses exhibit this income goal (reference point) behavior more than taxi drivers without spouses at home (though they still exhibit it too). If I return home late, I am met with something like, “Why are you late for dinner?” If, on the other hand, I come home with low pay, I am met with an even more serious, “Why didn’t you make more money today?” Either way, I am made to feel guilty (those without spouses make themselves feel guilty, but to a lesser extent). The feeling of guilt is tied to low status/self-worth and lower utility. In my view, it is perfectly rational to sacrifice a small amount of income, or work a longer day, to avoid being made to feel bad (or making myself feel that way).

The final area of Thaler’s study on the subject of mental accounting that I wish to discuss is the observed fact that individual stock investors are more likely to sell winning stocks and hold on to losing stocks. This is known as the “disposition effect” and is related to some of what we will discuss later on when we turn our attention to behavioral finance. Part of the insight of behavioral economists is that investors treat each stock as its own mental account, rather than try to maximize their entire portfolio or net worth as they assume a rational being should.

I have little doubt that this so-called disposition effect is indeed correct. And once again, the behaviorists have successfully shown that we humans are not simply wealth maximizers. But surprise, surprise, they have not shown that we are irrational. As an individual investor, I get utility from owning a stock and especially from making a winning trade that goes far beyond the monetary value of the gain. I feel smart, brilliant even! I tell all my buddies at the bar, and strangers at cocktail parties, and my spouse, how great a stock-picker I am! Utility is not money. As we’ve talked about many times, utility includes my feelings of status and self worth. I’ll gladly (and rationally) trade a few bucks in exchange for the world to think I’m the greatest investor since sliced bread. No difference from trading a few extra bucks to be seen in a BMW instead of a Chevy.

Of course, if I wait too long to sell the stock, there is a chance its value might go down. I may even lose money! Better to book the gain and be brilliant than risk losing my perceived brilliance. Are my friends going to think I’m more brilliant because I have a 32% gain rather than a 28% gain? Probably not. Better to leave a little money on the table rather than risk losing all the gain (and all my perceived brilliance). In other words, booking the brilliance provides me more utility than the additional monetary gain. And what happens if I hold on too long and the gain is lost? Now I will feel all sorts of guilt (low self worth) from myself and others for not being smart enough to get out when I should have. As we’ll discuss shortly in the section on loss aversion and the endowment effect, this hurts even more.

Similarly, a loss lowers my utility beyond its monetary value. As long as I hold on, there’s always the chance of reversing the loss. As soon as I sell, I’m an idiot and my utility is lower. But as long as I hold on, I’m not. The chance to not be an idiot outweighs the monetary loss of a further decline in the stock price. For this reason, honoring sunk costs (another subject of Thaler’s research, though not directly cited by the Nobel Committee) can be viewed as perfectly rational behavior.

1 (b). Limited rationality: the endowment effect

“He also showed how aversion to losses can explain why people value the same item more highly when they own it than when they don’t, a phenomenon called the endowment effect.”

In 2002, a psychologist named Daniel Kahneman was awarded the Nobel Prize in economics for a set of ideas he (along with the late Amos Tversky) developed about human decision making called prospect theory. The most important component of prospect theory is something called “loss aversion,” the idea that losing something lowers our utility by a greater amount than obtaining the object had raised our utility. In other words, an asymmetry exists whereby losses are more painful than gains are pleasurable.
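Loss aversion is usually illustrated with Kahneman and Tversky’s value function. The sketch below uses their commonly cited 1992 parameter estimates (alpha = beta = 0.88, lambda = 2.25); treat the parameterization as illustrative background, not as part of Thaler’s research discussed in this article.

```python
# Prospect theory's value function: gains and losses are judged relative to a
# reference point, and losses are scaled up by the loss-aversion factor lambda.

def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain (x >= 0) or a loss (x < 0)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

gain = prospect_value(100)    # the pleasure of gaining $100
loss = prospect_value(-100)   # the pain of losing $100
# With these parameters, the loss is felt 2.25 times as strongly as the gain.
```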

Richard Thaler was the first economist to apply prospect theory, and specifically loss aversion, to the realm of economics. In fact, he essentially relabeled the idea as the “endowment effect,” noticing that people tend to value things they own more than they valued those same things before they owned them. Loss aversion and the endowment effect are no doubt correct. But are they indications of irrationality, as Kahneman and Thaler and many others would have us believe?

One of Thaler’s most well known pieces of research on the endowment effect is his coffee mug study. Briefly, he gave half of a class of college students free mugs (retail value $6) and allowed them to trade with the other half of the class that did not receive the mugs. As it turned out, very few trades were made and it was observed that the median price at which sellers wanted to sell a mug was about twice as high as what buyers were willing to pay for a mug. In other words, those students who were given mugs valued them twice as much as those who were not given mugs. The endowment effect in action!

There are at least four reasons why I think it is perfectly rational to value something more when I own it, compared with its value before I owned it. The first and most important reason is that once we own an object, we gain additional utility from the good memories, sentiments and emotional attachments that the object brings. For instance, every time I drink coffee out of that mug, or even see it sitting on my dorm room shelf, I will derive utility from the memory that I won a free mug from an economics professor! How fortunate! How cool! How many students can say that?

In other words, comparing my valuation of the mug before I owned it to after I own it is not an apples-to-apples comparison. While the physical mug has not changed in any way, its usage has changed, and hence its value. It is no longer simply a receptacle for hot liquids. It is also a receptacle of good, and status-increasing, memories. It is no longer a mug. It is now my mug. It is no longer identical to hundreds of other mugs sold in the campus bookstore. It is unique.

I know what you might be thinking. What a ridiculous argument. Why should an everyday object magically change in value from one moment to the next just because I own it now? To be clear, it is not that the object has changed, it is that the object’s value has changed to me. Consider the following.

Let’s say you are a huge basketball fan. Let’s say you are walking down the street late one evening and you run into LeBron James, and he’s so friendly that he gives you a jersey that he wore in that night’s game. Assume that nobody witnessed the gift and that you have no certificate of authenticity, so you could never sell it to a collector as a game-worn jersey. Would it be worth more to you than the same #23 Cleveland Cavaliers jersey you could buy in any sporting goods store? An economist would say no. In fact, an economist would probably say its value (and hence, utility) as a piece of clothing is actually lower than a brand new shirt’s, precisely because it has been washed and worn, and therefore has a shorter useful life.

But of course, you, a huge basketball fan, will value it much higher than a new shirt. Wearing it will make you feel special even if strangers have no idea who once wore it. Your friends will be jealous. You can daydream about passing it down to your future kid some day and telling him or her the story of how you obtained it. Point being, you get much more utility from the jersey than any other otherwise identical shirt you could have bought at a store. And because of that, its value is greater to you. And that is totally rational. Same for the coffee mugs.

A second reason why the endowment effect can be considered rational behavior has to do with how much time and effort you spend predicting an object’s value to you before and after you possess it. Before you own something, there is, obviously, a less than 100% chance you will come to own it. It is therefore rational to limit how much time and effort you expend anticipating the object’s use to you. Naturally, the greater the chance of ownership, the more time and effort you are likely to expend. Once you own an object, it now makes sense to expend additional time and effort to analyze the object’s potential usage.

Yes, I know that’s a bit confusing. Let’s use the mugs to make it clearer. Before I own the mug, I may give it a quick thought and say, “That would be a great mug for coffee or tea or hot chocolate.” Once I own the mug, I may upon further thought say, “not only can I use the mug for beverages, but it would also be a great holder for pens or spare change, or be a paperweight, or just look pretty on my desk, or maybe I can re-gift it…” I see more possible uses for the mug, more future utility, and hence more value.

I would argue that this value discrepancy pre-ownership and post-ownership due to the differential certainty of ownership is exacerbated when making decisions about objects of small value. Think about it. How much time is it really worth investing in thinking about the uses for a mug before I own it? In my view, this is especially true in academic behavioral economics research where questionnaires or even artificial trading do not involve real decisions, only theoretical ones. We’ll return to the issue of the applicability of research studies very shortly.

The third reason supporting the rationality of the endowment effect relates to the role of “status” in determining one’s utility. This is something we’ve discussed previously and will return to again and again. Thaler’s coffee mug study does not simply involve the respective valuations of the mug by owners (sellers) and potential buyers. It also involves the decision on both sides of whether or not to make a deal.

If I was one of the students fortunate enough to receive a free mug, I’m probably going to be reluctant to sell it for much less than its $6 retail value for fear of being viewed (by other students, or even by myself) as making a bad deal. Now, think about the mindset of a potential buyer. I know that the student who received the mug got it for free. Why should I pay full price for an item that my fellow student received for free? I could just as easily buy it for full price from the campus bookstore. Just like the seller, I don’t want to feel like, or be deemed by others, a “loser” for making a bad deal. There is also an element of “fairness” involved. Why should I pay for something someone else got for free? We will talk more about fairness when we discuss Thaler’s research on social preferences. However, I think this holds true even if the seller of the mug had to pay for the mug in the first place.

All of us, and economics students especially, are trained to “buy low, sell high.” The perception I have (my status or self-worth) as a good trader (making a good deal or avoiding a bad deal) might be worth more than the few dollars of differential in a coffee mug’s value. I might even think that the unstated purpose of the exercise is to measure my trading prowess, further biasing the analysis of mug value.

This is one of the limitations of artificial academic studies (something we will return to very shortly). Decisions about buying and selling mugs do not only capture the value buyers and sellers place on mugs. The study’s results are also affected (biased) by other factors, notably in this instance, how participants view themselves as smart traders vis a vis their fellow classmates. This is similar to our discussion of locking in stock gains and avoiding stock losses so as to be viewed as smart, something that contributes to my utility.

Think again to automobiles. Most of us know that the value of an automobile drops significantly as soon as it leaves the dealer’s lot. The car hasn’t really changed, other than perhaps a handful more miles on it. Why then, would you not buy it from me for anything close to what I paid? Naturally, I probably wouldn’t sell it to you either for anything less than I paid. Kind of like the mugs. The point here is that buy low, sell high is ingrained in most of us. When we violate this, we feel like idiots, which lowers our utility.

Finally, just as we talked about sentimentality increasing the value of the mug because it is my mug, and not just any old mug, there is also sentimentality in how I obtained it. Say, for example, that I acquired it in a trade from a fellow student at a low price. I got a great deal! Going forward, that mug will bring me utility as I recall the brilliant trade I made with another (i.e. inferior) student. This is distinct from the utility I might get remembering that I got it free from a professor.

The fourth and final rationale I will offer with regard to the rationality of the endowment effect relates to the issue of sunk costs. As I mentioned earlier, taking sunk costs into account when making decisions is viewed as irrational behavior by economists. I disagree. Admitting a loss lowers my feeling of status or self-worth and thus my utility. I don’t want to sell the mug once I own it unless I get a very high price, because doing so is an admission that I made a mistake. I would rather trade off a small amount of money than be viewed (or view myself) as having made an error. In my view, sunk costs can therefore be considered a component of rational decision making.

In any case, enough talk of coffee mugs. Let’s move on to death and disease. A second well known study on the endowment effect is a survey Thaler gave to students in a classroom setting. The following two questions were asked in the survey:

A) Assume you have been exposed to a disease which if contracted leads to a quick and painless death within a week. The probability you have the disease is 0.001. What is the maximum you would be willing to pay for a cure?

B) Suppose volunteers would be needed for research on the above disease. All that would be required is that you expose yourself to a 0.001 chance of contracting the disease. What is the minimum you would require to volunteer for this program? (You would not be allowed to purchase the cure.)

A typical answer given by students (in 1980 dollars) was about $20 for the first question (how much would you pay for a cure) and $10,000 for the second question (how much would you require to be exposed to the disease).

As I’m sure you have noticed, the probabilities of death are equal in both questions. Either way, you have a 0.001 (that is, 0.1%) chance of dying. So from a purely mathematical standpoint, valuing death the same in both cases, one should give the same answer to question A as to question B. Thaler and others concluded that an endowment effect is at work. That is, people are willing to sell their health for far more than they will spend to buy health. And of course, they infer that this large discrepancy is strong evidence of irrationality.
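The arithmetic behind the comparison is worth spelling out. Using the typical answers reported above ($20 and $10,000, in 1980 dollars), the implied dollar value placed on the very same 0.001 risk of death differs by a factor of 500:

```python
# Both survey questions involve the identical 0.001 probability of death,
# so a pure expected-value calculation implies the same answer to A and B.
# The dollar figures are the typical student answers quoted above.

p_death = 0.001      # identical in question A and question B
answer_a = 20        # typical maximum payment offered for the cure
answer_b = 10_000    # typical minimum payment demanded to risk exposure

# Implied value placed on avoiding a 0.001 chance of death in each framing:
implied_value_a = answer_a / p_death   # about $20,000
implied_value_b = answer_b / p_death   # about $10,000,000
ratio = answer_b / answer_a            # a 500-fold discrepancy
```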

I want to hold off on addressing the question of rationality for a moment. We said a moment ago when discussing mugs that there is an element of bias in an academic study of decision making that renders conclusions questionable or even invalid. In the mug case, my skill as a trader and the utility gained from making a good deal might trump the economic value of a mug. Here, the study is much more unrealistic. In fact, this study is so far-fetched that I’d argue its conclusions are essentially meaningless.

Let’s say you were participating in this research study. What might go through your mind as you read the two questions? I know what would go through mine. If I choose A, why can’t I still get the cure if and after I learn I have the disease? If I choose B, why can’t I also get the cure? How do you know the disease will be fatal? How do you know that the disease will kill me in exactly one week? How do you know it will be painless? How do you know the exact probabilities of contracting the disease?

The point I am trying to make is that a survey like this is so unrealistic, so unbound to reality, that deciding between questions A and B has little relation to real-life decisions. The even more important point I want to make is that the choice of A or B has zero effect on my utility, other than perhaps a small impact if I infer the survey is some kind of test of my intelligence (as we saw with mugs). When I take such a survey, I am more likely to think, what is the right answer? Which answer will make me look smart? What is the point of the survey? What I am probably not thinking all that much about are the realistic possibilities of my own death, which is of course the intent of the study.

My criticism of this type of behavioral research study is not unique. Many others before me have shared the view that much of the research in behavioral economics is unrealistic and does not require the test taker to make a true decision. Thus, how can it be used to opine on the question of rationality? Certainly studies have shown that people might be bad at calculating probabilities. But as we’ve said before, a lack of math skill is not the equivalent of irrationality.

Having said all that, for the sake of argument, let’s take the survey and its conclusion at face value, that loss aversion or the endowment effect is absolutely true. The question that follows is why might it still represent a rational decision? The answer, in my opinion, is that I would feel like an idiot, and others would consider me an idiot if I caused my own death. I know what you are thinking. That makes no sense because either way I made a decision that resulted in my own death (either by not paying for the cure or by risking the disease). I don’t think this is exactly true.

To use one of the favorite tools of a behavioral economist, let’s restate or “re-frame” the two choices.

A) You may have a disease. Do nothing and you will probably live

B) You don’t have a disease. Do something and you might catch it and die

Do these feel equivalent? Of course, I’ve simplified the statements and left out the probabilities, but the point I am making is that how a question is framed has an enormous impact on most people’s decisions. Behavioral economists would no doubt agree, as framing is one of the most researched areas of decision making. But, what a behavioral economist would conclude is that the way a question is framed should not affect a rational individual’s choice as long as the outcomes are equivalent. If it does affect my choice, I am irrational. As I’m sure you can guess, I disagree.

There are a number of alternative ways I could have framed this choice, but the point I am trying to make is that in the first choice, either I already have the disease or I do not. I am not giving myself the disease. I am only deciding whether it is worth it to pay for a cure. In the second choice, I do not have the disease. I am making the choice whether to risk being exposed. In other words, I make the choice whether to give myself the disease or not. In B, I kill myself. In A, I don’t kill myself, I just don’t save myself. Yes, the outcomes (death) are the same, but from a decision making standpoint they are not at all equivalent.

Why does this matter? Let’s think about what happens to me given both choices if I do get sick and do not have the cure. Either way, my last week on Earth is going to suck. Presumably I am quite upset, facing certain death. But there’s something else. My guilt will be far greater had I chosen to risk exposure (choice B) than had I opted to not pay for the cure (choice A). Think of all those wrenchingly sad goodbye conversations with my loved ones if I chose B. “How could you have risked exposure to a deadly disease for a bit of money?!?!?” Whereas with choice A, it seems perfectly reasonable to not pay for the cure given the very low chance of disease. The point is that since guilt is a key component of status or self-worth, and since status or self-worth is a key component of utility, my utility will be lower in choice B than in choice A. And since my utility will be lower for that last miserable week, it makes perfect, rational sense to require a lot more money to make that choice, exactly what the study showed.

Let’s discuss one final example of the endowment effect. In one of his early research papers, Thaler wrote about a gentleman referred to as “Mr. H.” who mows his own lawn. A neighbor’s son offers to mow Mr. H.’s lawn for $8 (1980 dollars) but Mr. H. continues to mow his own lawn. Mr. H. is then offered $20 to mow his neighbor’s equivalently sized lawn but Mr. H. declines.

On the one hand, Mr. H. is saying that mowing a lawn is worth no more than $8. On the other hand, Mr. H. is saying that mowing an equivalent lawn is worth no less than $20. How can this be? Naturally, Thaler concluded that there is an endowment effect going on, that the price at which a person is willing to buy a good or service can be significantly lower than the price at which they are willing to sell the same good or service. No disagreement here. But what about the issue of rationality?

To understand why Mr. H.’s behavior can be considered perfectly rational we need to think first about the consequences to utility from mowing one’s own lawn versus not mowing one’s own lawn. When I mow my own lawn, there’s a sense of pride and accomplishment in my beautiful lawn-mowing job, which contributes to my status. I also avoid the negative status that stems from the guilt I receive from my spouse’s disappointment in me not mowing the lawn. I might want to demonstrate to my children the responsibility of chores. I also may avoid the guilt that I feel if I shirk my responsibilities as a homeowner. Finally, perhaps there is entertainment value in the actual mowing, being alone and with nature. All of these may contribute to my utility and may be worth far more to me than a small amount of money. Perhaps they are even priceless. That is, my neighbor might offer to mow my lawn for free, but I would still mow it myself.

Next, let’s discuss why I might not want to mow my neighbor’s lawn. I don’t get the same status from it. I don’t care that my neighbor’s lawn is beautiful. I don’t feel the guilt from my spouse or from myself for not doing it. It’s not my job as a homeowner since it’s not my home. Mowing my neighbor’s lawn is just a business transaction. Mowing my own lawn is not. Hence, they are decidedly not equivalent, even if the time and effort required for mowing are. In short, it makes perfect sense to mow one’s own lawn given that I get additional utility from it and it makes perfect sense to not mow my neighbor’s lawn for more money since I don’t get the same utility.

Before we leave the topic of the endowment effect, I want to point out two important conclusions that have been demonstrated by research. First, the endowment effect is much weaker, if it exists at all, for goods that have easily defined and known monetary value. This should make sense since 1) cold hard cash or its equivalent has little sentimental value, 2) we think about the value of money all the time so there should be little difference in our predictions for its use before and after it is obtained, and 3) it is highly unlikely for a trade to be considered good or bad when the value of the item to be traded is obvious to both parties. The second conclusion of behavioral research is that professional traders generally do not exhibit the endowment effect. This also makes much sense since professionals tend not to become sentimentally attached to the objects they trade.

2. Lack of self-control

“Thaler has also shed new light on the old observation that New Year’s resolutions can be hard to keep. He showed how to analyse self-control problems using a planner-doer model, which is similar to the frameworks psychologists and neuroscientists now use to describe the internal tension between long-term planning and short-term doing.”

The second area of study that the Nobel Prize committee cited as reason for Thaler’s award is his research on the lack of self-control, and, in Thaler’s opinion, government’s responsibility to correct people’s lack of self-control. Here, more than anywhere else in this essay, do I disagree with Thaler and his followers.

Thaler has stated that one of the things that first got him interested in studying decision making was the somewhat bizarre behavior of his academic colleagues and friends that tended to occur at dinner parties. That curious behavior involved bowls of nuts. Specifically, cashews.

Here I quote from Thaler’s book, Misbehaving:

“Some friends come over for dinner. We are having drinks and waiting for something roasting in the oven to be finished so we can sit down to eat. I bring out a large bowl of cashew nuts for us to nibble on. We eat half the bowl in five minutes, and our appetite is in danger. I remove the bowl and hide it in the kitchen. Everyone is happy.”

Thaler used anecdotes like this, and later, research studies to conclude that human beings lack self-control, a conclusion that is surely true. But he also concluded that this lack of self-control represents irrational behavior. Dinner guests say they are happier when the cashew bowl is removed. They knew it was ruining their appetite for dinner. Yet, they could have just stopped eating! This certainly seems irrational. Moreover, how could anyone be happier with less choice (no cashew bowl)? This is something known by behavioral economists as the “paradox of choice.”

I am now going to give two very different explanations for why I think that eating cashews should not be considered irrational behavior. I think the first is the stronger argument. As you’ll see, it is also a very different argument than I have used so far in this essay.

Eating the cashews is not an irrational decision because it is not a decision at all. Think about yourself in a similar circumstance. Do you decide you’re going to have another cashew and then eat it? Or does your body just do it without you deciding? Hand goes to bowl. Hand picks up nut. Hand goes to mouth. Repeat. Did you consciously make a decision to pick up a nut and put it in your mouth? No. This kind of action is unlike, for instance, deciding for how much money to buy or sell a mug or whether to mow your lawn. Those require thought. Eating cashews from the bowl in front of you does not. Simply put, there is no decision being made.

Recall our definition of rationality: to make a decision that I believe to be in my best interests. Absent a decision, we cannot conclude rationality or irrationality. Eating a cashew in this case is little different from breathing, an involuntary activity of your body. You could also call it an addictive behavior. Either way, it is not a conscious decision, and therefore not an irrational one. It is also not an example of the “paradox of choice.” Because I don’t make a choice when eating the cashews, removing the bowl is not the same as removing a choice.

Thaler also noted that when the cashew bowl is far away (say, at another table on the other side of the room), people do indeed refrain from eating the nuts. They do not get up, walk across the room and grab a nut. That would require a conscious decision, and therefore could be considered an irrational one. But people don’t do this.

That was the first answer for why cashew eating is not irrational. A second possible answer is that eating the cashews actually does increase my utility even if I don’t want to admit it. People might say that they would rather eat a healthy dinner than a bowl of nuts, but perhaps they are lying. They might even be lying to themselves. Why would they lie? Because it is not socially acceptable to ruin one’s appetite by eating unhealthy snacks. It is not considered acceptable to have a dinner of nuts. That is considered by society to be weak and childish. And who wants to be considered weak and childish by one’s peers? Or even one’s self?

Evolution has given us humans a desire to eat fatty and salty foods, hence eating them raises my utility. Similarly, why ever eat ice cream? Surely I can get the equivalent calories in a healthier package. But the fat and sugar of ice cream make me feel good; they increase my utility. Admit it or not, cashews for dinner might not be the healthiest decision, but there’s no reason it can’t be the rational one.

Take your pick whether you prefer the answer that cashew eating is not a decision or that cashews are better than meatloaf. Both are probably true.

The planner-doer model

“Thaler used his research on self control to propose a model of human behavior he called the planner-doer model.”

Thaler hypothesized that a person has two selves, the planner and the doer. The planner tries to maximize the present value of lifetime utility. The doer is only concerned with current utility.  Naturally there is conflict between the planner and the doer, but sometimes the planner can override the doer if sufficient willpower (some kind of cost) is used.

In my view Thaler’s planner-doer model is Ptolemaic, or maybe Freudian.  It is confusing, unnecessary and wrong. First of all, if I’m deciding between a healthy fruit cup for breakfast or a chocolate doughnut do I really have a devil (doer) on one shoulder and an angel (planner) on the other? How exactly do they duke it out to make a decision? Second, how long does the “doer” have to make a decision until the “planner” kicks in? Is it instantaneous? What exactly is meant by current utility in this context? Isn’t breakfast in the future anyway?

Are chocolate doughnuts always the choice of the doer? Are they always disallowed by the planner? Do they always reduce my long-term utility? What about a chocolate doughnut once per week? Once per month? May I eat one once a year even? When I’m old can I eat one? Age 60? 70? 80? On my deathbed? Ever? What if I plan to eat a doughnut so it’s not an impulse decision? Would that be okay? Yes, I think I’ll plan to eat one for breakfast every day, starting tomorrow. That must be allowed since it’s a decision made by the planner in me, not the doer.

I’m obviously being a bit silly here. But the point I’m trying to make is that when you really think about the planner/doer model, it completely falls apart. There’s no obvious way to differentiate between the two decision makers unless you say something like the “doer” makes me fat, unhealthy and poor and the “planner” keeps me thin, healthy and rich. But that’s not a useful or valid model for an economist or for any other half-intelligent person.

We humans maximize the present value of our (probability weighted) future utility. That’s how we make decisions. How we weight the difference in value between current and future utility is exactly measured by the discount rate we implicitly use. No angels or devils, planners or doers needed. Of course, how we derive our discount rate is a good question, but one that I will not address except for this. Our discount rate can, and will change from time to time, contrary to the assumption of economists. Remember as we’ve stated before, there is nothing irrational about having that fruit cup today and that doughnut tomorrow.
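The claim above is easy to make concrete. Here is a minimal, illustrative Python sketch (all numbers are invented for the example) of how a single personal discount rate, applied to probability-weighted future utility, does all the work without any planner or doer:

```python
# Toy illustration (hypothetical numbers): present value of a stream of
# probability-weighted future utilities under one personal discount rate.
def present_value(utilities, probabilities, rate):
    """Sum of probability-weighted utility, discounted back to today."""
    return sum(p * u / (1 + rate) ** t
               for t, (u, p) in enumerate(zip(utilities, probabilities)))

# Utility of 100 "utils" per year for three years, certain this year,
# less certain in later years.
utils = [100, 100, 100]
probs = [1.0, 0.9, 0.8]

impatient = present_value(utils, probs, rate=0.25)  # high discount rate
patient   = present_value(utils, probs, rate=0.02)  # low discount rate

# The impatient valuation is lower: future utility simply counts for less.
assert impatient < patient
```

Nothing about the higher rate is irrational in itself; it just encodes how heavily that person weights today against tomorrow, and as noted, the rate can change over time.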

Before moving on, let me cut Thaler just a bit of slack here. His definition of the “doer” is very vague. But, to the extent that the “doer” is a proxy for our cashew-eating involuntary decision making system of the brain, then I agree. Neuroscience research has indeed shown that there are multiple decision making systems of the brain. At the very least, there is one that involves voluntary (thinking) decisions and one that governs involuntary actions. However, as we stated above, actions made by this second involuntary system should be considered neither rational nor irrational since they do not involve conscious decisions.

Nudging

“Succumbing to short-term temptation is an important reason why our plans to save for old age, or make healthier lifestyle choices, often fail. In his applied work, Thaler demonstrated how nudging – a term he coined – may help people exercise better self-control when saving for a pension, as well as in other contexts.”

Thaler used his planner/doer model to infer that individuals tend not to act in their own best interests. That is, they favor the short-term over the long-term. Thaler co-authored an influential popular book called Nudge and a paper entitled, “Libertarian Paternalism is Not an Oxymoron” where he argued that individuals should have choices made for them by governments (or other entities) in situations where they make decisions believed not to be in their best long-term interests.

The concept of “nudging” has been applied to a number of areas where academics (and government officials) believe people make irrational decisions that favor short-term benefits in lieu of long-term interests. These include healthy eating, smoking cessation, education and organ donation. However, the area that has had the most research and probably the most real-world implementation is the one Thaler is best known for, retirement savings.

Specifically, Thaler posited that most people undersave for retirement since they do not have the (planner) willpower to override their (doer) urges to spend the money now. Implicitly, Thaler’s view is that this constitutes irrational behavior. Thaler’s research on retirement savings also demonstrated that many people do not participate in voluntary employee or government sponsored retirement programs, and thus miss out on valuable tax deductions and/or employer matching funds. This too he considered irrational.

To compensate for such short-term, irrational thinking, Thaler (and others) suggested that enrollment in retirement funds be made automatic. That is, instead of people having to fill out paperwork and choose to enroll in a savings plan (“opt-in”), they would be automatically enrolled unless they filled out paperwork stating their desire not to enroll (“opt-out”). Further, Thaler argued that funds be automatically invested in some sensible diversified portfolio (a default portfolio) rather than the individual having to choose the investments since individuals tend to pick irrationally. Thaler also suggests that contributions to retirement plans automatically increase as an employee’s salary increases.

Thaler labeled this libertarian paternalism or, more user-friendly, “nudging,” and successfully persuaded many companies and governments in the U.S. and U.K. to adopt such plans. On the surface, libertarian paternalism or nudging feels reasonably benign. It is not coercive because individuals can always opt-out. In Thaler’s view, a fully rational individual should be indifferent between a traditional opt-in retirement plan and a nudging opt-out plan since, either way, they have the ability to make the same choice (to save or not to save).

I, however, find four significant issues with this concept of nudging. First, how does Thaler or the government or anyone else know that my utility is higher if I save more? Second, if government does assume that long-term interests always trump short-term ones, there are an infinite number of situations where nudging could be applied. Where do we stop? Third, how do you prevent special interest groups from co-opting otherwise well-intentioned policies? Fourth, is nudging (libertarian paternalism) really consistent with liberty and freedom, at least as recognized in the U.S.?

1. Is utility really higher?

The most crucial assumption that Thaler and other nudgers make is that favoring the short-term over the long-term is a mistake. That, for example, spending today instead of saving for tomorrow is irrational. Is it really?

There’s no question that other things equal, people prefer to spend money rather than save it. To use the technical term, people have a high discount rate when present valuing their future utility. In Thaler’s view, this discount rate is far too high (“hyperbolic” in his words). Hence, utility today is valued too high, and the value of utility tomorrow (or, say, 30 years from now in retirement) is too low.
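To make “hyperbolic” concrete, the standard contrast (sketched below with illustrative parameters of my own choosing, not Thaler’s) is between an exponential discount curve, which shrinks by the same factor for each year of delay, and a hyperbolic one, which discounts the near future steeply but is comparatively patient between two far-off dates:

```python
# Sketch of the two standard discounting curves (illustrative parameters):
# exponential treats every year of delay identically, while hyperbolic
# discounts the near term steeply and the far future mildly.
def exponential(t, delta=0.9):
    return delta ** t

def hyperbolic(t, k=0.5):
    return 1.0 / (1.0 + k * t)

# Relative to the exponential discounter, the hyperbolic discounter
# penalizes a one-year wait heavily...
assert hyperbolic(1) < exponential(1)        # ~0.667 < 0.9

# ...but is relatively patient between two far-future dates.
ratio_h = hyperbolic(21) / hyperbolic(20)    # ~0.957
ratio_e = exponential(21) / exponential(20)  # ~0.9
assert ratio_h > ratio_e
```

This near-term steepness is the behavior Thaler labels a mistake; my argument in what follows is that a steep near-term preference is not obviously a mistake at all.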

Let’s start with something easy. It may indeed be the case that most people do not understand how much money they will need when retired and hence how much they need to save for retirement. They may not understand the concept of compound interest. They may not understand the financial markets at all. Thaler’s view is that stupidity equals irrationality. Said differently, anyone that lacks the necessary education or knowledge or mathematical ability to make the same decision that would be made by a highly educated PhD economist should be considered an irrational being. As you know by now, I do not share this view. A decision should only be considered irrational if it is made knowing that it won’t be in your best interest. Not not knowing.

Second, let’s consider what Thaler would consider an irrationality “no-brainer.” If I don’t contribute to my 401k retirement plan, for example, I lose the tax deduction that the federal government (in the U.S.) grants me. In Thaler’s view, this is money lost and why would any rational human ever choose to lose money? But it’s not that simple. If I put money into a 401k, there are significant limitations and penalties if I want to use the money before I retire. The money is not free for me to spend as it would be if I put the same money in a savings account, or a normal (non-retirement) investment account. So yes, I lose the tax deduction but I retain access to my money. There is a trade-off. It is not necessarily irrational to give up the tax benefit in order to keep my own savings accessible.
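The trade-off is easy to put in numbers. The sketch below uses hypothetical rates (a 22% marginal income tax, and the 10% U.S. penalty that applies to most pre-retirement 401k withdrawals) and deliberately ignores employer matching and investment growth:

```python
# Hypothetical rates for illustration: a 22% marginal income tax and the
# 10% additional tax (penalty) on most early 401k distributions. Employer
# matching and investment returns are ignored to isolate the liquidity cost.
TAX = 0.22
PENALTY = 0.10
income = 1000.0

# Taxable savings account: pay income tax now, money stays fully accessible.
taxable_account = income * (1 - TAX)       # ~780

# 401k: the full 1000 goes in pre-tax, but withdrawing it before retirement
# means paying income tax *plus* the penalty on the distribution.
early_401k = income * (1 - TAX - PENALTY)  # ~680

# The deduction is real, but so is the cost of losing access early:
assert early_401k < taxable_account
```

Whether the deduction outweighs the lost liquidity depends on tax rates, time horizon, matching, and how likely I am to need the money early, which is exactly why the choice is a judgment call rather than a “no-brainer.”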

The next question to ponder is why is it irrational to value the certainty of consumption now a lot more than the uncertainty of consumption some decades down the road? The answer of course is, that it’s not. Will I be alive in 30 years? Don’t know. Will my social security checks be sufficient to meet my financial needs? Maybe. Will I even care about status when I’m old the way I care about status now (remember that most consumption is really for the purpose of increasing our social status or self worth)? No idea.

Now we must get a bit philosophical. Take you, dear reader. I’m going to take 1% of your income and force you to save it for retirement. You can consume less now and you might be able to consume more later, much later. Is the present value of your utility higher now? Well, is it? Are you better off? I have no idea. Maybe it is. Maybe it isn’t. I don’t know. And that’s the key point. Neither does Richard Thaler.

Thaler says we should save, not consume (incidentally, this is the exact opposite behavior encouraged by the Keynesian policies espoused by nearly all mainstream economists). When we reach retirement, is it okay to consume then? Why? Should we save even longer? Just like with breakfast, is it ever okay to eat the chocolate doughnut? Do I ever get to enjoy my savings? Should I wait until I’m too old and feeble? What is the point of wealth if not to spend it? How do you know that spending later is better for me than spending now?

The same arguments we can make about retirement savings we can make about other areas for which nudging has been advocated. Take healthy living, for instance. Thaler would argue that individuals should be nudged to live healthier lives. But how does he really know that a person’s utility is indeed higher giving up soda or junk food or even cigarettes just to potentially live a little longer? Why would anyone assume that maximizing life expectancy is the equivalent of maximizing utility? Clearly evidence points away from this. All humans engage in behavior that reduces life expectancy in exchange for near-term utility. Some just go further than others. Where do you draw the line?

Recall also that addictive behavior, like our cashews, is neither rational nor irrational because it does not represent a decision. As much as I personally find smoking to be abhorrent (and addictive) behavior, I do not recognize it as irrational behavior.

Lastly, I want to address the point that Thaler makes that a fully rational individual should be indifferent to opt-in or opt-out. His view is that either way, an individual can participate or not participate. Therefore, from a utility standpoint, they represent identical choices. I find this argument unpersuasive. First, many people might not know they have the ability to opt-out. To Thaler this, in and of itself, is stupid and irrational. To me, only stupid. Second, many will feel pressure to not opt-out since big-brother (either government or their employer) has made participation the default option. Peer (or big-brother) pressure affects status and self-worth and hence utility, so even though opt-in and opt-out both allow participation or non-participation, their effects on utility are not necessarily equivalent.

2. The slippery slope

There is no question that we humans make choices that favor the short-term over the long-term. If government views this as bad, how far should government go to correct this behavior? Let’s return to retirement savings. Perhaps by default, 5% of my salary should be saved for retirement. Maybe 6% would be better. Or 7%? 10% of my income? Maybe 20%? Where does government draw the line? This is the slippery slope problem. Once you start down the nudging path, where do you stop?

How about healthy living? Clearly many people eat too much dessert and drink too much alcohol. We get fat, we get diabetes, we get liver problems, we don’t live as long as we might have otherwise. Maybe government needs to nudge. Perhaps when I go to a restaurant, it should be illegal for the restaurant to present to me a wine list or dessert menu unless I specifically ask for one. Perhaps there should always be a default order: green salad and grilled chicken. That’s what I get, unless I specify otherwise (maybe in writing, to make it even harder) for the steak and fries.

Maybe grocery stores should be mandated to put unhealthy foods on high, out-of-reach shelves. Maybe they should be in a separate section of the store, a section to which I need to (in writing again!) ask for entrance. Perhaps I should be automatically enrolled in a health club membership. Maybe a personal trainer should automatically stop by my house every day to encourage me to exercise. Maybe they should even have a key to my house so they can get me out of bed in the morning to exercise.

Frankly speaking, these are not unreasonable debates. But the key point I am trying to make here is if you are going to advocate nudging, how do you decide where to nudge, and how do you decide how much to nudge?

3. Nudging and special interests

Now I am going to talk about an issue that affects all government intervention, not just nudging. That issue is special interests. For every government action, some entities are helped and some are hurt. There are always unintended consequences, and very often (perhaps even 100% of the time), those unintended consequences ultimately dwarf the intended ones. Said differently, when government gets involved, the cure is often (usually) worse than the disease.

Let’s return to retirement savings. If retirement funds increase, who benefits? Where is my 401k money going? To a money manager. To Wall Street. To financial services firms. To the stock market. Nudging retirement accounts has the effect of subsidizing Wall Street and the financial markets. Is that really a good thing? Might it not lead to more power for Wall Street and the financial markets? Might it not lead to a greater likelihood of Wall Street bailouts down the road, since government has really made the decision to put my money into Wall Street? Can they now afford to let it decline?

Might now the financial industry lobby for even more nudging of retirement savings since these firms benefit? As I wrote earlier, if 5% savings is good, why not 6% or 10% or 20%? And of course, in all of this some industries have to lose. Perhaps traditional local and community savings banks where I would have otherwise put my money to save. Perhaps retail stores or restaurants where I would have otherwise spent my money.

Let’s say government wants to nudge towards healthier eating. Encourage certain foods, discourage others. Some companies gain, others lose. But which foods are even the healthy ones, which ones the unhealthy ones? Frankly, scientists have no idea. Eggs used to be good for us, then they were bad for us, now they are good for us again. Butter is bad, margarine is good. Now margarine is bad, butter is good. Fat kills, so eat carbs. Now, carbs kill so eat fat. And it’s not just food. The entire healthcare system suffers from such uncertainty. So why should government take sides, unless the evidence is absolutely overwhelming (as it is with smoking)?

The real problem is that government involvement is ripe for decisions encouraged by special interests. Big companies with big lobbying budgets at the expense of small businesses without. These special interests almost always trump the best interests of the people. And that assumes that government officials and politicians even have the best interests of the people at heart, something of which I am skeptical.

4. The oxymoron

As I mentioned above, Thaler co-authored a paper called “Libertarian Paternalism is Not an Oxymoron.” I am by no means the first to argue that this title is emphatically wrong. Thaler clearly does not understand what the term libertarianism truly means. The essence of libertarianism is not that I will do something to you unless you say no. The essence of libertarianism is that I will not do something to you unless you want it done.

Allow me some latitude to solidify this argument. Consider the issue of sexual consent. I will have sex with you unless you say no. Or the alternative: I will not have sex with you unless you say yes. At least in the U.S., both societal norms and the legal system have moved towards the latter statement. That is, sex requires affirmative consent. When it comes to the violation of our bodies, most of us clearly seem to prefer it this way. However, Thaler’s idea of libertarian paternalism espouses the former (consent is assumed absent a “no”). Just some food for thought.

Before moving on from nudging, let me say three final things. While I personally would not often advocate nudging by government, there are arguments that can be made in favor. Government is (for better or worse) collectively the largest health insurer in the U.S. (through Medicare, Medicaid, public employees, veterans, etc.). It is therefore reasonable to argue that nudges in favor of healthy living, and hence lower medical expenditures, are warranted, given the government’s economic stake in our health. Similarly, it is not unreasonable to argue for nudging with decisions that affect children, since the decision making processes of children’s brains are not yet fully developed. But what I do ask of those who, like Thaler, advocate for nudging is that they not base their arguments on human irrationality. For that is a fallacy.

Secondly, I am in agreement with the behavioral economists that yes, most people make lots of mistakes. They certainly do make decisions that wind up being not in their best interests (though they do not realize this at the time of the decision and thus they are still rational decisions). For the most part, we need to let people make mistakes, not have government correct them. People learn from making mistakes, and that’s how society improves. Obviously there are limitations here. But, I would argue government should only get involved not simply when the benefits are greater than the costs, but when the benefits are an order of magnitude greater than the costs. The bar must be set higher. The special interests, the unintended consequences, the inefficiencies of government involvement are just too great in too many circumstances. The cure must never be worse than the disease.

Lastly, I point out that to the extent a case for government involvement in markets or personal lives is overwhelming, government has four different ways in which to act. First, it should educate. Only if that education fails should it incentivize, for example through sin taxes (to discourage undesirable behavior) or tax credits (to encourage desirable behavior). Only if incentives fail should it nudge. Notice that nudging is the third option, not the first. And finally, only if nudging fails should government force or coerce behavior.

3. Social preferences

“Thaler’s theoretical and experimental research on fairness has been influential. He showed how consumers’ fairness concerns may stop firms from raising prices in periods of high demand, but not in times of rising costs. Thaler and his colleagues devised the dictator game, an experimental tool that has been used in numerous studies to measure attitudes to fairness in different groups of people around the world.”

The third area of study cited by the Nobel Committee is what they refer to as “social preferences,” which mostly means fairness. That is, people don’t always act selfishly, as naive economists, or at least their models, think they should. Another example of irrational behavior. Not so. Let’s talk about evolution for a moment.

Evolution works at the level of genes and our genes have one primary purpose – to replicate themselves. But genes can’t reproduce on their own. They are dependent on their host (e.g. us humans) to reproduce. Fortunately they have quite an influence on their host, as they provide their carrier with its basic programming. In other words, in order to maximize the chance that a gene reproduces, it programs its host to seek food and avoid danger and attract a mate, among many other things. The host is rewarded. It “feels good.”

As we stated at the very top of this article, it is this genetic programming that influences what constitutes our “utility.” Eating and being healthy and having sex clearly (other things equal) contributes to utility. And like many other animal species, we humans have been programmed to be social. That is, it is a lot easier to obtain food and stay healthy and find that mate if we interact with other members of our species. Long story short, while our genes might be totally selfish (they “care” only about reproducing), we humans cannot be. In order to survive and reproduce and raise our children, and have our children reproduce, we must interact with other humans. And very often engaging in social activities requires making decisions that appear to economists to not be in our best interests. But to someone who actually understands human behavior, these decisions are perfectly rational.

We chase wealth not for wealth itself but to attract a mate, to be “alpha-male” (or “alpha-female”), to feel strong and powerful and superior. We buy fancy cars and live in big houses and wear big jewels to signal our superiority to others the same way a gorilla pounds its chest or a peacock flaunts its feathers. At the end of the day, it is not net worth that contributes to our utility, as economists believe, but self-worth. Net worth is just a component of self-worth.

Of course, we cannot be solely selfish. Or at least most of us cannot. Society wouldn’t survive and most of us (and our genes) wouldn’t reproduce. We cannot chase wealth and power at all costs. If I steal from the grocer, true, I may get a free meal. But if I continue to do so, the grocer may wise up and, one way or another, forbid me from entering his store. Now where will my food come from? I will likely end up worse off.

Selflessness also matters. We treat people kindly so they will return the favor. This is the oldest form of insurance there is. And our genes reward us for this. It is in their interest and ours. We feel good about it, and we are rational to do so. We punish the jerks among us (or at least try to) so that they will learn and correct their behavior and if not, leave our community altogether. And again, our genes reward us for doing this. It is in their interest and ours. And we feel good about it and we are rational to do so.

Let’s now take a look at some of Thaler’s research on human social behavior and fairness, beginning with price gouging. We will see how humans behave in a manner inconsistent with economics, but perfectly consistent with what evolution has made our genes, and with how our genes have programmed us.

In the 1980s, Thaler performed a study that showed that the majority (82%) of people found it “unfair” for a hardware store that sells snow shovels to raise the price of a shovel from $15 to $20 the morning after a large snowstorm. Let’s examine two things. First, is it rational behavior for the hardware store owner to raise prices? Second, is it rational for snow shovel consumers to find this behavior “unfair?”

Economics 101 teaches us that prices should rise (other things equal) in circumstances of rising demand. The assumption here is that demand for snow shovels increases after a bad snowstorm. Hence, a simplified understanding of Econ 101 implies that the hardware store has justification for raising the price of shovels. But, as we’ll discuss next, consumers may very well be turned off by this “price gouging” behavior. Their distaste may lead them to avoid shopping at this store for any goods in the future. They may be so incensed that they arrange a boycott of the store. The point being that, from the shop owner’s standpoint, to raise prices is really a question of short-term gain versus potential long-term loss. A business that depends on steady, long-term relationships is probably best served (and rational) not to raise prices. A business that caters to one-time customers (say, tourists) may benefit from price gouging. There’s no right or wrong answer here, except that either decision can be considered “rational” (and long-term profit maximizing) depending on the circumstances. Of course, a socially-minded business owner (and much less likely a big public corporation) may decide that fairness trumps profits regardless. As we’ll see shortly, this too can be considered rational behavior (contrary to the beliefs of economists).

On to the more interesting question. Are consumers of snow shovels rational for finding this price gouging behavior unfair? I can certainly sympathize, for there is a feeling of being cheated. Why should the shop-owner benefit from the dumb luck of a random big snowstorm? Worse, why should the shop-owner benefit extra from my misfortune of having to expend time and money clearing my driveway? As we’ve described before, here’s an example where self worth trumps net worth. If I overpay for the shovel, I’ve been taken advantage of by a fellow human being. Put simply, I feel like a sucker. And I will continue to feel like a sucker every time I walk into that store from now until the end of time. Better to pay $20 to a neighborhood kid to shovel than to give an extra $5 to that greedy hardware store. Now every time I walk into that store, I’ll know, even if they don’t, that I didn’t let them cheat me! I’m happier, I feel better, my utility is higher, and therefore I’ve made an entirely rational decision.

Around the same time as his price gouging study, Thaler and his collaborators invented an experiment that has become known as the dictator game. In this study, students were asked to divide $20 between themselves and a random, anonymous fellow student. They had two choices:

1) Keep $18 and give away $2, or

2) Split the $20 evenly, keeping $10 and giving away $10

Clearly, the selfish, and (to an economist) rational decision is to keep $18. However, as you may have predicted, it turns out that the majority of students (76%) decided to split the $20 evenly, demonstrating that for many people, social considerations are more important than money. Why might this be?

First of all, I would suggest that the study’s assumption of anonymity is a faulty one. As a participant, might I not be questioning that assumption? What if the recipient somehow finds out that it was me, the greedy one, that only gave them $2? What if the teaching assistant finds out, or the professor? Do I want my T.A. or my professor thinking I am a jerk? If he or she thinks poorly of me, might that not affect my grade in this class? As I have mentioned before, this kind of artificiality, ambiguity, unrealism or bias, is why I find many of the conclusions of behavioral economics to be questionable.

But for the sake of discussion, let’s now assume that anonymity is not an issue. Let us assume that there is absolutely, positively, no way for anyone to know who gave $10 and who gave $2, other than the individual who made the decision. Why might it still be rational to be “fair” and not “greedy?”

Here we return to the conclusion that self-worth trumps net-worth. My genes have programmed me to feel good when I treat someone else fairly and to feel guilty when I treat someone else unfairly, even if my actions aren’t known to others. As the study showed, most of the participants gave up $8 to feel good about themselves, and/or to not feel badly about themselves. As we’ve stated many times before, this behavior is little different from spending more money on luxury items to feel good about myself, or giving to charity, or even holding a door open for a stranger (which expends some small amount of energy).
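This trade-off between money and guilt can be made concrete with a standard model from the fairness literature, the Fehr-Schmidt inequity-aversion utility function. This is not Thaler’s own formulation, and the parameter values below are purely illustrative assumptions:

```python
def fehr_schmidt_utility(own, other, alpha=0.8, beta=0.6):
    """Inequity-aversion utility: money minus a penalty for unequal payoffs.

    alpha penalizes disadvantageous inequality (the other side gets more);
    beta penalizes advantageous inequality ("guilt" at getting more).
    Both parameter values are illustrative assumptions.
    """
    envy = alpha * max(other - own, 0)
    guilt = beta * max(own - other, 0)
    return own - envy - guilt

# The two choices in the dictator game:
greedy = fehr_schmidt_utility(18, 2)   # 18 - 0.6 * 16 = 8.4
fair = fehr_schmidt_utility(10, 10)    # 10 - 0 = 10.0

# With enough "guilt" (beta > 0.5), the even split maximizes utility,
# matching what the majority of students actually chose.
assert fair > greedy
```

Nothing here depends on the specific functional form; any utility function in which guilt costs more than fifty cents per dollar of advantage makes the even split the rational choice.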

Thaler and his colleagues decided to test another aspect of human nature. They extended the dictator game to include a second round with a third player. The third player had the following two choices:

1) Receive $5, give $5 to a (fair) student who had split the original $20 evenly in round 1 and $0 to a (greedy) student who had kept $18 in round 1, or

2) Receive $6, give $0 to a (fair) student who had split evenly in round 1 and $6 to a (greedy) student who had kept $18 in round 1

Just like in the first instance, the “economically rational,” wealth-maximizing decision is choice #2, to take $6 over $5. But the majority (74%) of students chose #1, that is, to give up the $1 difference in order to reward Round 1 players who were “fair” and punish Round 1 players who were “greedy.” Are we irrational beings because we are willing to sacrifice $1 to punish a jerk?

I don’t think so. Our body’s social programming has taught us that punishing a jerk is a type of investment. We are training (or at least attempting to train) the jerk to not be a jerk next time. Having fewer jerks in a community is a good thing. Perhaps the punishment now, in an economics class, will prevent the jerk from becoming another Bernie Madoff some day and screwing people far worse later on. Certainly, these are long odds, but might it be worth $1 now to potentially save myself from getting cheated out of millions? Why not? Moreover, I get satisfaction (we call it schadenfreude) from the punishment. I feel superior, a better person. I have a higher sense of self-worth, and thus, greater utility.
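The same arithmetic applies here. If we assume the psychic payoff from rewarding fairness (equivalently, punishing greed) is worth more than the $1 cash difference, choice #1 is the utility-maximizing one. The $2 figure assigned to that payoff is purely an illustrative assumption:

```python
def punisher_utility(cash, rewards_fair_player, psychic_value=2.0):
    """Cash received plus an assumed dollar-equivalent psychic payoff
    from rewarding the fair player / punishing the greedy one.
    psychic_value is an illustrative assumption, not measured data."""
    return cash + (psychic_value if rewards_fair_player else 0.0)

choice_1 = punisher_utility(5, rewards_fair_player=True)   # 5 + 2 = 7.0
choice_2 = punisher_utility(6, rewards_fair_player=False)  # 6 + 0 = 6.0

# Whenever the psychic payoff exceeds the $1 cash give-up,
# punishing the jerk is the rational choice:
assert choice_1 > choice_2
```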

Lastly on the subject of social preferences is a question that is frequently raised by economists who study decision making. Why do people voluntarily leave tips at restaurants they never intend to visit again?

Start with the fact that tips are not really voluntary. Gratuities may have started out that way, as a way to reward good service, but at least in the U.S., they have become a meaningful (sometimes majority) component of the pay of waiters and restaurant staff. That is, they are expected. But of course, there is no legal obligation to provide one, only a moral obligation. Not doing so violates a social contract.

As we’ve discussed, our genes reward us for maintaining the social contract and punish us for violating it. For some, doing the right thing for its own sake feels good. I feel good treating people nicely. Others feel good knowing a stranger, such as the waiter, thinks highly of them. Yet others seek to avoid the guilty feelings that contribute to a sense of low status and low self-worth. I don’t want the waiter to think of me, until the end of time, as a cheapskate. Every time I go into a restaurant I may be reminded of my cheapness. What if I run into the waiter again someday, even if I don’t intend to return to that restaurant? Do I really want to take that risk? I will never get that out of my mind… For most people, all three of these social components of utility contribute to decision making. That’s why we tip.

4. Behavioral finance

“Thaler was one of the founders of the field of behavioural finance, which studies how cognitive limitations influence financial markets.”

The proliferation of computers in the 1980s allowed economists to become number crunchers. As a wise person once said, best to fish where the fish are. Best to number crunch where there are lots of numbers. And where are there lots of numbers? Financial markets. Specifically prices and trading data of common stocks and other liquid financial assets.

As economists turned their attention to financial market data, analyzing decades of stock market data with simple statistical tools, they noticed something. They noticed anomalies. Up until then, the prevailing assumption held by most economists and finance professors was that markets (at least highly liquid ones like the stock market) were perfectly efficient. That is to say, all available information is immediately priced into a security. The only way to outperform the overall market (or a market index) is to 1) take more risk, 2) have non-public information or 3) get lucky.

These anomalies seemed to show that by analyzing certain historical data, investors could in fact outperform the overall market without taking on extra risk. To most, this observation clearly contradicted the view that markets are perfectly efficient. It also led to a dramatic increase in the study of financial markets by academics (analyzing data on a computer in your office is a lot easier than running experiments on college students in your classroom or lab!) and spawned entire new asset classes such as quantitative hedge funds, smart beta funds, ETFs and factor investing.

Thaler co-authored one of the first prominent studies of market anomalies in 1985. Thaler compared stocks that had dropped in value over the prior few years (“losers”) with those that had increased in value over that same time period (“winners”). He found that the loser stocks subsequently outperformed the winner stocks. In other words, investors could generate positive risk-adjusted returns (“alpha”) by buying a portfolio of losers and selling short a portfolio of winners. Similarly, an investor could out-perform the overall market by simply buying a portfolio of losers.
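Thaler’s sorting procedure can be sketched in a few lines. The data below are synthetic and the reversal coefficient is an assumed parameter; this illustrates the shape of the test, not Thaler’s actual dataset or methodology:

```python
import numpy as np

rng = np.random.default_rng(0)
n_stocks = 1000

# Synthetic "formation period" returns for each stock.
past_returns = rng.normal(0.0, 0.40, n_stocks)

# Assume partial mean reversion in the subsequent period (the -0.10
# coefficient is an illustrative assumption, not an estimate).
future_returns = -0.10 * past_returns + rng.normal(0.0, 0.20, n_stocks)

# Sort on past performance and form decile portfolios.
order = np.argsort(past_returns)
losers = order[: n_stocks // 10]     # worst past performers
winners = order[-(n_stocks // 10):]  # best past performers

# Under the assumed reversal, buying losers and shorting winners
# earns a positive spread, as Thaler found in the historical data.
spread = future_returns[losers].mean() - future_returns[winners].mean()
print(f"loser-minus-winner spread: {spread:.3f}")
```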

Thaler offered up a behavioral explanation for the anomaly he uncovered. Based on earlier published psychological research, he posited that investors “overreact” to information. In other words, after a company’s stock has declined because of poor financial performance (or some other bad news), investors hold on too long to the view that the stock is a bad one and that the poor performance will continue. Consequently, they are too slow to reassess whether the news and/or the company’s financial performance has improved (or regressed to the mean). Similarly, investors overreact to good news and good financial performance and are too slow to temper their positive views of a stock.

Over the years many academic papers on financial markets have been published and many such anomalies have been uncovered. However, most of these anomalies do not persist. That is, the out-performance tends to disappear after the publication, either because it was spurious to begin with (the product of data-mining or data-snooping) or because the strategy is traded upon and the profits get “arbitraged out” by investors once the anomaly becomes widely known. A prominent example is the so-called “January effect,” whereby stocks were thought to increase in price during the month of January after having fallen in December. This was believed to occur due to investors selling stocks in December in order to capture the tax benefits of capital losses (to offset capital gains). It is pretty much a given that the January effect no longer exists, if it ever even did.

There are, however, market anomalies that have seemed to persist even though they have been widely known for decades. The two most important are the value effect and the momentum effect. The value effect is an extension (and essentially a renaming) of Thaler’s discovery of overreaction discussed above. Recall that Thaler uncovered that stocks that had declined for the prior few years tended to outperform the market and stocks that had increased over that time period tended to underperform. Later research concluded that stocks that are cheap by some metric such as Price-to-Book Value or Price-to-Earnings (called “value” stocks) tend to outperform stocks that are expensive (usually referred to as “growth” or “glamour” stocks).

The second prominent anomaly still thought to be in existence is the momentum effect. Whereas Thaler looked at three years of historical data to determine whether a stock should be considered a winner or a loser, other researchers looked at shorter time frames, say 6-12 months. What they found was quite the opposite of Thaler’s conclusions. Stocks that have outperformed the overall market (i.e. increased) over the prior 6-12 months tend to continue to outperform (increase) over the next 6-12 months. Similarly, stocks that have underperformed the market (i.e. decreased) over the prior 6-12 months tend to continue to underperform.

Researchers labeled this the “momentum effect” and concluded that investors could do very well by buying a basket of short-term winners and shorting a basket of short-term losers. Similar to Thaler, researchers posited a behavioral explanation for their anomaly. Whereas Thaler said that investors overreact to longer-term good and bad news, momentum researchers argued that investors also underreact to shorter-term good and bad news. That is, it takes time for good news and good performance (and bad news and bad performance) to become fully appreciated by investors and hence, fully priced in to stocks.

Note that while the value effect and the momentum effect appear at first glance to be contradictory, this is not so. They are measured over different time periods. In fact, many quantitative hedge funds (and later ETFs and other “smart beta” products) have used the combination of these two “factors” as the basis of their investing strategies. Buy a basket of value (long-term cheap) stocks that have exhibited strong momentum (short-term gains) and short a basket of growth (long-term expensive) stocks that have exhibited weak momentum (short-term losses).
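A minimal sketch of such a combined strategy: rank the cross-section on each signal, average the ranks, then go long the top decile and short the bottom. The synthetic data, signal definitions, and equal 50/50 weighting are all illustrative assumptions, not any fund’s actual methodology:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Synthetic cross-section of 500 stocks (illustrative, not real data):
book_to_price = rng.lognormal(0.0, 0.5, n)  # value signal: cheapness
ret_12m = rng.normal(0.08, 0.30, n)         # momentum signal: past return

def rank_pct(x):
    """Cross-sectional percentile rank in [0, 1]."""
    return np.argsort(np.argsort(x)) / (len(x) - 1)

# Equal-weight blend of the two signals.
combined = 0.5 * rank_pct(book_to_price) + 0.5 * rank_pct(ret_12m)

order = np.argsort(combined)
shorts = order[: n // 10]    # expensive stocks with weak momentum
longs = order[-(n // 10):]   # cheap stocks with strong momentum
```

Real implementations differ in signal construction, weighting, and rebalancing frequency, but the rank-and-blend skeleton is the same.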

Why the long digression into quantitative investing factors and hedge fund strategies? As I stated, these two strategies or factors, value and momentum, along with a number of other less prominent strategies or factors seem to prove that market anomalies do exist. Here I agree.

The question then, and one we’ve asked many times in this article, is: does the existence of such market anomalies prove that investors are necessarily irrational in their behavior, or, as the Nobel Prize committee stated, are they evidence of “cognitive limitations?” Most say yes. I say no.

There are at least five possible explanations for market anomalies. The first is that the anomalies are not real. They are spurious, the result of data mining or data snooping, the product of overeager PhDs desperate for an article to publish or an interview at a quant fund. No doubt many of the anomalies (factors) found over recent years fit this type. But, for factors such as value and momentum it is hard to make this case. These two anomalies have persisted for decades after discovery and have been confirmed in multiple asset classes (not only stocks) and in the financial markets of many different countries.

The second possible explanation is risk. That, for example, value stocks outperform growth stocks because they are inherently riskier (perhaps a greater risk of distress or bankruptcy). But, the argument goes, this implicit risk does not show up in traditional risk metrics such as volatility or Beta. In other words, yes, value outperforms growth but not on a risk-adjusted basis, if risk were measured properly. Some economists (especially those inclined towards efficient markets) have indeed made this argument in response to the research of Thaler and others.

I am sympathetic to this argument, though actually even more so for the case of the momentum anomaly than for value. I would argue that stocks with strong momentum are far riskier than traditional risk metrics imply. Said differently, what is considered to be “risk” is vastly underpriced. The primary reason for this is the implicit backstop of financial markets by central banks (something we will return to shortly when we discuss financial bubbles). Central banks over many decades have engaged in bailouts of financial markets repeatedly, and ever more strongly. Time and time again they have prevented prices from falling. Intuitively, the stocks that have risen the most (those with the strongest momentum) are likely to decline the most absent the backstop of central banks. Someday, when a financial crisis comes that central banks are unable (or unwilling, but much more likely unable) to curtail, the momentum anomaly will disappear. It will be shown that these momentum stocks were riskier all along.

The third explanation for market anomalies is one that Thaler has also extensively researched, and one for which the Nobel Committee heaped praise on him. This is something called the “limits of arbitrage.” One set of anomalies that researchers have unearthed occurs when two securities with the same underlying assets have prices that differ. This violates what is known as the “law of one price.”

One of the most famous examples of such an anomaly is one that Thaler discusses at length in his book, Misbehaving: the 3Com/Palm spinoff. Very briefly, 3Com was a tech company during the first dot com bubble. In 2000, 3Com decided it would spin off a subsidiary, Palm, by initially selling a fraction of its stake in Palm to the public (about 5%). Then, months later, each 3Com shareholder would receive 1.5 shares of Palm stock so that 3Com would divest all of Palm. During those months between the IPO of the 5% of Palm shares and the final divestiture, the market value of Palm was substantially higher than the market value of 3Com. In other words, the stock market was valuing 3Com’s business excluding Palm at a substantially negative value! Given that the lowest a stock price can be is $0, this made no sense.
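The arithmetic behind the negative “stub” value is simple. The prices below are illustrative numbers in the neighborhood of those reported at the time, not exact historical quotes:

```python
# Illustrative prices (assumptions, roughly matching press reports):
palm_price = 95.0      # Palm share price shortly after its IPO
com_price = 82.0       # 3Com share price on the same day
palm_per_3com = 1.5    # Palm shares each 3Com holder was due to receive

# Implied value of 3Com's business *excluding* its Palm stake:
stub = com_price - palm_per_3com * palm_price
print(f"implied stub value per 3Com share: ${stub:.2f}")
```

A negative stub means the market was pricing the rest of 3Com below zero, which is exactly the violation of the law of one price that shorting Palm would normally arbitrage away.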

Thaler argues that, in situations such as 3Com/Palm, two things are going on. One, irrational (mostly individual) investors are driving up the price of Palm to irrational levels. Two, something prevents the smart (mostly institutional) money from correcting the mispricing. In the case of 3Com/Palm (and in many similar cases), even though everyone knew about the mispricing (it was widely reported on in the mainstream media), it was virtually impossible to short Palm stock given how few shares were outstanding. Without being able to short the stock, investors could not arbitrage the difference in value between 3Com stock and Palm stock and therefore could not “fix” the mispricing. This is known as a “limit to arbitrage.”

I want to hold off for a moment on the discussion of whether Palm stock was so highly valued because of “irrational behavior” or some other reason. But I do want to make the point that the violation of “the law of one price” is not in and of itself evidence of irrational behavior precisely because of the limits of arbitrage. That is, investors are trying to arbitrage out the value difference. They are trying to fix the mispricing. They are trying to enforce market efficiency. They are quite simply unable to do so because of structural limitations to markets (i.e. the inability to short a stock). We will come back to this issue again when we discuss financial bubbles.

The fourth possible reason for why market anomalies exist is indeed the one favored by behavioral economists like Thaler: irrational investor behavior. Economists tend to assume that most stock market investors are what they call “noise traders.” As Thaler has pointed out, another (less polite) name for them used by some economists is “idiots.” The idea here is that most individual investors do not buy and sell stocks based on fundamental data or rational analysis, but on emotions or animal spirits. I freely admit that being an idiot might qualify as a “cognitive limitation.”

It should not surprise you, however, that I do not concede that such investing behavior is necessarily irrational, for three reasons. First, as long as I believe that I can sell for higher than I bought, I am making a decision that, to me, is rational. I may not have done much, or even any, fundamental analysis. I may not even be capable of doing such analysis. I may not even know what fundamental analysis is. It doesn’t matter. I’m still making a trading decision that I believe to be in my best interests. And recall from our discussion earlier about the momentum effect. Momentum usually works. So losing myself to “animal spirits” by buying a high-flying stock, regardless of so-called fundamentals or valuation, has very often been a profitable enterprise. We’ll return to this point.

The second reason why such investor behavior is not necessarily irrational is because maximizing my utility is not necessarily the same thing as maximizing my wealth, or the value of my stock portfolio. Other factors that make up utility must be considered. I won’t say more about this yet, but we will also come back to this idea shortly.

To understand the third reason why economists are confused about irrational investor behavior requires busting one of the most basic myths of all of economics and finance. There is no such thing as fundamental value. All value is relative. The value of a stock, or any other liquid asset is what someone else is willing to pay for it. Even what is known as fundamental analysis (a discounted cash flow, for instance) requires implicit and explicit estimates of other market (or relative) variables.

I am going to give you what I believe to be a better answer for why market anomalies exist. But before I do that, one comment on the mainstream view of “cognitive limitations” or investor irrationality as an explanation for such anomalies. As I’ve said, prominent anomalies such as the value and momentum effects are believed to have persisted for a long while now. However, they do not always work. That is, there are long (multi-year) periods where one or both of these anomalies do not work. This seems to me to be inconsistent with the thesis of cognitive limitations. Why would investors be cognitively limited in only some years and not others? Our brains work in some years, not others? We do analysis in some years, not others? We are more emotional in some years, not others? I don’t get it.

Now, on to the fifth and final rationale for market anomalies, and as I stated just above, the one I find most compelling. I do believe that anomalies are indeed due in good part to investor behavior, consistent with mainstream theorists. I just don’t believe that this behavior should be viewed as irrational. This is of course consistent with the main theme of this entire article. Most of the studies published by Thaler and other behavioral economists are essentially correct. They rightly show that most decisions made by individuals are based on calculating factors other than purely what will maximize wealth. But for the umpteenth time, this is not irrational.

As we’ve alluded to a number of times, we can segregate investors into two broad groups, institutional investors and retail investors. Institutional investors are generally considered the “smart money” and retail investors, the “dumb money” or the “noise traders” or the “idiots.” In order to explain why investor behavior should be considered rational, I need to talk briefly about the motivations of each of the two investor types.

When we talk about institutional investors, we refer to entities that manage money for some group of investors. These include mutual funds, hedge funds, insurance companies, pension plans, endowments, and others. But at the end of the day, there is always a person (or persons) responsible for making the day-to-day decisions of what stocks (or other assets) to buy and what to sell. These are the portfolio managers and the analysts. Even in the case of quantitative funds, there have to be programmers who code the algorithms. The point here, and it’s a crucial one, is that “institutions” don’t make decisions. People make decisions.

What motivates people? They want to maximize their (present valued) utility of course. So what motivates portfolio managers? First and foremost, they probably want to keep their job. Second, they probably want to get paid a lot of money. Third, they probably want to feel smart (or not feel stupid) compared to their peers. Etc. Obviously, having really high stock market returns is likely to help you keep your job, get paid well and make you look smart. But, going after high returns generally involves taking a lot of risk. This is not the strategy employed by most professional money managers.

Instead, most portfolio managers act to minimize the risk of losing their jobs. And they minimize the risk of having their salaries cut by avoiding the kind of poor performance that gives reason for investors to pull their money out and have their assets under management (AUMs) decline. The end result is that the vast majority of institutional managers aim to hit a benchmark rather than maximize gains, and they become what we call “closet indexers” where their holdings mimic an index such as the S&P 500.

In other words, the primary goal of a portfolio manager is to do what everyone else is doing. That way, you keep your job, keep your AUMs and keep your nice salary. It’s okay to lose money as long as everyone else is also losing money. Similarly, it’s not worth taking high risk to shoot for the moon. Hence it is incredibly difficult to be a contrarian investor. You run too high a risk of losing the patience of investors, losing your AUMs, and losing your job.

More than anything else, this mindset of those who manage institutional money probably explains the momentum anomaly. Because of their need to track benchmarks and to not be wrong, institutional money managers exhibit momentum investing probably even more than “dumb” individual investors do. Even professional investors want to look smart by holding a portfolio of winners, or to avoid looking stupid holding a portfolio of losers.

For example, it is widely known that many mutual fund managers will buy expensive, popular stocks towards the end of the mutual fund’s fiscal quarter or year (a practice known as “window dressing”). Why buy high? So that investors, who look at the fund’s list of holdings (published quarterly or annually), will see winners and think highly of the brilliant portfolio manager, even if the fund did not participate in the price run-up of those stocks. Are mutual fund investors irrational for being taken in this way? Not necessarily, since mutual funds don’t disclose all of their trades.

The vital point here is that economists assume that all market participants are trying to maximize trading gains. To do anything otherwise is irrational. But that’s not how the “smart money” works at all.

Now let’s talk about the other big class of market participants, retail investors. Retail investors are regular people, middle class or wealthy, who invest their own money in the markets. Typically they buy stocks directly in the stock market or purchase shares of mutual funds. Yes, they are generally (though not always) less sophisticated than institutional investors (“idiots” remember?). But like the portfolio managers at institutions, they buy and sell securities for reasons other than purely maximizing performance. They try to maximize utility, not simply wealth.

To many, stock market investing (really speculating) is also entertainment, not simply a way to save. Like going to Vegas, I pay for the entertainment. But unlike Vegas, where whether I win or lose is mostly based on pure chance, with investing, it is at least perceived to be based on skill and smarts (whether it really is, is another story). So, I get utility from the entertainment of picking stocks. And I also get utility from the status effect of picking winning stocks.

Let’s now try to explain the value anomaly. Recall that over time, value stocks have often outperformed high-growth, high-glamour stocks. In my view, this is because investors, especially retail investors, get utility from owning such glamour stocks. This utility more than compensates them for the extra return they could have earned by owning boring, out-of-favor stocks instead. It is fun to own the stock of Disney or Apple or Google or Facebook. These are stocks I can talk about at cocktail parties and water-coolers. Anybody want to talk about the insurance company or electric utility stocks that I own?

Even the smart money exhibits this behavior. They go to parties too, and industry conferences. More fun to talk about the high-glamour winners my fund owns than the boring losers. Plus, money managers sometimes get invited to corporate events hosted by the companies whose stock they own, or analyze. Which companies are likely to have more events and better events? The highly valued popular companies, or the struggling, perhaps cash-poor unpopular ones? Finally, the glamour stocks are more likely to be part of indices such as the S&P 500 which most funds track. Stocks that do poorly tend to get kicked out.

Let’s return to the 3Com/Palm situation. We explained that because of the inability to short Palm stock, the two securities could not converge to one price. This was the idea of “limits to arbitrage.” We did not, however, discuss why the price of Palm was so high to begin with. Should it be attributed to irrational investor behavior? No. Palm was one of the most glamorous of all the glamour stocks back at the absolute height of the dot com bubble. Importantly, there was a very limited number of shares outstanding (remember that initially, 3Com issued only 5% of Palm to the public).

In fact, a small number of shares was generally true for many of the technology stocks back then. Given that lots of people wanted to talk about owning these tech stocks at cocktail parties, there was high demand, and thus high valuations. This was a fun time. Investing in tech stocks was also a hobby. There was utility to be gained beyond just the monetary amount of the trading gains. And remember that it is not irrational to think I can buy high and sell higher. That was normal. And rational. We’ll return to this point shortly when we talk about financial bubbles.

Financial bubbles and irrationality

Before leaving the topic of behavioral finance, I want to address one final misconception believed by virtually all behavioral economists, not to mention financial journalists: that the existence of financial bubbles proves the irrationality of investors. While financial bubbles are not really a topic of research by Richard Thaler, they have been a major research topic of another behavioral economist, Robert Shiller, who shared the Nobel Prize in 2013.

There is no question that financial bubbles exist. To list a few: the famous Dutch tulip bubble of the 1630s, the stock market bubble of the 1920s that preceded the Great Depression, and Japan’s stock market and real estate bubbles of the 1980s. More recently, there was the late-1990s dot com bubble, the mid-2000s real estate bubble and today’s bubble in nearly every financial asset. Most economists admit that financial bubbles do happen (they have no idea why) but they deny that bubbles can be identified until after they have burst. I find this view ludicrous. The whole world knew we were in a tech stock bubble in 1999. Many wrote about a real estate bubble in 2007. And many more hold the view that virtually all financial assets are in a central-bank fueled bubble today. I do believe, however, that the timing of when a bubble will burst is impossible to predict.

As I said above, the mainstream economics view is that the existence of financial bubbles is proof of irrational behavior on the part of investors. I beg to differ. For at least four reasons, the existence of financial bubbles is absolutely not evidence of irrationality.

1. Market participation is NOT optional

In 2007, at the peak of the credit boom that shortly thereafter became the 2008-2009 financial crisis and the Great Recession, Chuck Prince, CEO of Citigroup, famously said, “When the music stops, in terms of liquidity, things will be complicated. But as long as the music is playing, you’ve got to get up and dance.” What he meant was that as a major financial institution (one of the largest in the world), Citigroup had to take the same kinds of risks, and go after the same businesses, as its competitors. If it didn’t, its revenue and profits would lag its peers, and investors and Wall Street analysts would call for Prince’s head. Though widely criticized for his statement, Prince was right. Citigroup had to dance.

We have already talked about the mindset of institutional money managers. It is okay to be wrong as long as everyone else is wrong. It is not okay to under-perform one’s peers, or under-perform one’s benchmark. That is to say, if the market is going up and everyone else is taking the ride, as a portfolio manager, you need to take the ride too. You cannot sit in cash or other less risky assets. You play the momentum game or you lose your job.

Even more obvious is the fact that certain institutions are essentially legally obligated to take risks. Pension funds and insurance companies, for instance, being highly regulated, must earn investment returns high enough to meet future liabilities. But with safer assets at historically low (5,000 year lows, that is) rates of return thanks to the monetary policies of the world’s central banks, these institutions have absolutely no choice but to buy riskier assets such as equities or high yield bonds. The same is even true for individual investors saving for retirement. While not legally obligated to take risk to meet return targets like pensions, individuals too cannot afford to invest safely in today’s environment. Earning 0 to 1% in savings accounts or CDs just doesn’t make the retirement math work. They too have no choice but to take more risk.
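The retirement math here is easy to make concrete. A minimal sketch follows; the contribution amount, horizon and rates are purely illustrative assumptions of mine, not figures from any study. It compounds the same annual savings at a savings-account-style rate versus an equity-style rate.

```python
def future_value(annual_contribution, rate, years):
    """Future value of equal end-of-year contributions compounded at `rate`."""
    total = 0.0
    for _ in range(years):
        total = total * (1 + rate) + annual_contribution
    return total

# Illustrative assumptions: $10,000 saved per year for 30 years.
safe = future_value(10_000, 0.01, 30)    # savings-account-style return
risky = future_value(10_000, 0.05, 30)   # equity-style return

print(f"At 1%: ${safe:,.0f}")   # roughly $348,000
print(f"At 5%: ${risky:,.0f}")  # roughly $664,000
```

At 1% the saver ends up with barely half of what 5% produces on identical contributions. That gap is the arithmetic pushing savers out of CDs and into riskier assets.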

The point I am trying to make is that in nearly all cases, market participation is not optional. Even if you think valuation metrics are high, or asset prices are expensive, you still have to be invested in those assets. And there is nothing irrational about trying to keep your job, keep your salary, keep your retirement funds growing.

2. Buy high, sell higher usually works

We’ve already talked a lot about how momentum investing, buy high, sell higher, usually works. For at least the past three decades, this has generally been true for nearly all financial assets including stocks, bonds and real estate. While past performance is no guarantee of future success, as they say, that same past performance is an argument that momentum investing constitutes positively rational decision making.
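Here is a toy sketch of the “buy high, sell higher” rule. The price series is entirely made up for illustration; the rule simply holds the asset only after an up move.

```python
# Purely illustrative momentum sketch on a synthetic, trending price series.
prices = [100, 103, 107, 112, 118, 115, 111, 108, 112, 118, 125]

position = 0  # 1 = long, 0 = out of the market
pnl = 0.0
for i in range(1, len(prices)):
    # Collect today's move if we were long coming into it.
    pnl += position * (prices[i] - prices[i - 1])
    # Momentum rule: be long tomorrow only if today's move was up.
    position = 1 if prices[i] > prices[i - 1] else 0

print(f"Momentum P&L: {pnl:+.1f}")                       # +25.0 on this series
print(f"Buy-and-hold: {prices[-1] - prices[0]:+d}")      # +25
```

On this (contrived) trending series, chasing strength matches buy-and-hold while sitting out the drawdown in the middle, which is the intuition behind the strategy: in persistent trends, buying high and selling higher is not a foolish rule at all.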

If I see my neighbor make a fortune flipping houses, why shouldn’t I do the same? I probably consider myself to be smarter, and perhaps more highly educated. What can my neighbor do that I can’t? Sure, the music might stop some day, but probably not tomorrow. And by the time it does, I’ll be rich too. To you and me, this might seem like irresponsible behavior, but is it irrational? I don’t think so, especially since it usually works. And recall that when deriving utility, money is mostly just a proxy for status. Why not take the risk if my neighbor is taking the risk? If he or she gets rich and I don’t, I’ll regret it. My social status, and hence my utility, will suffer.

3. Central banks always come to the rescue

As I’ve alluded to a number of times, one vital lesson that the last several decades have taught the world is that central banks always come to the rescue of the financial system and financial markets. Even more so than just low interest rates or printing money, it is this backstop and the promise of bailouts that encourages (subsidizes) risky behavior. And this is exactly the primary cause of financial bubbles. Why not take risk if there is little downside? Take the risk and make the millions (or billions). If things go bad, the government and the central bank will clean up the mess. What’s so irrational about that?

4. Contrarian limits to arbitrage

Earlier we discussed how limits to arbitrage can lead to violations of the “law of one price” and to what economists wrongly assume to constitute irrationality in financial markets. On a much larger scale, the same limits to arbitrage can prevent financial bubbles from being naturally curtailed, or from being stopped before they inflate in the first place. Recall that everyone in the world knew that the price of Palm stock was too high relative to its parent 3Com. However, nobody could effectuate the arbitrage because Palm shares were impossible to short.
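The arithmetic of the Palm episode is worth sketching. The figures below are the approximate, widely reported numbers from Palm’s first trading day in March 2000 (with roughly 1.5 Palm shares due to each 3Com holder in the planned spin-off); treat them as illustrative rather than exact.

```python
# Approximate, widely reported figures from Palm's first trading day
# (March 2000); illustrative, not exact.
palm_price = 95.06          # Palm closing price
three_com_price = 81.81     # 3Com closing price
palm_shares_per_3com = 1.5  # Palm shares each 3Com holder was due to receive

# Value of the Palm stake embedded in one 3Com share:
embedded_palm = palm_shares_per_3com * palm_price   # about $142.59

# Implied value of everything else 3Com owned (the "stub"):
stub = three_com_price - embedded_palm              # about -$60.78

print(f"Embedded Palm value per 3Com share: ${embedded_palm:.2f}")
print(f"Implied stub value: ${stub:.2f}")  # negative: the law of one price violated
```

The market was pricing the rest of 3Com’s business, which had substantial cash and revenue, at a large negative value. The mispricing was plain to everyone, yet with no Palm shares available to borrow and short, no one could trade it away.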

Similarly, as I’ve mentioned before, plenty of participants in financial markets have recognized bubbles well before they have ultimately popped. But not knowing when exactly the bubble will pop (as I also said earlier, a forecast I believe to be impossible) prevents them from shorting the market and “correcting” the bubble. In simpler terms, it is virtually impossible to be a contrarian investor in today’s market environment (and in the market environments of the past few decades).

Being contrarian risks margin calls. I can’t hold my shorts if the market continues to go up. Being contrarian risks underperforming the market. I lose my AUM as impatient investors pull out. I risk looking like an idiot compared to all the other brilliant managers playing the momentum game and closet indexing. The bottom line is that I may know the market is in bubble territory and will some day correct, but unless I know the timing of that correction (which I cannot), it is far too risky to bet against the market, and to “fight the Fed.” No less than 3Com/Palm, this should be considered a limit to arbitrage, and it is our fourth reason why financial bubbles are not evidence of irrational behavior.

That market participation is mandatory, that momentum investing works, that central banks subsidize risk, and that being a contrarian investor is virtually impossible: together, these four reasons give us much of an understanding of the causes of frothy markets and financial bubbles. But as we’ve seen, all four of them stem from rational actors in financial markets making what to each actor are perfectly rational decisions.

In 1996, Federal Reserve chairman Alan Greenspan made his famous “irrational exuberance” speech, commenting on the seemingly high valuation of common stocks (they were to go much higher, rising until 2000). Greenspan might have been correct about the “exuberance” part, but he was wrong about it being “irrational.” Market participants were acting rationally, maximizing their own utility. Investors were simply reacting to the incentives laid out, most importantly, and mostly unknowingly, by the unwise risk-subsidizing policies of the Federal Reserve.

Conclusion

Until the ascendance of behavioral economics, mainstream economists held the naive and erroneous belief that human beings always set out to maximize their income or wealth. And so it was said that “homo economicus” was a rational creature. But then came Richard Thaler and others who showed that this belief was indeed naive and erroneous. The models were wrong. Human beings are not always, or even typically, wealth maximizers.  We are instead, well…human beings, shaped by eons of evolution and biology. And in helping to show this, Thaler and the community of behavioral economists no doubt deserve some credit.

But the behavioralists made the same two mistakes as those economists they attempted to supplant. First, they forgot to question their own assumptions. They maintained the same non-colloquial definition of rationality, and kept the same proxies for utility they inherited, without a bit of thought as to whether they made sense, or had any basis in reality. Like the economists who came before them, they demonstrated that they too have very little understanding of what factors truly drive human decision making.

Then their work escaped the chambers of academia and became popular. Their studies were easy to understand, common-sensical and not too mathy. Fun, even. “Irrationality” made great headlines, and journalists and the mainstream media ate it up. Bestsellers were published. TED talks were given. Politicians began to listen. The cookie jars of government started to open up to the behavioral economists. Money and power! Power and money!

And naturally, with the money and the power comes the arrogance. And this is their second mistake. The mistake that since the beginning of time has been made by those smart, but not wise. The mistake that til the end of time will be made by those smart, but not wise. Humans are irrational, but we the enlightened ones have the fix! Nudge! Libertarian Paternalism! Economists to the rescue! Government to the rescue! The people must be saved from themselves!

Causes of income inequality: the short version

I recently published a very (very) long post on the subject of income inequality (which I encourage you to read!). Given its length, I thought it might be helpful to readers to publish a shorter, Cliffs Notes version. Here goes.

The dramatic increase in income inequality over the past several decades is one of the most important issues facing the U.S. and the world today. Rising income inequality has led to the election of President Trump, the Brexit outcome in the U.K. and the growing popularity of populist, isolationist, fascist and socialist leaders around the world. It has also resulted in a backlash against capitalism. And while the topic of income inequality is increasingly at the forefront of both mainstream media coverage and economic study, I believe that neither economists nor journalists have correctly identified its underlying causes.

However, before we can get to those causes, we need to recognize that there are two separate trends contributing to rising income inequality. The first type of income inequality is what we’ll refer to as the decline of the middle class. This is primarily a phenomenon in the U.S. and Western Europe (globally, the middle class has grown enormously over recent decades). The second type of income inequality is the rise of the wealthy and the super-wealthy, which is truly a global phenomenon. These two aspects of income inequality share many of the same underlying causes; however, their stories differ, and we will discuss each of them in turn.

What do we mean by the decline of the middle class? For starters, real (inflation adjusted) wages for many workers have declined or stagnated. The middle class’s share of national wealth has also shrunk, while middle class indebtedness has risen sharply. Workers also face far more job insecurity than ever before. And while the headline unemployment number in the U.S. is very low (currently under 5%), this statistic does not reflect the high number of able-bodied people out of the workforce, nor the magnitude of underemployment, further exacerbated by the so-called “gig” or “sharing” economy. Finally, and most frightening, these trends are affecting young people most dramatically, leading many to debt, to despair and to drug addiction.

In my view, the decline of the middle class has been caused by a combination of three trends: globalization plus regulation plus monetary policy-induced financialization. Here’s our story. Beginning in the 1980s and 1990s, China, along with many other countries, joined the global economy. With abundant low-skilled labor, they put pressure on manufacturing wages in the U.S. and other developed countries. Due to regulations, unions and legacy retiree compensation, manufacturing companies were not able to lower their costs of labor. Instead jobs were outsourced, off-shored and lost while companies went bankrupt and entire industries disappeared.

Meanwhile, in its naive belief that all inflation is monetary, and seeing no apparent inflation due to the effects of global trade, the Federal Reserve printed money, kept interest rates low, engaged in repeated bank and financial market bailouts and created a three decade long financial bubble. The middle class’s cost of living went up instead of down. Real estate, education and healthcare became unaffordable. The only way for consumers not to suffer in the near-term was to take on debt, debt that can never be repaid. More jobs were lost as technology disruption and automation was subsidized. Wall Street grew at the expense of Main Street. Monopolistic crony capitalism came to rule the economy. And last but not least, long-term productivity and the future prospects of the middle class were even further mortgaged as retirement savings, pensions and insurance policies were bled dry.

As damaging as a declining middle class is to society, the shocking rise of the wealthy and super wealthy is even worse. This trend has the same three causes that explained the decline of the middle class: globalization, regulation and monetary policy fueled financialization. However, here the story is different. The rise of the wealthy is primarily a result of monetary policy and financialization. It was then exacerbated by regulation and exacerbated even more by globalization.

Cheap money, subsidized capital and low interest rates led to rising prices for all financial assets which greatly benefited the wealthy who naturally own these assets. Loose monetary policy also led to a wage/price inflationary spiral for the wealthy as prices of luxury goods and services accelerated leading to higher wages for high-skilled (i.e. 1%) workers, not to mention pricing out the middle class from cities like New York and San Francisco.

Monetary policy also led directly to the dramatic growth of Wall Street and financialization of the economy. As banks and financial markets were subsidized, Wall Street grew, and grew, and grew. Increasing revenue brought increasing profits and increasing profits brought increasing bonuses for investment bankers, traders, institutional salespeople, private bankers, quants and others. Hedge funds minted billionaires even while their performance lagged safer investments. Tech startups also created billionaires as cheap capital subsidized growth and valuations, fostering a winner-take-all mentality and unprecedented consolidation and monopoly power in the technology industry.

CEOs of public companies went from earning 30 times the average worker to more than 300 times thanks to monetary policy, government regulation, tax policies and crony capitalism that together favored and subsidized stock options, short-termism, growth, financial engineering and consolidation. Finally, globalization exacerbated all of these unfortunate trends as central banks throughout the world executed the same easy money playbook. Cheap money flowed across borders, asset bubbles sprung up everywhere and corruption allowed the wealthy to amass not just more wealth, but ever greater political power.

Can we solve the problem of income inequality? In theory the answer is yes. In practice the answer is probably not, as nothing that I am about to suggest is realistic given today’s toxic and corrupt political system. There is absolutely no realization among the economics profession, the mainstream media or the political community of the disastrous consequences of “modern” central banking. Nor is there any reason to believe that those in power who have benefited so much from decades of easy money and crony capitalism will change their viewpoint.

But let’s try anyway. The first thing we must do is to normalize interest rates, or better yet, get central banks out of the business of managing the economy and out of the business of bailing out the financial sector. We must make it clear that risk will no longer be subsidized and financial firms will no longer be bailed out.

If we do this, money will dry up and Wall Street will shrink significantly, along with bonuses and financial services jobs. So too will the technology sector contract as tech valuations plummet. 1% cities like New York and San Francisco will once again become affordable for doctors and lawyers and teachers and police officers. The era of the hedge fund billionaire and the tech mogul will be at an end. Jobs that serve no social purpose will disappear, like most (but not all) investment bankers, consultants, hedge fund analysts and private equity professionals. Technology disruption and automation will be slowed. Companies that don’t make money will disappear. Companies that actually make money will thrive and will relearn how to invest in their own employees. Smart people will once again become doctors and scientists and engineers and teachers.

Meanwhile, while the financial system goes through a reset, as almost happened (and should have happened) in 2009, we must also reduce the massive regulations that hinder hiring, that put a floor on compensation, and that support and subsidize big business, big labor and crony capitalism. We must un-monopolize our monopolistic and failing education system. Ditto for our healthcare system. We must find a way to upgrade our infrastructure and to restructure the promised pensions and retirement costs that will ultimately bankrupt our governments.

What must we not do? We must not give in to the populists, socialists, fascists and isolationists of the far left and the far right. We must not abandon capitalism. We must not give up on global trade, if for no other reason than abandoning trade will lead to war (though there are many other good reasons in favor of trade). We must not turn our backs on immigration for it is the only way to achieve significant economic growth, to afford our (hopefully shrinking) welfare state and to bring back manufacturing. We should not try to fix income inequality through re-distributive taxes, or through more regulation or more unionism. Each of these will make things worse, not better.

In summary, we must let the free market work. Let companies succeed that deserve to succeed. Let companies fail that deserve to fail. Let people work who want to work. While it may take a generation or more to re-orient our economy, it is the only way to increase productivity, to revive the middle class, and to preserve our way of life.