Over nearly 15 years, investors grew accustomed to a “new normal” of zero-interest-rate policies. First adopted in response to the Global Financial Crisis, these policies were an emergency remedy for economic collapse. Aging populations, subpar productivity growth, and chronically weak demand in the United States, Europe, and Japan muted GDP growth and kept rates stuck at ultralow levels for most of the decade that followed. When the Fed returned its benchmark rate to near zero in 2020 and launched large-scale purchases of government and, for the first time, corporate securities in response to a once-in-a-century pandemic, economists predicted that it would be at least another decade before interest rates could return to the levels seen in the 1990s and early 2000s.
Of course, a prolonged period of low interest rates is not normal. Keeping rates near zero punishes savers and encourages excessive risk-taking. It can erode the profitability of banks and life insurers and push pension plans toward larger risks. Over time, the expansion of effectively free leverage in the system leads to significant distortions in financial markets. How many companies optimized their operations for a low-interest-rate environment, relying on bank loans for core operating functions like payroll? How many pensions levered their portfolios to meet long-term targets that had once seemed out of reach? How much risk did banks take in reaching for higher-yielding loans? It looks as though we are about to find out.