2 Don’t Know Much About GDP
As a guy who never paid enough attention to professional sports—I love watching skilled athletes, but there are too many commercials and time-outs—to this day I get anxious and uncomfortable when people start talking about sports. For one, I don’t know enough of the vernacular. (For example, pre-Google, I didn’t know what a “triple-double” was in basketball. When I finally asked a friend, he was so convinced I was kidding that he wouldn’t tell me.)
A lot of people feel that way about economic terms of art, so this section presents the skinny on some of the biggies. Be forewarned: These are not Google or Wikipedia definitions. Along with the basics, I try to impart a sense of what these statistics reveal about how well our economy is working, and not necessarily from the perspective of the financial market analyst but rather from that of working families throughout the income scale.
My hope is to both guide you through a painless tour of ideas and concepts you’ve been dying to get under your belt—like what does GDP stand for, anyway? And what’s up with the Federal Reserve?—and provide some commentary on how they relate to the three principles that frame much of our analysis.
I hear a lot of talk about gross domestic product, or GDP. What is it? Should I get excited when it goes up and depressed when it doesn’t?
GDP is the dollar value of the economy—sort of like what you’d have to pay for it if you wanted to buy it. It includes all the goods and services we buy, like cars, haircuts, health club memberships, books, lattes, and so on. It includes investments in factories, businesses, and homes. It includes government spending at all levels (federal, state, and local), and it includes net exports (the value of what we sell abroad minus what we buy from foreigners).
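If you like seeing the arithmetic spelled out, here's a minimal sketch of that accounting identity in Python. The formula (consumption plus investment plus government plus net exports) is the standard one; the dollar figures are invented round numbers, purely for illustration:

```python
# GDP by the expenditure method: C + I + G + (X - M).
# All figures are illustrative, in billions of dollars.
consumption = 9_700   # goods and services households buy
investment = 2_100    # factories, businesses, homes
government = 2_700    # federal, state, and local spending
exports = 1_400
imports = 2_000

net_exports = exports - imports    # what we sell abroad minus what we buy
gdp = consumption + investment + government + net_exports

print(f"GDP = {gdp:,} billion")    # GDP = 13,900 billion
```

Change any component and GDP moves dollar for dollar, which is why a surge in, say, government spending shows up directly in the headline number.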
That much is standard accounting. Boring, I guess, but I still find our system of national accounts—the way we conceive of and track this stuff—to be a great intellectual triumph of economics (go to the U.S. Department of Commerce’s Bureau of Economic Analysis Web site [http://www.bea.gov] to see the hundreds of tables with millions of data points that make up the national accounts; it’s like a porn site for data nerds). What’s interesting is what’s left out.
First, it’s “gross.” Not as in “disgusting,” but as in “not net.” Thus, any spending we engage in shows up in GDP, even if it’s to replace stuff that fell apart or was destroyed. This way of scoring growth can have a perverse effect: If there’s a hurricane or a flood somewhere, chances are that place’s GDP will show up as having grown because spending will increase there to replace what was lost. In 2005, the year of Hurricane Katrina, GDP was up 3.2 percent while net domestic product (net means gross minus stuff that was destroyed or used up) was up 2.5 percent.
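And the gross-versus-net distinction that the Katrina comparison turns on is simple subtraction (numbers invented again):

```python
# Net domestic product (NDP): gross product minus the value of what
# was destroyed or used up. Figures are invented, in billions.
gdp = 13_900           # gross domestic product
depreciation = 1_700   # capital worn out, destroyed, or used up

ndp = gdp - depreciation
print(f"NDP = {ndp:,} billion")  # NDP = 12,200 billion
```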
Similarly, GDP often gets a big boost from wars, because we increase government spending to create arms and then often replace the stuff we use up. Were the Iraq War to end in 2007–08, defense contractors would still be billing the government for 10 years. That may not sound like productive spending, but it will show up as faster GDP growth.
GDP also fails to reflect environmental degradation. To the contrary, if you destroy coastal flood plains to develop a port, you boost GDP. You add more to GDP if you sell a fleet of expensive SUVs than if you sell a fleet of fuel-efficient but cheaper hybrids, because “gross” product doesn’t deduct the environmental harm of the one versus the other. However, as discussed later, some farsighted economists are now taking seriously the economic threat to growth from global warming.
The national accounts—the nation’s spreadsheets wherein we calculate all this stuff (see the BEA Web site, referenced above)—also leave out the value of nonmarket work. If I pay somebody to watch my kids, it shows up in GDP. If I do it myself, it doesn’t. It’s an interesting example of how our measures of national wealth leave out “home production,” creating a considerable bias against what economist Nancy Folbre calls “caring labor.”
Do all these shortcomings mean that GDP doesn’t paint an accurate picture of the economy? Yes and no. To the extent that you consider these omissions important and large—and I do—GDP as measured certainly doesn’t give you an accurate level of economic activity. It overcounts our national wealth by failing to “net out” losses and environmental degradation. It undercounts by leaving out the value of caring labor. But, accepting its limits, GDP still provides us with the best available indicator of the trend—even with the aforementioned shortcomings, GDP reports reliably tell you whether and how fast the economy is growing. And since GDP growth has a profound impact on economic activities we care a lot about, like jobs and incomes, this is something we want to keep a close eye on.
That said, we should develop an alternative measure alongside the official one to account for the big omissions—most important, to account for bad environmental developments. Thanks to some enterprising, ethical scholars, the Green GDP movement has made some nice progress. Yes, it would be controversial, but our government statistical offices should give them a boost.
Crunchpoint: Though GDP—the dollar value of our economy—leaves out some important stuff, it is still a useful measure of the size of the economy. In particular, we need to watch its trend—whether and how fast it’s growing or shrinking—because much else flows from that: most important, jobs and incomes. When GDP growth slows below its trend, unemployment rises, which (a) isn’t pretty and (b) raises our next question.
What is unemployment? And why does low unemployment seem to spook certain economic entities, like the Federal Reserve and the stock market, so much?
To be unemployed, the way the officials describe it, is to be seeking work. If you’re not looking for a job, even if you gave up because you couldn’t find one, you’re not counted as unemployed (such “discouraged workers” are not in the workforce). Same if you’re “underemployed,” which is a disease I take up in the next question.
So that’s the definition. How many people does it describe? For the year 2007, there were 7.1 million unemployed persons, generating a rate of 4.6 percent, which is actually pretty low in historical terms. The average over the 1990s was 5.6 percent; over the 1980s, it was 7.1 percent, due to some ugly high-unemployment years in the early ’80s.
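For the numerically inclined, the rate itself is a single division: unemployed over labor force, where the labor force counts only people working or actively looking. A sketch using the 2007 figures above:

```python
# Unemployment rate = unemployed / labor force.
# The labor force counts only people working or actively looking;
# discouraged workers who stopped searching are excluded from both terms.
unemployed = 7.1e6         # 2007 average, from the text
unemployment_rate = 0.046  # 4.6 percent, from the text

labor_force = unemployed / unemployment_rate
print(f"Implied labor force: {labor_force / 1e6:.0f} million")  # ~154 million
```

Note what the denominator hides: a discouraged worker who stops searching drops out of both the numerator and the denominator, which can nudge the rate down without a single job being created.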
All of these numbers and rates raise the question, when is unemployment too high? Shouldn’t we aspire to zero percent? Most of us want the lowest rate possible—there are those who would take exception to this sentiment, for reasons discussed in a moment—but zero is not realistic. There will always be at least “frictional” unemployment: people between jobs and those coming into the job market shopping around. Overall unemployment has never fallen below 2.5 percent, though the rate for college-educated workers is typically in that range or lower.
In fact, there’s a big difference in unemployment rates by education level; in mid-2007, the rate for college grads was about 2 percent, while for high school dropouts it was about 7 percent. There are also large racial disparities; unemployment among African-Americans is typically about twice that of whites. Again, in mid-2007, unemployment for whites was 4 percent; for blacks, 8.4 percent. Part of this relates to the educational differences just noted, but discrimination plays a role here as well. That’s one of the reasons we want to run as tight a labor market as we can: We want the demand for labor to be so strong that employers can’t afford to discriminate. When the overall unemployment rate fell to 3.7 percent in April 2000, the rate for blacks fell to 7 percent, the lowest on record.
And this goal—very tight labor markets—runs us smack into the second part of the question.
Let me start by telling you about the Saturday paradox. On the first Friday of every month, the Bureau of Labor Statistics releases its report on unemployment and job growth in the prior month.
The next day, the papers write about it, and herein lies the paradox. If it’s a strong report, meaning solid job and wage growth with a tick down in the jobless rate, the “markets”—the stock exchanges, the bond traders—often get nervous. The reason: They’re worried about an “overheated,” or inflationary, economy, where a tight job market leads to higher wages, lower profit margins, and higher prices (faster inflation). The big worry by the investor class (and no, that doesn’t include everybody—stock market wealth is highly concentrated) is that the Federal Reserve might raise interest rates and thereby slow the economy, which could cut into their racket.
So what we have here is basically a fundamental split between the aspirations of Wall Street and those of Main Street. Not that the suits on Wall Street want a recession, but neither do they want tight job markets forcing employers to bid up workers’ pay. Like I said (see principle #2), economic relationships don’t always play out like you’d expect, especially when there’s real money on the table and the question is how it’s going to be distributed.
To really get the dynamics at play here, you’ve got to understand the role of the Federal Reserve, a point I devote considerable space to in the next chapter. The critical point in this context is that about every six weeks, these wild and crazy Fed officials get together and do their best Goldilocks imitation, poring through bowls of economic data to decide whether the economy is too hot, too cold, or just right. And if they think it’s too hot—if the unemployment rate looks too low to them—they’ll hit the brakes (as explained later, they do this by raising the key interest rate under their control: the federal funds rate).
OK, but what’s up with the Wall Street/Main Street split? The logic seems intact, but is it justified? Is this the best way to run a railroad?
I don’t think so, for a number of reasons. First, many economists remain wedded to the idea that there is an unforgiving trade-off between unemployment and inflation. And it’s not just that they worry that tighter job markets (lower unemployment) will kick inflation up a notch, like from 2 percent to 2.2 percent (in fact, there’s pretty good evidence in support of that relationship). No, they fear that if unemployment goes low enough, inflation will spiral out of control, from 2 percent to 2.2 percent, 2.5 percent, 3 percent, and so on, until the Fed has to slam on the brakes and trigger a recession.
These fears should have been put to rest in the latter 1990s, when the tightest job markets in decades were accompanied by slower-growing prices, but this ideology goes back to Milton Friedman, and one unfortunate aspect of economics is that too often when the data contradict the theory, economists blame the data.
A second reason for the split is a bit more prosaic: Unemployment and inflation are, in no small part, class-based concerns. Which is to say, when unemployment is too high, or job growth is too slow to keep up with the growing population, its costs fall largely on working families in the bottom half of the income scale. While very fast price growth of course hurts everyone, when inflation is ticking up, it’s more painful to wealthy households with rich assets, or big financial market players worried about a Fed rate hike.
Low unemployment has always been very important to low- and middle-income families, but it’s a lot more so now. When the unemployment rate is too high—if we settle the trade-off in favor of those at the high end of the income scale—many workers will lack the bargaining power they need to claim their fair share of the growing economy. Remember, just because GDP is growing doesn’t mean its benefits are broadly shared. In an economy with diminished union power, low minimum wages, and tough global competition, a truly tight job market (called “full employment” by economists) is the working person’s best friend. History teaches us, quite unequivocally, that in its absence, growth is less equally distributed, and now more so than ever.
There are a couple of themes here worthy of amplification in our national discussion of things economic. First, the stories told by these economic indicators—inflation, unemployment, GDP—are not as cut and dried as they might seem. Presidents will brag about GDP growth while ignoring the extent to which that growth is reaching the very people responsible for it. Newspaper stories will stress that an uptick in unemployment is a positive development since the Fed won’t need to raise interest rates, without accounting for the problems caused by higher unemployment.
Second, it’s easy and comforting to believe that the paths these indicators follow, like the paths of the celestial orbs, are outside of our control. You might support the ideas I’ve stressed regarding full employment, but getting there is outside of our control, right?
Wrong. Remember, economics is not a spectator sport, and the extent of unemployment is a legitimate concern for all of us. Tight labor markets are an essential antidote to the crunch, far too important to leave to a bunch of Wall Street suits and crusty central bankers.
Crunchpoint: The unemployment rate is the share of the workforce looking for a job—and this indicator has a lot of bearing on the extent of the crunch. It takes a truly tight job market—the kind of job market that gives workers some bargaining power (see principle #1)—to give most folks a shot at an equitable distribution of the fruits of their labor. The problem is, despite recent evidence to the contrary, some influential high-rollers in the stock market and at the Federal Reserve believe that low unemployment leads to an overheated economy with price pressures and squeezed profit margins. The other problem is that the folks on one side of this argument—the Fed—can actually do something about it, and in doing so, boost or undermine the efforts of working people. How about that? A seemingly straightforward indicator like the unemployment rate can provide considerable insight into whom key policy makers are pulling for or against.
The rules I was taught
Fail to explain what I see.
Perhaps they are wrong.
That’s all very interesting, from 40,000 feet up. Here on the ground, I may not be jobless, but I certainly don’t feel satisfactorily employed. Is there such a thing as underemployment?
Yes, there is, and it tends to run at close to twice the unemployment rate. In 2006, when the unemployment rate was 4.6 percent, the underemployment rate was 8.2 percent, which added 5.5 million to the ranks of the underutilized.
This is an important and underappreciated question, because economists have a tendency to be quite absolute about certain concepts that are not always best understood in the context of such fine distinctions. A better way to conceive of the concept of unemployment is to think of what I’d call a “labor utilization continuum” (LUC). On one end of the continuum, your contribution to the economy through your job is at its full potential. You’re working as many hours as you desire (this is important, because lots of part-timers like it that way), and your skills are fully utilized—that is, you’re not a rocket scientist constructing “venti mocha lattes” unless that’s the way you want it.
On the other end of the continuum, you’re out of work. But the way we count it, only folks at that far end of the LUC are counted as unemployed, though they’re surely not the only workers down on their LUC . . . (sorry). In fact, there exist a number of other categories counted by the Bureau of Labor Statistics (BLS) as underutilized, and a few more, counted by me.
The pie chart below documents millions of folks whose labor isn’t being fully tapped. Data from 2006 show that after the seven million officially unemployed, the next-largest group is composed of over four million involuntary part-timers; they’d rather have full-time work but they can’t find it. Since they’re working, they’re not counted among the unemployed, but that’s probably not the way they see it. For the record, over 80 percent of part-timers are voluntary, so involuntary part-timers are a minority. Still, there are a lot of them, and their numbers swell in times of weak job markets, though you won’t see that swelling in the official jobless rate.
The other groups of underemployed persons, the so-called marginally attached, are neither working nor looking for work (it’s the latter that keeps them out of the unemployment rate; if you’re not seeking a job, you’re not counted). The BLS judges them to be slightly, or marginally, attached to the job market, even though they’re not looking. Some are “discouraged” workers: They gave up looking because they couldn’t find work, gainful or otherwise. Others face a steep barrier between themselves and the job market, such as a lack of child care or transportation.
Then there’s a group the BLS doesn’t count, an admittedly hard group to identify: people who have jobs that don’t fully tap their skills. Especially when the job market is not particularly tight, these folks show up in all kinds of places, from the coffee bar example noted above to the Russian engineer driving your cab. Economists used to look at this type of underutilization, and they found both high levels and an increasing trend. That is, by one measure, in 1990, 20 percent of the college-educated workforce held jobs that did not require their skill levels, up from about 12 percent in the mid-1960s: In our terms, they were underutilized. With about 42 million college grads at work in 2006, that implies over 8 million highly educated workers whose potential is not being tapped. The earlier research was criticized as not being precise enough, but my own updates, which use much better data, yield numbers that are only a few percentage points lower.
Figure 2.1. Underemployment in 2006, showing levels in thousands. (Source: U.S. Bureau of Labor Statistics.)
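If you want to see how a broader count might work, here's a sketch in the spirit of the BLS's wider underutilization measures. The round figures approximate the 2006 numbers above, but the marginally attached and labor force counts are my rough assumptions, and the grouping is my illustration, not the official formula:

```python
# A broader "labor underutilization" rate, in the spirit of the BLS's
# wider measures. Figures are round 2006-ish numbers in millions; the
# marginally attached and labor force counts are rough assumptions.
unemployed = 7.0              # officially unemployed: jobless and looking
involuntary_part_time = 4.2   # want full-time work but can't find it
marginally_attached = 1.5     # want a job, not looking (incl. discouraged)
labor_force = 151.4           # everyone working or looking

underutilized = unemployed + involuntary_part_time + marginally_attached
broad_rate = underutilized / (labor_force + marginally_attached)
print(f"Broad underutilization rate: {broad_rate:.1%}")  # ~8.3 percent,
# close to the 8.2 percent underemployment figure cited earlier
```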
And, of course, underemployment doesn’t apply solely to college grads. The decline of our manufacturing sector has left lots of displaced skilled machinists out there who may not be college graduates but whose potential is badly underutilized in the low end of the service sector. And then there are those who, due to racial discrimination or lack of exposure to quality education, never even had the chance to discover their labor market potential. In what is a true human tragedy, their talents, never tapped or developed, are lost forever.
Any way you cut it, there are millions more underemployed persons, even in good times, than you’d know from the official measure. Such underutilization is a problem for them (less labor market income and thus lower living standards than they’d prefer), as it is a problem for us (they could and should be contributing more to output). I discuss one important way to push back against this problem in Chapter 5: Economic officials must take the necessary steps to ensure a full-employment job market. Robust job creation is the enemy of underemployment.
Crunchpoint: Unemployment as currently measured is actually a pretty extreme location on a much wider continuum, a wide range of conditions that run from working like a hamster on a treadmill to being a complete couch potato. Between these extremes, there are millions more workers, or potential workers, whose labor is underutilized in one way or another. Being underemployed—as the questioner has recognized— hurts their economic prospects and undermines our economy’s potential.
Economists and business reporters seem to go gaga over productivity growth. Why is it such a big deal?
An economist lands on a distant planet. He is greeted by alien emissaries. Anxious to learn about the planet, the economist questions the aliens. But instead of the classic “Take me to your leader,” he inquires, “What is your trend productivity growth rate?” (This scares the aliens and they zap him into dust.)
When economists talk about productivity growth, we get a faraway look in our eyes, and you can almost hear a choir sing a solemn chord on high. That’s because productivity growth, or output per hour worked, is a measure of efficiency, and economists love efficiency.
Let me explain. Suppose we have a little doughnut factory (mmm . . . doughnuts . . . mustn’t write when hungry . . . mustn’t go downstairs for doughnut . . . mustn’t buy doughnuts on days when working at home . . . must concentrate . . . doughnuts kill productivity . . . and, we’re back!), and we make 100 doughnuts per hour. Next week, with the same number of workers, we make 110 doughnuts per hour. Our productivity just went up 10 percent because we’re making 10 percent more doughnuts with the same “labor inputs” that used to make only 100. (Note clever substitution of “labor inputs” for hours worked—now you too can go on TV and sound obscure and annoying.)
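In case the doughnut math went by too fast, here's the same calculation, spelled out:

```python
# Productivity = output per hour of labor input.
# Growth is the percent change from one period to the next.
def productivity(output, hours):
    return output / hours

before = productivity(output=100, hours=1.0)  # 100 doughnuts per hour
after = productivity(output=110, hours=1.0)   # same labor, 110 doughnuts

growth = after / before - 1
print(f"Productivity growth: {growth:.0%}")   # 10%
```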
You may wonder, how could that happen? What fairy dust enabled our staff to kick up its doughnut production by 10 percent? It could be capital investment, as in they bought a new, improved doughnut maker; it could be that they reorganized the way they work; or it could be a tough new boss squeezing more doughnuts out of the production staff.
However it materialized, the reason why productivity growth is so important is that it’s a primary determinant of living standards. Greater efficiencies create more opportunities. The availability of more doughnuts may not help much, outside of creating more work for heart surgeons. To take a positive real-world example, we’ve been tremendously efficient at manufacturing computers, and now computers are cheap and available to those whose low incomes used to preclude them from owning such stuff.
So, the main way society advances its living standards is through more efficiently providing the goods and services that people want and need. It doesn’t mean that folks will necessarily own more stuff, though that’s certainly how it’s played out here in America, a country where, after one of the most devastating attacks on our homeland in our history, our president advised us to get out to the malls and “down to Disney World.”
In France, on the other hand, they take their higher productivity growth in more time off. In fact, consumption makes up about 70 percent of U.S. GDP (a variable of which you now have intimate knowledge), compared with about 55 percent in France.
And that’s the beauty of productivity growth. If productivity grows 2.5 percent (about the underlying annual growth rate in the United States since 2000), that means we can either have 2.5 percent more stuff for each hour we work, or the same amount of stuff with fewer hours worked.
Therefore, faster productivity growth is pretty much an unequivocally good thing. But there is a catch—a distributional one—and it’s really important. Here, a picture is worth many words.
Because of the link between productivity and living standards, the mantra among economists is, “As rises productivity, so rise the fortunes of working families.” We generally assume that productivity is a rising tide that lifts all boats.
Now, take a look at the graph. What you see there are productivity and the real income of the median family, the one smack in the middle of the income scale—half have higher incomes and half have lower. For years, productivity and median family income grew in lockstep. It’s easy to see where people got the impression that if you played by the rules, the benefits of higher productivity would be yours to enjoy.
Whatever blew a hole in that relationship? In a word: inequality. Starting in the latter 1970s, growth started to become more concentrated among higher-income families. There are lots of reasons—it’s more Murder on the Orient Express (a mystery movie where there turned out to be multiple “perps”) than one smoking gun. But we can no longer assume that workers will get their fair share of the growing pie, even if its growth reflects their contributions.
Figure 2.2. Growing together, growing apart: productivity and real median family income, 1947 to 2005. (Sources: Productivity, nonfarm business: U.S. Bureau of Labor Statistics; family income: U.S. Census Bureau.)
That’s why they call it inequality: It’s not equitable. You help to create a more productive economy, but a disproportionate share of the gains flows to someone else.
Crunchpoint: There’s good reason to celebrate faster productivity growth—without it there would not even be the potential for higher living standards. But in one of the most important and fundamental economic changes over the past few decades, we can no longer assume this potential will be realized. When it is not, the living standards of many working families fail to keep up with overall economic growth.
My work, my value added
Is growing the pie . . .
Slice, please.
Why do economists seem to fear inflation? And why do prices always go up, never down?
It’s true. Today’s economists worry most about price growth, otherwise known as inflation.
And we’ve got some good reasons:
• Higher prices mean less buying power—that is, lower “real” incomes (“real” meaning adjusted for inflation), and less real income means less consumption, investment, and growth (a quick sketch of that adjustment follows this list).
• We know the Federal Reserve worries about that scenario, so we worry that if inflation grows too fast, they’ll hit the brakes, as it were, raising interest rates to slow down the economy and lower the rate of price growth.
• We worry that once high inflation is in the system, it can be tough to squeeze out.
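That first item, the "real" income adjustment, is a simple deflation exercise, something like this (the paycheck and inflation rate are invented):

```python
# "Real" income = nominal income adjusted for inflation.
# If prices rise 3% and your paycheck doesn't, your buying power shrinks.
nominal_income = 50_000  # this year's paycheck, in dollars (illustrative)
inflation = 0.03         # 3 percent price growth over the year

real_income = nominal_income / (1 + inflation)
print(f"Real income: ${real_income:,.0f}")  # ~$48,544 in last year's dollars
```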
Why do prices always go up? Now, don’t throw the book across the room, but they don’t. Believe me, I know all about that last trip to the supermarket and stop for gas on the way home. As stressed earlier, a lot of highly visible prices have been rising quickly, a lot faster than average. But some other prices have been rising more slowly or even falling.
As I noted in answering the question from the overworked San Franciscan in Chapter 1, the biggest price declines have been in computers and electronics. Thanks to globalization and technological advances in IT, you can get scads more computer power now for a lot less bucks than a decade ago. The question is, when you get home from your third shift of your second job, will you be up to surfin’ the Net?
Admittedly, these isolated cases of falling prices are exceptions, and most prices rise most of the time. The reason usually has to do with either economic fundamentals or some seller taking advantage of pricing power. The fundamentals are supply and demand, and a shortage of the former or a spike of the latter will nudge prices up. Most workers expect and get some increase in pay over time, such as a yearly COLA (cost of living adjustment), and firms may try to pass these wage increases along in higher prices.
A particularly interesting determinant of the rate of inflation is . . . the rate of inflation. Or, more precisely, the rate that people have come to expect. If we expect prices to rise and keep rising, then when they do so, maybe because of a supply shortage or because producers start flexing their pricing muscles, we don’t go on strike in protest and stop buying stuff. We accommodate the increase, and that signals the price setters that we can live with faster inflation. If, on the other hand, our inflation expectations are “well anchored” (firmly locked in our thinking), we will respond to price hikes with some disdain, signaling producers that if they want to see our wallets, they’d better revert back to the slower price growth regime.
It may sound like far-fetched wishful thinking—“You can have whatever rate of inflation you desire, if you want it bad enough”—but inflation watchers obsess about the public’s inflationary expectations. The Federal Reserve in particular puts great stock in this and will work very hard to convince economic actors—which is to say, people—that it will do what it takes to keep inflation well anchored, Matey.
The tautness of the job market plays a key role here, too. Economists recognize a trade-off between unemployment and inflation—when the former is very low, the latter runs higher. Tight job markets give workers greater bargaining power, as employers need to bid compensation up in such climates to get and keep the workers they need. Once again, if such employers hope to keep their profit margins intact, they’ll try to pass these higher labor costs forward to consumers.
In fact, this feared scenario—oh no . . . workers are getting raises . . . somebody do something!—lies behind the economists’ biggest inflationary nightmare: the wage/price spiral. It’s a tight job market, so workers get a raise. Firms pass the wage increase on to consumers, but lo and behold, workers also buy stuff, and when they discover that higher prices are eating up their fatter paycheck, they push for another raise. And the spiral is, allegedly, under way.
The thing is, for decades now, this has been nothing more than a scary campfire story at the economists’ weenie roast. In a global economy, and one in which less than 10 percent of our private sector workforce is covered by a union contract, workers don’t have the clout to keep pushing for raises. For that matter—and again, it’s a function of increased global competition—most firms don’t have the pricing power they used to. In fact, the last time we saw truly tight labor markets—the latter 1990s—unemployment fell to its lowest level in 30 years (4 percent in 2000), and, true to the full-employment/bargaining power story, wages rose at a faster clip than they had in years. But, and here’s the kicker, inflation decelerated—its growth rate actually slowed. So while the spiral scenario may still haunt economists, it’s a phantom menace, not a real one.
In fact, I remember when we used to worry more about unemployment being too high than too low. Lemme jest take off this ol’ wooden leg here and tell y’all about the old days, when we fretted more over getting to full employment than whether inflation was in the Fed’s “comfort zone”—low enough that they didn’t see the need to raise interest rates. Despite the recent historical record, all that wage/price spiral stuff still has economists spooked, so we’ll probably keep hearing more about price anxiety than job anxiety.
And yes, there’s a class bias in here. Full employment is most helpful to the least advantaged, while inflation erodes the value of assets held by the rich and powerful, and they don’t like that.
Crunchpoint: Economists fear inflation because it lowers the buying power of any given income level and leads the Fed to raise interest rates that slow the economy’s growth. However, there’s a strong case to be made that global forces and weaker worker bargaining power have weakened inflationary links in ways economists do not yet appreciate. While nobody likes fast price growth, we should worry a bit less about inflation and a lot more about getting to full employment. Because although prices actually drop in some categories (like consumer electronics), that’s small consolation when worker pay doesn’t keep pace with rising prices for the basic necessities.
What is a recession and why do they occur?
When you lose your job, it’s a shame. When I lose mine, it’s a recession. Like waves in the ocean, though less soothing to look at, our economy runs in fairly regular cycles, gradually rising from the trough, then hitting a peak before slowing again. The low point—the trough in the wave (less artful description below)—is the recession.
But economies aren’t natural phenomena—why should they have cycles? I’ve never seen a good answer to that, and one reason is that, thankfully, there haven’t been so many recessions that a clear pattern emerges. Moreover, some underlying economic dynamics have been changing lately, and the causes, frequency, and nature of recessions may be changing as well.
Technically, as stated by the group of economists who make the call—the National Bureau of Economic Research’s Business Cycle Dating Committee (sounds like a support group for guys who can’t get dates through normal channels)—“A recession is a significant decline in economic activity spread across the economy, lasting more than a few months, normally visible in real GDP, real income [that is, income adjusted for inflation], employment, industrial production, and wholesale-retail sales.” “Visible in real GDP” has informally evolved to mean two quarters wherein GDP contracts, as in the economy actually shrinks.
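That informal two-quarters rule is easy to state precisely. A sketch, with an invented series of quarterly growth rates:

```python
# The informal rule of thumb: a recession shows up as two consecutive
# quarters in which real GDP shrinks. Growth rates here are invented.
quarterly_growth = [0.8, 0.3, -0.2, -0.5, 0.1, 0.9]  # percent, qtr/qtr

def two_quarter_contraction(growth):
    """Return True if any two consecutive quarters both show shrinkage."""
    return any(a < 0 and b < 0 for a, b in zip(growth, growth[1:]))

print(two_quarter_contraction(quarterly_growth))  # True: quarters 3 and 4
```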
Once a recession is over, the growth period following it is called a recovery or an expansion, and recession + recovery = business cycle. For example, the 1990s business cycle includes the recession of 1990–91 and the expansion from 1991 to 2000. A little makeup and you’re ready for your own cable show.
That’s the what. Here’s the why: Recessions result from some shock to the economic system. As of this writing, the last recession hit in March 2001 and was the result of the bursting of speculative bubbles in financial markets and IT (information technology) investment. Investment is a key component of GDP, and when it headed south in late 2000, it triggered a recession that was quite mild in GDP terms, though acutely felt on the jobs side. Once we got that recession out of our system, a housing bubble inflated and then burst circa 2007, and this too put a hurtin’ on the economy. (What’s a bubble and how do they come to pass? That’s next.)
In fact, the impact of the bursting housing bubble, occurring as I write these words, provides a useful, albeit unfortunate, case study. As of early 2008, an official recession has not been called, but many economists, myself included, believe we’re in or near a downturn (the cautious officials make the call after the fact).
As housing prices rose in the 2000s—and as I describe next, speculation played a big role here, just like it always does in a bubble—lots of homeowners became wealthier. After all, for most of us, our house is our biggest asset. That wealth filtered through the economy, as folks refinanced their mortgages based on the new, higher price, and then borrowed against their homes. It may sound like a fairly obscure source of economic stimulus, but it wasn’t. We’re talking literally hundreds of billions of extra bucks flowing through the system.
Well, I got one word for what happened next: pop! Gravity has a nasty way of reasserting itself, and in mid-2007, home prices started falling back to earth with a vengeance. The direct impacts of the meltdown have been (a) hundreds of thousands of foreclosures—that’s when homeowners default on their mortgage debt—mostly among those whose low credit ratings put them in the sub-prime end of the housing market, and (b), much more broadly, falling home prices.
Those fallouts, by themselves, would not necessarily lead to a recession. But the damage to the economy goes much deeper, and it’s a good example of how problems can travel the economy’s circulatory system in a way that ultimately lays the patient low. The housing bubble was inflated by all kinds of creative and innovative—these are the nice words for them—lending schemes. The loan-rating agencies and bank regulators, including the Federal Reserve, were asleep at the switch, and many of these shaky loans worked their way into the financial system, both here and abroad. This led to a freeze in credit markets, and this economy thrives on free-flowing credit.
Also, as home prices fell, homeowners were much less prone to refinance their mortgages and pump some of that newfound cash into the economy. Absent economic stimulus from the housing sector, we counted on the job market to provide consumers with the income they need to keep the economy moving forward (remember, consumption is 70 percent of GDP). But with related job losses in residential construction, financial markets, real estate, and so on, the job machine stalled in late 2007. Consumption stumbled next, and, again, though this was not made official as of early 2008, it was probably the last gasp for the 2000s business cycle.
Other recessions, such as those back in the 1970s, resulted from increases in the price of oil. When the price of an important economic input like oil rises, both production and consumption are strained, and the economic results can get pretty ugly. The goodish news is that despite our “dependence on foreign oil,” as the phrase goes, the overall economy is less susceptible these days to price shocks from oil. This is partly because our manufacturing sector has shrunk—energy is a much bigger input in manufacturing than in services—and partly because we use energy more efficiently than we used to, though, yes, we’ll need to do a lot better here.
The biggest recession in modern times was, of course, the Great Depression, a deep economic contraction lasting not months but years, with tremendous and tragic human costs. That was also partly the result of speculation, but the problem was greatly exacerbated by economic officials who assumed the market would self-correct (it’s worse—they erected trade barriers and raised taxes, while the Federal Reserve, whose most important job is to offset such shocks, watched from the sidelines).
We are less likely to make those same mistakes now. Instead, we make new ones: we’ve got a bubble problem, and economists are only now beginning to recognize the damage caused by these bubbles. But the Federal Reserve has become more effective at steering through the shoals of economic shocks, and there are more automatic stabilizers, from safety nets to progressive income taxes (so your tax liability falls with your income, partially offsetting the blow), that help to keep the economic bicycle from teetering over when it slows. None of this implies the demise of the business cycle—a fine predictor of recessions is when economists say, “We’ve conquered the business cycle.” It’s just that there’s some reason to believe/hope we’ve gotten better at avoiding deep recessions.
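To see why a progressive income tax cushions a downturn, consider a toy tax schedule (the brackets and rates are invented, not anyone's actual tax code): when income falls, the lost dollars come out of the highest-taxed slice, so take-home pay falls by less than pre-tax income.

```python
# A toy progressive tax: higher slices of income face higher rates.
# Brackets and rates are invented, not any actual tax code.
BRACKETS = [(0, 0.10), (30_000, 0.20), (80_000, 0.30)]  # (floor, rate)

def tax(income):
    owed = 0.0
    schedule = BRACKETS + [(float("inf"), 0.0)]
    for (floor, rate), (ceiling, _) in zip(schedule, schedule[1:]):
        if income > floor:
            owed += (min(income, ceiling) - floor) * rate
    return owed

for income in (100_000, 80_000):  # a 20 percent drop in pre-tax income
    print(f"pre-tax {income:,} -> take-home {income - tax(income):,.0f}")
# pre-tax 100,000 -> take-home 81,000
# pre-tax 80,000  -> take-home 67,000 (about a 17 percent drop)
```

The 20 percent drop in pre-tax income becomes roughly a 17 percent drop in take-home pay; the tax system absorbs part of the blow.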
How Can a Recovery Be Jobless?
An economic recovery without jobs? Sounds like a day without sunshine, a birthday party without a cake, a bagel without the cream cheese. But the last two economic expansions—the ones that started in the beginning of the 1990s and the 2000s—both began with many months of an economy firing on all the key cylinders except one: job growth.
What happened?
The challenge in answering that question is that we have only two cases to analyze, so file the following explanations under what lawyers call “probable cause.”
Small Dip, Weak Bounce
As ye have fallen, so shall ye rise. Historically, the economy has contracted fairly severely over recessions. During the 1970s and 1980s downturns, GDP fell around 3 percent. In the early-1990s recession, the decline was 1 percent, and in the 2000s, about zero. Like I said, thanks to globalization and a vigilant Federal Reserve, recessions are both rarer and milder. That’s good, of course.
But how does this play out in the job market? That, it turns out, has not been so good.
When the economy tanks, lots of people stop consuming and investing. Next, labor demand, which is derived from those very activities (consuming and investing), tanks too, and workers get laid off. At that point, something interesting and important happens: Demand starts to get pent up. Because their cash flow is temporarily disabled, folks put off those big- and even little-ticket purchases.
Thus, you get a bit of a pressure-cooker effect: When the downturn ends and growth starts to percolate, that pent-up demand is let loose, and you get a nice bounce back, in both GDP growth and jobs. The growth norm for the year after a recession ended used to be 6 to 8 percent. In the 1990s it was 3 percent; in the 2000s, 2 percent.
That’s just not enough oomph to get the job (creation) done.
Thanks to the aforementioned forces, we now appear to get a break from the pain of deep recessions. The downside is that we don’t get much of a party when they’re over.
Just-in-Time Inventory . . . for Workers
Another big change stems from the evolution of employers’ cyclical hiring practices—changes in how firms approach staffing and the business cycle.
The old model of recessions in the workplace was pretty simple. You’d get a hiccup in growth, a bunch of factory workers would get furloughed, they’d trudge off home for a few months, and then they’d get called back to the factory once the downturn was played out.
Over the last few decades, in part due to the shift to less-stable service sector employment, employers have gotten much better at calibrating the size of their workforce to meet the spikes and dips in demand. Technological advances and globalization have allowed firms to turn on a dime when it comes to stocking their inventory, beefing it up quickly in fat times and cutting it back in lean ones. Well, they’re apparently able to do that more and more with their workers as well these days, reacting to changes in demand by staffing up and down with much more precision than used to be the case. Some firms, like Microsoft, have embedded this practice in the structure of their workforce by having a peripheral group of temp workers around a hub of core workers.
One way to document this shift is to note that after both the 1970s and 1980s recessions, the percentage of workers who were involuntary part-timers (the unhappy folks who work part-time but want a full-time job with more hours, better pay, and fringe benefits) fell by about a point as a share of the workforce. After the 1990s and 2000s recessions, this just-in-time inventory indicator didn’t fall at all; in the 2000s case, it ticked up a bit.
This change is showing up in three ways in the new economy. First, we have the jobless recovery, a not-so-pleasant development. Second, we have observed faster productivity growth than would otherwise be the case, as firms can cut hours to match falling output more quickly than they used to. Finally, and this is the crunchy part: There’s less employment security. Widgets and their families experience little downside from just-in-time inventory practices . . . workers and their families take it much harder.
But before you threaten to revoke my “dismal scientist” card, let me note a troubling feature of the last two recessions. Well, really it was the last two recoveries: They started out “jobless.” That is, in both the early 1990s and 2000s, the recession daters declared the recession over, yet we kept losing jobs for many months.
But didn’t I say that employment growth was one of the things the recession daters look at when ringing the “Recession’s over” bell? Yes, but it’s just one of numerous indicators, and apparently they’re downplaying its significance. What’s more, there’s a class bias embedded in this weighting scheme. As I’ve stressed throughout, we all love GDP and industrial production, but given that it’s jobs that fill the wallet, these jobless recoveries can really take a toll on those least able to blithely ride out the recession: working- and middle-class families that depend on their paychecks, not their global portfolios. In fact, it took us almost four years to regain the jobs lost over the last recession/jobless recovery, more than twice the historical average.
It’s one of the reasons why recoveries don’t feel as good as they used to, and it’s part of the next discussion.
Crunchpoint: Recessions are occasional economic contractions that result from some shock to the system. Thanks in part to better macro-management, they appear to be getting shorter and further apart. But the last two recoveries have started with a long period of joblessness, so for many working families, each recovery felt more like a recession than it should have.
Seems like we’re forever blowing bubbles. What is an economic bubble, why are they bad, and can they be avoided?
The recession discussion highlighted the damaging role of speculative bubbles in our economy.
An economic bubble occurs when the price of something goes up well beyond what we would expect given the underlying fundamentals of supply and demand. If the jelly bean crop is destroyed by a sudden frost, we expect their price to rise. Or if suddenly everyone wants an iPod and the supply is fixed, their price will likewise go up. These are not bubbles yet—they’re standard price increases based on real changes in supply and demand.
But sometimes prices rise quickly because a growing number of investors believe that the value of a particular asset, from a tulip bulb to fiber-optic cable to houses, is going to grow quickly and keep growing. They may not want the asset themselves—during the recent housing bubble, people bought homes not to live in, but because they thought they could “flip” them (buy low, sell high)—so it’s not really demand driven in the iPod sense.
In fact, what’s important to bubble investors is not the real, underlying economic need for whatever investment is driving the bubble. The bubble investor cares about one thing: Do other people still believe the price will keep rising? It might be obvious to as-yet-undiscovered aboriginal tribes that investing in firms with no products or profits is rarely a good idea. But if enough people with enough money believe that enough other people think such investments are the cat’s meow, watch out: It’s bubble time. (That phrase “enough people” is important. There will always be some folks engaged in random speculation. It takes a crowd to inflate a bubble.)
What’s so bad about that? Well, for one, as we saw in late 2000, when bubbles burst, they can trigger recessions. But there is a less obvious fallout: Bubbles take a long time to mop up, and they have a lasting negative effect on economic activity.
If I asked you to name some growth industries in the new economy, “telecom” would probably be on your list, and rightfully so, given advances we’ve made in that area. But during the IT/dot-com bubble of the latter 1990s, deregulation of communications markets drove billions of dollars of speculative overinvestment in fiber-optic cable. Employment in telecom shot up about 40 percent, peaking in March 2001, the same month the recession began. Six years later, telecom jobs are down 30 percent; if you thought this was a growth sector—and these are good jobs, by the way—think again. The bubble severely injured it, and not just for a few months, but for years.
You could argue that we’re just back to where we’re supposed to be, but that’s sugarcoating. The bad thing about bubbles is that, just as the herd gets whipped up into a speculative frenzy—former Federal Reserve Chairman Alan Greenspan called it “irrational exuberance”—so do they get clinically depressed post-bursting. Investors, who maintain a key position in the economic drivetrain, get irrationally cautious, and that’s one reason why we suffered the longest jobless recovery in our history coming out of the 2001 recession.
Should bubbles be avoided? Can they be?
It’s my impression that economists and policymakers have not quite recognized just how damaging these bubbles are—how lasting their damage is, in the sense just discussed. And just as the effects of the IT bubble were wearing off after years of retrenchment in that critically important sector, the housing bubble inflated. By mid-2007, that too was deflating. As we just discussed in our “anatomy of a recession” section, the deflation of the real estate bubble has so far sliced numerous percentage points, amounting to hundreds of billions of dollars, off of real GDP growth. As many as two million people may lose their homes to foreclosure, and credit markets have frozen as investor sentiment whip-saws from ebullient to emulsified.
There are at least two ways to keep bubbles from inflating: jawboning by people with very influential jaws, and regulation.
The jaws I’m thinking about here belong to the chair of the Federal Reserve. Others, like the secretary of the Treasury, might help, too, but there’s no economist with greater clout than the Fed chair. And, as noted earlier, Alan Greenspan, who held the post as the dot-com bubble inflated, made a run at the bubble with his “irrational exuberance” comments. But he quickly dropped the language; and when pressed, he basically took a “Bubble? What bubble?” stance thereafter.
Later, when folks asked why he gave up, Greenspan argued that the Fed has neither the ability to recognize bubbles nor the tools to deflate them. The first part of that argument is simply not credible: The chief himself recognized the irrational nature of the stock market’s climb in the latter 1990s. Regarding the lack of tools, he argued that “it was far from obvious [that the bubble] could be preempted short of the central bank inducing a substantial contraction in economic activity, the very outcome we were seeking to avoid.”
Well, we got the contraction anyway—and believe me, financial players hang on every word that the Fed chair utters. He could have talked the bubble down somewhat, and he should have.
The other option is regulation, and that’s always a lot stickier. You want to set rules that restrain obviously bad stuff but don’t kill the creative energy that’s inherent in markets. But if done with the right touch, it can work.
And there is some low-hanging fruit here. If people want to make crazy investments in profitless companies, we should let them. But what about when profitless firms pretend they’re doing great and cook the books accordingly? Deregulation during the 1980s and 1990s of banking and accounting standards contributed to this fraudulent behavior, and that helped to blow up the market bubbles. Since then, Congress passed rules to block such behaviors—through the Sarbanes-Oxley Act—but they are continuously under attack by corporate forces with short memories.
As the housing bubble started to deflate in 2007, we began to learn about some more low-hanging regulatory fruit that we would be wise to pick. Unscrupulous mortgage lenders lent all kinds of money to people who quickly got in way over their heads. Insiders in the industry called these “liar loans” because the loan agent often inflated borrowers’ earnings so that they would qualify. There were “prepayment penalties” (a penalty for paying off your mortgage early, to keep you paying interest as long as possible), interest-only loans (monthly payments that cover the interest but none of the principal), and mortgages that started out nice and low, only to reset quickly (often within two years) to levels clearly beyond the means of the borrower. Not that borrowers were always innocent bystanders—it takes recklessness on both sides of the deal to inflate a bubble.
Such shell games are custom-made bubble machines. Regulating these practices—not prohibiting them (well, “liar loans” have to go), but restricting their use to keep vulnerable players out of that particular casino—is a no-brainer, but the political power of the deregulatory crowd over the past few decades has often trumped reasoned lending policies (see principle #1).
We also might want to consider ending the tax advantage for people buying expensive second homes. When the stock market bubble burst, some investors went over to real estate, and since you can deduct the interest payments on up to a million bucks of mortgage debt on your first or second home, our tax code creates a tax incentive to speculate. There was a time when incentivizing home ownership across the income scale made a lot of sense, but that time is over. We should consider ending or reducing the deduction on second homes.
How about this bumper sticker? “Houses: Nobody gets two until everybody has one.”
Crunchpoint: An economic bubble occurs when speculation raises prices well above what supply and demand would dictate. Economists, given our default position of deep admiration for market forces, have been slow to recognize both the formation of bubbles and the lasting damage they do. We should learn from past mistakes and start taking action—identifying bubbles sooner and discouraging the speculative investment on which they feed—to deflate them before they get out of hand.
What is a “living wage” and how is it different from the minimum wage? Do either really help, or are they just a good way for well-meaning people to get slapped around by the invisible hand?
Bubbles, recessions, unemployment, the inequitable distribution of productivity growth . . . all of these play a role in the crunch many families experience today. But what can be done to ameliorate these economic pressures? In Chapter 5, I speak broadly about the anticrunch policy agenda, but a number of people asked whether so-called living wages, or wage mandates in general (rules prohibiting wage payments below a mandated level), are an important part of the solution.
These policies—the living wage is a localized phenomenon; the minimum wage is both a state and federal program—are popular among economic justice advocates and progressive politicians, and roundly hated by many in the business community, who would be obliged if we would just leave the setting of wages to them, thank you. Below, we go through the arguments on both sides.
These wage mandates are part of the solution—a small but important part. They certainly help low-wage workers, but the crunch for many in the middle class is beyond their reach. Their importance, however, goes well beyond the impact of the wage mandate.
In 1995, two very mainstream, highly respected economics professors, David Card and Alan B. Krueger, published a book called Myth and Measurement (Princeton University Press), wherein they presented a few years of careful research on the impact of minimum wage increases. As I discuss in greater detail below, econ textbooks treated the matter as “case closed.” He who mandates wages is flying in the face of the invisible hand, and it shall smite him. Raise wages by fiat, the story went, and you’ll throw a wrench in the economy with some very negative unintended consequences.
But Card and Krueger, using unique data and careful, elucidating statistical techniques, convincingly proved otherwise. The outcry against their work was predictable, but they changed some minds, and now even some of the textbooks are framing the issue more flexibly.
But what struck me and a number of other renegades as significant was the chink in the armor of classical economics that their work exposed. If the textbooks were wrong about this, what other economic relationships and assumptions should go under the empirical microscope? It’s a stark reminder of principle #2: Economic relationships often play out in surprising ways, contradicting both basic logic and textbook theory.
This is liberating stuff. We must think broadly and creatively about economic policy; the classical assumptions should never be ignored, but they must constantly be tested. They must be guideposts, not chains. The invisible hand is, well . . . invisible, and its discipline is not nearly as daunting as advertised by those who use it to keep all the goodies flowing their way. In short, we should, all of us, always be asking ourselves, “What kind of an economy do we want?” and trying new ideas, like living and minimum wages, to get us there. Believe me, for as long as we’ve had an economy, some very powerful people have been asking and answering that question. It’s time we all joined the discussion.
Now let’s get down to cases. What is a living wage, anyway?
It’s a level of wages that must be paid to people in certain jobs in cities that have living wage ordinances, as mandated and set by local government. There are two basic flavors of living wage laws. One requires that firms under contract with the city must pay the living wage, and the other requires that firms receiving some type of subsidy or tax break must pay the wage.
About 140 cities in the nation have living wage laws, and the wage levels differ from place to place, ranging from around seven bucks an hour in Albuquerque, New Mexico, to the mid-teens in some California cities.
Why would so many cities embrace this policy? Ask the advocates who did the heavy lifting (I assure you, not every city council goes gently into that good ordinance). In some cases, it’s about trying to preserve the quality of public sector jobs that have been outsourced.
In my old hometown, for example, the town government outsourced the garbage collection to a private sector firm to save some money. It even sold them the town’s garbage trucks—which, by the way, stopped coming up your driveway so you now had to take the garbage down to the end of the driveway. No biggie—I’m just saying we got cheaper service, not better service (I’d moved away by the time this happened; they wouldn’t have gotten away with this stuff if I’d still been there). Living wage laws serve to lessen the slide in pay for such jobs.
Another rationale is just good old antipoverty activism. The living wage coalitions seek something they call “economic justice”—there were no chapters on that in any econ textbooks I was ever assigned—and view the policy as a tool to raise earnings at the low end while building progressive coalitions.
OK, time to put on the economist’s hat. That’s right: Place the propeller beanie atop your head, give the prop a spin or two, and . . . come up with reasons why you can’t do stuff you think might help.
Won’t raising wages by fiat lead to job losses? This is always a concern, whether we’re talking living wage or minimum wage (the difference is that the minimum wage covers everyone, not just those working under city contracts). In fact, it’s such a pervasive concern that it gets its own Q&A below. The evidence, which I’ve reviewed in mind-numbing detail, shows that job losses have not been a problem associated with the ordinances.
For one, living wage ordinances tend to affect small numbers of people. Even in big cities, the number of workers who benefit from living wage rules is in the low thousands. Second, as noted in the next section, when wages rise by mandate, there are lots of other ways the increase can get absorbed. Some of the costs of living wages (typically a small share) get passed back to the city through higher contract costs, but the evidence mostly suggests that contractors suck it up in one way or another (through lower profits and higher productivity, for example).
And remember, the higher-wage contractors tend not to feel any pinch from living wage laws because they’re already paying higher wages. In other words, the ordinance blocks the low road.
So, are rules like these the answer to all that ails us? By no means. Living wage ordinances, by construction, reach too few to make a big dent in low-income work, but they do make a useful small dent.
But can you live on living wages? Not too well. The levels, as noted above, are not high enough to reliably pay for decent housing, quality child care, health care, and so on. It’s important to recognize that these wage mandates are a balancing act. You can’t tell low-wage employers to pay a wage that’s high enough to meet the costs of these necessities, some of which, like health care, are driven up by our dysfunctional system. You can, and should, nudge them to get a little closer, though.
That’s why we supplement low-income wages with a set of admirable policies called “work supports.” After much careful study (and I’m only half-kidding), experts have discovered that low-income working families need more money to make ends meet. The beauty part is that since they’re working—they’re playing by the rules, making a good-faith effort to lift themselves up—politicians from both sides of the aisle have deemed them worthy of extra help. So, along with living and minimum wages, we have, for example, the Earned Income Tax Credit, a wage subsidy worth over $4,000 per year to a low-income working family with kids. We’ve got food stamps, Medicaid, SCHIP (the federal/state program of publicly provided health insurance for kids), and child care, housing, and transportation subsidies.
Don’t get too excited—some of these are terribly underfunded and under frequent attack (child care, SCHIP). But while we have to accept that living wages may not be really living, we don’t have to accept working poverty.
Crunchpoint: Living wages are local mandates ensuring that a small, select group of workers get a wage that’s a few bucks above the minimum, and maybe some health coverage too. They’re popular because they help scratch the itch of working poverty without generating the economic distortions their opponents worry about, but they’re too low and reach too few people to make a big difference. Federal and state minimum wage laws reach many more workers, but they tend to be set at a lower level than living wages.
I vaguely remember my Econ 101 text asserting that government-imposed wage mandates force employers to lay people off. I must admit, it makes sense: Raise the price of something (workers), and people (employers) will buy less of it. Right?
Wrong, by principle #2 (economic outcomes cannot be assumed based on textbook relationships—they must be constantly tested, verified, and updated).
Get into a debate about passing a living wage somewhere or raising the state or federal minimum wage, and somebody’s going to tell you that you only hurt the ones you love: If you raise wages by fiat instead of waiting for the market to get around to it, employers will have to lay off workers who are, because of the mandated wage hike, too expensive.
Sounds plausible, but here’s the rub: the question has mostly been argued and decided on theoretical grounds, when it’s really an empirical one. You simply can’t take the textbook assumptions on faith.
Most people are quick to accept that higher prices lead to lower demand. You raise the price of Snickers, I’ll buy M&M’s instead. So it is that economists—and, much more vocally, those who worry that a minimum wage increase will cut into their profits and who then hire the economists to do their shouting—argue that minimum or living wage mandates will do more harm than good.
But workers aren’t candy bars. And there are lots of other ways for a wage increase to get absorbed into the system besides layoffs. There are the three Ps, for example: prices, profits, and productivity.
Prices: The evidence is that some small fraction of the increase shows up in higher prices of low-wage-intensive goods, so you could see the price of a burger go up a touch. But it’s never more than a few cents on the dollar, a change that hasn’t been found to register much at all with consumers (for a back-of-envelope sense of why, see the arithmetic sketch just after this list).
Profits: Ask yourself why the U.S. Chamber of Commerce and the National Restaurant Association lobby intensively against minimum wages (that is, follow the money). Is it purely out of concern for the employment status of low-wage workers, a group that, by the way, hugely supports such wage increases? No, the groups that represent low-wage employers spend lavishly on an army of lobbyists to stop anything that might raise their labor costs and cut into their profits. That’s their job and they’re good at it. When they lose, and workers win, one way in which the mandated wage increase gets absorbed is through lower profits.
Productivity: Paying workers more, especially moving their wage out of the sub-basement, leads to higher-quality work, less turnover, and fewer vacancies, and these efficiency gains pay for at least part of the increase.
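On the price channel, here is a minimal back-of-envelope sketch of the pass-through arithmetic. The inputs are illustrative assumptions, not figures from any particular study: suppose labor is 30 percent of a burger joint’s costs, a quarter of its wage bill goes to affected workers, and the mandate raises their wages 10 percent. Even if every penny of the higher wage bill were passed along to customers:

\[
\frac{\Delta p}{p} \;\approx\; s_{\text{labor}} \times s_{\text{affected}} \times \frac{\Delta w}{w} \;=\; 0.30 \times 0.25 \times 0.10 \;=\; 0.0075
\]

That’s about three-quarters of a cent per dollar, or roughly three cents on a $4.00 burger, squarely in “few cents on the dollar” territory.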
For these reasons, minimum wage increases have not been found to lead to big job losses among affected workers. We know this because some high-quality research has tapped the natural experiments that come into focus when one state or city raises its wage and another state or city next door doesn’t. This type of pseudo-experiment is as rare as it is revealing in economics, and it has helped us to go beyond the textbook models and predictions that have driven this debate for too long.
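To make the logic of those natural experiments concrete, here is a minimal difference-in-differences sketch. The employment counts are hypothetical numbers chosen for illustration, not results from any actual study: let T be the state that raised its wage and C the neighbor that didn’t, with employment (in thousands) measured before and after the increase:

\[
\text{DiD} \;=\; (E_{T,\text{after}} - E_{T,\text{before}}) - (E_{C,\text{after}} - E_{C,\text{before}}) \;=\; (20.9 - 20.5) - (18.5 - 18.2) \;=\; +0.1
\]

Subtracting the neighbor’s change strips out whatever was happening to the regional economy anyway, so what remains is an estimate of the wage hike’s own effect (here, slightly positive, the kind of result Card and Krueger reported).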
The results find somewhere between little and no effects from the higher-wage mandate. By “little,” I mean that in no research I’ve seen, even the research most unfavorable to the policy, do the results come anywhere close to showing that the policy is a net loser for affected workers. In other words, some very persuasive research on this finds no negative employment effects among affected workers, and some equally persuasive work finds small negative effects. But in every case, the benefits to low-wage workers far outweigh the costs.
Another reason you don’t get big negatives here is that the political process precludes it. It would certainly be possible to set a living or minimum wage high enough to throw a wrench in the economy, but in all my experience—and I’ve been in the fray on this one for decades—the political horse-trading always seems to serve up an increase that is moderate at most. One side might start high and the other low (as in zero, or no increase), but the compromise tends to deliver a workable result.
Crunchpoint: Your old textbook got this one wrong, because the logic is too narrow. Research on minimum wage changes across the country shows that moderate increases—the only kind the system tends to serve up—do not lead to large numbers of layoffs among affected workers. In fact, any job-loss effects hover between very little and none: the cost of the wage increase is absorbed in other ways—including slightly higher prices, lower profits, and more efficient production—and low-wage workers gain significant and important benefits from the higher wages.
We raise the minimum wage.
The low-wage worker