1 The Big Squeeze
Why do I feel so squeezed?
As I solicited questions for this book, the one above kept coming up, in one form or another. And while I’m not happy about that, it is affirming, because it is, in my view, the great, unanswered economics question of our time.
It’s not that middle-class people are sliding into poverty, hunger, and homelessness, though in an economy as wealthy as ours, too many people do face those conditions. The sense I got from questioners, a sense I’ve tried to convey in the answers I offer below, is that something is “off” in the new economy. We hear great economic news about financial markets, prices, profits, growth, productivity, and globalization, yet many of us live with a weight of economic anxiety that our parents would not have recognized. Most of us are making progress as we age, but the path seems steeper than we might have expected, with deeper potholes along the way. For some of us, things we aspire to, like secure health care or the ability to send our kids to a good college without taking on a lot of debt, are still within our grasp, but we have to reach farther to grab them, and it’s harder to hold on.
For others of us, a bit farther down the income scale, these aspirations are fading. To our surprise, we find ourselves without health coverage, or unable to afford the premiums and co-payments. We’re stuck in a house and a neighborhood we thought we’d have grown out of by now, with a school to which we’d rather not send our kids. And while we’re working as hard as ever, that paycheck is alarmingly thin after gas and groceries.
Not everyone feels that way. Raise the issue of the squeeze, and many economists and policymakers will excitedly (and correctly) remind you productivity is soaring! . . . unemployment’s historically low! . . . inflation’s down!
How do I know this? Because I’m a regular on CNBC’s Kudlow & Company, a show that focuses largely on stock and bond markets. It’s almost infectious, the way Larry Kudlow and his guests from the world of financial markets bubble over with effusive, heartfelt praise for all those positive trends just mentioned. To them, for example, globalization means a greater supply of capital and labor, “more global liquidity,” lower prices, lower interest rates, and a lot more people with whom to make trades. To millions of others, globalization means greater wage competition and less job security. They’re both right.
I’m fortunate that these financial market mavens will at least entertain a different perspective, but no matter how many times I point out that the typical working family’s purchasing power—its inflation-adjusted income—is actually down over their beloved economic boom, they can’t hear me.
Why not? Well, like they say, denial ain’t just a river in Egypt. It’s a place to which lots of economic elites retreat so that they can avoid the tough question, what’s behind the divergence between the macroeconomy and the microeconomy, between stock portfolios and paychecks, between the view from Wall Street and the view from Main Street?
Let us begin by presenting some evidence, and then tackle that critical question.
The statistics behind the squeeze are embarrassingly easy to come by. Anybody with a mouse can stop puzzling over this after a precious few clicks.
The economy grew by 15 percent between 2000 and 2006, but the inflation-adjusted weekly earnings of the typical, or median, worker were flat (down 0.7 percent; the median is the worker at the 50th percentile, right in the middle of the wage scale).
Partly due to the jobless recovery that lasted until mid-2003 (I discuss recessions and recoveries later on), the typical working-age household’s income was down 5 percent, or $2,400, from 2000 to 2006. Income fell by more than wages because people found fewer hours of work available.
After falling steeply in the latter 1990s, the share of the population that’s officially poor rose from 11.3 percent in 2000 to 12.3 percent in 2006, the most recent available data point for poverty rates.
While inflation overall has been moderate since 2000, as I point out below, the costs of some of the key components of the middle-income market basket—health care, child care, college tuition, housing—have been growing much faster than the overall average of all prices taken together.
That’s a lot of numbers, but let’s not gloss over them. Over the course of this highly touted economic expansion, poverty is up, working families’ real incomes are down, and some key prices are growing a lot faster than the average.
Now, I know you don’t hear about such numbers every day—instead, you hear about the stock market every hour. But these statistics are not secret.
It’s obviously important to document the facts, but it’s also useful to look beyond the statistics to people’s own views about the economy. Such views jump around to some extent with highly visible indicators like gas or home prices, but in one weekly poll (ABC–Washington Post), more than half of respondents have registered negative impressions about the economy since the summer of 2001. Clearly, dissatisfaction with the Iraq War dominated the 2006 midterm elections, but the economy was next in line. According to the New York Times exit poll, two-thirds of voters in November 2006 reported that they were either just maintaining their living standards (51 percent) or falling behind (17 percent). By 2007, 44 percent said they lacked the money they needed “to make ends meet,” up from 35 percent a few years earlier.
Remember—this is a critical part of the story—the cheerleaders are right, in their own narrow way. While all these unsettling poll results were coming in, the economy was expanding at a good clip and generating stellar rates of productivity growth. We were achieving efficiency gains at a rate that hadn’t been seen in over 30 years. The unemployment rate was low in 2006–07, below 5 percent. The stock market took a dive in late 2000, but by the end of 2006 it was up 56 percent from its ’03 trough. Five years into this recovery, corporate profits as a share of national income were at a 56-year high and were percolating along at a rate more than twice the average of past recoveries. Yet more than 4 in 10 told pollsters they were having trouble making ends meet.
What this barrage of percentages is telling us is that if you feel squeezed, chances are it’s because you are squeezed. Most of the indicators that matter most to us in our everyday lives—jobs, wages, mid-level incomes, prices at the pump and the grocery store, health care, retirement security, college tuition—are coming in at stress-inducing levels, but gross domestic product (GDP), our broadest measure of the economy’s health, explained later, keeps on truckin’.
Something’s wrong, something fundamental. Not Third World–poverty fundamental, not blood in the streets, massive homelessness, or Great Depression fundamental. If the problem were that obvious, it would be less amorphous, less indecipherable, less of a head-scratcher.
The name of the problem is economic inequality, and it’s been on the rise for decades. It’s at the heart of the squeeze, and it’s a sign that something important is broken: the set of economic mechanisms and forces that used to broadly and fairly distribute the benefits of growth. What “mechanisms” am I thinking of? They are unions, minimum wages, employer and firm loyalty, global competitiveness, full employment, the robust creation of quality jobs, safety nets, and social insurance, all of which are discussed in the following pages.
The belief that growth should be fairly distributed, that the bakers should get their slice, is a fundamental economic value in America. It is, of course, not one we have always lived up to, especially for the least advantaged among us. But it’s always there, this sense that the rising tide should lift the rowboats and the houseboats, not just the yachts. When the lesser boats founder, people know it. And that’s where we are today. Bill Clinton won an election appealing to those people in 1992, various senators and congresspeople did so in 2006, and, from what you could hear as the 2008 campaign season got under way (much too early for the taste of most of us), the Democratic presidential candidates were tapping directly into the same set of values.
Now, you won’t hear this description of our economic challenges from most op-ed writers, any presidents, or central bankers. Their answer to the inequality question comes down to one, and only one, solution: more education. They believe that the reason the economy is passing so many folks by is that they don’t have the smarts and skills to cash in on the opportunities we’re creating.
The education mantra is a clever framing because (a) it rings true— you’re always better off with more education, and (b) it subtly puts the burden on you. The message is, “The opportunities to get on the right side of the inequality tide are there, if you’re smart enough.” If you’re not, well, then, either smarten up and join the parade or stop whining. As one U.S. Treasury official put it, “If the country . . . is going to undergo economic growth, then the population has to be able to take advantage of opportunities.” Or, as President George W. Bush elliptically put it, “We have an economy that increasingly rewards education and skills because of that education.”
Ten years ago, he would have been at least partly right. Today, education is neither the main cause of nor the main solution to the inequalities we face.
I deal with this in greater detail in a later chapter, but for now, I’ll assert that inequality is no longer being driven by the highly skilled pulling away from the rest of the pack. Yes, you’re far better off with a college education than without, but that degree won’t insulate you from global competition. Especially if your work can be digitized and offshored, there are highly skilled but low-paid workers in other countries with whom you now compete. The real wages of American college grads rose less than 2 percent from 2000 to 2006.
Yet, while college grads are beginning to feel the same competitive pinch that the blue-collar workers have felt for years, the share of income going to the top 1 percent of households in 2005 was, at 22 percent, higher than in any year since 1929!
Therefore, a simple “big skills get big rewards” story just doesn’t cut it today. To understand what’s behind today’s inequality, something to which I devote considerable time in the coming pages, you’ve got to deal with principle #1: POWER. More so than in any recent period, those who hold a privileged position in the economic power hierarchy, the players who sit down at the poker table with a stack of chips reaching to the ceiling—the CEOs and the holders of large capital assets—are able to steer the bulk of growth their way. Then, using their political connections, they’re able to ice the cake with a nice bit of after-tax redistribution, as regressive changes in the tax code funnel even more resources their way.
The rest of us—those who sit down with a modest stack of chips—are left trying to figure out . . . well, like it says in the title, why do I feel so squeezed?
Crunchpoint: You feel squeezed because you are squeezed. If this were just a growth problem, we could have a nice, polite discussion of ways to get productivity humming again, or how to bring down the unemployment rate. But productivity’s been great and unemployment’s low. The squeeze is on, and we won’t be able to call it off until we deal with our inequality problem.
Before wading more deeply into the etiology of the crunch, how about a nice mystery story?
I’ve always enjoyed the noir style in films and books, where gritty gumshoes pursue mysterious ladies while snarled in deeply tangled plots. One evening, while struggling to reconcile the growing economy with falling wages, I felt unusually close to Humphrey Bogart and wrote this story. In the next chapter, I explain the concept of gross domestic product in greater depth; for now, know that it’s simply our broadest measure of economy-wide growth. I also mention Ben Bernanke, chairman of the Federal Reserve, in the story.
I was working late in my DC office. I’d been running some new simulations on my macro-model, but nothing was converging, so I figured I’d close up my spreadsheet and find a corner in some dark speakeasy to lick my wounds.
That’s when she walked in. She had a neckline as low as the Nasdaq in ’01, curves like sine waves, and a dress tighter than the global oil supply. She had my attention even before she pulled out two reports I’d seen that very morning.
“I’m sorry to barge in on you like this,” she said in a voice that gave my calculator a power surge. “I didn’t know where else to turn.”
“You came to the right place, doll,” I said. “I see you’ve got the first-quarter GDP report, along with the new compensation results.” I’d been puzzling over these numbers all day, but what, I wondered, could this tall glass of cool water want with them?
“That’s right,” she purred. “I need to know why GDP is up 4.8 percent, the strongest quarter since 2003, yet real wages are falling.” Yeah, I thought, you and everybody else who works for a living.
“Why the interest?” I shot back. She didn’t look like a Democrat.
“I wish I could tell you. But I work for some powerful people”—now I knew she wasn’t a Democrat—“and they’d be very upset if they knew I was here.”
“Why me? Why don’t you ask your powerful friends to explain why the economy’s racing ahead but leaving working stiffs behind?”
She got kinda sulky, and I kinda liked it. “They wouldn’t know where to look. What’s worse, most of them think it’s great when wage growth decelerates because with no inflationary pressure from labor costs, it means the Fed can take a powder on rate increases.”
“Tell me about it, sister. I’ve been leaning on Bernanke for months on that point, but he doesn’t return my calls.”
Needless to say, I took the case. I wasn’t sure what game Little Miss Conflicting Reports was playing, but I figured I’d play along for now.
Fact is, I’d been asking the same question myself. Every quarter we seemed to be getting great news on top-line statistics—GDP, productivity, profits—yet the typical worker’s real earnings were down 2 percent over the recovery. Guys like me don’t like it when things line up that way.
I headed for the union hall, figuring some of those people might have an angle. Problem was, with private-sector unions down to 8 percent of the workforce, the hall had become a Starbucks. I got a vanilla chai latte to go and beat it.
I decided to head for the new economy, so I looked up some managers and professionals in the service sector. I found them, all right, but they didn’t have any answers. As of the first quarter of 2006, their compensation had lagged inflation for three quarters running.
This was more serious than I’d thought. Whatever was driving a wedge between overall growth and living standards, it was reaching pretty high up the pay scale. I wasn’t sure what mess I’d gotten into here, but it was time to confront the doll that got me into it.
I caught up with her in her penthouse, a place that had “housing bubble” written all over it. I know my wealth distributions, and this kitten came from the top 0.1 percent. I don’t like playing the sap—it was time for some class warfare.
“OK, gorgeous. Drop the ‘two Americas’ line and give it to me straight. You know as well as I do where the growth is going. What’s your game?”
She nibbled her lip and looked up at me real sweet. “I suppose if I told you I’m just a girl who cares about the bottom 99 percent, you wouldn’t believe me.”
She supposed right.
“All right, I’ll come clean,” she said, slumping in a chaise longue that probably cost the average income of the bottom fifth. “I work for the Republican National Committee, and we’re starting to get spooked by the president’s poll numbers on the economy. We figured if we don’t get a little trickle-down soon, it could hurt us in 2006, not to mention ’08.”
I kicked myself for not seeing it sooner. “So you don’t give a damn about the structural factors driving the productivity/wage gap: the declining unions, low minimum wage, the profit squeeze, slack job creation, and most of all the way globalization is sapping the bargaining clout of the American worker, blue and white collar alike.”
“Why should I?” she said, finally showing her true colors. “Any intervention would just cuff the invisible hand, doing more harm than good.” She was Milton Friedman with the body of Scarlett Johansson. I had to get outta there.
“You’re wrong!” I shouted, staggering toward the door. “You can’t see it, but these two reports are a microcosm of everything that’s right and wrong with this economy. Tell your people that whoever understands and articulates this disconnect, along with offering a convincing policy agenda to reconnect growth and living standards—that’s who wins the big tamale.”
I was wasting my breath. She had me bounced by a security guard as pumped up as ExxonMobil’s profits.
I brushed the dust off my suit and headed for the office. You’d think a case like this would be dispiriting to a guy like me, but you’d be wrong. Sure, she’d made me mad, but I saw things clearly now, and her little scheme was about to backfire.
There’s an electorate out there that’s looking for some economic stewardship. Maybe I’m just one economist in this big, crazy city, and maybe the other guys’ve got the deep pockets. But the way I see it, we can shape our economic outcomes so that everyone gets a fair shake, not just the chosen few.
I opened up a spreadsheet and got to work.
OK, that squeeze stuff is pretty convincing. But I hear a lot of cheerleaders touting a different set of facts, and once economists start throwing these numbers around, I’m lost. How can I tell if presidents/politicians/economists are giving it to me straight? What’s the right scorecard?
That’s a great and critical question, one that reminds me of my days teaching statistics. On day one of the course, we’d have a wide-ranging discussion about the use of statistics in society, and someone always—and I mean without exception—raised the argument that statistics was a fancy way to make stuff up. I lay in wait for this argument, and my response was that, in fact, the point of learning this stuff was to be able to distinguish between bad and good statistics.
My sense at the time was that my explanation probably convinced only a precious few, but the idea is an important one for our journey through Crunchland. Folks on cable news shows aren’t the only ones who selectively pick which facts they want to feature about the economy. Presidents do, too, and that creates a lot of cognitive dissonance among those stuck in the crunch. Here’s a scorecard to help square the difference between what you hear and what you see.
Like the rooster who’s sure his crowing caused the sunrise, presidents, regardless of party—this is not, I repeat not, a partisan critique (though I am fully capable of such critiques, and you’ll bump into more than a few along the way)—will always take credit for anything good that happens in the economy. And they’re not above picking the cherry tree clean to do so.
They generally rely on three methods: strategic clock starting, broad averages, and bars set so low that even they can get over them.
A posting on the White House Web site (“Hey, didn’t you just say something about this not being a partisan attack?” . . . Yes, but I’m picking on Bush here because, since he’s the prez, his spin is a mouse-click away, and, truth be told, his folks engage in this stuff more than most), accessed in November 2007, for example, makes the following claims:
More than 8.31 million jobs created since August 2003.
Granted, you may annoy anyone within hearing distance, but when economists start spouting numbers, remember this invaluable question:
“Compared to what?”
“More than 8.31 million” sounds big, right? But in a workforce of 150 million, it’s a rate of job growth that’s actually well behind the historical average. It also comes after years of employment losses, even as the economy expanded—another example of the disconnect we face today.
Since economic data series tend to fluctuate with the business cycle, trending negative in recessions and improving over recoveries, you can make any data series sing whatever tune you want if you start counting at a carefully chosen point in time. When looking at longer-term trends, economists avoid this bias by comparing similar points in the business cycle, the most common being peak-to-peak.
It’s analogous to when you weigh yourself. If you wanted to convince yourself you were losing weight, you’d take your baseline weight right after a big meal, then check your progress first thing in the morning (hey, I’m going to try that!).
In this case, the Bush-league econ wizards, recognizing that their boy had presided over the worst jobless recovery on record (though the recession was over in November 2001, we kept losing jobs until August 2003), started the clock in August 2003. If they’d done it the right way, they’d have to report that job growth over the Bush cycle has been the worst on record, going back to the 1940s.
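To see how much the choice of starting point matters, here’s a minimal sketch. The monthly payroll levels are made up purely for illustration; only the logic mirrors the clock-starting trick described above.

```python
# Hypothetical payroll-employment levels (millions) for a stylized cycle:
# prior peak, recession trough, the low point of a jobless recovery, latest reading.
payrolls = {
    "cycle peak":       132.5,   # made-up prior business-cycle peak
    "recession trough": 130.9,   # made-up end of the recession
    "jobs low point":   129.8,   # made-up bottom of the jobless recovery
    "latest":           138.1,   # made-up most recent reading
}

def pct_change(start, end):
    """Percent change in employment between two points in the cycle."""
    return 100 * (payrolls[end] - payrolls[start]) / payrolls[start]

# The cherry-picked clock: start counting at the jobs low point.
print(f"From the jobs low point: {pct_change('jobs low point', 'latest'):.1f}%")

# The peak-to-peak-style clock: start counting at the prior cycle peak.
print(f"From the prior peak:     {pct_change('cycle peak', 'latest'):.1f}%")
```

Same data, two clocks: measured from the low point, job growth looks a lot peppier than it does measured from the prior peak, which is the comparison economists use when they want to be honest about the cycle.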
Real wages rose 1.2 percent over the past 12 months.
Ah, the soft bigotry of low expectations. A bit over 1 percent real growth isn’t nothin’, but it’s way too close to nothin’ for comfort.
And anyway, compared to what? (See, you can start using that one right away.) Real wages were no higher when the White House made this announcement than they were three years earlier (real wages fell from 2004 through the first half of 2006). And over the longer term, wage growth for different groups of workers should always be compared with productivity, answering the question, are the bakers getting a fair slice of the pie?
Over the business cycle from 2001 to 2007, real wages were up 2.3 percent, compared with 18 percent for productivity, a sure sign that slices were not growing in proportion to the pie. That works out to a gain of about $0.40 per hour (the average wage was $17.24 in April 2007), or around $800 a year for a full-year worker, a gain that took six years to accumulate: about $133 per year. If your benchmark for success is essentially zero, then you’ll pop the champagne cork for even the most marginal gain.
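Here’s the back-of-the-envelope version of that arithmetic, using the numbers quoted above; the 2,000-hour work year is my own round-number assumption for a full-time, full-year worker.

```python
# Back-of-the-envelope wage-gain arithmetic from the text above.
avg_wage_2007   = 17.24    # average hourly wage, April 2007 (quoted in the text)
real_wage_gain  = 0.023    # 2.3% real wage growth over the 2001-2007 cycle
full_year_hours = 2000     # assumed hours for a full-time, full-year worker
years_in_cycle  = 6

hourly_gain   = avg_wage_2007 * real_wage_gain     # roughly $0.40 per hour
annual_gain   = hourly_gain * full_year_hours      # roughly $800 a year by cycle's end
gain_per_year = annual_gain / years_in_cycle       # roughly $133 accrued per year

print(f"${hourly_gain:.2f}/hour, ${annual_gain:.0f}/year, built up at ${gain_per_year:.0f} per year")
```

By the same arithmetic, wages keeping pace with the 18 percent productivity growth would have meant a gain of roughly $3 an hour, about $6,000 a year, which is why nobody outside the greenroom is popping champagne.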
Again, that’s what presidents do, and not just this president. Back in Bill Clinton’s first term, my organization published a critical review of the administration’s claims about the quality of jobs “they’d created”—that is, created on their watch. The administration pointed out that lots of the new jobs were in higher-paying occupations. We showed that (a) this is almost always the case—occupation upgrading is the norm (note the use of the classic compared-to-what technique), and (b) many of these supposedly good jobs were paying less than they used to. (Note: Back then, we got invited over to the White House for a nice lunch, wherein we spoke earnestly about our competing analyses; I waited and waited to hear from the Bushies, but my wait was in vain.)
Real after-tax income per person has risen by 12.7 percent, more than $3,800 per person, since the president took office.
Another classic tactic: Cite broad averages, ignoring the fact that the benefits of growth have been anything but broadly shared. Exacerbate the crime by implying that they have (“$3,800 per person”).
Imagine an economy with five people, each of whom earns, respectively, 1, 2, 3, 4, 5 bucks. The average and the median are both 3. Now, imagine that the income of the top person jumps to $100. The average is now 22 (4 bucks more per person!), but the median is still 3. In other words, the median is insensitive to big jumps at the top.
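If you want to check that insensitivity for yourself, the whole experiment fits in a few lines; the five incomes are just the ones from the example above.

```python
from statistics import mean, median

incomes = [1, 2, 3, 4, 5]                # the five-person economy from the example
print(mean(incomes), median(incomes))    # both 3

incomes[-1] = 100                        # the top earner's income jumps to $100
print(mean(incomes), median(incomes))    # the average leaps to 22; the median stays 3
```

That stubbornness is exactly what makes the median the right yardstick for the typical household when the gains are piling up at the top.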
Unfortunately, this is no thought experiment. It’s representative of the growth concentration that occurred through the first half of the 2000s.
Crunchpoint: In the interest of statistical inoculation, invoke these simple rules when presidents and sundry economic types start throwing numbers around:
Ask “Compared to what?” Most economic variables grow most of the time, so the question is not whether X (employment, incomes, wealth, and so on) is growing, but by how much relative to what we’d expect in normal times. Don’t be moved by large numbers with no historical context.
Broad averages, such as income per person, are distorted by the huge values of the richest households, so remember this handy rhyme:
When it’s inequality you’re seein’,
Don’t use the average, use the median.
My dad had a full-time job, but my mom didn’t, and they managed to raise, feed, house, and educate two kids on one salary. I can’t do that today. Why not? What happened?
What happened was that the real earnings of lots of people, mostly male people, so husbands in this case, started to slip. At the same time, some of the very costs mentioned—a home and a college education—grew a lot faster than average inflation (and the fact that this questioner is from San Francisco makes a difference here, especially regarding home prices).
That’s bad.
Also, over the last 30 years, the job market has opened up much more for women, who have made impressive gains that have helped to offset their husbands’ wage stagnation.
That’s good.
But it also means that family members are spending a lot more time in the job market. That’s bad, or at least it’s stressful.
There are three problems here and one positive development.
Problem 1: Men’s earnings.
The hourly earnings of some men—and not a trivially small group— have done poorly over the last few decades. As shown in the graph, the typical married man in his prime earning years, age 25 to 54, saw his real median wage fall a couple of percent from 1979 to 2006. His female counterpart made a lot more progress; her real hourly wage rose 30 percent, and she also worked a lot more hours. And if we cut the data a little further and look at husbands with at most a high school degree—and only a minority of husbands were college educated over these years (16 percent in the mid-1970s; 30 percent today)—we find a real wage loss of 8 percent over these 27 years.
But before you spouses out there start humming “Hit the road, Jack,” recognize that it’s not their fault. These men have been caught in the crossfire of a set of trends that have ripped the bottom out of their earnings capacity. The loss of unionized factory jobs has meant the slow bleed of high-productivity jobs in a sector where these guys had some bargaining power—clout that enabled them to channel some of that growth into the household.
Figure 1.1. Real median hourly wage, husbands and wives, 1979–2006.
(Graph is based on author’s analysis of U.S. Census Bureau data.)
The fact is, when a man goes from making stuff to providing services, especially a man without a college degree, his wage falls between 15 and 20 percent, and he loses most of his fringes. What explains a loss of that magnitude? It’s not just the difference in the efficiencies between the two sectors, the so-called productivity differential—the fact that services create less value added per hour than factory work. It’s also that there’s a lot more wage inequality in services, and when income grows in that sector, it tends to flow to the top.
That’s where you most clearly see men’s loss of bargaining power playing out; and outside of the public sector, unions have been hard-pressed to get a foothold in services. Wal-Mart has shut down operations rather than entertain the possibility of their workers forming a union.
At any rate, given that most of these men were working full time, full year, families had one (legal) strategy to undertake if they wanted to offset those negative male wage trends: more work by wives.
Problem 2 and Good Development 1: Women’s increased presence in the paid labor market.
The increase in women’s participation in the paid labor market over the last 40 years is widely appreciated as a huge change in our economy, our culture, and our families. Back in the mid-1960s, about 40 percent of women worked; in 2006, it was about 60 percent. And, while gender wage discrimination was and is a problem, women have made important gains in education and experience, and some have successfully penetrated barriers in high-end professions like law and medicine.
The wage differences noted above are dramatic, and working wives, for example, have more than offset husbands’ losses. My own research has shown that in the absence of wives’ added contributions to family income, the real (inflation-adjusted) income of middle-income married-couple families with kids would have gone up a mere 6 percent between 1979 and 2000, a barely noticeable advance of 0.3 percent per year.
Instead, it was up 25 percent (1 percent per year). That’s the difference between stagnation and rising living standards. My guess is that families like that of the man who asked this question looked at the lay of this land and recognized that if they wanted a better life for their children, they were going to need to spend more time in the paid-job market. The men were topped out, already working full time, full year. And the upside is that working women were both taking advantage of increased economic opportunities and building some important economic independence.
But consider this. Husbands in these families were already working full time, and that hasn’t changed much at all. Wives, on the other hand, are working more weeks per year and more hours per week. In fact, they worked, on average, 535 more hours in 2000 than they did in 1979, the equivalent of more than three months of full-time work (they went from about 850 hours per year to about 1,390).
That’s one source of the squeeze that working families are talking about these days. You could write a book about that too, but let me summarize very simply: It’s a bitch to balance work and family when you and the spouse are working one and three-quarters full-time, full-year jobs between you. It can be done, families do it every day, and we should never downplay the empowerment endowed by greater economic independence. But it’s exhausting.
Problem 3: Faster price growth for some important stuff.
When people talk about the middle-class squeeze, what they’re really saying is that their paycheck isn’t going as far as it used to. Now, economists (like me) who look at overall inflation and compare that with incomes often miss what’s at the heart of these concerns, and at the heart of the question we’re parsing through: Your income can be beating overall inflation but falling behind on some highly visible and very important areas of your budget, your life, and your perfectly reasonable aspirations.
Over the past decade—1996 to 2006—overall prices as measured by the Consumer Price Index were up about 30 percent, a pretty typical rate of price growth. But the costs of child care and nursery school rose twice as fast—they were up 60 percent. College tuition: up 80 percent. The price of the median home doubled over those years, from $110,000 to $220,000 (of course, there was a bubble at work here—see Chapter 2). Health premiums—the monthly amount that families pay out of pocket for employer-provided coverage—also just about doubled, from $122 to $226.
Obviously, if these goods and services are outpacing overall inflation by a country mile, other goods are diving in price. And yes, if you’ve shopped for a DVD or computer lately, you know what I’m talking about. Later, in the section on globalization in Chapter 4, I describe my adventure when I went shopping for a music system. The price of audio equipment is down 40 percent over the last decade. Computer prices are down 86 percent!
Here we have the other answer to the questioner’s “What happened?” While economists blissfully celebrate the price declines of cool shiny new stuff with keyboards and remote controls, some of those clunky old things that kind of get us through life, the stuff for which we write checks each month—mortgages, health insurance premiums, child care, the kids’ college-savings account (if we’re lucky)—have been costing a lot more, and, even for many upper-income families, their prices have been rising more quickly than incomes. Don’t get me wrong: It’s great to be able to buy an awesome computer or sound system for pocket change. They’re very entertaining after an exhausting day of being squeezed by everything else.
Crunchpoint: The fact that lots of men have been whacked by globalization, deunionization, and deindustrialization shows up as real wage trends that have barely kept pace with inflation. Women, on the other hand, have done better in terms of wage growth, and they’re spending much more time in the job market. For many families, that’s more than offset the husband’s wage losses, but it comes at a cost: Balancing work and family is much harder now. Add the fact that the prices of some key components of the middle-class budget are rising faster than middle-class incomes, and you’ve got the genesis of the “middle-class squeeze.” And that’s why the family of the man who asked this question can’t live like his parents’ family did.
Haiku-nomics: Sure, we can talk about economics all day. But at some point, you want to hear some poetry, right? In that spirit, you will be introduced to a new form of Zen-based poetry I call haiku-nomics, strategically placed throughout the book. The haiku is a simple Japanese form intended to plant an image, idea, or fleeting feeling in the mind of the listener. Now, there may be a reason why the great haiku artists of the distant past avoided economic themes. You be the judge.
The economy grows.
Yet my resources fail
to reach my needs.
Health care reformers, from Michael Moore to scholarly wonks, constantly tell us that some other countries spend less on health care, cover more people, and have better health care outcomes than we do. Is it so, and if it is, are we going to do something about it?
One of the most important aspects of the middle-class squeeze relates to people’s concerns around health care, a topic about which many folks raise questions. Other countries spend less on health care because they recognize it is not something that can be efficiently produced and delivered through a purely market system. So they take it at least partially out of the market; that is, the government sector plays a much larger role in both access to and delivery of health care. In doing so, these other countries undermine the damaging power and scope of the medical industrial complex—the MdIC—a force that must be met if we’re ever going to get this right. Principle #1 comes first for a reason: Our biggest economic reform challenges, and I’d put health care at the top of that list, have been and will continue to be a struggle to wrest power from those with deeply vested interests in maintaining their privileged positions.
The advantage that reformers have in this debate is that there are demonstrable ways—systems up and running in other advanced economies—to provide health care more efficiently and effectively than we do now. Getting there will be neither easy nor painless nor devoid of sacrifice. But as I document later, unless we change course, health care spending will simply suck up too many of our resources, leaving too little for anything else.
Just for the record, this is not a radical claim, nor is it something unknown to most economists. Congress’s nonpartisan budget analyst, the Congressional Budget Office, regularly churns out documents pointing out . . . well, not so much that “the end is near,” but more, “You guys are going to want to do something about the ever-increasing share of health care in our economy . . . right, guys? Hello? Anyone there?”
Given these economic pressures, let me first dispose of the easier second question: I think and hope that we may be poised to at least start moving in the right direction on health care. Granted, that’s not a hugely confident assertion, but let’s face it: When it comes to making big changes in big, important systems, we don’t exactly turn on a dime. But let’s also face this fact: People get that the current system is breaking down, and politicians seem to get that people get it. A poll from early 2007 found that a majority wanted Congress to address the problem of health care coverage and 60 percent were even willing to pay higher taxes to deal with it. Almost 8 out of 10 said it was more important to make sure that people got health care than to extend the Bush tax cuts.
It’s no surprise that people want better health care, but when they start saying they’re ready to pay higher taxes, that should get everyone’s attention. These days, it seems every serious political candidate has a plan purporting to deal with health insurance, often highlighted as the centerpiece of his or her campaign. All this attention won’t guarantee a desirable outcome (we’ll talk about competing health plans in Chapter 5), but it does tell you that the issue is solidly on the front burner. As I stress below, there are powerful forces aligned against health care reform, and my guess is that we’ll move toward the light (that is, a better system) in baby steps.
About the first part of the question: As noted, every other advanced economy has recognized that health care coverage is not a commodity like picnic tables or pet food, and so, to one degree or another, either they provide it through the public sector or, if they keep it private, it’s highly regulated. This might sound odd to free market advocates, whose religion holds that any service taken out of the market or highly regulated will be provided less, not more, efficiently.
But health care is different from commodities in some fundamental ways. First, since we tend not to let people expire in the streets, we end up providing the uninsured with care. If a hungry person shows up at a supermarket without money, he doesn’t get fed. But if a sick person shows up at the hospital without insurance, she does get treatment. And the rest of us end up paying for it.
Another reason why we shouldn’t be thinking of health care as a commodity is that it’s one of those things that sellers—the insurers—want to sell less of, especially to sick people. Private insurers have an incentive to prevent people from getting all the care they think they need. This incentive rises as medical costs rise, and health costs have been rising a lot faster than average inflation. Remember, insurers are in the for-profit sector, and while of course they expect to make all kinds of payouts to the people they cover, they’re going to spend some time and resources trying to avoid doing so. Basically, the sicker you are, the more you need access to the system. But you are precisely the person the gatekeepers want to keep out. It’s a recipe for dysfunction.
Other countries with advanced economies save a lot by taking the insurers out of the picture. As noted, they employ either single-payer or heavily regulated systems, in which either the government is the exclusive insurer or private insurers must provide specified, subsidized coverage to all. There’s little market competition, but costs are held down by (1) taking advantage of the huge risk pool—when the nation has one insurer to which you have to contribute, the majority of healthy people subsidize the minority of sick people; (2) the absence of profits, advertising, and weeding-out costs (it takes insurers time and resources to get between people and the care they want); and (3) some degree of rationing and price controls, and a lot more attention paid to cost effectiveness (what works versus what’s wasteful).
Before anybody freaks out over the rationing part of #3, let’s be clear: Our current system rations like crazy. It’s called price rationing, and there are 47 million uninsured people who’d be happy to discuss it with you.
So, that’s why these countries spend one-half to two-thirds the share of GDP that we spend on health care, but cover everyone and still manage to report generally better health outcomes on important stuff like lower infant mortality rates, longer life expectancy, and less obesity, diabetes, and hypertension.
The absence of large risk pools and the inefficiencies in the private market are not the only reasons we spend a lot more for less. Pretty much everything I’ve told you thus far is well known, but moving to universal coverage would not, on its own, solve our other health care problem: the fact that health care spending is outpacing the growth of the overall economy. Every year, we’re spending more and more as a share of our economy on health care. Less than half of the increase is due to the aging of the population—it’s mostly due to increases in medical costs, which year after year outpace overall inflation.
So how do we wrestle these costs down? I’m afraid it’s another case of the need to do battle with a powerful foe—in this case, the medical industrial complex. Sure, expensive technologies are desirable—jeez, who wouldn’t want his kid to get a CAT scan when she bumped her head?—but a careful look at the way we spend health dollars suggests lots of waste and profiteering in the name of “good medicine,” something I go into in greater detail below and in Chapter 5.
Crunchpoint: It’s true. Every other modern economy delivers health care to more people with better results at less cost, and they do so by at least partially “de-commoditizing” health care: To one degree or another, they take it out of the market. This doesn’t mean it will be costless to follow their lead. When we fix this, and I think we may be poised to take a serious run at it, some folks will receive less health care than they do today. But many others will receive more, and if we get it right, we’ll all benefit from the establishment of a system that covers everyone and does so in a way that doesn’t metastasize into an inoperable tumor.
Why is our health care system so crazy expensive, yet my health insurance company won’t pay for all of my child’s routine medical checkups?
To paraphrase slightly, why does our nation spend lavishly on all kinds of pricey care but skimp on routine prevention? Here’s a place where old-fashioned economic analysis is pretty helpful: Follow the incentives. Beware, though. The path they lead you down is not pretty.
Most health coverage is based on deductibles and premiums. You pay the premium to have the plan at all; once you’ve also paid your share of the bills—the deductible—the coverage kicks in and the insurer takes over. Cheap plans offer high deductibles: You pay comparatively little for the plan, but it doesn’t kick in until you’ve shelled out some serious bucks. Sounds good if you’re healthy. But that’s risky. You might not be as healthy as you think, and if some serious accident or illness befalls you, you’re both sick and screwed.
On the other hand, if you have the dough, you can buy an expensive plan that kicks in with coverage quickly. But remember, these insurers weren’t born yesterday. You as much as cough while you’re filling out the form, and there’s no way they’re going to cover your sickly butt, even with an expensive plan.
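A toy comparison makes that trade-off concrete. All of the plan numbers below are hypothetical, and the sketch ignores co-pays and coinsurance; it’s only meant to show why the cheap plan looks smart in a healthy year and bites in a bad one.

```python
def annual_cost(premium, deductible, medical_bills):
    """Toy plan mechanics: you pay the premium no matter what, plus your
    medical bills up to the deductible; the insurer covers the rest."""
    return premium + min(medical_bills, deductible)

cheap_plan  = dict(premium=1_200, deductible=5_000)   # made-up high-deductible plan
pricey_plan = dict(premium=4_800, deductible=500)     # made-up low-deductible plan

for bills in (300, 20_000):                           # a healthy year vs. a bad one
    print(f"bills ${bills:>6}: "
          f"cheap plan ${annual_cost(**cheap_plan, medical_bills=bills):,} | "
          f"pricey plan ${annual_cost(**pricey_plan, medical_bills=bills):,}")
```

In the healthy year the cheap plan wins easily; in the bad year it’s the pricier plan that leaves you less exposed. That’s the gamble every family with coverage is being asked to make.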
Most of us with coverage end up somewhere in the middle, but once we’re no longer paying the bulk of our costs, we don’t have a great incentive to conserve. I know I’ve argued against many basic economic precepts in these pages, but in this case, I’m with the textbooks: Price signals matter. It’s true that, like this questioner’s kid, we underconsume preventive care. But at the same time, many of us overconsume wasteful and inefficient health care. Why? Because we don’t face the real costs.
Health policy analyst Ezra Klein uses this metaphor to describe the problem: “You eat more at a buffet because the refills are free, and you use more health care because insurers generally make you pay up front in premiums, rather than at the point of care.”
But it’s even a little worse. There’s an episode of The Simpsons in which Homer bankrupts an all-you-can-eat seafood restaurant. The buffet idea works for the restaurant because they can count on most diners’ getting full. With health care, we’re more like Homer, going back for ever-more-expensive treatments and drug regimens with no mechanism to satiate our infinite demand for health.
The insurers fight back with everything from hassle factors (making you or your doc jump through hoops to get reimbursed), to co-pays (you shell out 10 bucks at the MD’s office or the pharmacy), to covering two “well baby” visits this year as opposed to three last year.
Let me be clear about this last glib point regarding the baby visits, lest I’m accused of falling into the same “More health care is always better health care” assumption that’s partially gotten us into this mess. It may be that two well-baby visits are all that the typical baby needs to remain healthy, and while more visits might be reassuring to Mom and Dad, that’s precisely the kind of waste we need to drive out of the system. The question is, who makes that call? If it’s the insurer, forgive us—me and the person asking this question—if we’re a bit skeptical regarding the insurer’s ability to objectively do so.
At any rate, the insurers have been highly successful in the sense that their profits have been robust, but that’s partly because they’ve been raising costs: Premiums since 2000 have been growing at a rate that’s more than three times that of overall inflation. Family coverage premiums were up 67 percent from 2000 to 2005, compared with 13 percent for overall inflation.
Who pays these costs? Our employers do, but so do we, through lower wages and higher prices. American cars, for example, cost more than those of our competitors who don’t face these irrational burdens. The fast-rising cost of health care is one reason for the squeeze I write about elsewhere, as employers move that raise you expected out of your paycheck and into your insurer’s pocket. More and more, we pay through greater direct payments into the health care system, as employers shift costs back to their workers.
OK, you say. But we’re getting something for all that money, right?
Well, sure, we’re getting lots of health care, but it’s not making us any healthier than those much cheaper systems would. Well-known studies show that, controlling for how healthy or sick people are, health outcomes do not differ based on how much treatment people receive. One study by the Rand Corporation is particularly instructive because it followed people who were randomly sorted into insurance plans with great variation in their generosity (the random sorting is key, because you want the sick people and the healthies randomly distributed among the plans). As you would expect, those in the Cadillac plans got a lot more care, 40 percent more than those driving Hyundais. But their health outcomes were unaffected. (Important exception: The poor in the more generous plans fared better than the poor in cheaper plans, because—news flash—they tend to underconsume health care.)
Some findings suggest weird regional differences. For example, in certain parts of the country, C-sections are a lot more likely (for no apparent reason); and elderly persons in their last stages of life will see many more specialists in one part of the country relative to another—again, with no difference in outcome. By default, people across the country are, at their doctors’ behest, getting expensive and exotic tests just because they’re there, with no regard to their cost effectiveness.
As Ezra Klein summarizes in his American Prospect article, “Not only is more care not always better, it is sometimes worse—and it is always more expensive.” The nonpartisan wonks of the Congressional Budget Office (CBO), who crunch the budget numbers for the Congress, are duly freaked out about the longer-term budgetary implications of these developments and echo Klein’s point in their own starchy terms: “Significant evidence exists that more-expensive care need not mean higher-quality care—suggesting an opportunity to reduce costs without impairing health outcomes.”
Here’s a revealing picture about this story from the CBO. Each dot represents a state, and they plot the relationship between a quality-of-care measure on the y-axis and that state’s per-beneficiary Medicare costs (x-axis). If costs and quality were correlated, the dots would generally line up from the lower left to the upper right of the figure. Instead, they’re randomly scattered about. I kind of see a little doggie running, but that’s me. I also see some pretty obvious waste. We’re clearly kickin’ back, spending money hand over fist, with little regard for what works, and for what’s cost effective and what’s not.
Figure 1.2. Spending and quality of care for Medicare beneficiaries, with each dot representing a state. Do you see a relationship? (I see a little doggie . . . ). (Source: Health Care and Budget: Issues and Challenges for Reform, Congressional Budget Office, June 21, 2007 [figure 4].)
Given the nature of the problem, stopping this waste has been and will continue to be extremely hard. Who wants to be the one whose spouse, child, or parent doesn’t get the fancy drug because it might not be “cost effective”? Our own infinite demand for anything we think might help us or our loved ones is but one of the intractable forces we’re up against. The other is the MdIC.
There’s no better critic of the MdIC than health policy analyst Merrill Goozner, a guy who not only has fearlessly looked in every dark corner of that complex, but also has the medical knowledge to recognize waste when he sees it. Here’s how he sees the fight we need to have:
In [these] debates, we’ll be taking on the drug, device, and durable equipment makers, the diagnostic testing industry, hospitals and organized medicine, as well as the tobacco industry, environmental polluters, the food industry and other drivers of poor health in American society.
Those are some pretty formidable foes, but here is the reason why we as a society must join this fight: Something’s got to give. The arithmetic is, once again, scarily simple. If spending on health care keeps growing faster than our income, the share of our economy devoted to health care will continue to rise.
Some economists have argued that they’re OK with that. We are an aging, rich country, and, as I’ve been stressing, spending on health care is, to introduce a tiny bit of useful jargon, highly “income elastic,” meaning that as our income goes up, we want more, more, more of it. In this view, we’re sovereign consumers buying lots of what we want. That’s the genius of the market, right?
Uhh . . . nope. We’re consumers who are pretty much helpless to recognize the utility of what’s being proffered, whether it’s a $5,000 screw for our hip joint or an MRI for a headache. Some of those screws and MRIs will be tremendously life enhancing, if not lifesaving, but we don’t know which ones, and the system is set up to keep us from learning or caring about the difference. And, to be fair, a lot of well-intentioned doctors don’t know, either. That way, the bucks keep flowing to the MdIC.
There’s got to be a better way. Read on.
Crunchpoint: Health care in America costs so much because consumers don’t face the costs, we tend to have insatiable demands, and no one in charge is trying to figure out what works and what’s wasteful—meaning that quality and cost have far too little to do with each other. Fixing these structural problems puts us on a collision course with the medical industrial complex, and those guys are street fighters (K Street, that is).
What’s it going to take for large-scale health reform to occur?
This is a tricky question to answer because a large majority of us— according to health care expert Atul Gawande—are satisfied with our own coverage and care. The problems discussed above are real, but their result is more of a slow bleed than a hemorrhage, so my answer—and this is clearly more about politics than economics—is that change is going to come only gradually. We’ll get to large-scale reform, but it will occur through chaining together a bunch of small-scale steps. We won’t see Medicare for All anytime soon, but we might see it for selected age groups, like children or those nearing retirement.
That said, I’m confident that reform will occur, and let me quickly shift back to my economics turf to show you a convincing picture of why I think so. It relates closely to principle #3, regarding trade-offs, the notion that economics often forces us to choose one thing over something else. The trajectory of health care spending, both private and public, is going to be forcing our hand in this regard in a big way, and not all that far down the road.
Figure 1.3. Per capita income, per capita income less taxes, and per capita income less taxes and all health care spending, 2005–2050. (Source: Henry Aaron, Brookings Institution, used with permission from Mr. Aaron.)
The figure captures two critical points. First, excess health spending is not simply a public sector problem, as in, we can fix Medicare and be done with it. It’s a private sector problem, too. In fact, it’s worse there, which is the figure’s second point: The current rate of health care spending will, in a few decades, leave us with a lot less disposable income.
The top line shows that real GDP per person is expected to double over the next 50 or so years. Sounds good, right? We’ll all be wealthier, at least on average.
Not so fast. The next line shows that if we account for the taxes we’ll need to pay for public health care, like Medicare and Medicaid (along with other stuff, but those are the big-ticket items), real income will start lower and grow more slowly. But it’s still up about 75 percent, meaning once we pay for public health care, we’re still better off over time.
But it’s the bottom line that’s the nasty one. Here we take out private sector health care spending, under the assumption that nothing changes in the way we spend these dollars. Income left over after health care peaks in 2044 and actually falls after that. And this is “per capita,” or average, income. Excuse me if I worry that those in the bottom half will get whacked the hardest.
Now, I distrust 50-year forecasts as much as you probably do, so don’t take these future trends as indisputable. But do view them as a hard-headed warning against blithely following the current path, nudged happily along our way by economic positivists jabbering about “sovereign consumers exercising their preferences.”
Even if we wanted to, we could not continue down that path, a fact that’s widely known and appreciated by policymakers of all stripes, as well as by the MdIC itself. As we near the point where income minus the cost of health care flattens or even falls, excessive, wasteful health spending inexorably crowds out our ability to invest in or pay for other stuff we want and need. We will be unable to improve schools, get more low- and middle-income kids through college, make the needed investments to push back global warming, or simply make the paycheck go far enough to meet basic needs and aspirations.
And even if you think your family’s health care plan is OK, at some point that squeeze will get your attention. It’s already gotten the attention of some of the states. As is often the case with big policy matters in the United States, the states tend to act before the feds, and some big players, including California and Massachusetts, are not waiting for Big Brother to get started. They and others are trying out good ideas, mostly involving “pay or play” plans: Employers either provide coverage to their workers or pay into a state plan to do so.
This phenomenon where states serve as laboratories for what later becomes national policy has a pretty good track record, but there are reasons to be a little skeptical in this case. First, remember all that stuff about the benefits of a large risk pool? Well, certain states have a disadvantage in this regard, like Florida and Arizona, with their larger-than-average share of elderly residents and low-income immigrants.
But a bigger constraint is fiscal: States just don’t have as deep and flexible a purse as the feds. Unlike the federal government, states have to balance their budgets, meaning that in economic hard times, when revenue grows scarce, they’ll have to start cutting health services, and this at a time when folks are particularly vulnerable. We’ve actually seen this happen already, around a public health insurance program for children that’s funded by a combination of state and fed bucks. States generously expanded coverage in the flush 1990s and retrenched in the 2000s.
So, while we should look closely at the state experiments and learn all we can from them, one lesson seems to be that you can’t do this effectively at the state level. You need the big elephant (more likely, the big donkey) in the ring.
Crunchpoint: The current health care system—public and private—can’t be sustained, in the sense that it will soon start gobbling up too much of our income to be justified, especially considering the unsatisfactory outcomes. Change is a-comin’, but it’s likely to be incremental. That’s fine, as long as the baby steps we’re taking are on the right path, toward single-payer coverage and away from the MdIC.
My health plan betrays me.
I fight alone against
a stronger foe.
How many people are actually poor in America?
As of this writing, the most recent poverty statistics inform us that there were 36.5 million officially poor people in the United States in 2006, 12.3 percent of the population. They’re “official” in the sense that they meet the government definition, but we hit a snag right away: Not even the officials believe the official measure. It’s terribly out of date and is simply no longer a reliable measure of economic deprivation. Even the statistician who invented our poverty measure lo these 40 years ago doesn’t believe it anymore.
Let’s take a quick look at how we measure poverty and why the measure has become so inadequate. For you to be counted as officially poor, your income gets compared with a threshold for your family size. For example, for a family with two parents and two kids, the 2006 threshold was about $20,400; for a single parent with two kids, the threshold was about $16,200. If your family income, with a few adjustments, was below that level, you were poor.
The official thresholds were based on food costs of low-income families in the mid-1950s. Surveys showed that these families spent about a third of their income on food, so we simply tripled the value of the “economy food plan” (the cheapest nutritionally adequate food plan derived by the Department of Agriculture) for a given family size.
Amazingly, with very few changes beyond adjustments for inflation, this remains the official poverty measure to this day. Yet food now represents a much smaller share of family budgets than it did 50 years ago (its average share has fallen by about half), while housing, transportation, and health care, for example, constitute larger shares. Simply updating the thresholds for this one change would make today’s poverty thresholds (and poverty rates) much higher.
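To make that arithmetic concrete, here is a minimal sketch in Python. The food-cost figure is hypothetical (chosen only so that the old-style calculation lands near the family-of-four threshold cited above), and the budget shares simply follow the logic in the text: food was about a third of a low-income budget in the mid-1950s and roughly half that share today.

# A sketch of the poverty-threshold arithmetic, with hypothetical numbers.
def poverty_threshold(annual_food_cost, food_share_of_budget):
    # Threshold = food cost scaled up by the inverse of food's budget share.
    return annual_food_cost / food_share_of_budget

food_plan = 6_800  # hypothetical annual "economy food plan" cost, family of four

# Mid-1950s logic: food was about one-third of the budget, so triple it.
old_style = poverty_threshold(food_plan, 1 / 3)   # about $20,400

# Food's share has fallen by roughly half, so the same logic today
# would imply a multiplier closer to six, and a much higher threshold.
updated = poverty_threshold(food_plan, 1 / 6)     # about $40,800

print(f"old-style threshold: ${old_style:,.0f}")
print(f"updated-share threshold: ${updated:,.0f}")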
But there’s a deeper problem with the official approach: As living standards rise for the rest of society, those deemed poor by a fixed income level that is adjusted solely for price changes will fall behind the rest of us. Back in 1960, the official poverty threshold for a family of four was about half the typical (median) income for a four-person family. Today, it’s around 30 percent of the four-person median.
In an era with sharply growing income inequality, it is worth contemplating the importance of this development. Why should we be concerned if our poverty thresholds drift farther below the income of the median household?
The answer is that deprivation is not solely an absolute concept; it is a relative one as well. Economists since Adam (Smith, that is) have recognized that even if the poor are able to meet their fundamental needs for food and shelter in such a way as to sustain their lives, they can, by dint of the economic and social distance between themselves and the rest of us, still experience deprivation that is harmful to society.
As Smith put it, over 200 years ago:
By necessaries I understand not only the commodities which are indispensably necessary for the support of life, but whatever the custom of the country renders it indecent for creditable people, even of the lowest order, to be without. A linen shirt, for example, is, strictly speaking, not a necessary of life. The Greeks and Romans lived, I suppose, very comfortably, though they had no linen. But in the present times, through the greater part of Europe, a creditable day-laborer would be ashamed to appear in public without a linen shirt, the want of which would be supposed to denote that disgraceful degree of poverty which, it is presumed, nobody can well fall into without extreme bad conduct. Custom, in the same manner, has rendered leather shoes a necessary of life in England.
In other words, whether or not you’re poor isn’t simply a matter of whether you can afford to meet your most basic needs. It’s also a question of whether you’re keeping up with the general rise in living standards that most people are experiencing and enjoying.
To this day, top-tier poverty analysts who should know better overlook this point, citing material gains made by today’s poor relative to those of the past. Two such analysts, for example, writing in 1999, noted, “By the standards of 1971, many of today’s poor families might be considered members of the middle class.” Another noted that “poor people’s physical and material well-being is considerably better now than in the late ’60s. How else to explain why so many poor now have color TV (93%) and air conditioning (50%), and own their own homes (46%)?”
Such comparisons implicitly freeze the well-being of the poor at a point in time, ignoring progress in technology, consumption, relative prices, and opportunities. In short, to ignore the economic distance between the poor and everyone else is to ensure that they will remain outside the mainstream. Yes, they will not starve, many will be housed, and they will all watch TV in color. But they will still be separate and unequal relative to the majority.
Does all this mean we don’t know how many poor people there are today? Fear not, poverty warriors: the faithful Census Bureau has been working hard on improving the measure. An updated measure that corrects for many of the shortcomings of the official one would add another 4.5 million to the poverty rolls. Were it not for political constraints—no president wants to add that many people to the rolls on his watch—we would retire the old measure and adopt the new one.
Crunchpoint: Though we do a lousy job of measuring it, there are a lot of poor people in America, about 40 million in the mid-2000s. Yet all is far from lost in the war on poverty. Read on.
Counting the poor?
Would it not be better to
Simply help them aboard?
OK, I’m glad we could do a better job measuring poverty. I’m sure that makes you social scientists very happy. But how about ending it? Couldn’t we just put an end to poverty if we gave the poor a little money?
No, we couldn’t really end poverty just by giving the poor a little money. Technically we could, at a cost of about $62 billion a year, but even that wouldn’t end poverty in any lasting sense. Poverty in America is not just a lack of resources, although, as I stress in the next question, that tends to be the poor’s most pressing problem. What sustains poverty is the lack of educational and employment opportunities, along with a lack of ongoing supports to give people the lift they need.
How do I know that? Lots of academic research, for one, but more convincing is the fact that for a few years in the 1990s, we provided the poor and near-poor with what was missing, and the results were dramatic, as we’ll see in a moment.
But why are they poor? As you might expect, we’ve been arguing about that forever. When the first poor person stumbled into the marketplace, an argument broke out as to whether it was his fault for being a lazy bum with low morals or society’s fault for not providing him with adequate opportunities. In the economic debate, this reduces to whether you believe unfettered market forces or government solutions can fix the poverty problem.
The argument will never end, because it’s too reductionist. As we learned in the 1990s, it takes both forces working together. During those years, we made a huge dent in our poverty problem, and the main causes were the tightest labor market in 30 years and a new, beefed-up set of antipoverty policies.
Some of these policies were delivered under the rubric of welfare reform, a mid-1990s change in poverty policy that had some harsh, punitive aspects but also invested some serious resources in “work supports,” programs that help poor people move into the workforce and stay there for a while.
We increased worker training and access to higher education; added child care, health care, housing, and transportation subsidies; implemented a major expansion of the Earned Income Tax Credit, a program that adds literally thousands of dollars to the incomes of working poor parents (around $4,500 for a family with at least two kids in 2007); and raised the minimum wage.
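Since the Earned Income Tax Credit does so much of the lifting here, a quick sketch of how a credit like it works may help: the credit phases in with earnings, plateaus at a maximum, then phases out as earnings keep rising. The rates and dollar amounts below are illustrative placeholders I made up for the example, not the actual EITC schedule for 2007 or any other year.

# Illustrative phase-in / plateau / phase-out wage credit, in the spirit
# of the EITC. All parameters are made-up placeholders, not real EITC rules.
PHASE_IN_RATE = 0.40      # credit earned per dollar of earnings, up to the cap
MAX_CREDIT = 4_500        # roughly the order of magnitude cited in the text
PHASE_OUT_START = 16_000  # earnings level where the credit begins to shrink
PHASE_OUT_RATE = 0.21     # credit lost per dollar of earnings past that point

def wage_credit(earnings):
    credit = min(PHASE_IN_RATE * earnings, MAX_CREDIT)  # phase in, then plateau
    if earnings > PHASE_OUT_START:                       # phase out
        credit -= PHASE_OUT_RATE * (earnings - PHASE_OUT_START)
    return max(credit, 0.0)

for earnings in (5_000, 12_000, 20_000, 35_000):
    print(f"earnings ${earnings:>6,}: credit ${wage_credit(earnings):,.0f}")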
At the same time that welfare reform and work supports were pushing poor people into work, the job market pulled them in. In the latter 1990s, the labor market heated up more than it had in 30 years, and as unemployment began to slide down, even low-wage employers had to raise wage offers to get and keep the workers they needed.
It was the perfect resolution to the irresolvable historical poverty argument—market and nonmarket forces working together to solve a problem neither could solve alone—and the results were striking. Figure 1.4 shows the depressingly high poverty rates of African-American children from 1979 to 2006. The rate basically cruised along in the mid-40s throughout the 1980s, despite the ongoing economic recovery in those years, a recovery that clearly bypassed poor black families.
Figure 1.4. Poverty rates, African-American children: We got it right for a minute. (Source: U.S. Census Bureau.)
But from 1992 to 2000, thanks to increased jobs and earnings for African-Americans, black child poverty rates fell an unprecedented 17 percentage points, from 47 to 30 percent. Granted, ending up with about one-third of African-American kids in poverty isn’t exactly a huge success story, but we’ll never get child poverty down to the levels we want if we don’t get the trend headed in the right direction. And, man, it was definitely doing that.
But with the 2001 recession, the trend stalled. This, too, is instructive: There were no policy changes in that year, but you need both policy and markets pointing in the right direction, and when the market dropped out, poverty headed north again. In fact, it became clear in those years that with all our policy emphasis on work supports, we’d lost some of the safety-net functions that poor people need when work disappears.
We’ve never regained the full-employment conditions that prevailed in the latter 1990s, and poverty rose through 2005. (It fell slightly in 2006, from 12.6 percent to 12.3 percent, but that still left the rate one point above its 2000 level, with 4.9 million more people counted as poor than in 2000.)
Crunchpoint: We can’t end poverty by giving money to poor people because it doesn’t scratch the itch that keeps them poor. For years, scholars argued whether that itch should be scratched by government or markets. Thanks to critical lessons learned in the 1990s, we know it’s both: When market forces deliver a full-employment job market, and publicly provided work supports help to close the deal, we can make tremendous progress against poverty.
Should I give money to the guy selling the “Street Sheet” outside my office?
Sure. It helps chip away at his most pressing problem—poverty. But it’s a very temporary fix.
Economists worry about negative incentives: in this case, the idea that by supporting this marginal enterprise you could inculcate dependency. In fact, the “Street Sheet” was invented in part to avoid the pure begging done by destitute people (usually men). Giving money to that person feels a little different, and that’s probably because even the most liberal among us worry about fostering damaging incentives. But the guy selling the “Street Sheet” is something of an entrepreneur. Still, the main question here, from an economic perspective, is this: Does your contribution help or hurt this guy?
Is he more likely to look for a legit job, or at least one with steady hours and more reliable pay, if nobody gives him anything? The research that gets closest to this suggests that there’s a small chance that he might, if he were mentally up to it and if such work were available to him. But this whole dependency rap has been way overplayed. Conservatives argued for years that welfare payments kept people from seeking work, and while the statistical evidence suggested that there was something to that, the effects were economically small. That is, in the absence of welfare benefits, the evidence was that poor people would work just a little bit more than they did already.
Most of this research was on single mothers. In the 1980s, when we experimented with ending welfare payments for single men (“general assistance” was the term of art), the research didn’t show that such payments were what stood between these men and gainful employment.
The real barriers turned out to be their own personal limitations, in terms of skills and in some cases mental illness, and the lack of decent jobs for people with limited skills and an unimpressive work history. For parents, especially single parents, the lack of reliable and safe child-care options just made things worse. In the 1990s, welfare reform turned a lot of this around, but in many ways, the successes of that program—more solid labor market connections, lower poverty rates—have been misunderstood.
By “welfare reform,” I mean a big set of legal changes in the rules governing the receipt of welfare benefits, along with an equally important set of attitudinal changes in the way the program was administered on the ground level. The rule changes made work in the paid labor market a much greater requirement of benefit receipt. The ground-level changes meant that those administering the program were much less quick to simply hand over a check and send the recipient on her way.
Some argue that the lesson of welfare reform is that once welfare benefits were contingent upon finding a job, people got their act together and went to work. Which, if true, might lead you to start stiffing the guy outside your office. Give him an incentive to find real work, and he’ll do so. But that’s not what went on in those years.
For example, you might think we spent less on welfare in those years (the 1990s), and we certainly spent a lot less on welfare benefits. But the fact is, we ended up spending tens of billions more helping the poor move from welfare to work, subsidizing their wages and providing them with so-called work supports, such as subsidies for child and health care, transportation and other costs of work, and worker training. And while these efforts gave the recipients an added push, the pull of the strongest low-wage labor market in 30 years also helped (see the full-employment discussion in Chapter 5).
Years ago, I was a social worker in New York City’s East Harlem, where I met all kinds of people needing all kinds of help. But I’ll tell you the two things that most of these people had in common. First, they lacked economic resources, and that made everything so much harder for them, from getting a phone turned on, to keeping a roof overhead, to getting themselves or their kids some health care (to this day, I remember an entire day spent trying to get someone in chronic pain in to see a dentist). Second, their economic aspirations were the same as everyone else’s.
If you think I’m being sentimental, you’re wrong. Maybe it’s a “selection bias”—these folks came for help—but when I think back on my clients from those long-ago days, my impression is of people pretty much just like me but (a) persons of color and (b) intensely hassled by the challenges of making ends meet and parenting their kids. Many also were worried, if not depressed, by the nagging fear that they were not doing right by their kids, who they feared were not getting the opportunities they deserved.
Which is simply to say that the guy selling the “Street Sheet” is most likely trying to get by the best way he knows. Would he be better off with a stable job with a decent wage and health coverage? Are folks living and/or working on the street more likely to have all kinds of problems, with drug abuse and possibly a criminal record at the top of the list? Of course, in both cases. But does your dollar somehow make his goals less attainable or his problems worse?
To the contrary, his most immediate problem is poverty, and your dollar helps chip away at that, so feel free to give it up.
Crunchpoint: The biggest problem facing the “Street Sheet” seller and others like him is society’s reluctance to invest in their well-being. Your dollar helps in the very short run, and all that negative-incentive stuff has been way overplayed. But what he really needs is the political support for an agenda that helps folks like him get into the legit economy, an agenda I present in Chapter 5.
Why do teachers make so little compared with stock traders? Aren’t the teachers entrusted with greater responsibilities?
Economists have an answer to this that’s as simple as it is unsatisfying: People are paid what they’re worth. That is, they are paid according to the value they add to the economy.
It’s your classic, pristine economic assumption: If, by definition, you’re paid according to your “value added,” you cannot, by definition, be under- or overpaid. If you think you’re earning too little, then you must be placing an inflated value on your self-worth. Your economic self-esteem is too high.
How do I know this is wrong? Oh, come on . . . can’t I just assert it? Does anyone really believe that people are paid their precise worth? How come other people get to make bogus assumptions and I’ve gotta prove everything?!
(Sorry—excuse the rant. We’re back live.)
Contradictions abound, in fact. People doing a job in high-end firms get paid more than those doing the same job in low-end ones, like janitors at Goldman Sachs versus those at the dockside warehouse. Union workers make more than nonunion workers doing the same job. Even when we control for all the relevant differences (experience, occupation, education), women and minorities earn less than white men (and 75 percent of public school teachers are women). Most recently, earnings have stagnated—the real weekly earnings of the typical (median) worker were down slightly between 2000 and 2006. Yet the economy’s productivity rose 17 percent. There is simply, absolutely, unequivocally no way that people were being paid commensurate with their contributions to the economy over those years.
Not that there’s no relationship between value added and earnings, but a million other factors come into play. Let’s examine a few regarding the question posed above.
First, the motivation behind this question is usually something like: “Teachers are educating our future citizens and workforce, while stock traders are making bets that Ukrainian oil futures will fall relative to Bulgarian wheat prices. Shouldn’t society value the former more than the latter?”
Well, part of what determines your pay in occupations like law, finance, and real estate, for example, is the money you bring in through the door. Successful traders bring in a lot; successful teachers don’t bring in any. So, part of the answer is that traders and lawyers and such folks are literally working with the coin of the realm. Valuing teachers’ work, which is really more like valuing an investment, takes a little more thought.
How, in fact, should we evaluate teachers’ “output”? The rage nowadays is to hold teachers and schools accountable for test scores. While this sounds like a reasonable metric, there are countless factors that affect a student’s ability to learn, and some of the most important ones are at work outside the realm of education and inside the realm of family.
And even if we could have faith in such output measures and they led us to believe we should pay teachers more, where would that lead us? Right to the taxpayer.
Herein lies the other rub. Teachers, at least the majority of the K–12 ones who work in the public sector, get paid through taxes, mostly local ones. And that can be a terribly tough wedge between what you get and what you’re worth. Communities are constantly squabbling over this issue, and if you’ve ever been to a town meeting, you know how contentious this gets. Those asking taxpayers to pony up more bucks for teacher pay can’t point to a new library, a ball field, or another such structure. They’ve got to make the case that this is the right investment.
Teachers’ unions get vilified, and you can find examples where lousy teachers were unduly protected by the union. But all of the above tells you why you really need unions here: Without them, teachers would have little bargaining clout against those who would devalue their work (remember principle #1, regarding the role of power in determining economic outcomes). Far from vilifying the unions, we should thank them for trying to keep teachers’ pay high enough to attract decent people to the job.
And, in fact, in terms of compensation, the unions aren’t as successful as their detractors make them out to be. Careful research shows that we underpay teachers: even after accounting for the fact that most of them work fewer hours per year than comparably skilled professionals, and after controlling for education, age, and other relevant characteristics, teachers earn less than comparable workers do.
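For readers who wonder what “controlling for” means in practice, here is a minimal sketch of the kind of comparison such studies make: regress (log) pay on education, age, gender, and so on, plus an indicator for being a teacher, and look at the sign of that indicator’s coefficient. The data below are synthetic, generated only to show the mechanics, with a 10 percent teacher penalty built in; this is not the actual specification or data from the research cited.

# Sketch of a "comparable worker" pay regression on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "educ": rng.integers(12, 21, n),    # years of schooling
    "age": rng.integers(25, 65, n),
    "female": rng.integers(0, 2, n),
    "teacher": rng.integers(0, 2, n),
})
# Synthetic log weekly pay with a built-in 10 percent teacher penalty.
df["log_pay"] = (5.0 + 0.08 * df["educ"] + 0.01 * df["age"]
                 - 0.10 * df["teacher"] + rng.normal(0, 0.3, n))

model = smf.ols("log_pay ~ educ + age + female + teacher", data=df).fit()
# The coefficient on `teacher` is the pay gap after the controls;
# in the real studies it comes out negative (teachers earn less).
print(round(model.params["teacher"], 3))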
Crunchpoint: We underpay teachers because we undervalue their work. This is partly because, despite the great responsibilities they shoulder, it’s not easy to value their output, compared with that of other professions, and partly because they’re paid through taxes, and it’s always a struggle to convince taxpayers that they need to pay more for something, especially when the returns are down the road.
Should we aspire to a totally equal society? Is there anything good about inequality?
I get this one a lot, and it gives me a chance to underscore an important point that can get lost in all this inveighing against the extent of inequality in today’s economy. Inequality is inherent in economies, modern and ancient. While some utopians might aspire to total equality, I don’t, and it’s not happening in our lifetime, anyway.
The classic response to this question, usually by those who would like to downplay the extent to which income and wealth have become concentrated at the top, is that we seek equality of opportunity, not of outcomes. Everyone should have an equal chance to attend a top school or enter a profession for which they’re qualified, for example, but no one should be guaranteed that their grades or pay will be equal to those of others. That’s for the “market” to decide, based on individual merit.
Now, that’s a perfectly fine, albeit pretty abstract, goal. But the problem is, the distribution of opportunity follows that of wealth. When too many economic resources are held by too few, when the benefits of growth elude broad swaths of working families, opportunity itself becomes a rare commodity, out of the reach of the majority. Too much inequality precludes a meritocracy. We see this most clearly in educational opportunity, where college-completion rates for high-score poor kids are about equal to those of low-score rich kids.
In crunch terms, I’ve stressed how excessively unequal economic outcomes hurt living standards and aspirations, but there’s a political price to pay here, too, and it can be steep. If people feel that the system is rigged against them, they’re less invested in that system. The first sign is political disengagement, and we’ve certainly seen that relative to other countries with narrower income distributions. But the next stage is one I also try to warn about in these pages: opposition to positive aspects of free markets that people perceive as hostile to their economic interests, such as globalization and immigration.
It is precisely for this reason that capitalism always comes with pressure valves—policies and social norms devised explicitly to preclude the excesses of wealth and power concentration that threaten the system. The fact that these “excess dampeners” have withered is one of the reasons for our current difficulties, and therefore the policy set I introduce later is designed largely with one intention: to bring them back to life. Resetting America’s economic balance must become our central policy goal in this area.
The other part of the question—is inequality ever good?—is also interesting. Some level of inequality is a fact of life, one I don’t view as particularly good or bad. But this does give me a chance to warn of a strange and misguided argument I’ve seen surface in conservative and libertarian circles: Inequality is very good, because it creates strong incentives.
The idea here is that fast-growing inequality makes the returns to success and penalties of failure much greater. You’ve more to gain and more to lose, so you try harder. Sounds like a dark mixture of Machiavelli and Darwin to me, but I suppose it’s plausible. In real life, however, it appears to be just plain wrong.
Though surely market incentives affect effort, this idea takes that notion to a silly extreme, one for which there’s no evidence. There have never been signs of greater effort in periods of high inequality. If anything, the opposite has occurred, as the have-nots drop out of a game they perceive to be rigged against them, and the haves, content that the game is rigged in their favor, chill by the swimming pool (that’s just snark—the rich work a lot, but their effort is uncorrelated with inequality). Some interesting behavioral research has shown that the inequality incentive structure does have one noticeable impact: It leads insiders to cheat more, because they figure the system is tilted their way anyway, so who’ll notice if they cut a corner? Call it the Halliburton effect.
A recent variant of this “greed is good” motif has been applied to college attendance. Some prominent economists, including Nobel laureate Gary Becker, claim that high inequality sends a market signal to high school grads that they should attend college. They even go so far as to oppose progressive tax changes as a move that would dampen people’s incentives to get more education. Raising taxes on wealthy people, in their model, would amount to “a tax on going to college and a subsidy for dropping out of high school.”
These scholars really believe that some kid who’s considering going to college today will think, “I was seriously looking at college. But, hey, a few decades from now, I could be making serious bucks, and if the 2010 Congress is just going to let the high-end marginal tax rates reset from 35 to 39.6 percent, what’s the point? I hear Starbucks is hiring.”
Or, even more bizarre, some kid thinking about dropping out of high school will allegedly say, “Jeez, a Democrat could win in ’08, and they’re talking about letting tax rates on capital gains and dividend income go back up. Relative to rich people, that lowers my tax bill . . . I’m so outta here!”
I’m not making this up. In mid-2007, these economists put forth this argument as a reason we’d better not let the Bush high-end tax cuts phase out as planned.
Perhaps you’ve got to be a Nobelist in economics to be able to convince yourself that people need a rising after-tax wage premium to persuade them to go to college. You’ve also got to ignore the fact that other countries with much less inequality have college graduation rates at least as high as ours, and in fact, these countries have seen much faster advances in college attainment than we have.
If you ask me, that’s what these big shots should be concerned about. I’ve looked at our lagging higher-ed attainment problem, and the most obvious cause seems to be a diminished ability among those in the bottom half of the income scale to afford the price tag of higher education. In other words, it’s exactly the opposite of what the economists are telling us: Inequality is not promoting college attendance—the crunch is discouraging it.
These economists also could have looked at the fact that the largest spurt in college attendance occurred in the 1970s, having nothing to do with the relative wage advantage of college over high school workers, which fell in those years. It was instead a combination of the baby boomers’ entering their college years and the fact that college students got draft deferrals from the Vietnam War. Now there’s an example of incentives at work.
Crunchpoint: Once you get as close to equal opportunity as you can, some degree of unequal outcomes is neither bad nor good. It’s excessive inequality that’s a problem, particularly when the benefits of growth elude those responsible for generating that growth. Our economy used to have mechanisms to preclude such excesses, but they are broken and need fixing . . . fast.
The one thing all economists seem to agree on is that the best if not the only way out of this squeeze is for more people to get more education. Sounds simple and makes sense. But is it right?
No, it’s not right. A more highly educated society is obviously a better society, but more education will not solve all or even most of our economic problems.
Economists and the policymakers who listen to them often end up confusing people on this point, because they default to it in every case. That is, I’ve seen politicians tell aging industrial workers that they need a college education to compete in the global economy. Now, there’s no question in my mind that most of the time, a more highly educated worker has a real competitive advantage over one with less schooling, but it’s simply unrealistic and far too simplistic to think that everyone is going to get more schooling.
Do you know what share of our workforce is college educated—that is, has at least a four-year degree? It’s about 30 percent. Is it really possible that we have an economy structured to function effectively for only about a third of us?
In fact, it’s not possible. Thankfully, we have an economy that generates a great deal of labor demand for workers with all kinds of educational credentials, including those with very few.
Often, the image of the skill demands of the jobs we create is quite skewed in the minds of many economists and policymakers. They tend to envision white-coated “symbolic analysts” who are pointing and clicking the way forward, uncovering the nanotech secrets that will drive the next economic revolution. And, in fact, a tiny share of the workforce—way less than half a percent—does exactly that. But these folks also need someone to mind their kids while they’re designing the future, someone to wash and fold the lab coat, and someone to prep the food they’ll pick up on the way home.
If you look at the current and future composition of jobs (the Bureau of Labor Statistics has a pretty good track record predicting future job categories), you’ll find that the occupations adding the most jobs include cashiers, food prep workers, nurses, home health aides, security guards, waiters and waitresses, customer service reps, landscapers, and truck drivers. In fact, looking at the top 20 occupations expected to add the most jobs over the next 10 years, you’ve got to get to number 19 before you find an image to match the conventional wisdom. There you will find computer software engineers, right in between truckers and repair/maintenance workers.
By the way, there’s an interesting characteristic that ties many of these job categories together: They can’t be moved to offshore destinations. A waitress in Bangalore can’t serve your burger in Cleveland. You’ve got to be in the hotel to make the bed and prep the food. This is another factor militating against the “Education is everything” mindset. Jobs at all skill levels are in competition in the global economy, and even if everyone became a PhD tomorrow, many of the challenges I write about in this book would still be operative.
These changes imply an interesting new development in our economy: Inequality’s growth is now driven less by educational differences than by competition between select people in the right place at the right time with the right assets, regardless of skills. In mid-2007, this phenomenon made the front page of the New York Times, in an article documenting “the growing concentration of wealth and income among a select group at the pinnacle of success, leaving many others with similar talents and experience well behind” (my italics).
Take a look, for example, at what happened to the incomes of the very rich households in the top 1 percent of the income scale (average income in 2005: $1.1 mil) and those of the “only pretty rich” households just below them (those between the 90th and 99th percentiles—average income: $151K). Clearly, we’re comparing the haves with the have-mores here, a largely college-educated group, so any difference between them is not explained by a conventional education/skills story.
Between 2001 and 2005, the average income of the “pretty rich” group grew a measly 3 percent after inflation, while that of the top 1 percent was up 23 percent. Accordingly, the income gap between these two groups rose as well, with the top 1 percent having 6.5 times the income of the 90-to-99-percent group in 2005, up from 6.2 times in 2001 and way above the 3.7 multiple at the end of the 1970s.
All of that said, it’s absolutely legitimate for policymakers to stress the benefits of greater skills and more education. I think it’s so important that I devote a section to this challenge in Chapter 5, wherein I focus on greater access to quality education for the disadvantaged. But with the majority of the workforce not college educated, and with lots of demand for their lower- and middle-level skills, shouldn’t we also worry about the quality of the jobs that these people will face today, tomorrow, and even in the long term?
Of course we should. But we won’t if we’re single-mindedly focused on educational upgrading. As a trained economist, I understand that bias firsthand. Our training and ideology allow us to think about the quality of inputs: We can legitimately argue for higher-quality labor and capital inputs into the “production function” (it’s the magic box that combines the materials and skills [inputs] used to create goods and services [outputs]). But we’re not supposed to wander over to issues of job quality, the fairness of wage levels, labor standards, or fringes like health and pension benefits, sick leave, overtime, and vacations. That’s the purview of the pro-business politicians and U.S. Chamber of Commerce on one side, and labor unions on the other.
That prohibition must not stand. We are fully capable of holding two ideas in our minds: The best path for any individual is to get more skills, and the quality of existing and future jobs for folks at all skill levels matters a lot, too.
And there’s also, of course, a political point in play here, one that relates to the first principle: power as a key determinant of economic outcomes. Behind the education exhortations of some policymakers, especially those who refuse to take any other actions to loosen the crunch, is a solidly “blame the victim” agenda. Earlier, I quoted a Bush official saying that it’s up to the people “to take advantage of [the] opportunities” that the economy is creating. In other words, it’s not that we policymakers have a deficit of good ideas to promote broadly shared growth. It’s that you, working person, suffer from a deficit of skills.
Let me assure you that there is absolutely no reason why someone with less than a college education can’t enjoy consistent increases in his or her living standards in America today. There is nothing inherent in our economy or political system that would preclude that result, especially given the important role of such workers in our current and future labor markets. Yes, globalization will continue to displace some of these workers, now including those with relatively high levels of skills, and it will place downward pressure on the wage growth of many others. Yet with a full-employment job market, the necessary forces in place to ensure adequate bargaining power (such as unions and decent minimum wage levels), and robust safety nets and social insurance—a policy set I elaborate upon in Chapter 5—all the bakers can get their fair slice of the pie.
Crunchpoint: The more education, the better. That’s true for individuals and it’s true for society. But it’s not even close to a cure-all for the crunch. Economists and policymakers need to stop blaming the 70 percent of the workforce that’s not college educated and start building the policy architecture to ensure that they too share in the growth of an economy to which their contributions are critical, not ancillary. The challenge is not just to make more people ready for skilled jobs. It’s also to make more jobs ready for people regardless of their skill levels.