Less Is More: The Ugly Truth about American Health Care

I have been promising this post for a month. I’m sorry for the delay. I hope it was worth the wait.

A Health Care Crisis, More or Less

Last month, Atul Gawande—surgeon, professor, and journalist—published an essay in The New Yorker on the disparity and inefficiency of American health care spending, highlighting in particular the poor performance of McAllen, Texas. Gawande’s article rocked the policy world. The real news, though, is not the sad facts that Gawande brought to light, but rather the fact that it is considered news at all, for health economists have spent many years trying to wake up the media and the public to this state of affairs.

Gawande took a commonly accepted premise: “Americans like to believe that, with most things, more is better.” Then he shattered our happy little world: “But research suggests that where medicine is concerned it may actually be worse.”

Okay, it’s not that shocking. More has been the watchword of the last thirty years in more than just health care, and most of us recognize we haven’t exactly been well served by it. More, roughly speaking, is responsible for a housing bubble, a financial crisis, an unsustainably warming climate, and now a health care system that is making us sicker and poorer than most other industrialized nations.

But more isn’t all bad. Any economist will tell you that more is responsible for the unprecedented economic growth of the last century—and hence, the way of life we hold so dear. In fact, it is exactly the opposite of more that we fear the most. Open any newspaper these days, and you’re bound to find the word “rationing” at least once. Maybe we can accept that more is not always better, but in exchange, are we willing to accept…less?

The Stunning Findings of John Wennberg and Dartmouth College

Gawande does not base his criticism only on anecdotal evidence, and in that we have our first lesson for the health care debate. All too often such discussions revolve around scare stories and hearsay. “I’ve talked to people who lived in Great Britain,” someone will say, “and they hate the system.” Another person will pipe up, “Yeah, my Canadian friend had to wait six months for an MRI, and the government won’t pay for the surgery he needs.” (The first questions I have are whether your friend really needs that MRI and whether the surgery will do more harm than good, but we’ll get to those later.)

Anecdotes matter, and Gawande went to the source. He interviewed doctors and patients and administrators, but he didn’t take their word as gospel. You may be surprised by some of the facts that I will recite. When the human brain is confronted with evidence that contradicts its worldview, its first instinct is to dismiss it. It’s called “cognitive dissonance,” and it’s a dangerous tendency in this context. “I don’t know what biased source you’re getting that from,” I’ll hear, “but I’ve talked to people who’ve actually experienced the system, and they say the opposite is true.” To which I reply, “My facts come from foundations and universities who have surveyed between one thousand and ten thousand people for any given statistic. You’ve talked to three people. Who do you think has a more complete story?” Anecdotes are useful for putting a human face on the problem, but too many people are falling prey to sampling bias, extrapolating overly broad conclusions from too few examples.

Gawande’s evidence comes from a now-famous team at Dartmouth College, heir to a research project that actually stretches back to the early 1960s, when a doctor named John E. Wennberg campaigned to remove the dangerous drug Orabilex from the market. The long story is a fascinating tale of a frustrating slog by Wennberg, his peers, and later his successors to map out exactly how the United States spends its money on health care and how that money is used. For that version, you should check out Overtreated by Shannon Brownlee. In fact, if you plan to voice any opinion whatsoever in the health care debate, you should read Overtreated. Throughout the rest of this post, I will use facts and figures from Brownlee’s book, and I hope that she will not mind because I am telling you right now to go to Amazon.com and buy the book. There is no exaggeration in my claim that it is the best book on the American health care crisis that you will find in any bookstore.

To anyone who has read Overtreated (which was published over two years ago), Gawande’s article was old news. And to anyone who has followed the Dartmouth team over the last thirty years (Wennberg didn’t actually get to Dartmouth until 1979, but his research began at the University of Vermont twelve years earlier), it was long, long, long overdue. Gawande’s major accomplishment is distilling all their findings into an engaging narrative [all emphasis below is mine, not from the original]:

[The] more money Medicare spent per person in a given state the lower that state’s quality ranking tended to be.

[Patients] in higher-spending regions received sixty per cent more care than elsewhere. They got more frequent tests and procedures, more visits with specialists, and more frequent admission to hospitals. Yet they did no better than other patients, whether this was measured in terms of survival, their ability to function, or satisfaction with the care they received. If anything, they seemed to do worse.

[Patients] in high-cost areas were actually less likely to receive low-cost preventive services, such as flu and pneumonia vaccines, faced longer waits at doctor and emergency-room visits, and were less likely to have a primary-care physician.

In situations in which the right thing to do was well established—for example, whether to recommend a mammogram for a fifty-year-old woman (the answer is yes)—physicians in high- and low-cost cities made the same decisions. But, in cases in which the science was unclear, some physicians pursued the maximum possible amount of testing and procedures; some pursued the minimum. And which kind of doctor they were depended on where they came from.

So let me get this straight. Higher spending means lower quality, less prevention, longer waits, and fewer primary-care physicians; more care means lower quality; and a large proportion of doctors use the wrong treatments for many illnesses. How in the world is this possible?

A Bargain Gone Bad

In the 1940s, the United States made a bargain. There was no ceremony, no papers signed (well, maybe a law or two), and no headline announcement. But the deal Big Business, Big Labor, and the U.S. Government made during that decade haunts us today.

When World War II hit, the government imposed wage controls, so businesses competed for labor by offering health insurance and other benefits. But Harry Truman had a better idea. Stealing a page from Teddy Roosevelt, the President asked, Why not put government in charge of health insurance? Truman faced more than a little opposition, particularly from doctors. When the naysayers proved insurmountable, he settled for a compromise: Government would make employer-provided health benefits tax-free, deductible as a business expense for the company and excluded from the worker’s taxable income, so businesses could offer full benefits to woo workers and labor unions could ensure their members had sufficient coverage.

Today, that amounts to $300 billion in regressive tax breaks—and what do we have to show for it? Rising health care costs that dwarf Americans’ ability to pay them, the threat of government insolvency down the road, an increasing number of uninsured citizens, and businesses that cannot compete in the face of steep benefit payments.

What we have here is a bargain gone bad.

Follow the Money

Of course, the bargain did not extend to everyone. If you didn’t have a job (or significant personal wealth), then you didn’t have health insurance. The poor and the elderly—two groups who need health care more than most—therefore got the shaft. This was unacceptable in Lyndon Johnson’s Great Society, so he summoned his legislative magic to pass the Social Security Amendments of 1965, establishing Medicaid for the poor and Medicare for the elderly.

Initially, Medicare paid hospitals with a cost-plus system. Blue Cross would tell the Social Security Administration how much each hospital spent, and Medicare would reimburse the hospital for its average annual expenses plus 2 percent. As you can imagine, inefficient hospitals could drive their expenses through the roof and count on getting paid handsomely the following year. Costs rose dramatically into the 1980s, even after Paul Volcker put high inflation to rest.

Medicare was headed for bankruptcy, and the responsibility fell to Ronald Reagan to reverse gears. The high priest of conservatism was not about to impose price controls; instead, he signed a bill putting Medicare reimbursements on a “diagnosis-related groups” (DRG) system, which paid hospitals a specified fee for each diagnosis. The fees were set so low that Medicare spending growth plummeted, along with hospital profit margins. It stopped the bleeding, but only for a time; the pressure was redirected, not relieved.
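The incentive flip between those two regimes is easy to see in a toy calculation. All numbers below are invented for illustration, not actual Medicare figures:

```python
# Toy comparison of the two reimbursement schemes described above.
# Under cost-plus, a hospital is paid its average annual expenses plus
# 2 percent, so higher spending means higher revenue. Under a DRG system,
# each diagnosis pays a fixed fee, so higher spending eats the margin.

DRG_FEE = 9_000  # hypothetical flat payment per diagnosis

def cost_plus_margin(expenses: float) -> float:
    """Profit per case when Medicare reimburses expenses plus 2 percent."""
    return expenses * 1.02 - expenses  # always 2% of whatever was spent

def drg_margin(expenses: float) -> float:
    """Profit per case when Medicare pays a fixed fee per diagnosis."""
    return DRG_FEE - expenses

efficient, inefficient = 8_000, 12_000  # cost per case at two hospitals

# Cost-plus rewards the bigger spender...
assert cost_plus_margin(inefficient) > cost_plus_margin(efficient)
# ...while the flat DRG fee punishes it.
assert drg_margin(inefficient) < 0 < drg_margin(efficient)
```

The sketch also shows why the fix was temporary: once the fee is fixed per diagnosis, the profitable move is no longer to spend more per case but to steer the business toward the diagnoses whose fees are set too high, which is exactly what happened next.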

Not all fees were set low. Medicare overpaid for some procedures and underpaid for others. Over time, hospitals learned which was which and adjusted their business accordingly. We don’t like to believe it, but for-profit hospitals are, well, for-profit. Why do most successful hospitals have overcrowded, timeworn emergency rooms and oversized, brand-spanking-new cancer centers? Smart CEOs invest money where they can profit the most. Why do too many patients with heart disease receive angioplasty and too few receive only drugs without risky surgery? You get the idea.

If you’re like me, the first time you hear this news, your instinct is to reject the notion entirely. Doctors don’t decide how to treat me based on what will yield the highest profit, do they? Health care isn’t like hawking jewelry. My wellbeing doesn’t depend on whether some businessman thinks my pain is his gain, does it? If you’re like me, you’re about to be gravely disappointed.

Three Complications in Health Economics

The first fact to understand is that health care isn’t like most markets. Health care is rife with three market failures: moral hazard, adverse selection, and asymmetric information. Each of these failures appears elsewhere in the economy, to be sure, but no other market is so strongly influenced by all three.

If you’ve been paying attention to the financial crisis at all during the past couple years, you should be familiar with moral hazard. If the banks know they can screw up and get bailed out, they have an incentive to do it again. If consumers do not have to pay for their own health care—that is, if everything they need is covered by an insurance company and it doesn’t come out of their own pocket—they have an incentive to buy too much expensive, unnecessary health care.

Moral hazard and adverse selection are two sides of the same coin. Moral hazard is an ex-post problem: After you have the insurance, you have an incentive to overspend. Adverse selection is an ex-ante problem: Before you have the insurance, you have an incentive to cheat the insurer, so to speak. After all, who knows more about your health, you or some stranger sitting in an insurance office 300 miles away? That’s why insurance companies invest so much money in assessing the health—and therefore the proper insurance premium—of each policyholder.

If you are a high-risk consumer (you smoke, you have diabetes, and your family has a history of heart disease), the insurance company will demand a high premium to cover all the costs they expect to pay for the health care you will probably need down the road. If you are a low-risk consumer (you exercise, you are fit as a fiddle, and all your family members lived to the age of 95), they will demand a low premium because they don’t expect you to have high health care costs that they will have to cover. The insurance company is trying to beat adverse selection. If they didn’t do their research and they set the same premium for everybody, the low-risk consumer probably wouldn’t pay it because they know it’s too expensive for someone with their future health care expenses. The company would end up insuring only high-risk consumers, and their health care expenses would eventually outweigh the money they’re taking in from premiums.
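The logic of that last paragraph can be sketched as a toy model, with made-up expected costs. If the insurer cannot tell risks apart and charges one pooled premium, the healthiest members walk away, the pool gets sicker, and the break-even premium ratchets upward each round:

```python
# A toy adverse-selection "death spiral" with invented numbers.
# The insurer charges the pool's average expected cost; anyone whose
# expected cost is below the premium declines coverage; repeat until
# the pool stops shrinking.

def death_spiral(expected_costs):
    """Return the sequence of break-even premiums as the pool unravels."""
    pool = sorted(expected_costs)
    premiums = []
    while pool:
        premium = sum(pool) / len(pool)      # pooled break-even premium
        premiums.append(premium)
        stayers = [c for c in pool if c >= premium]  # who still buys
        if stayers == pool:                  # nobody left; equilibrium
            break
        pool = stayers
    return premiums

# Expected annual costs for five consumers, healthiest to sickest.
history = death_spiral([1_000, 2_000, 4_000, 8_000, 15_000])
print(history)  # premiums ratchet upward as the healthy exit
```

This is why the underwriting spending described above is not wasted money from the insurer’s point of view: risk-rating each policyholder is how a private insurer avoids ending up with only the sickest customers at the highest premium.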

All this has been conventional wisdom in health economics for many years, but recent evidence suggests adverse selection is not as powerful as once thought. Part of the reason low-risk consumers are so healthy is that they avoid risk more carefully than high-risk consumers; because they’re such risk-averse people, low-risk consumers often buy more health insurance than high-risk consumers…just to be on the safe side. Some economists have interpreted this to mean that adverse selection isn’t important, but they couldn’t be further from the truth. Private insurance companies still spend huge sums of money—much higher than government insurance agencies—to ascertain the exact risk profile of each consumer.

As much attention as we have paid to moral hazard and adverse selection, we have given precious little credit to the role of asymmetric information in health care. In 1963, before John Wennberg even began his research, Kenneth Arrow, who would later win a Nobel Prize in economics, published a paper foreshadowing—and explaining—Wennberg’s findings. In a normal market, Arrow explained, producers compete for consumers with low prices and high quality. In health care, however, consumers rely upon doctors (the “producers”) to give them the proper products and services. You can do all the Googling you want to find out how to treat microsporidiosis or what the side effects of clemastine fumarate syrup are, but at the end of the day, you’re handing your life over because of that degree on your doctor’s wall.

Wennberg’s findings can be seen as a derivative of all the above market failures. Moral hazard means consumers are shielded from paying health care costs out of their own pockets, so they do not bargain for lower prices and better quality. Adverse selection forces insurance companies to pour money into administrative costs, which drive up prices. Asymmetric information puts consumers at a disadvantage; no matter how much they might want to bargain for a better deal, it’s not in their best interest to pretend they know as much as their doctor.

Some economists ignore this last point and place the emphasis on moral hazard. If we can just make consumers bear the costs directly instead of having health benefits handed to them by their employer, they argue, the market would operate as markets are supposed to. The evidence does not support their contention. Their solution, most often proposed in the form of health savings accounts (HSAs), is blind to how the health care market operates:

Myth: Consumer-driven health plans encourage high quality and appropriate use of care.

Fact: High deductibles and cost sharing discourage consumers from seeking any care, even when it is high quality or critical.

The idea that consumer-driven health plans lead consumers to choose cost-effective providers is based on faulty assumptions: that consumers can obtain adequate and reliable information on cost and quality, and that they can differentiate necessary from unnecessary care. What’s more, these plans’ high deductibles and cost sharing reduce the use of care whether it is needed or not.

Even plans with small co-pays can discourage people from using preventive services in particular. Consumer-driven health plans are permitted, but not required, to cover preventive services outside of the deductible. In a number of these plans, recommended preventive services must be financed through the accounts, sometimes at a considerable cost to the consumer.

Because about 70 percent of health care costs in the United States come from 10 percent of Americans whose costs are well above the deductible, there’s little reason to think a high deductible will change their behavior and encourage them to seek more preventive or cost-effective care.

Myth: Consumer-driven health plans will reduce costs by giving people more control over their health care choices.

Fact: Doctors still retain primary control over health care, and patients are unlikely to “shop around” for services.

The truth is that doctors have a larger role in determining health care use than patients. Most tests, drugs, and services result from the recommendations of health professionals, not the desires of consumers. In fact, we encourage patients to follow doctors’ advice. This means that high deductibles will likely not curb higher use of many health care “commodities.”

And the stakes of “shopping around” for services can be very high for patients and their families. Health care is ultimately about preserving life and delaying death, which makes people think differently about it than other services. For example, most parents of sick children do not shop for or negotiate prices; people with cancer are unlikely to decline a new and expensive test or treatment.

Myth: Consumer-driven health plans will encourage people to get the care that best suits their needs.

Fact: High deductibles and cost sharing shift benefits to the healthy but shift costs to the sick.

A survey from the Employee Benefit Research Institute found that, while people in such plans were more cost conscious, they were twice as likely to report delaying or avoiding care and about three times as likely to report paying a large fraction of their income on health costs as those in comprehensive insurance.

Although employers are allowed to make contributions to health savings accounts, a 2007 survey shows that employers contribute less to HSA-qualified plans compared with other types of plans, shifting higher out-of-pocket expenses to workers, which could further deter workers from seeking care.

What’s more, enrollees in consumer-driven health plans appear to be significantly healthier than others. As sicker workers stay in traditional plans, the cost of such plans will go up, causing such plans to become unaffordable for workers and employers. This erodes group purchasing power, leading to even higher prices, and possibly more uninsured Americans. It could also undermine Medicare as such plans expand there.

The Real Story of Direct-to-Consumer

One of the “producers” has created a closer relationship with the consumer in recent years. It hasn’t removed moral hazard, but it does offer clues to how the profit incentive affects health care when consumers play a more integral role.

Before 1985, the pharmaceutical companies were just a rung on the health care ladder. Their product traveled to hospitals and then doctors before it reached the consumer. In the last twenty-five years, that distance has been erased. For decades, health care professionals had warned of the dangers in such a move: consumers would pressure doctors for drugs they don’t need; doctors would prescribe overpriced or even harmful drugs to turn a profit; and researchers would skew their findings, often to the detriment of patients. Pharmaceutical executives agreed with them, and even if they didn’t, the Food and Drug Administration (FDA) regulated direct-to-consumer advertisements so heavily that they weren’t worth the effort.

From 1985 forward, the pharmaceutical industry—and the FDA—changed its tune. (Why? That’s a topic for another column, but it was part of a cultural shift toward faster growth and excess everywhere, which in corporate America meant stiffer competitive pressure and more intense focus on short-term profits.) First, the industry lobbied the FDA and fought the agency in court until it relaxed its restrictions on direct-to-consumer ads. Until that victory was won, drug companies got around the regulation by running advertisements about medical conditions—so you were scared enough to ask your doctor about it—without mentioning the names of the actual drugs.

Second, the industry paid researchers to study their drugs and publish their results. Where once the government funded the majority of clinical research, now over 80 percent is backed by Big Pharma. And research shows that “industry-sponsored research tends to produce conclusions that favor the sponsor’s product.” But there is a point to all that investment, and it cuts to the core of Americans’ pride in their medical care. Just as proponents of Pax Americana argue that the rest of the world receives protection and saves money on defense spending because of our global presence, many Americans believe that our elevated health care spending has been responsible for all the medical innovations that have been extending and improving lives across the world for the last century. To a point, this is true, but if you think we have the pharmaceutical companies to thank for that, you are sadly mistaken:

The great breakthroughs in the history of medicine, from the development of the polio vaccine to the identification of cancer-killing agents, did not take place because a for-profit company saw an opportunity and invested heavily in research. They happened because of scientists toiling in academic settings. “The nice thing about people like me in universities is that the great majority are not motivated by profit,” says Cynthia Kenyon, a renowned cancer researcher at the University of California at San Francisco. “If we were, we wouldn’t be here.” And, while the United States may be the world leader in this sort of research, that’s probably not—as critics of universal coverage frequently claim—because of our private insurance system. If anything, it’s because of the federal government.

The single biggest source of medical research funding, not just in the United States but in the entire world, is the National Institutes of Health (NIH): Last year, it spent more than $28 billion on research, accounting for about one-third of the total dollars spent on medical research and development in this country (and half the money spent at universities). The majority of that money pays for the kind of basic research that might someday unlock cures for killer diseases like Alzheimer’s, AIDS, and cancer. No other country has an institution that matches the NIH in scale. And that is probably the primary explanation for why so many of the intellectual breakthroughs in medical science happen here.

Third, the industry hired large teams of salespeople, or “drug reps,” to bribe doctors with free samples and all sorts of perks. Okay, maybe “bribe” isn’t the right term. Most of them don’t think of their job as anything unethical, but Brownlee recites research that should make them think twice:

At least sixteen studies have found that drugs that are most heavily marketed to physicians are the ones most likely to be prescribed. The more time doctors spend with drug reps, and the more free gifts, drug samples, and food they accept, the more likely they are to prescribe the brand-name drugs that the reps are pushing. Physicians who have the most contact with reps prescribe the most “irrationally,” which means they give patients expensive, brand-name drugs when there are cheaper and often better, safer alternatives—or when no drug at all would have been the best choice.

Disconcerting as these trends may be, they make sense when you step back and look at the whole picture. We have become a pill-popping culture; a people obsessed with our health and convinced we are sicker than we were thirty years ago; a society where every flaw, no matter how common, has a diagnosis; and a country that offers multiple cures for erectile dysfunction yet still struggles to contain cancer and AIDS. We—consumers, doctors, and researchers—have been seduced by Big Pharma, and their share prices reflect a resounding success.

The Rise and Fall of Managed Care

To be fair, most of the consumers in that story were shielded from the full cost of their decisions by insurance, but as indicated earlier, health care consumers who bear the cost directly still rely almost entirely on their doctors (or haggle with their doctors with very poor results). No matter what, the doctors—and the hospitals that employ them—have the most control over what treatments their patients receive. And how has that been working for us?

In our abbreviated history of health care costs, you’ll recall we left off with Medicare imposing the DRG system, which initially brought spending growth under control but eventually led to manipulation by for-profit hospitals. Private insurers used a similar system called fee-for-service, where doctors and hospitals got reimbursed for each service provided instead of the entire diagnosis. CAT scan? Fee. MRI? Fee. Unnecessary surgery? Fee. They had no incentive to cut costs because they could just jack up the premiums that employers were paying.

Around the same time that Ronald Reagan took a wrench to Medicare, employers pressured private insurers to put a straitjacket on spending growth. Insurers looked around for a good model to emulate and saw that the only health plans that seemed to have costs under control were health maintenance organizations (HMOs). HMOs had been around since the 1970s, delivering high quality and low costs to their patients; unfortunately, many physicians (led by the American Medical Association) resisted them enough to keep them at bay. Private insurers decided to dust off the few HMOs scattered across the country and breathe new life into the system with their own version: managed care.

The innovation that set HMOs apart was how they managed doctors. First, they grouped a bunch of doctors together in a cooperative system that could address almost any need a patient might have. Second, they paid the doctors with salaries instead of a fee-for-service reimbursement. Third, they performed “utilization reviews” to keep doctors up-to-date on how their performance compared to evidence-based recommendations for various illnesses.

Managed care borrowed the term “HMOs” and slapped it on a bunch of organizations that never quite measured up to the original version; the other incarnations of managed care were preferred provider organizations (PPOs) and physician practice management companies (PPMs). Each was a bit different in its approach, but the general pattern borrowed selectively from traditional HMOs.

Patients revolted almost immediately. Fee-for-service plans had few restrictions compared to managed care, but the latter was so much cheaper that employers jumped at it in droves and left fee-for-service in the dust. First, patients could only see doctors who were covered by their new insurer, which more often than not meant leaving the doctor with whom they had become so comfortable. Second, in a misguided effort to cut back on unnecessary care, managed care insurers deducted the cost of each test and procedure ordered by primary care physicians from an annual “account” that they stood to earn at the end of the year. It doesn’t take an economist to predict that such a system would cut back on unnecessary and necessary care alike. Third, they imposed utilization reviews on doctors who were not working in a cooperative environment.

If you have heard anything about HMOs in popular culture, it has probably been negative. The original HMOs, however, are still alive and well. Kaiser Permanente and Group Health Cooperative of Puget Sound continue to deliver high quality care at low costs. Their half-breed successors, on the other hand, were driven out of town because they only paid attention to the one half of the equation that private insurers had an incentive to fix: costs.

The Danger of Supply-Driven Demand

As the 1990s dragged on, managed care fizzled out under the weight of angry consumers who demanded better bang for their buck. Hospitals returned to the good ole days: courting patients instead of managed care plans, receiving reimbursements on a fee-for-service basis, and investing heavily in new technology and the most profitable “lines of business.” The result, as Brownlee puts it, has been “a medical-technology arms race.”

None of this is to say that hospitals—or any of the characters throughout this story, for that matter—have sinister intent. They are responding to the incentives in front of them and often are oblivious to negative effects on their patients and the economy. Doctors need machines and beds and labs to do their jobs. Hospitals buy the best machines on the market, fill the building with more beds, and build bigger labs for tests, surgeries, and other procedures. With more of every resource available, doctors no longer feel compelled to judge whether extra tests and procedures are absolutely necessary. Because they see no harm in one more test or an extra night spent in the hospital—and because they earn money for every service performed—they utilize them in greater numbers. The increased demand for these resources signals to the hospitals that they need to invest in even more resources, and on and on.

Brownlee analogizes, “When the road is first improved, traffic eases and commutes get shorter. But then, more people move into an area because the commute isn’t too bad, and eventually traffic is congested again until work starts anew to widen the road or build a new one.” It is the vicious cycle of “supply-driven demand,” a phenomenon that is muted or nonexistent in most industries but unfortunately the norm in health care. After all, you may question the car salesman when he says you need a sunroof, but how many of us are willing to question the doctor when she says you need an angioplasty?
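For readers who like their analogies executable, here is a toy positive-feedback loop in the spirit of Brownlee’s widened-road story. Every parameter is invented; the point is only the shape of the cycle, in which new capacity creates the very congestion that justifies the next expansion:

```python
# Toy model of supply-driven demand, with invented parameters.
# Each round: if the hospital looks congested, it expands capacity;
# doctors then fill most of the new beds and scanners; and every
# service performed is billed, so spending ratchets upward.

def supply_driven_demand(capacity, demand, rounds=5, fill_rate=0.9):
    """Return billed services per round under a capacity-fills-itself loop."""
    billed = []
    for _ in range(rounds):
        if demand / capacity > 0.8:                 # looks congested: invest
            capacity *= 1.25                        # add beds and machines
        demand = max(demand, fill_rate * capacity)  # new capacity gets used
        billed.append(demand)                       # fee-for-service billing
    return billed

trajectory = supply_driven_demand(capacity=100, demand=90)
# Utilization never stays low, so billed services rise every round.
assert all(later >= earlier for earlier, later in zip(trajectory, trajectory[1:]))
```

In a normal market the loop would break, because buyers balk at paying for capacity they don’t need; here, the `fill_rate` step stands in for the fact that patients rarely second-guess the doctor, and insurance shields them from the bill.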

Every procedure increases the chance of medical errors; unless a procedure is necessary, it can do more harm than good. Add to this the fact that the most lucrative procedures are, by definition, not the simple, inexpensive route, which often has the same (or better) chance of success. Doctors don’t benefit much if they prevent your heart disease with nutritional advice, or if they treat it with better diet and medication instead of invasive surgery.

Most unfortunately (and frighteningly), they have little incentive to follow “evidence-based medicine”—that is, what research has proven to be the most effective treatment in a given situation—if it contradicts their profit motive or gut instinct. Maybe we would have more consistently effective health care if patient care were coordinated to rein in poor practices, as it was under HMOs; if primary care physicians were given a more powerful role in connecting the pieces; if managed care hadn’t been so oppressive to doctors that the ratio of specialists to generalists shot way out of proportion; and if insurers didn’t reimburse doctors so richly for specialist functions like catheterizations or CT scans and so poorly for generalist functions like preventing illness and guiding patients through the system. “[On] average,” writes Brownlee, ever the bearer of bad news, “patients were given recommended care a little less than 55 percent of the time.”

When you understand all that history, it is no longer surprising that Wennberg and his Dartmouth heirs found such disturbing trends—nor should it come as a shock that our system compares abysmally to the rest of the world.

The Atheists of the Health Care World

Michael F. Cannon and his colleagues at the Cato Institute deride liberals and their dreams of socialized medicine as the “Church of Universal Coverage.” If these churchgoers are so fervent about their beliefs, it is only because the empirical evidence against our current system is so damning—and every nation with universal coverage seems to do health care so much better than us.

Don’t believe me? The numbers don’t lie:

We have the most expensive system in the world per capita, but we lag behind many developed countries on virtually every health statistic you can name. Life expectancy at birth? We rank near the bottom of countries in the Organization for Economic Cooperation and Development, just ahead of Cuba and way behind Japan, France, Italy, Sweden and Canada, countries whose governments (gasp!) pay for the lion’s share of health care. Infant mortality in the United States is 6.8 per 1,000 births, more than twice as high as in Japan, Norway and Sweden and worse than in Poland and Hungary. We’re doing a better job than most on reducing smoking rates, but our obesity epidemic is out of control, our death rate from prostate cancer is only slightly lower than the United Kingdom’s, and in at least one study, American heart attack patients did no better than Swedish patients, even though the Americans got twice as many high-tech treatments.

But wait, there’s more:

[Our] health care system makes more mistakes than those of other countries, and is unique in denying necessary care to people who lack insurance and can’t pay cash. The frequent claim that the United States pays high medical prices to avoid long waiting lists for care also fails to hold up in the face of the evidence: there are long waiting lists for elective surgery in some non-US systems, but not all, and the procedures for which these waiting lists exist account for only 3 percent of US health care spending.

I know, I know. Even if you accept all those claims about higher quality and lower cost, you still have trouble getting over the R-word. Thankfully, Jonathan Cohn wrote a great article in The Boston Globe putting rationing fears to rest:

In a 2008 survey of adults with chronic disease conducted by the Commonwealth Fund – a foundation which financed my own research abroad – 60 percent of Dutch patients and 42 percent of French patients could get same-day appointments. The figure in the US was just 26 percent.

The contrast with after-hours care is even more striking. If you live in either Amsterdam or Paris, and get sick after your family physician has gone home, a phone call will typically get you an immediate medical consultation – or even, if necessary, a house call. And if you need the sort of attention available only at a formal medical facility, you can get that, too – without the long waits typical in US emergency rooms.

In a 2007 Commonwealth Fund survey, just 9 percent of Dutch patients reported waiting more than two hours for care in an ER, compared to 31 percent of Americans.

And my favorite comparison:

On a blog on Fox News earlier this year, the conservative writer John Lott wrote, “Americans should ask Canadians and Brits — people who have long suffered from rationing — how happy they are with central government decisions on eliminating ‘unnecessary’ health care.” There is no particular reason that the United States should copy the British or Canadian forms of universal coverage, rather than one of the different arrangements that have developed in other industrialized nations, some of which may be better. But as it happens, last year the Gallup organization did ask Canadians and Brits, and people in many different countries, if they have confidence in “health care or medical systems” in their country. In Canada, 73 percent answered this question affirmatively. Coincidentally, an identical percentage of Britons gave the same answer. In the United States, despite spending much more, per person, on health care, the figure was only 56 percent.

In a Perfect World

I wish Kenneth W. Kizer were in charge of health care reform. When Kizer was named Under Secretary for Health in the Department of Veterans Affairs in 1994, the Veterans Health Administration was a disaster. By the time he left in 1999, it was on its way to becoming the best health plan in the country.

No, that wasn’t a typo. Every year, the National Committee for Quality Assurance ranks the quality of health plans in the United States, and in recent years the VHA has outperformed them all. VHA patients are significantly happier with their health care than anyone else in America, and the VHA has lower costs per patient than any other plan, including Medicare.

If you haven’t heard these statistics before, your brain is probably doing somersaults. We should take a moment to dispel a myth that haunts many Americans.

“Does anyone seriously suppose,” wrote GMU economist Don Boudreaux not too long ago, “that decisions by government bureaucrats over who will get, and who will be denied, some expensive lifesaving procedure would be better than having such decisions made according to each patient’s willingness and ability to pay?”

Yes. Before I explain why, let me make an important distinction about a term that has been used to hoodwink many Americans: socialized medicine. Canada does not have socialized medicine. Medicare is not socialized medicine. Socialized medicine is a system in which the government employs the doctors and runs the hospitals. Canada, Medicare, and most systems abroad are socialized insurance. I know that sounds like semantics, but the difference matters.

It matters because we have to decide whether we want socialized insurance, socialized medicine, or none of the above. I know that most people who use the term “socialized medicine” would choose “none of the above,” so the question may seem moot. But take just a moment to reconsider: The VHA is socialized medicine, and it is the highest-quality, lowest-cost, highest-satisfaction health plan in America. On all those measures, socialized medicine and socialized insurance perform better around the world than our mostly privatized system. We ration by price; they ration by expert-recommended protocol. And even though many of their systems have shorter waits than ours, and even though our price rationing is correlated with lower quality, earlier deaths, and higher costs, we fear, as Don Boudreaux put it, the rule of “government bureaucrats.”

In the VHA system, patients get the recommended care two-thirds of the time; in our private system, the odds are closer to a coin flip. It is as if your doctor offered you two treatments, one correct and the other less effective and possibly dangerous, and let you guess which one to take. Worried about government bureaucrats coming between you and your doctor? First, that image is partisan propaganda. All socialized medicine really means is that doctors would no longer have a financial incentive to ignore proper procedure (an incentive they have in the current system). Put that way, it no longer sounds intrusive; it sounds less dangerous. And that’s because it is, which brings me to my second answer: if the history and statistics recited above teach us anything, it is that bringing “government bureaucrats” into the fold would make our lives better. Ignore the evidence at your own peril.

But it wasn’t always this way. Before Kizer took the helm, the VHA was wasteful, ineffective, and difficult to access. In 1996, President Clinton opened the system to all veterans, regardless of health condition or ability to pay, and Kizer knew the VHA needed a transformation if it was to care for all the new patients efficiently. His biggest accomplishment was implementing VistA, the VHA’s electronic health record system. If anyone tells you that a technology system linking our country’s hospitals and digitizing our health care information won’t lower costs or save lives, tell them to look at VistA. Every veteran’s record is immediately available to VHA doctors, and the hospitals are paperless. Medical errors have dropped significantly, care has become more coordinated (and therefore higher quality), and efficient care is now delivered at low cost in a way unimaginable only fifteen years earlier.

What our system does wrong, the VHA—as well as the Mayo Clinic, Kaiser, Group Health, and other coordinated systems like the Geisinger Health System in my home state of Pennsylvania—does right. (Full disclosure: My primary care physician works for Geisinger, and I love the system.) In a perfect world, our health care reform would bring our system more in line with their way of doing business. Doctors would be salaried; care would be coordinated; doctors would be encouraged to follow evidence-based medicine (which means we need more evidence through comparative effectiveness research and the like); hospitals would not have a financial incentive to invest in unnecessary equipment; doctors would not have a financial incentive to order unnecessary tests or conduct unnecessary procedures; all health professionals would use electronic medical records compatible with VistA; pharmaceutical companies would not market directly to consumers or bribe doctors; clinical research would be funded by the government and other unbiased sources; the employer tax deduction would be eliminated; and all American citizens would have health insurance, regardless of health condition or ability to pay—which, by taking health status out of premiums, would spare insurers the costly game of outsmarting adverse selection.

Laws and Sausages

Any progress toward that perfect world must come in steps. If you read most accounts of Bill Clinton’s presidency, you hear about a man who struggled to bring fundamental change to a country that only grudgingly accepts incremental change—and thankfully so, for how dangerous it would be if we did not stand ready to check our leaders’ ambitions when bad changes are proposed (though, admittedly, we have failed at that task in recent years). The one reform that stumped him the most was health care. We are indeed a nation of centrists, and as human beings we are resistant to abandoning the comfortable status quo. But the tide has shifted in health care. A majority of Americans now support a public health insurance plan. Against that tide press the powerful special interests. In the middle is a Congress that seems to be juggling a dozen different reform proposals at any given time. David Gergen called President Obama’s speech last week “a holding pattern,” designed to keep the American people from losing interest and to keep the critical parties on Capitol Hill from backing out before the August recess.

No one expects Congress to fix the whole system in one fell swoop, but it is in the Democrats’ political interest to pass a bill before the 2010 elections. The only reasonable way to judge the final legislation, then, is to ask whether it moves us closer to the ideal system described above. Even now, that is a tricky task. It is frustrating to read pundits lambasting “Obamacare” when there are at least half a dozen different proposals in Congress with a chance of becoming law—and President Obama has not been actively involved in shaping any of them. I don’t mean to let the President off the hook, but I do want to caution Americans against critiquing reform efforts—many of which will be dropped before the final vote—while they are still in such a preliminary stage.

Originally, I was planning to give you an overview of all the major proposals, but there are just too many balls in the air to fit them into this post. I would also be wasting a lot of time explaining reforms that will never make it out of committee. Instead, I encourage you to read everything Jonathan Cohn, Ezra Klein, and Arnold Kling write about the matter. (Kling supports more consumer-driven care than I do, but of the various consumer-driven proposals out there, his is definitely the best. He takes an approach similar to Brownlee’s in advocating coordinated care, but he does it through corporations instead of government-run hospitals.)

Steven Pearlstein made a smart point in The Washington Post recently: The status quo is worse than most of the reforms being debated in Congress. For the moment, “the outlines of a good reform plan are there — universal coverage, insurance market reform, cost controls, computerized medical records, emphasis on effectiveness research and quality improvements.” They may be less than we hoped, and the final bill may be one big letdown. But we’re not there yet, and until we get there, the worst thing we can do is blindly reject anything that doesn’t fit our notion of perfect reform. For now, at least we know that most of the things we thought we knew about American health care were wrong. It’s time to look at the issue through new eyes.

Oh, and buy Overtreated.