La Couture Libertaire

Libertarian thoughts on American policy

Open Question: Why Should College Athletes Get Paid?


Everyone loves to point at NCAA executives, college athletics coaches, and athletic directors and call them thieves profiting from the exploitation of young athletes. Even writers at Slate demand that the NCAA loosen its regulations and allow a ‘free market’ in athletic talent. As usual, they haven’t stopped to consider why the NCAA doesn’t pay student-athletes. Two points sum up the situation so well that economic arguments are hardly necessary.

First, the NBA and NFL have created age minimums for participation in their leagues. These age minimums serve several purposes for the professional leagues: 1) they reduce scouting and recruiting costs by creating an incentive for young athletes to spend a year or two at one of a limited number of colleges with competitive athletic teams. The NCAA had nothing to do with that. 2) they save professional leagues the expense of hiring pre-prime athletes who would require years of expensive training and experience before being ready to compete at the professional level; encouraging those players to go to school instead eliminates the need for a minor league. 3) despite what many might claim, age restrictions force young men to experience something akin to responsibility outside their family homes, with the support and guidance of coaches, professors, peers and others. It makes perfect economic sense for these leagues to let young athletes develop in these ways at the universities’ expense.

A second concept to consider is the interest of the universities. Student-athletes are first and foremost students, and to treat them otherwise undermines the fundamental nature of education. Yes, athletes work very hard at their sport, but if being both a student and an athlete is too much for them, they should stop being an athlete. You go to school to learn; you join a professional athletic league to play professional sports. It’s true that some athletes bring in revenue for their schools, but so do some students. Should top-tier students be paid over and above the cost of tuition to compensate them for the prestige and donations they bring to their schools? Students — whether or not they are athletes — are presumably attending college to learn. If they have different motivations, fine, but the school shouldn’t have to pay a student for his attendance, and other students certainly shouldn’t have to fund that salary. If a student feels that his time is more valuable than the education he is receiving, no one is forcing him to stay.


Perhaps my position is unkind; perhaps it is biased by my own positive experience as a student-athlete. But despite the many valid free-market arguments that have been made, I can’t see how demanding that an institution pay more for a service it already compensates is any more ‘free market’ than the existing system. Schools and the NCAA allow participation, even exhibition, on their athletic teams, and in exchange for the time spent in those activities provide athletes with an education, books, computers, food, nutritionists, housing, cable, internet, trainers, basic medical care, even ‘spending money’. Each year 400,000 student-athletes participate in NCAA college athletics, and most are grateful for the opportunity. If the 1% destined for the professional leagues think their time would be better spent elsewhere, why don’t they go? Because the other 396,000 student-athletes have classes to focus on.


Which ‘Healthcare Market’?

When politicians and policy wonks refer to “the healthcare market,” what they are really referring to are six nearly distinct insurance markets with different payment methods and purchasing power, each of which has a different level of government control. To understand much of the general healthcare debate, it is first necessary to understand the markets.

First, there is the privately insured market. Though still subject to many state and federal regulations, it is as close as most Americans will ever come to free-market health care coverage. In the private individual and small-group market, individuals purchase health insurance with their own money. This helps ensure that each individual receives the appropriate level of coverage based on his or her personal preferences, objectives, and risk factors. By most accounts, people are generally satisfied with these plans. This is essentially the system we would have had if the government had never interfered in the first place — and it would have worked well.

Next, we have the ‘uninsured’, or those who simply pay out of pocket. For many, absent any catastrophic event, this is a cheaper approach than any other method of purchasing care, because the individual actually sees the cost of the care involved and plays an active role in driving it down. No $200 aspirin for these people! Without an insurer writing blank checks to providers, uninsured patients are often able to negotiate prices down for themselves. But they don’t exist in a perfect market: the federal government has passed laws such as EMTALA, which requires Emergency Departments to accept all patients regardless of ability to pay. This leads many uninsured individuals to expensive hospital Emergency Rooms even for non-emergent indications, because they are guaranteed care even if they cannot pay. This predictably drives up prices for those who do pay.

The newest form of quasi-market insurance is insurance purchased through the Obamacare Exchanges. Although there is a lot of dispute as to whether all of the Exchanges should function the same way, the Obama administration has made it clear it intends to make subsidies available on all 51 Exchanges, regardless of whether it has the legal authority to do so. These subsidies will lessen the cost (not the price) of health insurance for a specific population of Americans. They will not only enable, but force, many Americans to buy insurance with a specific level of coverage, regardless of whether that coverage is appropriate for the individual. The subsidies (along with other non-market forces, such as the individual mandate-tax) create incentives for individuals to purchase an inappropriate amount of insurance coverage, and force them to buy things they don’t want or need (for example, all men buying in the Exchanges are required to have health insurance that provides them ‘free’ hormonal contraception). This ‘market’ is highly distorted by government interference and is therefore anything but free.

Employer-sponsored insurance, though fairly entrenched in American healthcare payment today, is actually very far from a proper market. As I’ve explained, the employer-sponsored health insurance system arose as a reaction to wage freezes during WWII. After the war, a preference for employer-sponsored insurance was codified in tax law, and to this day many large employers offer this tax-preferred insurance as part of an employment package. The relatively young and healthy workforce creates a prime risk pool for insurers, and employees benefit from tax savings. On the surface it appears to be a win-win — until you look closer. Employees are, once again, given an incentive to purchase more insurance coverage than they may need because of the tax preference: if they can get $1.30 worth of coverage for $1, of course they will get more. This setup, despite making insurance appear cheaper, also strips employees of the full range of choice among health plans, forcing a young, entry-level employee to buy the same amount of coverage as a decades-older president or CEO. Worse, because of restrictions on preexisting conditions and incomparable plans from one employer to another, employees find themselves trapped in what is called job-lock: an employee remains in an unwanted or inappropriate career position for fear of losing a necessary health benefit. A market where risk pools are skewed, prices are distorted, and workers are trapped certainly isn’t a free one.

Medicare, when you think about it in the context of the employer-sponsored insurance mess, makes perfect sense. Throughout their productive and healthy years, Americans benefit from employer-sponsored insurance and the associated preferential tax treatment. Then they retire and hit the mother of all job-lock hurdles: they lose their employer-group coverage, and because of their age, their risk rating makes private insurance entirely unaffordable. Rather than admit that government intervention had in fact caused a major problem, big government seized upon an opportunity to create a new class of Americans dependent on government. It guaranteed absolute addiction to Medicare by writing healthcare providers a blank check for any and all ‘reasonable and necessary’ care, and then condemned all opposition as ‘pulling the plug on grandma’. It funds this blank check with a poorly disguised Ponzi scheme that will very shortly be bankrupt. Because Medicare is (for the most part) a single-payer system, there is no competition among payers, and therefore no free market.

Medicaid is likewise a single-payer system, but while it too places essentially no limits on how the money may be spent by healthcare providers, there are serious limitations on the amount of funding available: federal funds are based on a state-match rate. Limited funds coupled with nearly unlimited permissible uses incentivize misuse and waste by both providers and beneficiaries, while limiting initial access to care for beneficiaries who may truly need it. Because of these misplaced incentives, the gross mismatch between consumers’ and providers’ interests, and the absurd limits and limitlessness of this program, Medicaid is both the least free-market health coverage market in America and probably the most useless.

So next time you hear someone ranting about how the government needs to reform the ‘healthcare market’, consider which market that person is referring to. Also, remember all the ‘reforms’ the government has already made, and ask whether we are really any better off. There is plenty of room in each of the six current markets for reform — but not one of them will be able to survive much more government regulation.

10 Libertarian New Year’s Resolutions for 2014



As the new year approaches, it’s hard not to impose some silly resolutions on myself. I will stop eating junk food. I will go to the gym. I will stop cursing so damn much. But I think I should make some resolutions in my political life as well.

1. I will stop making negative comments about other libertarians who have slightly different philosophies than my own — objectivists, anarchists, even tea-party republicans are on my side and now is not the time to harp on small differences.

2. I will accept that people come to libertarianism from different backgrounds and with different priorities. Neither social nor fiscal freedoms are inherently more important than the other.

3. I will stop talking about politics to people who have no interest in talking about politics — it only makes them not want to talk to me. I especially will not insist on explaining the NAP (non-aggression principle).

4. Before criticizing any aspect of government, I will be prepared with at least one justification for my position that could be explained to a non-libertarian in 20 words or less. (E.g., Who will build the roads? The same people who build the parking lots.)

5. If I vote, I will stop voting for the ‘lesser of two evils’ — I will not vote for evil.

6. I will read and share the works of other libertarians because I am not always right, and I still have a lot to learn.

7. I will read and share the works of non-libertarians because it is the only way to be fully sure of my convictions, to credibly defend them from attacks, and to lead others to libertarianism.

8. I will live my values and not request or accept any new government hand-outs (i.e., Obamacare).

9. I will (almost) always exercise the self control and rationality that I know people to be capable of, and on which libertarian principles are based. This does not mean that I will not need help or advice, or that I will not make mistakes — but I will try to make the most well-informed and rational choice in all situations, and I will accept responsibility for my actions.

10. I will not lose hope that true freedom can be attained.

Big Brother Isn’t Our Biggest Problem

In his alternative Christmas address, whistleblower Edward Snowden urged listeners to recognize that the police state George Orwell warned us against in his novel “1984” is upon us. He pointed to mass surveillance as proof that Big Brother is real. Snowden is right that freedoms have been taken from us in the form of government intrusion into our homes, but it made me wonder — how much freedom have we simply given away? Everyone remembers the TV screens and microphones that allowed Big Brother to keep a watchful eye on the characters in Orwell’s “1984”, but few pay attention to what I always considered the greatest intrusion into the people’s lives: the degradation of language to such an extent that the government was able not only to monitor thoughts, but to control them by removing the words with which people could express ideas such as individualism, autonomy, responsibility, and freedom.

In his essay Politics and the English Language, Orwell expanded on this theme. It occurred to me while re-reading it that for most modern users, written English has very little in common with that of Orwell’s time. Orwell lamented contemporary writers’ abuse of words, and in all four of his major critiques the hallmark of that abuse was excess verbiage. Today, in the world of Twitter, Instagram, and Tumblr, it is next to impossible to hold an audience’s attention for more than 140 characters. Even the President tweets! So how could verbiage still be a hallmark of poor writing? It turns out that while the major symptoms might be limited to 140 characters, the disease remains. English speakers and writers are still trying to hide ignorance and laziness behind the same four techniques to which Orwell objected. One need only spend a little time on the internet to understand why they should be avoided.


Dying Metaphors — Orwell claimed that dying metaphors lose their ability to create a mental image to which the audience can immediately relate. Without that ability, a metaphor is simply a bunch of random words strung together with an intended meaning entirely different from its literal one. These metaphors do little other than confuse the audience. So why do we keep using them? Intellectual laziness, according to Orwell — not spending time creating our own word-images. Also, I think, we might not even know exactly what it is we’re trying to say. In a world where we share every thought the moment we have it, spending even a little time understanding those thoughts so that we might articulate them well may be more than our generation can handle.


Verbal False Limbs — Orwell also spent much time on these operators: extra words and syllables added to a sentence to bulk it up and make it sound more ‘academic’. This primarily results in inappropriate noun/verb combinations such as “render inoperative” instead of “break”. I didn’t find many examples of this in the twitter-verse, probably due to character limits, but I did find the technique used by people who want to sound smarter than the targets of their rants. Words like “however” or “yet” are replaced by “while in actuality”. These verbal false limbs only detract from the argument the author is trying to make!


Pretentious Diction, or the use of foreign-based or unnecessarily affixed words (such as my use of unnecessarily instead of ‘twice’), indicates a level of pretension which Orwell disliked, not because of xenophobia (yet another Latinized word choice on my part), but because the addition of each prefix or suffix creates ambiguity. For example: are you short? Are you tall? Are you not-untall? Counter-intuitively, it is logically possible to answer two of those questions in the affirmative — stating you are not untall tells us very little about your height. Foreign words also seem to give an air of ‘culture’ to a statement, but may just as easily confuse the audience or reveal the speaker’s ignorance.


Meaningless Words are one of my own personal pet peeves (the one above absolutely drives me crazy!). Another example could be demonstrated by the current fad of using “that’s hot!” and “that’s cool!” to mean the exact same thing. These words tell the audience nothing about the subject of the sentence and are therefore a pure waste of words. But that doesn’t stop us from saying things this silly on a daily basis.

Snowden’s warning about the surveillance state is legitimate and should be heeded, but Orwell’s other warnings to us about protecting the integrity of our thoughts are equally important, if not more difficult to understand. Giving in to pop culture tools of mass communication not only opens us up to the eyes of Big Brother, but also encourages habits worse than Orwell’s direst predictions. Snapchat, emoticons, and lol’s have removed not only the need for us to articulately express our thoughts, but even the need for us to fully form a thought.


An Economic Explanation Of Why You Should Always Rescue, Not Buy


This is my rescue the day I brought him home. 

With the exception of people who need a very specific breed of dog with a very specific quality to serve as a working dog, anyone who breeds or buys a dog might as well go down to their local animal shelter and murder a puppy. No offense to those of you who have bought dogs, but that was a terrible decision.

Every time we spend money buying a dog from a breeder (or worse, a pet shop) there are three major unintended consequences that are even more horrible than that puppy is adorable. (My Austrian friends will recognize this concept as the seen and unseen).

1. By buying a puppy from a breeder or pet store (which is usually worse, because pet stores often rely on puppy mills rather than careful breeding practices), you are signaling to breeders and potential breeders that there is a profitable market in selling dogs. Every single dog that is bought creates an incentive for the breeder to have another litter. It is simple supply and demand: the supply will adjust to meet demand. One argument for purebred dogs is that you know exactly what type of dog you are getting — yet, on average, 25% of dogs in shelters are purebreds that have been relinquished by owners who “bought” rather than “adopted” them. A supply of purebred puppies does exist, and at many rescues you may even be able to get a health history on at least the mother of a puppy.


He’s settled in since that first day, and is awfully glad I saved him. And that I let him be so cool.

2. More innocent shelter dogs will be euthanized because of your decision to buy. Even if a specific dog is not yet slated to be “destroyed” (yes, that is actually what they call it), even if you adopt from a no-kill shelter, you are still saving a life because you are opening up space in the shelter for another dog who might otherwise be destroyed.

The problem, really, is that many people don’t recognize that shelter dogs have value, even if they can’t be sold, because they are ‘substitutes’ for something with sale value (the bought dog). By destroying the value of a resource that we already have, we are literally wasting money and resources breeding more dogs to replace those that are unwanted and eventually destroyed just because they were “shelter dogs”.


Now we’re basically inseparable.

3. An even more heartbreaking result of buying dogs is the entirely predictable existence of inbreeding and puppy mills. While many breeders do genuinely care for their dogs and attempt to avoid inbreeding, to an extent it is unavoidable, and in puppy mills there may not even be an attempt to avoid it.

In case you are unfamiliar with how puppy mills work, a mill will obtain a male and a few female purebred dogs and continuously breed them for the duration of their fertility, which, for females, often isn’t very long because of the stress on their bodies caused by perpetual pregnancy. Then those dogs are usually either abandoned or destroyed.

The result of both inbreeding and less-than-reproductively healthy parents is that many of these “perfect” purebred puppies are born deformed and unfit for sale — they will be abandoned or destroyed. Others might live normal lives but be especially susceptible to injuries and illnesses. Dogs that require an unexpected level of care due to illness or injury are more likely to be abandoned and stand little chance of being adopted.

To recap — when you buy one dog, the breeder breeds more dogs. Some of those dogs will end up in a shelter that is even more full, because by buying the dog you failed to remove another dog from the shelter. Because the shelter is full, more dogs (plural) will be destroyed. Because. Of. YOUR. Decision. When a breeder is out of pups, they breed more; when a shelter or rescue sends a pup home, it saves dogs from being destroyed.

On the upside, there are thousands of resources out there to help you find the right shelter or rescue organization. Many specialize by location, size, or even breed. By working with one of these groups you could find the (purebred?) dog of your dreams before he or she is even born! The experience is almost indistinguishable from working with a breeder, except instead of buying your new pet you get to adopt a member of your family, and, along with the puppy, take home the knowledge that you have not only rescued your new friend, but saved the lives of shelter dogs you will never even meet.


Yup.  Inseparable.

Now, I’m a libertarian, and I respect everyone’s right to engage in economic activities as they see fit, but part of how markets keep things ethical is through transparency and peer pressure. So now that you know that a choice to buy has a direct, immediate, harmful impact on the lives of innocent puppies, I hope you will make choices that maximize the happiness of all living things, especially puppies, because they are just so cute!!

Actually, You Can’t Keep Your Doctor Either

In a government-run health system, freedom can’t exist. The numbers simply don’t add up. The incentives for doctors to stay in the market dry up and disappear as more patients show up expecting more care for less money. When the incentives disappear, so will the doctors, and the state will end up behaving like California.

Doctor shortages are already becoming evident in states like California, where support for Obamacare has been greatest. California is one of the most Democratic-leaning states in the country and has consistently been among the first to implement each step of Obamacare. Covered California is touting to the public that 85% of doctors are participating in this massive intrusion of government into healthcare. It has offered no support for that claim, which is interesting, because the California Medical Association, which represents doctors, along with insurance brokers in the state, has calculated that 70% of California doctors are in fact ‘boycotting’ Obamacare.

Covered California is reporting that physicians have agreed to participate despite those physicians’ refusal to enter into any contract to that effect. At this moment Californians are signing up for Obamacare under the impression that their doctors are participating. Covered California has outright lied to consumers in order to induce them to sign up for Obamacare. And so true freedom has disappeared.

Doctors, for their part, can’t really be blamed. Covered California sent letters to CA physicians asking them to sign a contract to participate in Obamacare networks at “a reduced rate”, but they were never told how reduced. They were essentially being asked to sign a blank check. Some doctors who do want to participate in Obamacare exchanges are being excluded from networks in order to keep costs low.

Doctors opting not to participate in Exchange plans is only the beginning. With the expansion of Medicaid in half the states, many people newly enrolled in the program will also find they don’t have access to doctors, especially specialists, who are reimbursed at rates 30% or more below what they would receive under Medicare or private insurance.

Regulatory hurdles also abound. Twelve thousand pages of regulations, on top of the 2,000-page legislation that is Obamacare, increase the administrative cost of compliance and decrease doctors’ autonomy to practice medicine.

With such large pay cuts looming, and administrative burdens growing, doctors are beginning to seek alternatives to participating in government programs – some are retiring early, others are creating boutique practices where government insurance simply isn’t accepted, others are restricting or giving up their practices – including the personalized care that goes with them – in favor of joining large hospital or practice groups.

Lack of access to health insurance, doctors, pharmaceuticals, and the rest is a predictable side-effect of a government takeover of health insurance. If no other good comes of this disaster of a health overhaul, at the very least a new respect for the free market must grow alongside Americans’ distrust of government promises.

Short History of Health Insurance in America: it’s not as boring as it sounds

People don’t like risk. Never have, never will.

‘Insurance’ dates back as far as 1750 BC, to the Code of Hammurabi. ‘Life insurance’ developed around 600 BC in Greece and Rome. Maritime insurance was the first system to use actuarial risk adjustment, varying the cost of insuring shipments based on predicted weather conditions.

In America the first ‘accident insurance’ was offered in 1850 for injuries to men working on railroads. In 1887 African American workers organized a fund that covered healthcare and burial expenses for workers and their families. But in 1900 health insurance wasn’t widespread because healthcare itself wasn’t particularly advanced and only cost the average person about $5 a year ($100 in today’s money). A cost that small wasn’t even worth insuring against.

By the 1920s, medical care began to advance quickly, and hospitals were no longer charitable institutions where people basically just went to die. Society’s attitude, however, didn’t reflect the changing realities of healthcare: Americans continued to spend very little on healthcare, and they saved even less as a safeguard against unexpected medical costs.

Baylor hospital changed all of this by offering local teachers a deal: they would pay $0.50 a month in exchange for the hospital absorbing the cost of any care those teachers might incur there. The teachers lowered their risk of large hospital fees, and the hospital kept its risk low by spreading it among the individuals in a large group. This program, now known as Blue Cross, was the grandfather of the employer-based health insurance system.

WWII wage freezes were what truly gave birth to our national obsession with employer-sponsored insurance. With so many workers fighting in the war and manufacturing business rapidly increasing, companies were fiercely competing for workers. As salaries began to soar, the government enacted a wage freeze that forced employers to compete in new ways – fringe benefits. Employers began to offer increasingly generous health care packages as a means of attracting workers, regardless of the appropriateness of the level of coverage being offered. Then, in 1943, employer-sponsored insurance became institutionalized when the IRS determined that health insurance does not qualify as taxable income, and therefore anyone with employer sponsored insurance would in effect only be paying $1 for $1.30 worth of insurance.
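The $1-for-$1.30 claim is just marginal-rate arithmetic. A minimal sketch, assuming a combined marginal tax rate of about 23% (a hypothetical figure chosen here because it reproduces the $1.30; actual rates vary by bracket and payroll taxes):

```python
# Tax-exclusion arithmetic behind the "$1 for $1.30" claim.
# Assumes a hypothetical ~23% combined marginal tax rate.
marginal_rate = 0.23

# $1.00 of wages taken as cash is taxed, leaving $0.77 to spend.
cash_after_tax = 1.00 * (1 - marginal_rate)

# $1.00 of wages taken as untaxed health benefits buys a full $1.00
# of coverage, so each after-tax dollar effectively buys:
coverage_per_after_tax_dollar = 1.00 / cash_after_tax

print(round(cash_after_tax, 2))                 # 0.77
print(round(coverage_per_after_tax_dollar, 2))  # 1.3
```

The same logic explains the over-insurance incentive: the higher your marginal rate, the bigger the effective discount on coverage relative to cash wages.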

Because under these tax incentives individuals are essentially getting $0.30 in ‘free coverage’, they are even less likely to be cost-conscious than individuals who buy insurance with after-tax dollars. Moral hazard (the loss of cost-consciousness that occurs because consumers are spending the insurer’s money rather than their own) was amplified by the arbitrary tax preference. Because people pay less than the full value of the health services they receive, the laws of supply and demand predict, and history has shown, that they consume more healthcare than they likely need. As consumption increases, demand increases, and care becomes more expensive. But the laws of supply and demand are restrained in this market because the consumer does not pay for the goods and services; regardless of how high the price climbs, consumers continue to consume, because the insurers are paying for it.

In response, insurers raise their prices. For some this may create minimal cost consciousness – they might buy less comprehensive plans or buy plans with higher deductibles, which will dampen the effect of moral hazard. Others are priced out of the health insurance market entirely, and they either pay for healthcare themselves, or receive government insurance (which, it turns out, often has little advantage over being uninsured).

If this trend were allowed to continue, eventually market forces would decrease the demand for overly-expensive healthcare and prices would return to a somewhat sustainable level. In order for that to happen though, Americans would need to demonstrate both self-restraint and financial responsibility – both of which are virtues many in our society have failed to develop due to the ever-present nanny state.

So instead of admitting that government intervention has only made the situation worse, first through the wage freeze and later through preferential tax treatment, the government decided to do the exact opposite and expand its involvement.

Obamacare exacerbates the problems described above by 1) making certain “Essential Health Benefits” completely free to the consumer, and therefore more likely to be over-utilized, and 2) increasing moral hazard for many Americans, even as the price they pay for health insurance goes up, by subsidizing insurance through employer mandates and premium-assistance tax credits that will be distributed in at least some Exchanges (it has been convincingly argued that 34 states will be exempt from this federal spending and the tax penalties that go with it).

Keep in mind that even as prices rise, supply won’t necessarily rise with it. There are other complex market factors at play. So in the end people will have more expensive, less comprehensive insurance with less access to the healthcare that that insurance is supposed to pay for.

Whether you like Obamacare or not, these facts are all, well, facts. Perhaps with a better understanding of the history of health insurance in America, it will be easier for Americans to see why we are headed down the wrong path, and hopefully we will then be in a better position to do something to stop it.

Success. You keep using that word. I don’t think it means what you think it means.


How are we measuring the success of Obamacare? This is the first of what I think will be three posts on this ridiculous topic, which makes me wonder where the Obama administration got its dictionary.

We’re hearing so many conflicting things about the Obamacare roll-out that I thought it would be wise to take a more skeptical, if not scientific, look at the way Obamacare’s “success” is being measured.

The administration has made a big deal of the fact that at least 1 in 114 Americans have attempted to access the site at least once since it launched. That is just under 3 million people. Although there are no solid numbers available, it is believed that 36,000 Americans have managed to enroll in health insurance through Federal Exchanges, and 42,000 through State Exchanges (not including those who enrolled in Medicaid, the topic of my next post, I think). It is estimated that for the Exchanges to escape a “death spiral” (caused by a disproportionate number of sick people entering these insurance pools and driving up costs), around 7 million people will need to enroll in health insurance through the Exchanges by the end of 2014.

Granted, there have been ‘glitches’ that made it difficult to actually enroll in health insurance on the Exchange websites. There was also a huge amount of initial interest, which may have turned into disillusionment or distrust of the Exchanges when they didn’t work; alternatively, that initial interest may have been curiosity more than intent to enroll. So let’s be generous and assume that all American demographics enroll proportionately, at a constant rate double what it was during the first 2 weeks of Open Enrollment. Under those circumstances it would still take around 2 years to reach the goal of 7 million enrollees.
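The arithmetic behind that two-year figure can be sketched in a few lines. The enrollment counts are the estimates quoted above, and the doubled weekly pace is the deliberately generous assumption just described:

```python
# Back-of-envelope check of the ~2-year estimate, using the figures
# quoted above: 36,000 federal + 42,000 state enrollees in two weeks.
federal, state = 36_000, 42_000
weekly_rate = (federal + state) / 2      # enrollees per week so far
generous_rate = 2 * weekly_rate          # generously assume the pace doubles
goal = 7_000_000                         # target needed to avoid a "death spiral"

weeks_needed = goal / generous_rate
years_needed = weeks_needed / 52
```

Even under that doubled pace, `years_needed` comes out to roughly 1.7 years, which is where the “around 2 years” claim comes from.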

The administration has equated this dismal performance with the initial ‘glitches’ experienced by the iPhone 5 when it was first released- you know, when Apple broke world records by selling 5 million phones at up to $649 apiece during its FIRST WEEKEND.

Yeah. Obamacare is just like that.

Please. Someone get these people a dictionary!

Healthcare Market Failures.

Okay, back to healthcare.

There is a lot of talk about whether or not a free market is capable of controlling healthcare costs. People in favor of socialized medicine think there are too many problems for it to work. People who prefer the free market often acknowledge problems, but still believe it is better than socialized healthcare. Here I describe the basic problems with each approach:

People on both sides of the aisle struggle to answer the problems raised by the healthcare marketplace, and though neither side likes to admit it, both free-market capitalism and socialism leave something to be desired in their attempts to answer these questions. Pure capitalism comes up short because the healthcare market is rife with market failures, while socialism can’t work because, at the end of the day, healthcare is still subject to market forces.

Free-Market Failures

Inflexible Demand: When you or a loved one are in pain or imminent danger you will likely do or promise nearly anything to alleviate or escape that situation. If a healthcare provider asks for a large sum of money in exchange for helping you, in those circumstances you would probably agree to pay the sum, and then you would be legally and morally obligated to pay it. Part of the reason prices for health services are rising so quickly is that regardless of the abundance of supply, prices stay high because demand often does not respond to prices.

Information asymmetry: Another market failure in health services is information asymmetry. Because medicine is such a large and complex field of study, it is entirely rational for individuals to remain largely ignorant of important information relating to their health, and instead to rely upon the information and decisions provided by their healthcare providers (to submit to agency, discussed below). This makes it difficult for healthcare consumers to accurately gauge the appropriateness, quality, and value of any health services they receive. This market failure is closely related to the next topic — imperfect agency.

Imperfect Agency: This market failure is a two-faced monster. In this market there is what I call rational and ignoble imperfect agency. The first form, rational imperfect agency, occurs because it is unreasonable, if not impossible, for a healthcare provider to spend enough time on any single patient to become fully informed of all of that patient’s history, risk factors, and preferences about health. From this position, a provider must gather as much information as is necessary to make the best decision he or she can make on that patient’s behalf. Ignoble imperfect agency, on the other hand, arises from the conflict of interests inherent in our payment system- the doctor gets paid for providing care, even if it is not necessary or what the patient would want. Providers take advantage of the information asymmetry mentioned earlier to increase the amount of services provided. This is not necessarily done out of malice; it could be an attempt to protect against legal liability, or to recover losses from providing free or below-cost care to indigent patients. Because of information asymmetry, however, patients may never even know this is happening. And unlike in other markets (buying a car or computer, hiring an assistant), getting a second opinion from another provider can be not only painful and time-consuming but also very expensive, because the purchaser of healthcare is often an insurer, who has an interest in limiting the amount of care the patient receives. This brings me to…

Moral Hazard: During WWII, while many men were away fighting, American businesses were in fierce competition over the remaining workers. As the wage competition escalated, the US government instituted a wage freeze. Since employers could no longer offer higher pay, they began offering benefits packages – one of those benefits was health insurance. This employer-sponsored health insurance became very popular and was given a tax exemption, so that $1 of salary is now worth up to $1.30 if used to purchase health insurance. This tax exemption has created an incentive for people to buy more generous health packages than they really need, and once the money has been spent on health insurance, the remaining incentive is for the insured to consume as much healthcare as possible, without too much concern about overspending. This, predictably, has led to increased health spending in America, without providing any additional benefits.
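The “$1 equals up to $1.30” point is just pre-tax arithmetic. A rough sketch, assuming a combined marginal tax rate of 23% (a hypothetical rate chosen to reproduce the 1.30 figure; actual rates vary by taxpayer):

```python
# $1 of taxable salary leaves (1 - t) after tax, while a dollar spent
# on employer-sponsored insurance is untaxed. So an insurance dollar is
# worth 1 / (1 - t) in salary terms.
marginal_rate = 0.23          # assumed combined marginal tax rate
salary_equivalent = 1 / (1 - marginal_rate)
```

At that assumed rate, `salary_equivalent` is about 1.30, which is why an untaxed benefit dollar goes roughly 30% further than a salary dollar.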

These all sound, to many, like great reasons to switch to a socialized health system, but there are a number of reasons why that doesn’t work either (despite what your Canadian friends tell you).

Socialist Failures

Inflexible Supply: Despite the fact that a single payer may have agreed to cover the costs of all patients, that doesn’t do anything to increase the supply of providers. All the money in the world won’t do a sick person any good if there aren’t any available doctors. And this problem is not the result of insufficient demand for providers’ services – as discussed earlier, demand is currently high and inflexible. The shortage of providers will instead be the result of moral hazard and diminishing marginal returns.

Moral Hazard: If you think moral hazard is bad now… with socialized medicine there would be NO incentive — no co-pay or deductible — to keep people with colds out of the emergency room.

Rapidly Diminishing Marginal Returns: A single-payer structure would give that payer great bargaining power, which it would use to significantly reduce the price paid per patient. This lowered price would mean that the marginal return on each additional patient is significantly less than under our current system – essentially a large pay cut for providers. With these rapidly diminishing marginal returns, it may not make sense for a provider to enter or stay in business, or to keep long hours. This will only magnify the resulting provider shortage.
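A toy sketch of that marginal-returns logic, with entirely hypothetical numbers: a practice with fixed overhead stays open only while the negotiated price still covers its marginal cost plus a share of that overhead.

```python
# Hypothetical illustration only - the prices, volumes, and costs here
# are invented to show the shape of the incentive, not real figures.
def annual_profit(price_per_patient, patients, marginal_cost, overhead):
    """Profit = per-patient margin times volume, minus fixed overhead."""
    return (price_per_patient - marginal_cost) * patients - overhead

# Same practice, same costs - only the negotiated price changes.
current_system = annual_profit(120, 4_000, 60, 180_000)
single_payer   = annual_profit(75, 4_000, 60, 180_000)
```

Under these assumed numbers the practice is profitable today but loses money after the price cut, which is the point: a price squeeze that looks like mere “savings” to the payer can push providers out of the market entirely.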

There are dozens of other issues plaguing both socialist and free-market proposals for management of the health market, but I believe the free market remains the better option for one reason: by allowing providers to make profits in healthcare, we enable two essential functions socialism would crowd out: 1) more people have access to care, because there are incentives for providers to stay in the market (and in many cases those profits are used to provide care to those who can’t afford to pay anyway), and 2) greater profit spurs competition. Despite inflexible demand, there is still competition in innovation that will provide an increasingly higher level of care to an increasing number of people, at lower and lower costs. And at the end of the day, not just health insurance, but good, timely, appropriate health care is what matters.



Another (non-ObamaCare related) way the government is hurting healthcare.

In a perfect market, buyers and sellers are constantly working together to find an optimal outcome whereby both parties are at least marginally better off than they were before engaging in the trade. But the healthcare market is not a perfect market. Failures such as Moral Hazard and Imperfect Agency create bad incentives for providers to engage in fraud and abuse of the system. In an attempt to address these concerns, Congress has passed a series of laws, collectively known as the Stark laws, which limit providers’ ability to take advantage of these market failures. But like any market transaction, these laws come at a price, and in this case, it may be that we have traded competition for a perceived reduction in waste.

The implementation of Medicare as an entitlement program in the United States, followed by Medicaid, has created moral hazard for its beneficiaries. Moral hazard exists when the interference of insurance or a third-party payer eliminates any incentive to be frugal in how money is spent. Medicare is the largest purchaser of health services in the country, and is responsible for approximately one-third of our nation’s spending on health care. Medicaid, a joint state-federal program, covers another several million Americans. This creates a myriad of issues, but in this context, the consequence of greatest importance is that moral hazard eliminates the consumers’ interest in how much money is being spent and whether it is appropriate. This, predictably, contributes greatly to the rapidly escalating cost of health care in America.

Health services is a unique industry because it is one of the only industries where the consumer and the purchaser are not the same entity. Because the provider rationally wants to produce as much as possible, and because the patients will consume those services, regardless of price, the only actor in this market who does not want there to be high spending, the payer, has no say in the decision because Medicare and Medicaid are entitlements that must be provided for “all reasonable and necessary” treatment. This situation creates the perfect storm for rapid increases in healthcare spending.

Despite the absurd incentives created by the separation of the functions of payer and consumer, as well as the separation of producer and consumer, there are ethical and societal obligations that encourage physicians to prioritize the interests of their patients above their own. However, it is a complex world, and a lot of gray area exists where the interests of the patient do not directly conflict with those of the provider, but only with those of the payer. Ethical obligations, such as the Hippocratic Oath, do not, and perhaps should not, prevent providers from trying to make a profit. It is therefore reasonable to believe that there may be circumstances, such as self-referrals, where providers advance both their own self-interest and that of the patient simultaneously.


In 1989 the first Stark law was passed in an Omnibus bill. Its champion, Rep. Stark, introduced the rather meek bill, which limited the ability of physicians to refer patients to other medical facilities in which the physician has a financial interest, thereby attempting to limit fraud and abuse in Medicare. The laws have since evolved and become increasingly complex, even to the point where the American Bar Association wrote an article lamenting the difficulty of wading through the vague yet extensive list of exceptions to the Stark laws. In 1998 the Health Care Financing Administration (now CMS) issued revised rules for Stark, which addressed the now-huge number of modifications and exceptions to Stark, and the breadth and scope of the reporting requirements.

As a practical matter, what the Stark laws have become so little resembles what they were intended to do that even Rep. Stark has lamented the burden they impose on the healthcare system. Today’s Stark laws are an entirely different creature, one under which nearly every relationship between health service providers requires an exemption. Without one, hospitals and physicians may be liable for financial penalties of $15,000 a patient, and exclusion from Medicare reimbursement. These harsh penalties create a strong incentive to be overcautious in establishing any kind of relationship (financial or otherwise!) with any other health service provider or entity. While caution is generally a good thing, especially when it comes to health, in the context of the healthcare marketplace too strong an incentive for caution can have negative effects that are just as harmful as the behaviors the laws are trying to prevent.

Stark laws were created with the primary purpose of eliminating fraud and abuse in the healthcare system in order to help rein in the skyrocketing cost of American healthcare. However, the consequences of these laws can be worse than the original abuses.

Most physicians are genuinely concerned about the health and safety of their patients, and want to be able to provide the highest quality of care possible, but Stark laws create a barrier to entry that may prevent health services providers from being able to provide all the reasonable related care that a patient might need during a single visit. Referring patients to an off-site location creates a barrier between the patients and a necessary health service. When physicians are unable to provide certain types of diagnosis and treatment in their own office, patients are forced to make additional time, travel, and financial commitments in order to obtain those tests. Social science and economic theory tell us, however, that as these opportunity costs increase, the likelihood of a patient complying with a doctor’s orders to get those diagnoses or treatments decreases. As a result of Stark, patients may be less likely to receive the care they need, despite the marginally reduced cost of care that results from Stark’s mitigation of fraud and abuse in the healthcare system.

Another consequence of Stark laws is that providers are stripped of incentives to innovate and create new payment systems that may help to decrease costs of care without necessitating government intervention. Because Stark laws are so difficult to navigate, there are hefty administrative fees associated with compliance efforts. These costs, in combination with the ever-looming threat of liability and Medicare exclusion as penalties for even inadvertent Stark violations, may discourage physicians from attempting innovative payment structures or relationships in an attempt to control costs.

Physicians with independent practices will feel this administrative burden more acutely, as the majority of the exceptions to Stark laws apply exclusively to physician groups and hospitals. Because of the exceptions available to group practices, physicians running their own practice are at a disadvantage in attracting patients: they are unable to provide the same range of services, much less the convenience and continuity, that groups and hospitals can provide because of those exemptions. This presents individual physicians with the dilemma of having to choose between the autonomy of an independent practice and the competitive advantage of joining a large practice group or hospital.

As far as cost savings are concerned with Stark, the exceptions have nearly swallowed the rule. Stark creates protections from certain forms of abuse that are so complicated that the cost of compliance is extremely burdensome. What is worse is that despite cost savings being jeopardized by the additional administrative fees that are necessary to comply, physicians have no incentive to work to create more efficient or patient-friendly relationships with other providers for fear of violating Stark and losing an entire practice.

One thing Stark laws have succeeded in doing is limiting the competitive nature of the health services market. This is just another example of Congress legislatively picking winners and losers and taking the decision out of the hands of the consumers.
