It started long ago with the idea that employers should do their part to keep their employees healthy and productive. That idea was far more pressing in the 1800s, when the Industrial Revolution’s working conditions and the era’s public health were mediocre at best. Employer-sponsored healthcare in the US took firm root during World War II, when the federal government placed wage restrictions on employers but did not restrict fringe benefits. To compete for the best talent in an environment with fewer workers, employers beefed up their health benefits.
It was the Revenue Act of 1954 that solidified the employer-based healthcare system we have today, by explicitly categorizing health benefits as non-taxable income. This created a ripple effect that still shapes the way we purchase and receive healthcare. The first effect is that workers demand healthcare through their employers, because any other route is tax-inefficient. All else being equal, $1 of health benefits and $1 of cash wages cost the employer the same; however, the employee receives $1 of health benefit through the employer but only about $0.65 of coverage if it is purchased on the individual market with taxed cash wages. This tax arbitrage led to more compensation being delivered as non-cash health benefits, a fact readily apparent in union contracts: unions have some of the best benefits available, and that advantage is rooted in the tax treatment of those benefits. It’s no wonder that GM now spends more on healthcare than steel, or that Starbucks spends more on healthcare than coffee beans!
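The tax arbitrage above can be sketched as a toy calculation. The 35% marginal tax rate is an assumption chosen for illustration (it is roughly what the $0.65 figure implies), not a number from the tax code:

```python
# Toy sketch of the tax arbitrage: $1 of untaxed health benefits vs.
# $1 of taxed cash wages spent on the same coverage.
# ASSUMPTION: a 35% marginal tax rate, picked to match the $0.65 example.

def coverage_purchased(employer_cost: float, via_employer: bool,
                       tax_rate: float = 0.35) -> float:
    """Dollars of health coverage the employee ends up with."""
    if via_employer:
        # Health benefits are excluded from taxable income.
        return employer_cost
    # Cash wages are taxed first, then spent on the individual market.
    return employer_cost * (1 - tax_rate)

print(coverage_purchased(1.00, via_employer=True))   # full dollar of coverage
print(coverage_purchased(1.00, via_employer=False))  # only ~$0.65 of coverage
```

The same employer dollar buys about 54% more coverage when routed through the employer, which is the whole incentive in one line.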
An employer typically offers benefits in one of two ways: on a fully-insured or a self-insured basis. Both types function the same way for employees. Fully-insured employers tend to be small to mid-sized companies that purchase health insurance from an insurance company – usually a UnitedHealth, Aetna, Cigna, or Blue Cross / Blue Shield plan – which takes on the insurance risk and pools it with other employers. Self-insured employers tend to be large, often nationwide, companies that purchase only the administrative services portion but take on the financial liability of paying the claims themselves. The idea is that with a sufficiently large employee population, the company can act like a mini health insurer and save on insurance costs, particularly if it finds that it employs a younger or healthier-than-average population.

Because of this insurance mechanism of payment, the second (and likely unintended) effect is that employees incur primarily indirect costs, which skews the purchase decision. The employer pays most or all of a premium on behalf of the employee to the health insurer. That premium is determined by the aggregate cost of covering all the employees – or, in the case of smaller companies, by the aggregate cost of covering everyone in the insurance pool. Because of this pooling, an individual’s use of healthcare raises future premiums by only a relatively small amount, i.e. the marginal cost to that individual is small. With this imperfect information, consumers do not know or understand the true cost of the healthcare they are consuming relative to the benefit they are receiving. Said another way, a doctor’s visit may feel like it costs $10 out-of-pocket for the co-pay, but the real cost of the visit may be $80. The hidden $70 makes its way back to the insurance pool, where rates get raised later. Since the marginal cost is hidden away, demand and overconsumption run rampant through the system.
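The hidden-marginal-cost arithmetic can be made concrete with a small sketch. The $10 co-pay and $80 visit are the figures from the paragraph above; the 1,000-member pool is an assumed round number for illustration:

```python
# Sketch of how pooling hides the marginal cost of a doctor's visit.
# ASSUMPTION: a 1,000-member insurance pool (illustrative, not from the post).

visit_cost = 80.00   # what the visit actually costs
copay = 10.00        # what the employee sees at the point of service
pool_size = 1000     # members sharing the insurance pool (assumed)

hidden_cost = visit_cost - copay                    # $70 absorbed by the pool
premium_bump_per_member = hidden_cost / pool_size   # spread across everyone

print(f"Feels like: ${copay:.2f}, actually costs: ${visit_cost:.2f}")
print(f"Hidden cost pushed to the pool: ${hidden_cost:.2f}")
print(f"Future premium increase per member: ${premium_bump_per_member:.2f}")
```

To the individual, the visit looks like $10 today plus a few cents of future premium, so the rational choice is to consume more – which is exactly the overconsumption the pooling mechanism produces.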
So while employers wanted to do something good for their employees, and the government wanted to encourage a way for people to get healthcare coverage, the tax-advantaged health insurance system has certainly contributed to healthcare costs growing unchecked.
Next Post: How can we put on the brakes? Why is “Health Maintenance Organization” such a bad term?