Saturday, September 15, 2018

Do I really want to be addressed with the title of doctor?



Can PhDs legitimately claim to be doctors?

I’ve frequently heard people claim that individuals who hold PhDs are not “real” doctors. These people assert that only physicians can rightfully claim this title, and that it’s inappropriate for PhD-holders to use this term.
For some reason, many also think that the MD is much more difficult to attain than a PhD in, for example, computer science.

So, should Ph.D.s be referred to as ‘Doctor’?

P.S.: I am currently a PhD student and don't know why the question is being downvoted!

PhD

edited Mar 27 '17 at 17:47
asked Mar 27 '17 at 14:59

This is possibly country dependent, but for Germany this is utterly wrong: "many also think that the MD is much more difficult to attain than a PhD" - Medical doctors get the equivalent of a "paper doctorate" thrown after them so they can be called "doctor" as part of their degree, while "real doctors" have to start a doctorate and carry out rigorous research to obtain the degree/academic title. Now other countries may handle this very differently and there this statement may or may not be true. – DetlevCM Aug 23 at 6:45

In the modern USA, the title of doctor is valid for both medical doctors and holders of PhDs, though particular customs may vary by institution. The general rule of thumb for etiquette is to refer to someone however they wish to be referred to. If a PhD holder insists on being addressed as doctor, it would be very impolite not to do so. Likewise, if an MD insists that you not use their title, it would be similarly impolite to ignore that request.

In situations where it is important to avoid confusion, it is common to spell things out explicitly. Rather than using the honorific, use the explicit degree; for example, it is very common for email signatures to look like:

John Doe, Ph.D. in Computer Science

instead of

Dr. John Doe

Similarly, an MD would tend to say:

Jane Doe, MD, Cardiologist

or even

Jane Doe, MD, Ph.D., Cardiology

I suspect that your question has another component, which is essentially whether or not it is "fair" for a Ph.D. holder to refer to themselves as doctor. This requires an assumption that the MD is more challenging to attain than a Ph.D., and that calling oneself a doctor is somehow illegitimately taking the status of a medical doctor. Let me just say that the people who have earned these degrees are generally less concerned about this than those who have not, and that the title someone puts after their name doesn't tell you very much about their individual ability, dedication, or experience.


Krebto



The answer is "Yes". – Michael Mar 27 '17 at 15:07



Yes they can legitimately claim that, just not that they are medical doctors (or doctors in any other field they are no doctor in). – skymningen Mar 27 '17 at 15:07



The people I know who say this (of themselves) are usually being somewhat sarcastic and say this as a form of irony. – Dave L Renfro Mar 27 '17 at 16:30 



Probable duplicate of, or at least answered by this: academia.stackexchange.com/questions/30790/… – Bob Brown Mar 27 '17 at 17:58

You can't immediately tell from the title, but then titles are not typically used by an individual to broadcast their occupation - we don't have variants of "Mr" for plumbers, bank managers, or rock stars - despite their very different occupations. Rather, the title is to be used by others when addressing that individual, in order to signify a degree of respect, typically for a particular level of training, qualification and responsibility, or else for a particularly respected position in society. Even the term "mister" is a meaningful sign of respect that historically would not have been as widely applied as it is today - the ratchet of etiquette has gradually eliminated everything below it.
The actual title "Doctor" means "teacher" (from Latin "doceo", "I teach"). The title is, if anything, more relevant to PhDs than MDs, so you probably have your suggested solution backwards. That said, the solution is really neither necessary nor appropriate. Much like "Master" (from Latin "magister", in this case "teacher"), "Doctor" signifies that an individual has not only gained enough competency to practice in a particular field, but has developed enough expertise to instruct others. An individual who is sufficiently qualified to practice but not teach would historically have been known as a "journeyman", roughly equivalent to "professional".
In short "doctor" refers not to a field of expertise, but rather to a level of expertise.
Incidentally, most UK surgeons drop their title of "Dr" and revert to "Mr" after joining the Royal College of Surgeons. I've heard through a friend of at least one surgeon who reacted quite angrily at being addressed as a mere "Dr", which in such circles, due to a collision between traditional titles and modern medical training, could be unkindly translated as "trainee".

PhD: so what does it really stand for?
If PhD students are the working class of academic research – and paid accordingly – what needs to change?
Anonymous

Fri 30 Aug 2013 05.00 EDT

A PhD is like a heavily spicy meal – it doesn't matter how much you enjoy it, once you're finished, half of the pain is still ahead. Photograph: Octopus Publishing

Recently, during some particularly thorough literature research, I stumbled on a list of alternative interpretations of the acronym PhD. Most were funny: protein has degraded, parents have doubts. But one froze my face in a bittersweet grimace: paid half of what I deserve.

When I was still a rookie PhD student, I read with outrage an Economist article entitled "The disposable academic", which argued that doing a PhD is mostly needless. Lately, I've come to think of the PhD as more of a heavily spicy meal. It doesn't matter how much you enjoy the process; once you're done, you still have half of the pain ahead.

The years of academic slog to work your way up to a full tenure slot (professorship? ha – dream on!) are not much different from the work of a PhD in terms of relentless benchwork (pipetting hand disease) and unceasing literature research (pound head on desk), served on a fixed menu with professional uncertainty (please hire: desperate). All of which result in, if not professorship, then potential heavy drinking.


PhD students and postdocs are the working class of academic research and paid accordingly. Although postgraduates are crucial to the generation, discussion and dissemination of knowledge, 50% pay (i.e. half of what they deserve) is standard for PhDs in the natural sciences and not even guaranteed in the arts and humanities. It's depressing to think that the overall salary of a PhD candidate is less than the cost of much lab equipment. Lab devices are meant to last years – but, hell, what about the work of PhD students in a system where knowledge is incremental?

There could be several reasons for this discrepancy. Equipment and consumables are costly and have a substantial impact on future budget setting. The number of PhDs, meanwhile, is inflated and international competition is fierce. PhD candidates are earning a degree, which shouldn't come for free, and demands motivation and not a little self-denial – including financially.

PhD candidates are in their infancy in science, being trained to do something different from their education to date – lessons in theory combined with practical labwork – as they move into more independent, innovative research. And contributing to the advancement of knowledge requires a certain naive idealism, right? But does this mean it's okay to exploit highly educated individuals (probably heavily in debt)? No.

The possible solutions are simple. The most obvious is: raise the salary of PhD students. A remedy for the resulting scarcity of resources would be stricter selection so that only the best candidates started a PhD. Realistically though, this is never going to happen. It's not because policymakers are greedy but because it would mean a reduction of PhDs and thus a slowdown of science.

A second option wouldn't hinder research, and might even enhance it: cut the salary of professors by half. If there are solid reasons for PhDs being paid half of what they deserve, then the same holds for professors. They too are doing something different from their previous jobs. After tenure, natural scientists move out of the lab and into an office from where they supervise the research of their team members. The knowledge acquired before (both theoretical and practical) still counts, but the job looks quite different.

Political and managerial skills are equally essential, and nurtured for the sake of tenure, not science. Top-tier staff write proposals, manage funds and coordinate subaltern research units and are sometimes scarcely involved with the generation, presentation and discussion of results which is the core purpose of science. Some department chairs merely take note of advancements generated from the institutes they preside over, but co-author papers nonetheless.

Wages of these academic administrators, then, don't deserve to sit even at 50%. And however grim this may sound to today's professors and those postdocs close to a permanent role, the benefits might appeal to future professors much more. Reduction in salaries for tenured staff will create new professorial appointments and reduce the imbalance between the number of temporary researchers and professors, while smaller research units will favour better supervision of PhD candidates and reduce fixed costs.

Today's professors probably already earn too little, after so many years of being underpaid. As one reader wrote in response to that Economist article: "The PhD student is someone who forgoes current income in order to forgo future income." But if some of the surplus resulting from a slash in professorial salaries flowed down to PhDs and postdocs, then entry level professors would be put in a better financial position.

In this light, cuts to science funding (like those we have seen recently in the US) could be an opportunity. Will they slow down scientific advancement? Most probably, yes. But here is a chance for the elite to rethink the way science is done and stop placing merit only on the levels of grant money they gain, the papers they publish, and the prestige they acquire, and instead take a closer look at the predicament of those who prop this community up.

Advocates of competition see it as a positive outcome of the current shortage of funding and resources. But to defend job insecurity as the main incentive to scientific advancement is offensive. Science would benefit more from a harmonious coexistence of its members than by favouring ruthless competition.

Jorge Cham, creator of the wittily depressing PhD Comics series, revealed that a major motivation for his sketches was to give solace to fellow PhDs struggling as he did through their postgraduate years. He interprets the acronym as piled higher and deeper. You might think of the paper bulk on your desk, but I believe he had something else in mind.

PhD actually stands for philosophiae doctor, or doctor of philosophy. As we say in my native Italian: prendila con filosofia (take it easy, take it as it comes). And waiting for a change in the current system, or for a global PhD manifesto to emerge, one cannot take it any other way.

This blog was written by a current PhD student in Italy

Ultra-low-dose duloxetine for diarrhea-predominant irritable bowel syndrome

I have:

"Selective serotonin and norepinephrine inhibitors (SSNRI/SNRI) such as venlafaxine and duloxetine may also have a role in the treatment of IBS pain, although these newer agents also require careful study. Of the SSNRIs, duloxetine (Cymbalta) has been uniquely studied and marketed for both psychiatric disease and neuropathic pain. Duloxetine received FDA approval in 2004 for the treatment of major depressive disorder and diabetic neuropathy. It received FDA approval for the treatment of fibromyalgia in 2008. Given its clinical effectiveness in treating these conditions, the medication has been applied off-label for visceral hypersensitivity syndromes, including IBS. Eli-Lilly is currently sponsoring an open-label trial of duloxetine for the treatment of irritable bowel syndrome in the absence of major depressive disorder"

Wednesday, September 12, 2018

A primer on US healthcare history



Very little is taught about the structure and history of the US health system in the residency programs where thousands of foreign medical graduates (FMGs), also known as international medical graduates (IMGs), get their training.

So, for those who want to join residencies in the future, those currently doing their residencies, and newly minted practising IMGs:

US healthcare 101


The History of Healthcare in America

Jeff Griffin

A photo of the Capitol Building in Washington D.C., where a large portion of the history of healthcare in America has been decided.

America’s history of healthcare is a bit different from that of most first-world nations. Our staunch belief in capitalism has prevented us from developing the kind of national healthcare the United Kingdom, France, and Canada have used for decades. As a result, we have our own system of sorts that has evolved drastically over the past century and is both loved and hated.
Whichever end of the spectrum you lean toward, there’s no doubt about it: the history of healthcare in America is a long and winding road. How we got to where we are in 2017 is quite a story, so let’s dive in.

The History of Healthcare: From the Late 1800’s to Now

The Late 1800’s

The earliest formalized records in America’s history of healthcare are dated toward the end of the 19th century. The industrial revolution brought steel mill jobs to many U.S. cities, but the dangerous nature of the work led to more and more workplace injuries.
As these manufacturing jobs became increasingly prevalent, their unions grew stronger. In order to shield their union members from catastrophic financial losses due to injury or illness, they began to offer various forms of sickness protection. At the time, there was very little organized structure and most decisions were made on a trial and error basis.

The 1900’s

With the turn of the century came a push for organized medicine, led in part by the American Medical Association (AMA), which grew stronger, gaining 62,000 physicians over the following decade. But because the working class wasn’t supportive of the idea of compulsory healthcare, the U.S. didn’t see the kind of groundswell that leading European nations would see soon after.
The 26th President of the United States, Theodore Roosevelt (1901-1909), believed health insurance was important because “no country could be strong whose people were sick and poor.” Even so, he didn’t lead the charge for stronger healthcare in America. Most of the initiative in the early 1900’s was led by organizations outside the government.

The 1910’s

One of the organizations heavily involved with advancing healthcare was the American Association for Labor Legislation (AALL), which drafted legislation targeting the working class and low-income citizens (including children). Under the proposed bill, qualified recipients would receive sick pay, maternity benefits, and a death benefit of $50.00 to cover funeral expenses. The cost of these benefits would be split between states, employers, and employees.
The AMA initially supported the bill, but some medical societies objected, citing concerns over how doctors would be compensated. The fierce opposition caused the AMA to back down and ultimately pull support for the AALL bill. Union leaders, too, feared compulsory health insurance would weaken them, as a portion of their power came from being able to negotiate insurance benefits for union members.
As one might expect, the private insurance industry also opposed the AALL Bill because they feared it would undermine their business. If Americans received compulsory insurance through the government, they might not see the need to purchase additional insurance policies privately (especially life insurance), which could put them out of business — or at the very least, cut into their profits. In the end, the AALL bill couldn’t garner enough support to move forward and global events turned Americans' attention toward the war effort.
After the start of World War I, Congress passed the War Risk Insurance Act, which covered military servicemen in the event of death or injury. The Act was later amended to extend financial support to the servicemen’s dependents. The War Risk Insurance program essentially ended with the conclusion of the war in 1918, though benefits continued to be paid to survivors and their families.

The 1920’s

Post World War I, the cost of healthcare became a more pressing matter as hospitals and physicians began to charge more than the average citizen could afford. Seeing that this was becoming an issue, a group of teachers created a program through Baylor University Hospital whereby they agreed to pre-pay for future medical services (covering up to 21 days of hospital care). The resulting organization was not-for-profit and only covered hospital services. It was essentially the precursor to Blue Cross.

The 1930’s

When the Great Depression hit in the 30’s, healthcare became an increasingly heated debate. One might believe such conditions would create the perfect climate for compulsory, universal healthcare, but in reality, they did not. Rather, unemployment and “old age” benefits took precedence.
While “The Blues” (Blue Cross and Blue Shield) began to expand across the country, the 32nd President of the United States, Franklin Delano Roosevelt (1933-1945), knew healthcare would grow to be a substantial problem, so he got to work on a health insurance bill that included the “old age” benefits so desperately needed at the time.
However, the AMA once again fiercely opposed any plan for a national health system, causing FDR to drop the health insurance portion of the bill. The resulting Social Security Act of 1935 created a system of “old-age” benefits and allowed states to create provisions for people who were either unemployed or disabled (or both).

The 1940’s

A photo of the FDR Memorial in Washington D.C. FDR played an important role in the history of healthcare in America.

As the U.S. entered World War II after the attack on Pearl Harbor, attention fell from the health insurance debate. Essentially all government focus was placed on the war effort, including the Stabilization Act of 1942, which was written to fight inflation by limiting wage increases.
Since U.S. businesses were prohibited from offering higher salaries, they began looking for other ways to recruit new employees as well as incentivizing existing ones to stay. Their solution was the foundation of employer-sponsored health insurance as we know it today. Employees enjoyed this benefit, as they didn’t have to pay taxes on their new form of compensation and they were able to secure healthcare for themselves and their families. After the war ended, this practice continued to spread as veterans returned home and began looking for work.
While this was an improvement for many, it left out vulnerable groups of people: retirees, the unemployed, those unable to work due to a disability, and those whose employer did not offer health insurance. In an effort not to alienate at-risk citizens, some government officials felt it was important to keep pushing for a national healthcare system.
The Wagner-Murray-Dingell Bill was introduced in 1943, proposing universal health care funded through a payroll tax. As the history of healthcare thus far might have predicted, the bill faced intense opposition and eventually died in committee.
When FDR died in 1945, Harry Truman (1945-1953) became the 33rd President of the United States. He took over FDR’s old national health insurance platform from the mid-30’s, but with some key changes. Truman’s plan included all Americans, rather than only working class and poor citizens who had a hard time affording care — and it was met with mixed reactions in Congress.
Some members of Congress called the plan “socialist” and suggested that it came straight out of the Soviet Union, adding fuel to the Red Scare that was already gripping the nation. Once again, the AMA took a hard stance against the bill, claiming the Truman Administration was toeing “the Moscow party line.” The AMA even introduced its own plan, which proposed private insurance options, departing from its previous platform that opposed third parties in healthcare.
Even after Truman was re-elected in 1948, his health insurance plan died as public support dropped off and the Korean War began. Those who could afford it began purchasing health insurance plans privately and labor unions used employer-sponsored benefits as a bargaining chip during negotiations.

The 1950’s

As the government became primarily concerned with the Korean War, the national health insurance debate was tabled once again. While the country tried to recover from its third war in 40 years, medicine was moving forward. It could be argued that the effects of penicillin in the 40’s opened people’s eyes to the benefits of medical advancements and discoveries.
In 1952, Jonas Salk’s team at the University of Pittsburgh created an effective polio vaccine, which was tested nationwide two years later and approved in 1955. During the same time frame, the first organ transplant was performed when Dr. Joseph Murray and Dr. David Hume took a kidney from one man and successfully placed it in his twin brother.
Of course, with such leaps in medical advancement came additional cost — a story from the history of healthcare that is still repeated today. During this decade, the price of hospital care doubled, again pointing to America’s desperate need for affordable healthcare. But in the meantime, not much changed in the health insurance landscape.

The 1960’s

By 1960, the government started tracking National Health Expenditures (NHE) and calculated them as a percentage of Gross Domestic Product (GDP). At the start of the decade, NHE accounted for 5 percent of GDP.
When John F. Kennedy (1961-1963) was sworn in as the 35th President of the United States, he wasted no time getting to work on a healthcare plan for senior citizens. Seeing that NHE would continue to increase and knowing that retirees would be most affected, he urged Americans to get involved in the legislative process and pushed Congress to pass his bill. But in the end, it failed miserably against harsh AMA opposition and, once again, fear of socialized medicine.
After Kennedy was assassinated on November 22, 1963, Vice President Lyndon B. Johnson (1963-1969) took over as the 36th President of the United States. He picked up where Kennedy left off with a senior citizen’s health plan. He proposed an extension and expansion of the Social Security Act of 1935, as well as the Hill-Burton Program (which gave government grants to medical facilities in need of modernization, in exchange for providing a “reasonable” amount of medical services to those who could not pay).
Johnson’s plan focused solely on making sure senior and disabled citizens were still able to access affordable healthcare, both through physicians and hospitals. Though Congress made hundreds of amendments to the original bill, it did not face nearly the opposition that preceding legislation had — one could speculate as to the reason for its easier path to success, but it would be impossible to pinpoint with certainty.
It passed the House and Senate with generous margins and went to the President’s desk. Johnson signed the Social Security Act of 1965 on July 30 of that year, with President Harry Truman sitting at the table with him. This bill laid the groundwork for what we now know as Medicare and Medicaid.

The 1970’s

A photo of the White House in Washington D.C., where many Presidents have mulled over the implications of healthcare reform throughout history.

By 1970, NHE accounted for 6.9 percent of GDP, due in part to “unexpectedly high” Medicare expenses. Because the U.S. had not formalized a health insurance system (it was still just people who could afford it buying insurance), the government didn’t really have any idea how much it would cost to provide healthcare for an entire group of people — especially an older group that is more likely to have health problems. Nevertheless, this was quite a leap in a ten-year time span, and it wouldn’t be the last time we’d see such jumps. This decade would mark another push for national health insurance — this time from unexpected places.
Richard Nixon (1969-1974) was elected the 37th President of the United States in 1968. As a teen, he watched two brothers die and saw his family struggle through the 1920’s to care for them. To earn extra money for the household, he worked as a janitor. When it came time to apply for colleges, he had to turn Harvard down because his scholarship didn’t include room and board.
Entering the White House as a Republican, Nixon surprised many when he proposed new legislation that strayed from party lines in the healthcare debate. With Medicare still fresh in everyone’s minds, it wasn’t a stretch to believe additional healthcare reform would come hot on its heels, so members of Congress were already working on a plan.
In 1971, Senator Edward (Ted) Kennedy proposed a single-payer plan (a modern version of a universal, or compulsory system) that would be funded through taxes. Nixon didn’t want the government reaching so far into Americans’ lives, so he proposed his own plan, which required employers to offer health insurance to employees and even provided subsidies to those who had trouble affording the cost.
Nixon believed that basing a health insurance system in the open marketplace was the best way to strengthen the existing makeshift system of private insurers. In theory, this would have allowed the majority of Americans to have some form of health insurance. People of working age (and their immediate families) would have insurance through their employers and then they’d be on Medicare when they retired. Lawmakers believed the bill satisfied the AMA because doctors’ fees and decisions would not be influenced by the government.
Kennedy and Nixon ended up working together on a plan, but in the end, Kennedy buckled under pressure from unions and he walked away from the deal — a decision he later said was “one of the biggest mistakes of his life.” Shortly after negotiations broke down, Watergate hit and all the support Nixon’s healthcare plan had garnered completely disappeared. The bill did not survive his resignation and his successor, Gerald Ford (1974-1977) distanced himself from the scandal.
However, Nixon was able to accomplish two healthcare-related tasks. The first was an expansion of Medicare in the Social Security Amendment of 1972 and the other was the Health Maintenance Organization Act of 1973 (HMO), which established some order in the healthcare industry chaos. But by the end of the decade, American medicine was considered to be in “crisis,” aided by an economic recession and heavy inflation.

The 1980’s

By 1980, NHE accounted for 8.9 percent of GDP, an even larger leap than the decade prior. Under the Reagan Administration (1981-1989), regulations loosened across the board and privatization of healthcare became increasingly common.
In 1986, Reagan signed the Consolidated Omnibus Budget Reconciliation Act (COBRA), which allowed former employees to continue to be enrolled in their previous employer’s group health plan — as long as they agreed to pay the full premium (employer portion plus employee contribution). This provided health insurance access to the recently unemployed who might have otherwise had difficulty purchasing private insurance (due to a pre-existing condition, for example).

The 1990’s

By 1990, NHE accounted for 12.1 percent of GDP — the largest increase thus far in the history of healthcare. Like others before him, the 42nd President of the United States, Bill Clinton (1993-2001), saw that this rapid increase in healthcare expenses would be damaging to the average American and attempted to take action.
Shortly after being sworn in, Clinton proposed the Health Security Act of 1993. It contained many ideas similar to FDR’s and Nixon’s plans — a mix of universal coverage while respecting the private insurance system that had formed on its own in the absence of legislation. Individuals could purchase insurance through “state-based cooperatives,” companies could not deny anyone based on a pre-existing condition, and employers would be required to offer health insurance to full-time employees.
Multiple issues stood in the way of the Clinton plan, including foreign affairs, the complexity of the bill, an increasing national deficit, and opposition from big business. After a period of debate toward the end of 1993, Congress left for winter recess with no conclusions or decisions, leading to the bill’s quiet death.
In 1996, Clinton signed the Health Insurance Portability and Accountability Act (HIPAA), which established privacy standards for individuals. It also guaranteed that a person’s medical records would be available upon their request and placed restrictions on how pre-existing conditions were treated in group health plans.
The final healthcare contribution from the Clinton Administration was part of the Balanced Budget Act of 1997. It was called the Children’s Health Insurance Program (CHIP) and it expanded Medicaid assistance to “uninsured children up to age 19 in families with incomes too high to qualify them for Medicaid.” CHIP is run by each individual State and is still in use today.
In the meantime, employers were trying to find ways to cut back on healthcare costs. In some cases, this meant offering HMOs, which by design, are meant to cost both the insurer and the enrollee less money. Typically this includes cost saving measures, such as narrow networks and requiring enrollees to see a primary care physician (PCP) before a specialist. Generally speaking, insurance companies were trying to gain more control over how people received healthcare. This strategy worked overall — the 90’s saw slower healthcare cost growth than previous decades.

The 2000’s

By the year 2000, NHE accounted for 13.3 percent of GDP — an increase of just 1.2 percentage points over the previous decade. When George W. Bush (2001-2009) was elected the 43rd President of the United States, he wanted to update Medicare to include prescription drug coverage. This idea eventually turned into the Medicare Prescription Drug, Improvement and Modernization Act of 2003 (sometimes called Medicare Part D). Enrollment was (and still is) voluntary, although millions of Americans use the program.
The history of healthcare slowed down at that point, as the national healthcare debate was tabled while the U.S. focused on the increased threat of terrorism and the second Iraq War. It wasn’t until election campaign mumblings began in 2006 and 2007 that insurance worked its way back into the national discussion.
When Barack Obama (2009-2017) was elected the 44th President of the United States in 2008, he wasted no time getting to work on health care reform. He worked closely with Senator Ted Kennedy to create a new healthcare law that mirrored the one Kennedy and Nixon worked on in the 70’s.
Like Nixon’s bill, it mandated that applicable large employers provide health insurance, in addition to requiring that all Americans carry health insurance, even if their employer did not offer it. The bill would establish an open Marketplace, on which insurance companies could not deny coverage based on pre-existing conditions. American citizens earning less than 400 percent of the poverty level would qualify for subsidies to help cover the cost. It wasn’t universal or single-payer coverage, but instead used the existing private insurance industry model to extend coverage to millions of Americans. The bill circulated the House and the Senate for months, going through multiple revisions, but ultimately, passed and moved to the President’s desk.

2010 to Now

A photo of the Washington Monument in Washington D.C. at dusk.

While the country focused on the second Iraq War, the cost of healthcare took another leap. By 2010, NHE accounted for 17.4 percent of GDP. This period of time would bring a new, but divisive chapter in the history of healthcare in America.
On March 23, 2010, President Obama signed the Patient Protection and Affordable Care Act (PPACA), commonly called the Affordable Care Act (ACA) into law. Because the law was complex and the first of its kind, the government issued a multi-year rollout of its provisions. In theory, this should have helped ease insurance companies (and individuals) through the transition, but in practice, things weren’t so smooth. The first open enrollment season for the Marketplace started in October 2013 and it was rocky, to say the least.
Nevertheless, 8 million people signed up for insurance through the ACA Marketplace during the first open enrollment season. The numbers increased to 11.7 million in 2015 and it’s estimated that the ACA has covered an average of 11.4 million annually ever since.
It’s no secret that the ACA was met with heavy opposition for a variety of reasons (the individual mandate and the employer mandate being two of the most hotly contested). Some provisions were even taken before the Supreme Court on the basis of constitutionality. In addition, critics highlighted the problems with healthcare.gov as a sign this grand “socialist” plan was destined to fail.
Regardless of the controversy, it could be argued that the most helpful part of the ACA was its pre-existing condition clause. Over the course of the 20th century, insurance companies began denying coverage to individuals with pre-existing conditions, such as asthma, heart attacks, strokes, and AIDS. The exact point when pre-existing conditions were cited in the history of our healthcare is debatable, but very possibly, it occurred as for-profit insurance companies popped up across the landscape. Back in the 20’s, not-for-profit Blue Cross charged the same amount, regardless of age, sex, or pre-existing condition, but eventually, they changed their status to compete with the newcomers. And as the cost of healthcare increased, so did the number of people being denied coverage.
Prior to the passing of the ACA, it’s estimated that one in seven Americans were denied health insurance because of a pre-existing condition, the list of which was extensive and often elusive, thanks to variations between insurance companies and language like “including, but not limited to the following.”
In addition, the ACA allowed for immediate coverage of maternal and prenatal care, which had previously been far more restrictive in private insurance policies. Usually, women had to pay an additional fee for maternity coverage for at least 12 months prior to prenatal care being covered — otherwise, the pregnancy was viewed as a pre-existing condition and services involving prenatal care (bloodwork, ultrasounds, check-ups, etc) were not included in the policy.

The Future of Healthcare: History Shall Repeat Itself

Since Donald Trump was sworn in as the 45th President of the United States on January 20, 2017, many have been questioning what will happen with our healthcare system — specifically, what will happen to the ACA, since Trump ran on a platform of “repealing and replacing” the bill.
As open enrollment for 2017 drew to a close, it became apparent to lawmakers that either repealing or replacing the ACA would be no easy task. If they were to repeal the bill, what would happen to the 11 million Americans currently insured through the Marketplace? If they come up with a replacement plan, what does that look like? What changes would be made? Will there be a Marketplace? Will insurance companies be able to deny coverage based on pre-existing conditions again?
There are plenty of questions that need to be answered, and it doesn’t seem like a decision will be reached anytime soon, but one thing is for sure: changes will be made. What those changes will be depends on Congress, Trump, the new Secretary of Health and Human Services, Tom Price — and you, the voter.
The history of healthcare in America will continue to evolve, and it will be interesting to see where this administration takes us and the effects its plan will have on Americans. Regardless, we’ll have to keep an eye on the national health expenditure numbers. According to the latest data available, NHE was 17.8 percent of GDP in 2015, signaling slower growth than the previous decade, but only time will tell the full story.

Doctors With Borders: How the U.S. Shuts Out Foreign Physicians