I recently wrote a blog post in support of Peripartum Depression Awareness / the Melanie Blocker Stokes Act. While looking at that topic, I noted that there is an active question regarding the safety of the developing baby in utero, or of the breastfeeding baby, if mom is taking an SSRI.
I cited just one study showing increased risk, just to make the point that this is a problem we need to be concerned about, especially since there is an alternative: talk therapy.
In that effort, I came across a recent study that seemed to be trying to evaluate whether mom taking an SSRI posed a risk in pregnancy or in the "neonatal" phase. What I discovered was just another piece of Big Pharma marketing, masquerading as clinical research.
The article is from the American Journal of Psychiatry, March 16, 2009: "Major depression and antidepressant treatment: impact on pregnancy and neonatal outcomes." The lead author is Katherine Wisner. The paper is "brought to you by" NIMH funding, so by psychiatrist standards it is totally free of bias from Big Pharma. Wisner, however, has a long history of collaboration with pharmaceutical companies, and this is noted at the end of the article. So, if you were a cynical person, the needle on your suspicion-o-meter for lousy research might start moving.
Now, here is the big deal about this type of study:
This is a surveillance study. You are looking at a bunch of people to see if some condition or event happens to occur in some group, or in one group compared to another group.
The whole deal with a surveillance study is the "base rate." This is the number of people, out of a bigger population, who have the characteristic of interest.
Let's say your friend is a professional photographer with a job taking pictures for a new eyedrops advertising campaign. They want people with brown eyes, blue eyes, and green eyes using the eyedrops. Well, people with green eyes are not so common. So your friend asks you to go find a person or two with green eyes and ask if they want to be America's next top eyedrop model.
If you suspect that 1 out of 10 people have green eyes in your neighborhood, you figure that you will have to go out and meet 10 neighbors before you find the 1 person with green eyes.
Now, it makes sense that if you want to detect at least 1 person with green eyes, you are going to have to talk to at least 10 neighbors. Because you have some idea of the "base rate" you might expect.
You also understand that you might luck into finding a green-eyed person in the first 2 or 3 neighbors you encounter, just by chance. People do not line up around your neighborhood in some organized fashion according to eye color. So you know there will be some randomness and variation, and you figure that if you are not very lucky, you might have to encounter 20 people before you find one with green eyes.
Now, if the 3rd person you encounter has green eyes, you are not going to decide that the prevalence of green eyes is 1 out of 3. No, you are gonna conclude that you were lucky: the randomness favored you that day.
The next day, the ball might bounce the other way.
Knowing this, you set out to do your surveillance by being over-prepared: you are gonna plan time to encounter 50 neighbors. That number seems good enough to overcome the random variation that might happen day to day.
The point is: if you are going to conduct a surveillance study for something, and you have a fair idea it is out there in the neighborhood, you need to plan to do your surveillance on enough people to overcome this randomness.
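If you want to see that day-to-day swing for yourself, here is a rough sketch of the neighborhood walk as a simulation. The 1-in-10 base rate is just our running assumption from the example:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

BASE_RATE = 0.10  # our assumed 1-in-10 chance a neighbor has green eyes

def neighbors_until_green_eyes():
    """Count how many neighbors you meet before hitting green eyes."""
    count = 0
    while True:
        count += 1
        if random.random() < BASE_RATE:
            return count

# Ten separate "days" of walking the neighborhood
daily_counts = [neighbors_until_green_eyes() for _ in range(10)]
print(daily_counts)  # lucky days and unlucky days, from the same base rate

# Averaged over many days, the count settles near 1 / BASE_RATE = 10
many_days = [neighbors_until_green_eyes() for _ in range(10_000)]
print(sum(many_days) / len(many_days))
```

Any single day can land anywhere from 1 to 20-plus, but the long-run average hovers near 10 - which is exactly why a surveillance study needs enough people to smooth out the lucky and unlucky days.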
It also makes sense that you have to pay attention to demographics: the portion of people with green eyes will, obviously, be greater among the Anglo people in your neighborhood compared to the non-Anglo people. When you encounter Asians, Hispanics, and African-Americans, you are not going to discover anything close to 10% with green eyes.
So, the number of people you plan to encounter will have to go even higher.
So, demographic composition matters. And that, obviously, will depend on your neighborhood.
Now it makes sense that, in this example, we are really not going to be satisfied with encountering just 10 people and checking eye color.
So, now, on to the study at hand, where the researchers claim they are trying to detect whether taking SSRI antidepressant medications in pregnancy leads to adverse pregnancy or birth outcomes.
Low birth weight. Early delivery.
Now, this is similar to looking for the green-eyed people: we know that low birth weight is gonna happen in some births. We know that early delivery happens some of the time. We know that physical anomalies / birth defects happen some of the time.
Do they happen because mom was taking SSRI?
Well, now that we have had our discussion about encountering green-eyed people, it is plain old flat-out obvious that we are going to need to find a decent number of pregnant women, then see how the deliveries go.
And, how will we be able to answer whether the SSRI had anything to do with any unfavorable outcome: with low birth weight, birth defect, etc.?
Well, we could compare birth outcomes between women taking SSRI, and women not taking SSRI.
And we will have to have a big enough number to overcome the randomness - like our neighborhood trips, which might be good one day (finding our green-eyed neighbor after only 3 people) and bad another (after encountering 20 people).
So, with this NIH-funded study, how many pregnant women were in the SSRI group, and how many in the no-SSRI group?
But wait - you cleverly note: what if the depression - not the pills - leads to the low birth weight? We might falsely conclude that SSRIs lead to bad birth outcomes, but the real culprit is the depression.
OK, so we will look in a group of depressed women, some taking SSRI, and some not.
OK - now - how many pregnant women will you want in each group in order to test whether SSRI meds are associated with bad birth outcomes?
Well, here is the answer, for Wisner and colleagues.
Depressed women taking SSRI: 48.
Depressed women not taking SSRI: 14.
Yes: forty-eight, and fourteen. I write this twice so you don't think I am misrepresenting this study with a typo.
Wisner and colleagues believe it is OK to answer the question of whether SSRIs are associated with birth defects by looking at 62 births, only 48 of them exposed.
Now, people, let's get serious. This is ridiculous.
Unless you are a pharmaceutical company, and you want to promote your pills by encouraging routine surveillance for depression in pregnancy ("prepartum depression," AKA "antepartum depression"), and by declaring that SSRI are safe for the baby. No risk of early delivery. No risk of low birth weight. No risk of birth defects.
Are you kidding me?
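To put some numbers on just how ridiculous, here is a back-of-envelope power calculation. The 10% background rate of a bad outcome is purely an illustrative assumption (roughly in the ballpark for preterm birth); the group sizes of 48 and 14 come from the study:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_power(p1, p2, n1, n2, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test
    (normal approximation, unpooled standard error)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = abs(p1 - p2) / se
    return 1 - NormalDist().cdf(z_alpha - z)

BASELINE = 0.10  # assumed background rate of a bad birth outcome
for ssri_rate in (0.15, 0.20, 0.30):
    power = two_proportion_power(ssri_rate, BASELINE, n1=48, n2=14)
    print(f"{BASELINE:.0%} baseline vs {ssri_rate:.0%} on SSRI: power = {power:.0%}")
```

Under these assumptions, even if the SSRI tripled the rate of a bad outcome from 10% to 30%, a 48-versus-14 comparison would detect it less than half the time, and a more modest increase to 15% would be caught well under one time in ten. A study that cannot see a tripled risk is not a study that can declare safety.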
I am not even going to discuss the FINDINGS of this study, since our reasonable minds can safely declare that the study is either plain old nonsense, or propaganda, and therefore worthless regarding the question of whether SSRI antidepressants lead to birth defects.
And, people, this is your tax dollars at work: an NIH-funded study.
We can maybe see why Wisner and colleagues signed on to the study: profit motive, or a publication in a major journal to advance their research careers.
But this was approved by someone's IRB (institutional review board / ethics review board). And it found favor at the NIH, which is quite an accomplishment.
And, the Am J Psychiatry editorial staff and their article reviewers favored the article.
Am I crazy, or is it the rest of the world?
So, in my opinion, they slanted this study, in advance, to fail to find SSRI risk even if there is risk.
This makes me scared, and makes me wonder if they know something we don't. Are they actually afraid that if they surveyed, say, 100 women, they might actually find a woman with green eyes / find birth defects at a greater rate in the SSRI group compared to the non-SSRI group?
Now, if you have followed me this far, you may be able to see that a surveillance study takes a lot more people than a treatment study.
In surveillance, you have to winnow through many study participants to find the rare events: green eyes, birth defects, low-birth-weight babies, etc.
In a treatment study, let's say hypothetically that you were going to see if you could successfully TREAT postpartum depression. Well, you are going to need to estimate your expected success rate, and allow for the randomness that might happen: who might get better just by luck, who might not get better just by bad luck, and so on.
So, in that kind of study, you might consider 14 in a treatment group and 48 in the control group.
Maybe. But you would still have a "surveillance" type aspect to your treatment study. In this case, that is called "recruitment." You need to screen through lots of pregnant women to discover the depressed women. Then you provide treatment and see who gets better. Obviously, the number of screened women will need to be much greater than the number treated. So a surveillance study, in general, will need a sample of people much greater than a treatment study. Generally.
So, in any treatment study, you hypothetically have a surveillance study built in, if only you conceptualize it that way. And if you report that info, then people reading your treatment study can glean the recruitment data from it.
The point being: surveillance requires bigger numbers than treatment.
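We can make "bigger" concrete by flipping the question around: how many women per group would a properly powered surveillance study need? Here is a sketch using the standard two-proportion sample-size formula. The 3% baseline rate of major birth defects is an illustrative assumption (in the ballpark of commonly quoted population figures), with a hypothesized doubling in the exposed group:

```python
from math import sqrt, ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per group for a two-sided two-proportion z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Illustrative assumption: defects in 3% of births, doubled to 6% when exposed
print(n_per_group(0.06, 0.03))
```

Under these assumptions the answer comes out around 750 women per group - an order of magnitude beyond the 62 women, total, in the surveillance study, and well beyond even a 231-per-group treatment trial.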
At the same time, Dr. Wisner has a detection-and-treatment grant listed in CRISP, the database of federally funded studies. Two groups: usual care for postpartum depression, or a more organized "chronic disease management" model, with more case management and monitoring, plus some emphasis on patient choice.
Sample size for each of these two groups? 231 each.
Yes, for Wisner, you only need 48 patients to conduct surveillance for birth defects, but a treatment study needs nearly five times as many patients in each of its two treatment groups.
That just does not add up. Unless you consider: maybe they really did not want to find birth defects? Why not? Here's a wild idea: profit motive. Big Pharma has been funding Wisner for a long time. When considering meds versus therapy for prepartum depression, she has a reason to favor meds, maybe?
There are other funny things with this surveillance study. Maybe I will get to them later. But this is plenty enough to: 1. cast suspicion on the results, and 2. wonder how this ever got published.