
HIV Vaccine!

To expand on my comment above: any medical intervention can have side effects and carries some risk. From the article, some 7,800 people were given 6 injections each; that is nearly 47,000 injections. This appears to have prevented 19 people from developing HIV.

We do need to weigh the benefits against the risks. Were this to stop the common cold, it would not be worth it. HIV is far more serious, but the cost-benefit analysis still needs to be done.

In respect of Ben Goldacre's percentage point.

A treatment could give a relative 80% reduction in the chance of developing an illness.

If the chance of getting that disease is 1 in 2, then the treatment will reduce this to 1 in 10, which certainly looks worth it. The chance of getting the disease has dropped from 50% to 10%, a 40-percentage-point reduction in absolute terms.

If the chance of getting the disease is 1 in a million, then an 80% reduction will make it 1 in 5 million. You have to go several decimal places into a single percentage point before noticing a difference. In those circumstances people may choose to take their chances.

It is easier for people to decide by reviewing absolute risks rather than relative risks.
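As a quick illustration of the arithmetic in the two examples above (the base rates and the 80% figure are the post's hypotheticals, not trial data):

```python
# An 80% relative risk reduction applied to two very different base rates,
# using the hypothetical numbers from the examples above.
def absolute_reduction(base_rate, relative_reduction):
    """Drop in risk, in absolute terms, for a given relative reduction."""
    return base_rate * relative_reduction

common = absolute_reduction(0.5, 0.8)   # 1-in-2 disease: 50% -> 10%
rare = absolute_reduction(1e-6, 0.8)    # 1-in-a-million disease

print(f"common disease: {common:.0%} fewer cases in absolute terms")
print(f"rare disease:   {rare:.6%} fewer cases in absolute terms")
```

The relative reduction is identical in both cases, but the absolute figure shrinks with the base rate, which is the point being made above.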
 
My choice of percentage was quite deliberate. I have just finished rereading Ben Goldacre's 'Bad Science'. He suggests that comparative percentages can be misleading and that it is far better to express them in terms of absolute risk.

Absolute risk is not particularly useful for comparing rare events with catastrophic consequences.

Thanks, I realise that, and I realise that my 50% example was just as unrealistic. I was hoping someone would know how to work out the confidence from the numbers in the article.

Do your own damn p-value computation (I get 0.01945 or 98.05% "achieved" confidence).
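For anyone wanting to reproduce that figure, here is a sketch of a one-sided two-proportion z-test, using the infection counts discussed later in the thread (74 placebo vs 51 vaccine infections; the equal group sizes of 8,200 are an assumption, since the article only says "nearly 8,200"):

```python
from math import sqrt, erf

# Counts quoted elsewhere in the thread (equal group sizes assumed).
n_placebo, x_placebo = 8200, 74   # infections in placebo group
n_vaccine, x_vaccine = 8200, 51   # infections in vaccine group

p1 = x_placebo / n_placebo
p2 = x_vaccine / n_vaccine
pooled = (x_placebo + x_vaccine) / (n_placebo + n_vaccine)

# Standard error under the null hypothesis of equal infection rates.
se = sqrt(pooled * (1 - pooled) * (1 / n_placebo + 1 / n_vaccine))
z = (p1 - p2) / se

# One-sided p-value from the standard normal CDF, built from erf.
p_value = 0.5 * (1 - erf(z / sqrt(2)))
print(f"z = {z:.3f}, one-sided p = {p_value:.5f}")  # close to the 0.0195 quoted
```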
 
How so? If each group similarly engages in riskier behaviour, then the results of the experiment should still hold, I think.

Also you could check to see if the participants in the experiment did actually engage in riskier behaviour by comparing infection rates in the placebo group to infection rates in the general population.

It is true that you could check the placebo group against the general population, but there is an expected difference there because of sampling, so it would be hard to tell whether there is an effect from knowing you are in the study.
 
Will it be possible to design the attempts at replication using placebo? I mean ethically - are these results so small that there is not an ethical dilemma in giving placebo to participants?

Apropos the effect of participating in a study: I would have guessed that participating could change the participants' behaviour so that they are less likely to engage in risky behaviour, just as easily as the other way around. For a lot of people, the wish for their clinician to "look good" might be stronger than the "heck, I might be vaccinated, let's see if it worked" side of the coin.

The numbers are tantalisingly small, but not small enough to be simply brushed off. Since they are doing efficacy trials, I am assuming that safety and tolerability have already been ascertained, so it would be really interesting to see a huge study on this.

I am cautiously optimistic.
 
I expect further improvements in the future. Sure, 31% reduction in risk is not too impressive, but as a first step it's enormous. The first airplane was not very useful either, but you have to learn to crawl before you can walk.
 
But injecting Lawyers with HIV is not so clear cut.

Good grief, man. Not so clear cut? Viruses can sometimes accidentally get a gene or two from a host too, and transfer them to the next host. It's one mechanism for horizontal gene transfer. Just the thought of using lawyers there gives me the creeps.
 
Will it be possible to design the attempts at replication using placebo? I mean ethically - are these results so small that there is not an ethical dilemma in giving placebo to participants?

I think so. Tell everyone to assume that they are not protected and to take reasonable precautions. It's no worse than not participating in the trial at all.
 
No, 31% less chance of getting HIV.

I don't like when stats are used like that. If the numbers were slightly different, say, 51/8000 and 102/8000 instead of 74, you could say you have a 100% greater chance of getting HIV without the treatment. That sounds drastic when it isn't at all.

Lothian's way is more acceptable, to me at least, although the calculation is a little off, I think; I got a difference of 0.2875%.
 
I think so. Tell everyone to assume that they are not protected and to take reasonable precautions. It's no worse than not participating in the trial at all.

That is the type of distortion I was talking about earlier. The study you are suggesting would give no usable data.
 
If the chance of getting the disease is 1 in a million, then an 80% reduction will make it 1 in 5 million. You have to go several decimal places into a single percentage point before noticing a difference. In those circumstances people may choose to take their chances.

That is true, but at least in the sample the chances of getting infected were somewhere in excess of 125/16000, or 1 in 128 (in the group for this study). A 30% reduction makes it 1 in 200. Obviously they aren't stopping there but want to do better, and now have encouraging signs. Your 80% means 1/640 or so. Still unwilling to take chances on that?
 
I don't like when stats are used like that. If the numbers were slightly different, say, 51/8000 and 102/8000 instead of 74, you could say you have a 100% greater chance of getting HIV without the treatment. That sounds drastic when it isn't at all.

Lothian's way is more acceptable, to me at least, although the calculation is a little off, I think; I got a difference of 0.2875%.
I used 8,200 placebo and 7,800 treatment.

Reading again
The new study was conducted in Thailand, with more than 16,000 people between ages 18 and 30 participating. They were all HIV negative at the beginning of the trial.

Nearly 8,200 received a placebo and a similar number received a combination of six vaccines over six months. All were followed for three years.

perhaps it should be 8200 and 8200.
 
That is true, but at least in the sample the chances of getting infected were somewhere in excess of 125/16000, or 1 in 128 (in the group for this study). A 30% reduction makes it 1 in 200. Obviously they aren't stopping there but want to do better, and now have encouraging signs. Your 80% means 1/640 or so. Still unwilling to take chances on that?
80% was a hypothetical example. Here it appears to be about 1 in 110 (placebo) or 1 in 160 (treatment).

I think the true odds vary based on people's lifestyles, especially with STDs.

If it was another disease I worried about I would have to consider the pros and cons but would probably end up playing the "it won't happen to me" card.
 
It should, but there is a known link between high risk groups and getting the disease.

You brought me up short here. This seems self-evident, redundant, and repetitious. Isn't the very definition of a "high risk group" one that is linked to getting the disease?
 
OK, with the new numbers the difference is 0.2805%.

Lo, I said a bit off because you had a decimal place wrong :)

The point is, I can't believe anyone is taking this study as meaningful in any way! The expectation of getting one result when compared to another is highly likely, 98% if I am reading it right?

What if it was a subject closer to many of your hearts?

Suppose I guessed the next card in a stack of 158 decks of cards. I got 51 right. Then I did the same thing, but this time with a Q-Ray bracelet on. Now I guessed 74 right. Anyone want to say that there is any effect at all from the bracelet?
 
It should, but there is a known link between high risk groups and getting the disease. Whether this treatment stops it is not known.

This doesn't make sense to me. Why wouldn't it have been people at high risk of acquiring HIV which acquired HIV in this study?

The difference seems very low; 0.024% less chance of getting HIV in the treatment group.

The difference was 0.28%, which is very low. But this is because the underlying rate of acquisition is very low. If you wish to compare the reduction in a way that does not depend upon the rate of acquisition, then you need to use the relative risk reduction, which is 31%.
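Using the counts quoted upthread (51 vaccine vs 74 placebo infections; equal group sizes of 8,200 are an assumption), the two measures can be computed side by side:

```python
# Infection counts quoted in the thread (equal group sizes assumed).
n = 8200
placebo_rate = 74 / n
vaccine_rate = 51 / n

# Absolute risk reduction: percentage-point difference in infection rates.
arr = placebo_rate - vaccine_rate

# Relative risk reduction: the same difference, as a fraction of the
# placebo rate, so it does not depend on how rare the outcome is.
rrr = arr / placebo_rate

print(f"ARR = {arr:.4%}")  # about 0.28%
print(f"RRR = {rrr:.0%}")  # about 31%
```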

I wonder if it could be down to factors other than the treatment. I appreciate that there are limits to what can be done to ensure the two groups are similar, and that with 16,000 people starting this trial I can't really expect a bigger one. It is the confidence factor that I don't understand. Can we say, as a result of this, that we are 100% certain this treatment lowers the risk, or is this a case where we are only 50% certain?

The probability that the vaccine is effective, based on this study, is 97% (if I understand your question correctly).

Linda
 
Linda, your last statement may be backwards: 97% is the confidence that if the norm is 51 out of 8,200, you can get 74 out of 8,200. Therefore, ineffective. Another trial of 100,000 would bring the numbers closer together, showing how ineffective the inoculation actually is.
 
My choice of percentage was quite deliberate. I have just finished rereading Ben Goldacre's 'Bad Science'. He suggests that comparative percentages can be misleading and that it is far better to express them in terms of absolute risk.

Not really. Sometimes absolute risk is useful, sometimes it's relative risk. There are substantial disadvantages to using absolute risk reduction, as it prevents you from being able to use the information on any group except the one represented by the study. In particular, it is very difficult to compare interventions like vaccination or surgery, which are used once, using absolute risk reduction, since the numbers will vary widely depending upon the base rate of your outcome and the length of the follow-up period - two factors which have nothing to do with the effectiveness of the intervention itself. Relative risk reduction does not suffer from those constraints. If you want to compare treatments or compare treatments in different populations, relative risk gives you far more flexibility. On the other hand, it is difficult to put the results into perspective if you use relative risk reduction when your outcome is rare (as in this case). It is best to simply report both.

Thanks, I realise that, and I realise that my 50% example was just as unrealistic. I was hoping someone would know how to work out the confidence from the numbers in the article.

Using the study power, p-value, and study type (adequately powered RCT), one can calculate the probability that the study result is a true-positive using this model:

http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124
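A minimal sketch of the positive-predictive-value formula from that paper; the prior odds R must be supplied by the reader, and the alpha and power values below are illustrative assumptions, not figures from this trial:

```python
# PPV of a statistically significant finding, per the model in the linked
# PLoS Medicine paper: PPV = (1 - beta) * R / ((1 - beta) * R + alpha),
# where R is the prior odds that the tested relationship is real.
def ppv(R, alpha, power):
    return (power * R) / (power * R + alpha)

# Illustrative assumptions only: even prior odds (R = 1), 80% power, alpha = 0.05.
print(f"PPV = {ppv(R=1.0, alpha=0.05, power=0.8):.1%}")  # about 94%
```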

Linda
 
