This is false. It is used ubiquitously in drug trials, and it may even be a requirement there. But double-blind design is also used in tons of other research - any time there is a risk of implicit bias, double-blind is used. It gets used quite a bit in psychological and sociological research, IIRC.
Terminology:
Testable Condition - the characteristic of a specimen that is of interest, and in which a change is expected to occur
Treatment - the action being taken that is expected to cause a change in the specimen
Test Group - the set of specimens with the testable condition
Control Group - the set of specimens that are otherwise believed to be identical to the Test Group, but which lack the testable condition
Test/Control grouping is used to isolate the effect of the treatment on the testable condition. The change due to the treatment is measured for the Test Group relative to the Control Group. This identifies whether any observed changes in the Test Group are caused by the treatment, or whether there is a chance that they are caused by some other externality.
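To make that relative comparison concrete, here's a minimal sketch with invented numbers; the measurements and group sizes are purely illustrative.

```python
# Minimal sketch of the test-vs-control comparison; all numbers are invented.
# Each value is the change in some measurement for one specimen after treatment.

test_changes = [4.1, 3.8, 4.5, 3.9]      # Test Group: has the condition, got the treatment
control_changes = [0.3, -0.1, 0.2, 0.0]  # Control Group: otherwise identical, lacks the condition

def mean(xs):
    return sum(xs) / len(xs)

# The effect attributed to the treatment is the Test Group's change over and
# above whatever change the Control Group showed anyway (the externalities).
effect = mean(test_changes) - mean(control_changes)
print(f"Estimated treatment effect: {effect:.2f}")
```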
Blind Study - a study in which the specimens do not know whether they are receiving the treatment or not. This is used very frequently whenever the specimens in a study are human - any time there is a risk that knowledge of the treatment may affect the behavior of the specimen.
Double Blind Study - a study in which neither the specimens nor the administrators know whether the specimen is receiving the treatment or not. This is used whenever there is a risk that knowledge of the treatment may affect the behavior of the administrator as well as the specimen.
Here's an example of a study that doesn't involve drugs, but which would benefit from being double blind. This is, of course, invented out of whole cloth. It is intended to illustrate the principles that go into experiment design, while taking it out of the context of drug trials.
Let's say we have invented a small, hand-held device that emits a small bit of a pheromone compound. The compound can't be smelled by humans or cats, only by dogs. We want to test whether the device can be used to alleviate separation anxiety in dogs.
We start by finding a set of dogs that have the Testable Condition (separation anxiety), as defined by some known destructive behaviors. We then go find a set of dogs that is very similar in terms of breed, age, weight, owner family composition, diet, etc. (anything we can think of that we believe might possibly introduce bias into our results)... but they do NOT have the testable condition. These two groups then will be our Test Group and our Control Group.
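Matching can be as simple as scoring candidate dogs on the traits we're worried about. A toy sketch, with made-up dogs and a crude similarity score of my own invention:

```python
# Toy sketch of the matching idea: for each anxious dog, find the most
# similar non-anxious dog on the traits we think could bias results.
# All dogs, traits, and weights here are made up for illustration.

anxious = [{"name": "Rex", "breed": "lab", "age": 4, "weight": 30}]
calm = [
    {"name": "Bo",  "breed": "lab",    "age": 5, "weight": 31},
    {"name": "Zoe", "breed": "poodle", "age": 2, "weight": 12},
]

def similarity(a, b):
    # Crude score: same breed counts a lot, then closeness in age and weight.
    return (10 * (a["breed"] == b["breed"])
            - abs(a["age"] - b["age"])
            - abs(a["weight"] - b["weight"]))

for dog in anxious:
    match = max(calm, key=lambda c: similarity(dog, c))
    print(dog["name"], "matched with", match["name"])
```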
The structure of the study is as follows:
Dogs will be paired with their owners, in a room. They'll spend two days living in the room together. At the end of those two days, the owners will begin leaving the rooms for varying lengths of time. Immediately prior to leaving, the owner will press a button on the device, releasing the pheromone into the air - this is the Treatment. The behaviors of the dogs will be observed, and we will also take blood samples to test the concentration of specific stress hormones.
Now... there's some concern that the motions of the human could alter the behavior of the dog. If one set of humans is observed to push a button on a device, where another set of humans just leaves, the dogs being exposed to the treatment run some risk of developing a classically conditioned response, which we want to avoid. To remove this risk, we'll make the study Blind. We create false devices that still have the button, but which emit a burst of plain air with no pheromone in it. This mitigates the risk of a conditioned reaction to a set of motions, and lets us isolate the impact of the pheromone alone. In application, half of the Control Group will get real pheromones and half will get plain air, and likewise for the Test Group.
But dogs are observant. We're worried that if the owner knows that they're working with the fake, they'll end up giving some subtle indication by body language or facial expression to their dogs. Vice versa of course - if the owners know it's the real thing, they might give an indication to the dog as well. Either of those introduces a new factor into the experiment, and we want to avoid that. So we make the study Double Blind. We don't tell the owners whether it's the real pheromone or just a puff of air. Before each owner is given their device, its serial number is recorded. Back in the office, where there's no risk of exposure, there's a list of all the serial numbers identifying which devices contain the pheromone and which do not.
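To make the bookkeeping concrete, here's a rough sketch of how that blinded assignment might work; the serial numbers and the fifty-fifty split are invented for illustration.

```python
# Hypothetical sketch of blinded device assignment; serial numbers are made up.
import random

serials = ["SN-1001", "SN-1002", "SN-1003", "SN-1004",
           "SN-1005", "SN-1006", "SN-1007", "SN-1008"]

# Randomly mark half the devices as real (pheromone) and half as fakes (plain air).
shuffled = serials[:]
random.shuffle(shuffled)
half = len(shuffled) // 2
key = {sn: ("pheromone" if i < half else "plain air")
       for i, sn in enumerate(shuffled)}

# The key stays back at the office. Owners and administrators in the room
# only ever see a serial number, so neither can tell real from fake.
for sn in serials:
    print(sn, "->", key[sn])
```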
At the end of the experiment, we should be able to compare the qualitative differences in observed behavior as well as the quantitative differences in stress hormone concentration between the Test Group and the Control Group. Within each group, we compare the dogs that got the actual pheromone to the dogs that got plain air. To determine that the device works as expected, we need to be able to show a few things. For simplicity, let's assign some letters:
A - Control Group that were given the false treatment
B - Control Group that were given the real treatment
X - Test Group that were given the false treatment
Y - Test Group that were given the real treatment
What we should expect to see is:
That there is no difference in measurements between A and B (the treatment works on the testable condition, not on something else)
That there is a measurable difference between A and X (the testable condition is present and quantifiable)
That there is a measurable difference between X and Y (the treatment produces a measurable effect)
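Here's a rough sketch of those three comparisons, with invented stress-hormone numbers and a plain two-sample t-test standing in for whatever analysis the real study would use:

```python
# Rough sketch of the A/B/X/Y comparisons; all measurements are invented.
from scipy import stats

A = [2.1, 2.3, 1.9, 2.2, 2.0]  # Control Group, false treatment (plain air)
B = [2.2, 2.0, 2.1, 1.9, 2.3]  # Control Group, real treatment (pheromone)
X = [5.8, 6.1, 5.5, 6.0, 5.9]  # Test Group, false treatment
Y = [3.1, 2.9, 3.3, 3.0, 3.2]  # Test Group, real treatment

for label, (g1, g2) in {"A vs B": (A, B), "A vs X": (A, X), "X vs Y": (X, Y)}.items():
    t, p = stats.ttest_ind(g1, g2)
    print(f"{label}: t = {t:.2f}, p = {p:.4f}")

# Expected pattern if the device works: A vs B shows no significant
# difference, while A vs X and X vs Y both do.
```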
That's how you do solid, rigorous experiment design, in a nutshell. Of course, that whole sample selection process is a *lot* more complicated than I've listed out above, and the care with which the measurements are done is incredibly important. The overall design has to be solid, and experts are needed to ensure that no bias or competing factors are present. This was a very basic, very simplistic example... but the idea is there.
ETA: Just for clarity, I am an actuary by trade, but I manage my company's Customer Analytics department. We work closely with several market research firms as well as with our internal marketing departments. We regularly design and implement experiments pertaining to shopping behavior, usage behavior, response to calls to action, the impact of a set of experience models on retention, and the effectiveness of various messaging content and creative concepts, among other things. My team doesn't do clinical research... but we apply a *lot* of the concepts of experiment design on a regular basis.

So I'm not speaking as an authority, but I am speaking with some degree of experience.