
TAMV Million Dollar Challenge LIVE...

Antiquehunter

OK - I've only been hanging around JREF actively for about a year - and perhaps someone has already thought of this idea. Or perhaps it violates the spirit of the challenge rules (I don't think it does - but I could be wrong.)

I'm posting to this thread as I see it more as a TAMV issue than a challenge issue at this time.

What if, at TAMV, we - the Skeptics - entered the challenge?

My idea - let's do a double-blind test of the 'Catania Wine Enhancer'. http://www.wineenhancer.net/

The premise is simple. I personally enjoy wine (particularly the more full-bodied / fruit-forward varietals like Shiraz, Cabernet, Pinot Noir etc...) I consider myself a wine 'hobbyist' (I know what I like, and I like to find and visit wineries that produce wine that I enjoy.) I am not a wine 'professional' - but this is immaterial, since the website doesn't suggest you need to be an expert to realize the benefits from the gizmo.

I believe I would be able to tell the difference between a wine that is delivered to me straight from the bottle (6 - 11 minutes after opening, as recommended by the Catania Wine Enhancer site) and a wine that has been 'enhanced to release all the flavor and complexities the wine maker intended' (quote from their website).

A protocol would be easy to set up, and creating double blind conditions would not be difficult.

A panel of skeptics could participate, thereby further reducing the likelihood of simply winning by chance. The positive result could be something like: 4 out of 6 panel members must each correctly identify no fewer than 7 'enhanced' samples out of 10 attempts. I would need to spend a little time to figure out the math, but I'm sure we could arrive at a reasonable number that makes the 'luck' factor insignificant.
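The math is quick to sketch. Assuming each taster is guessing at chance (50% per trial), and the panelists are independent, the 'luck' factor for the proposed thresholds (7/10 hits per taster, 4/6 tasters) follows directly from the binomial distribution:

```python
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Chance a single guessing taster scores 7+ hits out of 10 (p = 0.5 per trial)
p_single = binom_tail(10, 7, 0.5)      # 176/1024 ≈ 0.172

# Chance at least 4 of 6 independent guessing tasters each clear that bar
p_panel = binom_tail(6, 4, p_single)   # ≈ 0.0097, roughly 1 in 100

print(f"single taster: {p_single:.4f}, panel: {p_panel:.4f}")
```

So the 7/10-and-4/6 thresholds already push a pure-chance win down to about 1%; raising the bar to 8/10 hits or 5/6 panelists would push it lower still.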

Remember - I'm suggesting that I (or the panel) would be able to tell the difference between untreated wine and wine that has been treated by a wine 'enhancer' - a device which never actually comes in contact with the beverage, and is simply an epoxy casting containing (allegedly) some semi-precious stones and 'rare' metals. This ability is paranormal in that the wine has been affected by what are known to be inert objects.

What this test would achieve:

- Renewed interest in the challenge. Most challenges are not 'media friendly' - they are performed in laboratories etc... and tend to be rather 'stuffy'. We could publicize the event as 'the skeptics take their own medicine' - and it's a little more palatable to show a panel of skeptics tasting wine than one person who believes they have ESP struggling to identify Zener cards.

- It would present an opportunity for the JREF to remind everyone that the money and the test exist.

Risks:

- We're testing someone else's product. Is there a legal liability here?

- We are somehow changing the spirit of the challenge by actively using the challenge to debunk someone's device. Normally the challenge is entered by people who actually have a belief in their paranormal ability. What I am suggesting is that IF the device works the way it claims to work, I should be able to tell the difference between a bottle of Dominus 1998 and a bottle of Dominus 1998 that has been 'enhanced to release all the flavor and complexities the wine maker intended'. One bottle I will merely like, and one bottle will knock my socks off.

- It would cost about $1000 to do this 'right' - to purchase 20 decent bottles of wine, hire a room to do the pouring in a blinded environment, hire a waiter to bring the samples to the tasting panel (blinded from the pourers), purchase one of the silly wine enhancers, and cover other sundry costs. It would be fun to do, so I'm not opposed to fronting the money - or at least a good chunk of it - if people think it's a good idea.

- Since the inventor himself did not ask to take the challenge, he is likely to attempt to 'disprove' our attempt by alleging non-scientific conditions etc... However, we could mitigate this risk as much as possible through air-tight testing conditions, and by sticking strictly to proving or disproving the claims made on the website. (He says 'your palates aren't sophisticated enough to tell the difference between enhanced and otherwise' - our answer: 'Your website doesn't state this works only for experts, but for everyone.')

So - my questions here:

1) Do you think in general this is a good/bad idea?

2) Do you think the JREF would be interested in receiving a carefully thought-out protocol that involved a 'live' testing condition at one of its own events?

3) Any other comments about the pros and cons of this idea?

(For the record - I don't believe that I have any paranormal abilities. I haven't gone 'woo' working out here in Kabul. My premise is that if this gizmo works, I have drunk enough wine in my life that I should be able to tell. And the only way it COULD work is through paranormal means.)

-AH.
 
I doubt that there's any legal risk to doing this (provided that the service of alcohol is carried out according to local law), because it could be argued effectively that proper conditions were present to perform a scientific test, and therefore that the result can't be defamatory since it was conducted without deliberate bias.

But in the court of public opinion, you'll lose, because there's the obvious question of whether or not collusion played a role. Would you accept it if the manufacturers of the device selected 10 people and performed a similar test? Not if it showed anything significantly better than chance, right? So asking random members of the public to trust you on this would be a little difficult if they don't already accept the JREF's credentials.

Also, I doubt that you could make a media event of people sloshing and spitting wine; it's just not that poignant, especially when nothing magical happens to please the all-too-credulous viewers.
 
Who said anything about sloshing and spitting?!?!? I'm drinking... ;)

Good points about the credentials argument. However, in this case, they are quoting Wine Spectator on their website (out of context, I'm sure) - so we would be trying to prove or refute that claim. Another difference on the public-opinion side is the fact that there is a million bucks on the line for our test. A manufacturer's test is much more likely to be biased towards success - our test would most likely show a result consistent with six people each tossing a coin ten times and guessing heads.

On the media coverage issue, I was going under the premise that there is no such thing as bad publicity. Getting meaningful press coverage of TAM has been difficult in the past few years - a few puff pieces here or there. But Randi's million actually being on the line... that may generate some interest. And the fact that the format of the scientific experiment is a wine tasting could be quirky enough to make the AP 'odd news of the day'... (A bunch of geeks in Vegas drinking wine with a million bucks on the line...)
 
Antiquehunter - Does this interest in drinking alcohol have anything to do with your location?
 
I like the idea; it could formally be set up as a preliminary test, so even if somebody should get enough hits, a real test would have to follow, with tighter controls than might be possible in the Las Vegas setting.
 
The only problem with this is that normally the JREF negotiates a "non-blinded" test beforehand - so in the case of a dowser, they get the dowser to confirm that they can detect the gold in the plastic cups before they are covered. Perhaps not surprisingly, this stage is passed with flying colours by most applicants. BUT if the skeptics are to be honest and the wine enhancer doesn't actually make a difference, then this stage won't be passed. Which would make it all pretty boring -

"Nope can't taste any difference, test over"
 
The only problem with this is that normally the JREF negotiates a "non-blinded" test beforehand - so in the case of a dowser, they get the dowser to confirm that they can detect the gold in the plastic cups before they are covered. Perhaps not surprisingly, this stage is passed with flying colours by most applicants. BUT if the skeptics are to be honest and the wine enhancer doesn't actually make a difference, then this stage won't be passed. Which would make it all pretty boring -

"Nope can't taste any difference, test over"
Just insist on repeated tests.

Not boring at all.
 
The only problem with this is that normally the JREF negotiates a "non-blinded" test beforehand - so in the case of a dowser, they get the dowser to confirm that they can detect the gold in the plastic cups before they are covered. Perhaps not surprisingly, this stage is passed with flying colours by most applicants. BUT if the skeptics are to be honest and the wine enhancer doesn't actually make a difference, then this stage won't be passed. Which would make it all pretty boring -

"Nope can't taste any difference, test over"
But in the case of the dowser, it's the person himself who's being tested, while in this case it's the device. It doesn't matter who does the testing (as their website apparently touts), so you could skip that stage.

And remember, one person getting all the glasses right doesn't win the million either; "on average" every wine taster must be able to taste a difference (i.e. perform better than chance). You can set percentages for that in advance, with the manufacturer claiming a 100% success rate and the skeptics expecting a 50% (chance) success rate.

Also, I think it's about time we got something other than the chocolate challenge started ...
 
Also, I think it's about time we got something other than the chocolate challenge started ...
Didn't we try to get Claus to do a double blinded taste test of US beer vs. European beer at last year's TAM? And he chickened out.
 
Having skeptics perform the tasting allows the manufacturer to claim that any failure is due to bias.

I don't know how you were envisioning setting up the test. Does the test subject choose between "yes there's a difference between these two wines" and "no there isn't a difference"? If so, what's to stop a dishonest skeptic from saying "nope, no difference" every time? And as long as that possibility exists, the manufacturer can claim that's what happened.

The result is the same if you force the test subject to choose "which wine is better" out of each pair. (The idea being that if the two wines taste exactly the same, the tester will randomly choose the treated or non-treated wine, so success is judged by whether the treated wine scores better than 50% at an appropriate level of statistical significance.) The manufacturer can just say that the skeptic wanted the test to fail, so he lied and chose the worse wine enough times to make the test fail.

The value of the JREF challenge is that the testing protocol is designed so that any "work" is performed by the claimant (who wants to pass the test), and neither Randi, the JREF, nor any skeptic can cause the claimant to fail.
 
I was very impressed with the MythBusters episode in which they filtered vodka six times to attempt to make high-grade vodka. They brought in a professional vodka taster and he put all of them in the correct order.

/tangent.
 
Having skeptics perform the tasting allows the manufacturer to claim that any failure is due to bias.

I don't know how you were envisioning setting up the test. Does the test subject choose between "yes there's a difference between these two wines" and "no there isn't a difference"? If so, what's to stop a dishonest skeptic from saying "nope, no difference" every time? And as long as that possibility exists, the manufacturer can claim that's what happened.

The result is the same if you force the test subject to choose "which wine is better" out of each pair. (The idea being that if the two wines taste exactly the same, the tester will randomly choose the treated or non-treated wine, so success is judged by whether the treated wine scores better than 50% at an appropriate level of statistical significance.) The manufacturer can just say that the skeptic wanted the test to fail, so he lied and chose the worse wine enough times to make the test fail.

The value of the JREF challenge is that the testing protocol is designed so that any "work" is performed by the claimant (who wants to pass the test), and neither Randi, the JREF, nor any skeptic can cause the claimant to fail.


My idea was simply:

A blinded pourer pours 12 'sip' pours from bottle X. Then half of the glasses have the device applied. All glasses are exposed to air (breathing) for an equal time. A blinded waiter (blind to the pourers) brings samples A and B out to the tasting panel. Panel members choose (blinded - no communication) whether A or B is 'enhanced'.

Need 7/10 hits from 4 of the 6 panelists to qualify for the next level of testing.

-AH.
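The bookkeeping for that protocol is easy to sketch. This is a hypothetical illustration, not a finished protocol: a script generates the random A/B assignment for each round (the key that only the pouring room ever sees), then scores the panel's answer sheets against it afterwards:

```python
import random

ROUNDS = 10       # tasting rounds per panelist
PANELISTS = 6
PASS_HITS = 7     # correct identifications needed per panelist
PASS_PANEL = 4    # panelists who must individually clear the bar

# Pourer's side: for each round, randomly decide whether glass A or B
# gets the 'enhancer' applied. This key stays sealed until scoring.
rng = random.SystemRandom()
key = [rng.choice("AB") for _ in range(ROUNDS)]

def score(answers: list[str]) -> int:
    """Count how many rounds a panelist named the 'enhanced' glass correctly."""
    return sum(a == k for a, k in zip(answers, key))

def panel_passes(all_answers: list[list[str]]) -> bool:
    """The panel succeeds only if enough panelists each clear the threshold."""
    return sum(score(a) >= PASS_HITS for a in all_answers) >= PASS_PANEL

# Simulated example: six panelists guessing at random (what we'd expect
# if the device does nothing) almost never pass.
guesses = [[rng.choice("AB") for _ in range(ROUNDS)] for _ in range(PANELISTS)]
print(panel_passes(guesses))
```

The point of keeping the key on the pourer's side is that neither the waiter nor the panel can know the assignment, so the only way to beat the threshold reliably is to actually taste a difference.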
 
My idea was simply:

A blinded pourer pours 12 'sip' pours from bottle X. Then half of the glasses have the device applied. All glasses are exposed to air (breathing) for an equal time. A blinded waiter (blind to the pourers) brings samples A and B out to the tasting panel. Panel members choose (blinded - no communication) whether A or B is 'enhanced'.

Need 7/10 hits from 4 of the 6 panelists to qualify for the next level of testing.

-AH.

Right, but that still leaves the manufacturer able to claim that a failure is meaningless, because those damn skeptics (or at least enough of the panelists to cause a failure) deliberately chose the unenhanced wine as enhanced.

I guess it depends on what the point of the test is. If we're really just curious about whether this thing works, then this test would do the job of satisfying that curiosity for ourselves. But if we're trying to provide evidence that a third party would accept, it's a "heads-we-lose, tails-we-flip-again" scenario: if the test shows an enhancing effect, the manufacturer can crow that its product is proven, while a negative result is brushed away as the product of bias.

And if we're just trying to do a demonstration of how skepticism works, it seems flawed to me to put on a test that is open to accusations of bias and doesn't follow the protocols of the Million Dollar Challenge.

I don't mean to rain on your parade; I'm hoping someone here can come up with a way to make this work. The only solution I can think of right now is to get the manufacturer to send representatives to be the tasters (or agree to a panel it considers acceptable).
 
Let's do an American chocolate bars vs. British chocolate bars challenge.

Why is it that American chocolate (Hershey and similar) has that unmistakable aftertaste of baby vomit?

Before you ask, yes, I am unfortunately aware of what baby vomit tastes like, and for those of you that aren't...eat a Hershey bar :p

 
Why is it that American chocolate (Hershey and similar) has that unmistakable aftertaste of baby vomit?

Before you ask, yes, I am unfortunately aware of what baby vomit tastes like, and for those of you that aren't...eat a Hershey bar :p


From what I gathered... corn syrup plays a large part.
 
From what I gathered... corn syrup plays a large part.

I'm not over-fond of the flavour of corn, and it did seem to be in just about everything, making eating less than pleasant.

I didn't think it was corn though... more a sickly, sour, sterilized-milk sort of flavour.

 
Why is anyone arguing with this? I think it's a fabulous idea, and would like to be the first to offer my tastebuds in the name of science. Although I agree with the non-blinded test first, which, unfortunately, means that more wine will have to be consumed. Which I'm willing to do.

We'd have to figure out a per person cost, and possibly turn it into a fundraiser of sorts?
 
Right, but that still leaves the manufacturer able to claim that a failure is meaningless, because those damn skeptics (or at least enough of the panelists to cause a failure) deliberately chose the unenhanced wine as enhanced.

So what?

They are a lying bunch of dishonest fraudsters - why should anyone care what they have to say to the test?

I guess it depends on what the point of the test is. If we're really just curious about whether this thing works, then this test would do the job of satisfying that curiosity for ourselves. But if we're trying to provide evidence that a third party would accept, it's a "heads-we-lose, tails-we-flip-again" scenario: if the test shows an enhancing effect, the manufacturer can crow that its product is proven, while a negative result is brushed away as the product of bias.

The GSIC Audio Chip Test was also done by a sceptic.

The manufacturers claim (in both cases, really) that the improvement is easily detectable. Yes, everyone could cheat, but I don't think this is the issue with a public test like this. A public test would be performed to generate PR more than anything (not to mention the drinking of lots of wine).

Still, if a good handful of people take part in the test, then that will deliver a strong message: each participant would have to be bribed with well over a million dollars to have an incentive to fake the results.

Whatever reasons exist against letting a sceptic take the challenge, they applied in the GSIC scenario and were apparently trumped by the benefits of having the test performed at all. It might be that wine-enhancing fraud is not as much of an issue to the JREF, but I doubt that, so there is nothing in principle that should prevent this test from occurring.

And if the producers decide to complain, they can at any time submit their own challenge. They can send in their own specialists and fall flat on their faces. Until then, I still think this is an excellent opportunity to

- have some fun
- drink some wine
- let people see what the JREF challenge is all about and what it is like
- promote scepticism in public
- teach people how they can test whether the things they are being offered actually perform as advertised.

I think the last point is very important, too. A public test, if it gets enough media attention, has the opportunity to show that it doesn't take a bunch of people in lab coats to check if things work, and that it is easy to fool oneself.

None of this will stop the woos and the frauds, but it might make somebody think about whether they should fill the pockets of those offering dubious magic devices.

The test will not prove that the thing doesn't work. But it will still show that it doesn't work: it will show that a bunch of average people cannot taste a difference.

You could even announce, at the site of the test, that the test won't prove anything to the negative, and it wouldn't matter. Just make everyone wonder whether every single one of the participants hates the woos enough to forfeit an easy million bucks. I know I wouldn't.

Oh, and by the way: I don't see why a majority of participants should have to pass the test. I say let anyone who performs above the threshold move on to the final testing stage. It is not fair to subject every tester to the results of everyone else. No dowser being tested has to reach an average that accommodates all those who have failed, either.

And if we're just trying to do a demonstration of how skepticism works, it seems flawed to me to put on a test that is open to accusations of bias and doesn't follow the protocols of the Million Dollar Challenge.

How would this test not follow the protocols?

Each participant could claim that their magic taste buds can tell the treated wine from the untreated. Each participant agrees to the protocol that establishes proper blinding etc. (Granted, there would be no protocol negotiation, but everybody who doesn't like the protocol can submit their own for separate testing.) Each participant is then tested according to the protocol. If any one participant passes the test, they can move on to the final stage at a later time.

I don't mean to rain on your parade; I'm hoping someone here can come up with a way to make this work. The only solution I can think of right now is to get the manufacturer to send representatives to be the tasters (or agree to a panel it considers acceptable).

I doubt that they would be so dumb ....

Rasmus.

Edited to fix url
 
