That's not quite the fallacy. Let's say you have a hypothesis K that a die roll is random with all six outcomes equally likely, and therefore that the probability of the outcome "three" is 1/6. You then roll the die and the outcome is indeed a three. The following is still correct: P("three"|K) = 1/6. Observing the outcome does not change the probability of that outcome under the hypothesis. If you don't believe this, roll the die a million times and compute the empirical relative frequency of the outcome "three."
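A minimal simulation of that check (a sketch assuming nothing beyond Python's standard library and a fair six-sided die) might look like this:

```python
import random

# Under hypothesis K, every face of a fair die has probability 1/6.
# Having already seen a three changes nothing: the long-run relative
# frequency of "three" still settles near 1/6.
rolls = (random.randint(1, 6) for _ in range(1_000_000))
threes = sum(1 for r in rolls if r == 3)
print(f"empirical frequency of 'three': {threes / 1_000_000:.4f}")
print(f"P('three'|K):                   {1 / 6:.4f}")
```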
Jabba's fallacy is that, under his "scientific" hypothesis, he is unwittingly conditioning on the observed outcome as well as on the hypothesis, and he does not take this into account when he states the probability. He observes "Jabba exists" and states that, under the hypothesis R that "Jabba" was a random outcome, P("Jabba"|R) is very small. This is indeed true. But Jabba exists. Even if Jabba is the outcome of a random process, he is the random outcome that actually occurred, and he could only make the observation "Jabba exists" if Jabba exists. The event he is observing is therefore not an event in a sample space containing both "Jabba exists" and "Jabba does not exist"; it is an event in the conditional sample space in which "Jabba exists" is the only element. The probability he should be using is P("Jabba"|R, "Jabba"), the probability that Jabba exists given that a random process occurred and that he was its outcome. This probability is 1, and, since he is conditioning on his own existence, it is this probability he must plug into Bayes' formula.
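To make the fallacy concrete, here is a sketch of the update Jabba is implicitly performing, with purely illustrative numbers for his prior and for P("Jabba"|R); neither figure comes from the argument itself:

```python
# Hypothetical numbers for illustration only.
prior_immortal = 1e-6              # P(~R): prior for the immortality hypothesis
prior_random = 1 - prior_immortal  # P(R)

lik_random = 1e-40   # Jabba's tiny P("Jabba"|R), ignoring that he conditions on existing
lik_immortal = 1.0   # he takes his existence as certain under ~R

# Fallacious Bayes update: the mismatched likelihood swamps any sensible prior.
posterior_immortal = (lik_immortal * prior_immortal) / (
    lik_immortal * prior_immortal + lik_random * prior_random
)
print(posterior_immortal)  # ~1.0, which is why the argument looks compelling
```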
The fact that he is conditioning on his own existence is fatal to his argument. Since P("Jabba"|R, "Jabba") = 1, the posterior probability of his immortality hypothesis cannot be greater than its prior probability, because P("Jabba"|~R) ≤ 1, so the evidence is at least as likely under R as under immortality. His argument, which is intended to raise the posterior probability of immortality relative to its prior, can at best leave it unchanged and otherwise lowers it.
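A matching sketch, with the same assumed prior but with the likelihood he is actually entitled to use once he conditions on his own existence, shows the bound:

```python
prior_immortal = 1e-6              # same hypothetical prior P(~R)
prior_random = 1 - prior_immortal  # P(R)

lik_random = 1.0    # P("Jabba"|R, "Jabba") = 1: he can only observe worlds where he exists
lik_immortal = 1.0  # P("Jabba"|~R) can be at most 1

posterior_immortal = (lik_immortal * prior_immortal) / (
    lik_immortal * prior_immortal + lik_random * prior_random
)
print(posterior_immortal)  # equals the prior; any P("Jabba"|~R) < 1 would push it lower
```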