First, and importantly, Nonpareil said nothing about atomic propositions and I don't think that they are necessarily relevant, since the sort of propositions we had in mind didn't seem to be atomic.
Sorry, I should have been clearer but it was getting late at night. I automatically jumped over a few steps that are obvious to me because I almost always do those and forgot that they are not obvious to others. This is getting quite off-topic but I'll try to expand things a bit here.
There are two aspects to default reasoning: theoretical and practical. The theoretical basis comes from the observation that there are concepts that occur in everyday reasoning but that you can't represent in classical logic in a usable way. Things like 'Today is a work day, so Jack is likely to be at his workplace. I need to meet him soon, so I should go there.' If we want to capture reasoning like that, we need to add some sort of defaults to logic. The precise form the defaults take is not that important, just that we need some.
From the practical side, having defaults available often makes it simpler to formalize things that you could also represent in classical logic. Classical logic is a fragile and highly unintuitive system. It is easy to make simple mistakes that make the formalization inconsistent or otherwise incorrect. Defaults can be used to eliminate some common sources of errors.
As an almost on-topic side note, I'm not a fan of trying to use symbolic logic to argue about the nature of reality. With logic, you always start by doing an abstract formalization of whatever you try to reason about. All the reasoning that you do is about properties that the formalization has. If the formalization is correct, then the results apply also to the thing that you modeled. If it is incorrect, you may get interesting results that are completely wrong. When the subject gets too philosophical, you have absolutely no way of knowing whether you have a correct formalization or not, so the results are completely useless.
Second, while my familiarity with default logics is only superficial, I don't recall the principle you invoke here and it doesn't strike me as a particularly good principle. It makes the logic very sensitive to the selection of atomic predicates vs. defined predicates.
I personally like default negation of atoms because I find that it helps quite a lot in formalizing practical problems. Why it does so needs a somewhat longer explanation.
When you want to solve some real-world problem with symbolic logic, there are two basic approaches that you can use:
* theorem proving, where you express the problem as a theorem and construct the solution as a part of proving it, and
* model-theoretic, where you construct a model of a set of sentences and that model contains your answer.
In both cases you will be using a computer, because the problem will pretty much always be far too large to handle without an automated solver. In very general terms, theorem proving is the kind of stuff that traditional logic programming languages like Prolog do, while the model-theoretic approach is SAT solvers and the like. (SAT = the satisfiability problem of propositional logic.)
When you have a problem that's large enough, you need to go the model-theoretic route. This is because theorem proving is much more expensive computationally: with the model-theoretic approach you can solve problems that are a few orders of magnitude larger than you can with theorem proving. SAT and its ilk are computationally difficult problems, but theorem proving is even more difficult.
Finding a model for a set of sentences of predicate logic is in practice more difficult than finding one for a set of sentences of propositional logic, so you want to go down the propositional route. However, formalizing a large problem directly in propositional logic is very difficult. So difficult, in fact, that I'm willing to say that no living person can create a correct propositional formalization of a problem that has as few as 1000 propositional atoms in it.
The way to create a propositional formalization is to first encode the rules of the problem domain as sentences of predicate logic and then give the parameters of the specific problem instance as a set of facts. The next step is to use an automated tool to translate the rules into propositional logic with respect to the facts that define the instance. This gives you a propositional instance that you can feed into a SAT solver to get the answer.
For example, if we have a rule 'for all x . P(x) implies Q(x)' and the universe contains two constants, a and b, then we get four propositional atoms P_a, P_b, Q_a, Q_b and two rules: 'P_a implies Q_a' and 'P_b implies Q_b'. (Usually we keep using the predicate notation also when speaking about the propositional atoms and write P(a) instead of P_a, because predicate notation tends to be clearer and there is little danger of confusion.)
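The grounding step above can be sketched in a few lines of Python. This is a toy illustration of the idea, not any real grounding tool; the function and variable names are my own:

```python
# Ground 'forall x . P(x) implies Q(x)' over the universe {a, b}
# into propositional clauses.  Each ground implication P(c) -> Q(c)
# becomes the clause {not P(c), Q(c)}, represented as a frozenset
# of (atom, polarity) literals.

def ground_implication(head_pred, body_pred, universe):
    clauses = []
    for c in universe:
        body_atom = f"{body_pred}({c})"
        head_atom = f"{head_pred}({c})"
        # 'body implies head' as a clause: (not body) or head
        clauses.append(frozenset({(body_atom, False), (head_atom, True)}))
    return clauses

clauses = ground_implication("Q", "P", ["a", "b"])
for cl in clauses:
    print(sorted(cl))
```

With two constants this produces exactly the two clauses mentioned above; with n constants and k variables in the rule the number of ground rules grows as n^k, which is why the grounding is done by a tool rather than by hand.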
From a knowledge representation point of view, the largest problem with using the model-theoretic approach with classical logic is that we have to add a large number of rules to weed out obviously incorrect answers. If we want to solve a logistics problem where we transport packages from one location to another, the vocabulary will include atoms like 'at(p, l, t)', meaning 'package p is at location l at time t', and 'transport(p, l_1, l_2, t)', meaning 'transport the package p from l_1 to l_2 at time t'. We would have rules that look something like:
forall p, l_1, l_2, t ( at(p, l_1, t) and transport(p, l_1, l_2, t) implies at(p, l_2, t+1))
The intuitive meaning of this is: 'when we move a package from one place to another, it will be at the destination'.
The problem is that when we only have implications like this, there is a trivial model where all atoms 'at(p, l, t)' are true, which is not that useful in logistics planning - everything is everywhere all the time. We have to add a number of rules (called frame axioms) that enforce that a package may not be in two places at the same time, that it moves when it is transported, and that it stays in the same place when it is not.
Creating the frame axioms is in practice much easier with default negation because that is much closer to the way that people actually reason. You don't have to remember to prevent spontaneous teleportation, because at(p, l, t) is always false unless you force it to be true with an inference rule. You can't escape writing all frame axioms, though; for example, you still need to prevent a package from being in two locations at the same time. But you can often cut down the number of axioms by half or more, and you usually keep the simpler half and eliminate the more complex rules.
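To give a concrete flavor of why the 'stays put' half comes for free, here is a toy Python sketch (my own names, not a real answer-set solver) for a single package. The only positive information is what the rules force; every at(p, l, t) atom not recorded is false by default, so no axiom against teleportation is ever written:

```python
# Closed-world toy: at(p, l, t) is false unless forced true.
# The only positive rules are the transport effect and inertia.

def simulate(initial_location, transports, horizon):
    """transports: dict t -> (l_from, l_to), i.e. transport(p, l_from, l_to, t)."""
    at = {0: initial_location}       # at[t] = the one location forced true
    for t in range(horizon):
        move = transports.get(t)
        if move is not None and move[0] == at[t]:
            at[t + 1] = move[1]      # effect: transported package arrives
        else:
            at[t + 1] = at[t]        # inertia: stays put by default
    return at

# package starts at 'depot' and is driven to 'hub' at t = 1
print(simulate("depot", {1: ("depot", "hub")}, 3))
# -> {0: 'depot', 1: 'depot', 2: 'hub', 3: 'hub'}
```

In a classical encoding, the fact that the package is nowhere else at any time would need explicit frame axioms; here it is simply everything the dictionary does not mention.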
As a final step you may want to translate the encoding that uses default reasoning back to pure propositional logic, using an automatic translation that encodes the rules of the default semantics into classical logic. The reason for doing that is that SAT solvers tend to have better optimizations than solvers for other semantics, so you may be better off using them even with the overhead of the final translation. (And the reason why this is possible, even though I earlier wrote that you can't represent defaults in classical logic, is that here we are looking at one single specific case: we take the closed-world assumption that all relevant data is included in the encoding, add a number of extra propositional atoms for the default handling, and end up with a SAT instance that no one can understand without knowing the details of the transformation.)
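For a concrete flavor of such a translation, here is a toy sketch of Clark's completion, the textbook way to turn a simple default-negation program (one without positive loops) into classical equivalences. The encoding and names are my own, and the brute-force model enumeration stands in for a real SAT solver:

```python
# Clark's completion: each atom becomes equivalent to the
# disjunction of the bodies of its rules; an atom with no rules
# is fixed to false.  A rule is (head, body), a body is a list of
# (atom, positive?) literals.

from itertools import product

def completion_models(rules, atoms):
    models = []
    for bits in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, bits))
        ok = True
        for atom in atoms:
            bodies = [b for (h, b) in rules if h == atom]
            # head <-> (body_1 or body_2 or ...); no bodies => false
            support = any(
                all(v[a] == pos for (a, pos) in body) for body in bodies
            )
            if v[atom] != support:
                ok = False
                break
        if ok:
            models.append(v)
    return models

# 'q :- not p' with no rule for p: the completion forces p false,
# and then the default negation makes q true.
print(completion_models([("q", [("p", False)])], ["p", "q"]))
# -> [{'p': False, 'q': True}]
```

Real translations are considerably more involved (loops need extra atoms and formulas), which is exactly why the resulting SAT instance is unreadable without knowing the transformation.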
Is this "simplest" form of default logic actually defended? I don't recall running across it when I briefly looked into the topic a decade ago.
I used the term 'default logics' a bit imprecisely by clumping all other forms of non-monotonic reasoning together with it. My reason for that is that they aim to solve the same problem, and 'default' is the term with the most human-readable meaning (for example, in my experience no one can guess what 'circumscription' means in logic, and every time I write it I have to be extra careful so that I don't make unfortunate typos). The standard, though sorely dated, reference for non-monotonic reasoning is Marek & Truszczynski's Nonmonotonic Logic (1993). Or, it was the standard back when I used logic actively; it has been some years since then.