(B)
The super manifold hypothesis extends the manifold hypothesis in deep learning: learning occurs over fabrics that are more than mere points or sequences of differentiable manifolds.
A well-known problem in the typical interpretation/paradigm is that, in order to learn, models need to be able to transfer knowledge.
My equations may point to a paradigm where that knowledge, at its basis, is represented as causal laws governing interactions of physics units. These units may then compose to form pseudo-novel representations in general reinforcement learning.
(D)
The causal laws of physics are akin to Chris Lu's pseudocode, or something like the 'learning physics intuition from tower blocks' paper.
I first got the idea for super-m by observing DeepMind's Atari Q-player (which removed pooling layers to preserve translation variance) and the above physics learner (which included pooling, to enable translation invariance).
I wanted a reasonable way to build a model with both of these properties at once, because humans are observed both to do reinforcement learning and to benefit from learnt causal laws of physics (pertinently, from the baby stage).
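The variance/invariance contrast above can be seen in a toy numpy sketch. This is only an illustration of the two properties, not either paper's architecture: the edge-detector kernel, pooling width, and single-spike "scene" are all hypothetical assumptions.

```python
import numpy as np

def conv1d(x, k):
    # valid-mode 1-D convolution standing in for a learned feature detector
    return np.convolve(x, k, mode="valid")

def pooled_branch(feats, width=4):
    # non-overlapping max pooling: nearby shifts of the input map to the
    # same pooled code, giving (approximate) translation invariance
    n = len(feats) // width * width
    return feats[:n].reshape(-1, width).max(axis=1)

k = np.array([1.0, -1.0])                  # hypothetical edge-detector kernel
scene_a = np.zeros(16); scene_a[5] = 1.0   # one "object" (spike) at position 5
scene_b = np.zeros(16); scene_b[6] = 1.0   # same object shifted by one

fa, fb = conv1d(scene_a, k), conv1d(scene_b, k)

# variance branch (no pooling, as in the Atari Q-player): the shift is
# preserved, so the model can still locate things
print(np.array_equal(fa, fb))                                  # False

# invariance branch (with pooling, as in the physics-intuition learner):
# the shift is absorbed, so the model recognises "what" regardless of "where"
print(np.array_equal(pooled_branch(fa), pooled_branch(fb)))    # True
```

A model combining both properties could simply keep the two branches side by side and feed both feature sets to later layers.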