That "definition" is about as useful as tits on a bull.
Then you haven't really understood it, or at least, not encountered what it's responding to.
This definition is incredibly powerful. It cuts straight through all the crap that's been said about consciousness over the millennia and provides a functional explanatory framework that matches what happens experientially and experimentally.
It's not meant to be an operational theory of the human mind. What it is is an answer to the so-called "hard problem of consciousness".
What do we mean when we refer to consciousness? We mean self-referential information processing: the ability to think, and to think about your thinking. If you can answer the question "What are you thinking?", you're conscious.
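To make that concrete, here's a deliberately crude sketch (all names invented for illustration, and nobody is claiming this toy is conscious): a process that keeps a record of its own activity and answers the question by inspecting that record rather than the outside world. It shows the shape of the loop, nothing more.

```python
# Toy illustration of self-referential information processing.
# All names here are invented for the example.

class Thinker:
    def __init__(self):
        self.current_thought = None   # the system's own state...
        self.trace = []               # ...plus a record of it: a self-model

    def think(self, thought):
        self.current_thought = thought
        self.trace.append(thought)    # processing that records itself

    def what_are_you_thinking(self):
        # The answer comes from inspecting the system's own processing,
        # not the outside world: information processing about the processing.
        return (f"I am thinking about {self.current_thought!r}; "
                f"earlier thoughts: {self.trace[:-1]!r}")

t = Thinker()
t.think("bulls")
t.think("my previous thought about bulls")  # a thought about a thought
print(t.what_are_you_thinking())
```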
Do we observe this behaviour in physical systems? Yes.
Can this account for all the attributes we ascribe to consciousness? All those shown to exist, yes.
There's a huge amount of detail to be filled in on the workings of the human brain. What this explanation shows, though, is that the explanatory gap is just a string of potholes to be filled, not an unbridgeable chasm as Chalmers would claim.
Can you flesh it out at all or is that really as far as you've got so far?
Read Gödel, Escher, Bach. Yes, I can flesh it out as much as you like, but you'll be better off reading the book. It's a wonderful book.
Does the set of integers become conscious when a mathematician is working through the proof of Gödel's Second Incompleteness Theorem?
The set of integers is a fixed abstract concept; it's not about to become anything. You also need the arithmetic operators; you need something to be happening.
Consciousness is a verb.
Does a higher degree of self-reference create a more conscious system?
What is a "higher degree" of self-reference? Do you just mean more self-referential activity?
Consciousness is not all or nothing - humans are infamously only partly conscious - and certainly not all conscious systems have equal computational capacity. But there's no higher order of consciousness, just more of it.
Is the Mandelbrot set or any other similar fractal system conscious?
No, they're sets. They don't change. The Mandelbrot set is always the Mandelbrot set.
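The point is easy to see in code. Here's the standard escape-time membership test (a sketch, with an arbitrary iteration cap): it's a pure function of the input point. Running it is a process, but the set it describes never changes, and nothing in the computation refers back to the computation itself.

```python
# Escape-time test for Mandelbrot membership: a pure, stateless function.
# The answer for any given c is fixed for all time; no self-reference anywhere.

def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    z = 0j
    for _ in range(max_iter):
        z = z * z + c        # iterate z -> z^2 + c
        if abs(z) > 2:       # escaped: c is outside the set
            return False
    return True              # still bounded: treat as inside

print(in_mandelbrot(0j))      # True: 0 never escapes
print(in_mandelbrot(1 + 0j))  # False: 1 escapes quickly
```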
If I give you some arbitrary (but not too large) neural network to examine, can you tell me if it is conscious or not while processing data?
Not in the sense of a generalisable formal proof; I think you'd run into Halting Problem type difficulties. But in general, yes, you should be able to tell by examining the network and its activity.
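As a toy illustration of what "examining the network" might start with (entirely my own sketch, not a consciousness detector): a purely feedforward net has no path by which its activity can act on itself, so a first crude, necessary-but-nowhere-near-sufficient check is whether the wiring graph contains a cycle at all. Real inspection would have to look at the activity, not just the wiring.

```python
# Crude first check: does the connection graph contain a feedback loop?
# Depth-first search for a back edge. A cycle is necessary (not sufficient!)
# for activity that can feed back into itself.

def has_cycle(adjacency: dict[str, list[str]]) -> bool:
    WHITE, GREY, BLACK = 0, 1, 2              # unvisited / on stack / done
    colour = {node: WHITE for node in adjacency}

    def visit(node: str) -> bool:
        colour[node] = GREY
        for nxt in adjacency.get(node, []):
            if colour.get(nxt, WHITE) == GREY:          # back edge: a loop
                return True
            if colour.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        colour[node] = BLACK
        return False

    return any(colour[n] == WHITE and visit(n) for n in adjacency)

feedforward = {"in": ["hidden"], "hidden": ["out"], "out": []}
recurrent   = {"in": ["hidden"], "hidden": ["out", "hidden"], "out": []}
print(has_cycle(feedforward))  # False: activity can't revisit itself
print(has_cycle(recurrent))    # True: activity can re-enter the loop
```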
Are there any forms of self-referential information processing that are not conscious?
No. Because that is what we mean when we say that a system is conscious.
Does your definition allow for arbitrarily nested conscious systems? Perhaps something like conscious bees in a conscious beehive, for example.
That's a good question. Yes, certainly. Which is not necessarily saying that bees or beehives are conscious, but that this sort of thing is clearly possible.