Well, I understand that part, but something is still missing in going down to N--either an explicit claim that N would produce my consciousness, or an explanation of what has been left out.
Assume that A is your simulation of a neuron's calculations--include whatever inputs happen to come in, whenever/wherever they occur (I'm just picking this for discussion--we could easily aggregate these... consider it a functional-decomposition-style analysis).
Now I build A', an equivalent simulation using only clocked NANDs. I want to use A' to define the "random" ordering, as follows. I have t NAND gates total, and I label each of them uniquely with an integer from 1 to t. What is significant here is that I have defined an ordering (let's further assume the gates have labeled inputs, and that we always represent them in a fixed order, so that we can tell 01 from 10, in case that matters).
During each clock cycle in A', p of my t NAND gates are computing 00, q are computing 01, r are computing 10, and s are computing 11. So let P1 through Pp be the NAND gates calculating 00, in label order, Q1 through Qq be the ones calculating 01, in label order, and so forth. For the next clock cycle, we do the same thing, but continue the numbering with Pp+1, Qq+1, etc.
When we're done, I want to calculate the NAND gates in A', in this arbitrary order:
P1, Q1, R1, S1, P2, Q2, R2, S2, etc.
Once I hit the end of one of the P, Q, R, or S sequences while I still have the other three to calculate, I'll just start running some other "program" in the background, in a random order--effectively I'm filling in the gaps so that the interleaving keeps iterating.
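The construction above can be sketched in code. This is purely my own illustrative sketch (all names are hypothetical, and the "background filler" step is simply skipped rather than run): for each clock cycle, group the labeled gates by input pattern, then interleave the groups as P1, Q1, R1, S1, P2, ...

```python
def nand(a, b):
    """NAND of two bits: 1 unless both inputs are 1."""
    return 1 - (a & b)

def reorder(cycles):
    """cycles: list of dicts, one per clock cycle, mapping
    gate label -> (a, b) input pair for that cycle.
    Returns the (label, output) pairs in the interleaved order."""
    order = []
    for inputs in cycles:
        # Group gate labels by input pattern, preserving label order.
        groups = {(0, 0): [], (0, 1): [], (1, 0): [], (1, 1): []}
        for label in sorted(inputs):
            groups[inputs[label]].append(label)
        # Interleave: P1, Q1, R1, S1, P2, Q2, ...  When one group runs
        # out, the post's background "filler" program would go here;
        # this sketch just skips the empty slot instead.
        longest = max(len(g) for g in groups.values())
        for i in range(longest):
            for pattern in [(0, 0), (0, 1), (1, 0), (1, 1)]:
                if i < len(groups[pattern]):
                    label = groups[pattern][i]
                    order.append((label, nand(*inputs[label])))
    return order

# One clock cycle with four gates, one per input pattern:
cycle = {1: (0, 0), 2: (0, 1), 3: (1, 0), 4: (1, 1)}
print(reorder([cycle]))  # [(1, 1), (2, 1), (3, 1), (4, 0)]
```

Note that once the labels are thrown away, every slot in the interleaving computes a fixed, predictable input pattern--which is the point of the argument.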
Now I wind up with N, a program that just calculates, in this order, (00)->1, (01)->1, (10)->1, (11)->0, and then goes back to the start. So are you claiming that this program implements an out-of-order equivalent of A? If we keep running it, will it produce a conscious mind--albeit an out-of-order one?
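For concreteness, here is a minimal sketch of what N reduces to once the gate labels are discarded--nothing but the NAND truth table evaluated on repeat (the framing is mine; `itertools.cycle` is standard Python):

```python
import itertools

# N, stripped of labels, is the NAND truth table in a fixed cyclic order:
# (00)->1, (01)->1, (10)->1, (11)->0, then back to the start.
truth_table = [((0, 0), 1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
N = itertools.cycle(truth_table)

first_eight = [next(N) for _ in range(8)]  # two full passes through the table
print(first_eight)
```

Which is exactly the worry: N contains no trace of A's structure, only this four-step loop.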