I agree with the argument. If you don't have a definition of SRIP that excludes things you don't consider conscious then it's useless as a definition/explanation of consciousness. The onus is on you to properly define the terms you propose being relevant.
Of course, most people would not consider electronic toasters or computers conscious. So even if you manage a necessary and sufficient definition that includes toasters, computers, certain other machines, and animals with brains, while excluding everything else, you still will not have come up with a definition capturing what most people consider consciousness to be.
Just forget about SRIP and consciousness for a second.
Think only about the simpler abstractions that computer science uses. You know, things like state machines, basic operations, the fundamentals of computation, etc.
Can these describe any system in the universe? Perhaps.
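To make "simple abstraction" concrete, here is a minimal sketch of one of them: a two-state finite state machine (a light toggled by a button press). The names and the transition table are my own illustration, not anything from the discussion.

```python
# A minimal sketch of a finite state machine: a light toggled by presses.
# (Illustrative only; the states, events, and table are made up for this post.)

def fsm_step(state, event):
    """Look up the next state in a transition table; unknown events leave
    the state unchanged."""
    transitions = {("off", "press"): "on", ("on", "press"): "off"}
    return transitions.get((state, event), state)

state = "off"
for event in ["press", "press", "press"]:
    state = fsm_step(state, event)
print(state)  # -> "on" (three presses toggle off -> on -> off -> on)
```

The point of an abstraction this simple is that almost anything with two distinguishable conditions and a way to flip between them can be described by it.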
Now think about some of the more complex abstractions, for example a certain algorithm like Dijkstra's shortest paths or quicksort or whatever. Or, more pertinently, the workings of an artificial neural network.
Can these describe any system in the universe? Absolutely not.
They can only describe a very small subset of systems that happen to satisfy the constraints required. The idea that some system in a plain old rock somehow satisfies all the constraints for it to be modeled as an artificial neural network is just wrong.
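To see how restrictive those constraints are, here is a minimal sketch of what a single unit of an artificial neural network has to do: combine weighted inputs and apply a threshold. The function name, weights, and threshold are my own illustration; real ANN definitions demand much more (learning rules, many interconnected units, and so on).

```python
# A minimal sketch of the constraints on a single artificial neuron:
# it must take weighted inputs, sum them, and apply a threshold.
# (Illustrative only; full ANN definitions add far more structure.)

def artificial_neuron(inputs, weights, threshold):
    """Fires (returns 1) iff the weighted sum of inputs reaches the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# Any physical system that realizes this input-output mapping counts as an
# instantiation; a plain old rock realizes no such mapping.
print(artificial_neuron([1, 0, 1], [0.5, 0.9, 0.6], 1.0))  # -> 1
print(artificial_neuron([0, 1, 0], [0.5, 0.3, 0.6], 1.0))  # -> 0
```

Even this toy version shows the issue: a system has to have identifiable inputs, weights, a summation, and a threshold before the abstraction applies to it at all.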
Do you disagree?
ETA-- let me put it another way.
I can take an abstract description of a certain algorithm and build up any number of actual physical systems that instantiate that algorithm. And in every case, as long as the systems satisfy the necessary constraints, the behavior of the systems is consistent with the predicted behavior from the abstract description.
Likewise, I can look for systems that satisfy the constraints. If I find one, it also will behave in a way consistent with the abstract description.
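The same point can be sketched in code: here are two structurally different realizations of the same abstract algorithm (quicksort), one building new lists and one partitioning in place. Both are my own illustrative implementations. Because each satisfies the algorithm's constraints, the abstract description predicts the behavior of both.

```python
# Two different realizations of the same abstract algorithm (quicksort).
# The abstract description predicts both behaviors, because both satisfy
# the algorithm's constraints. (Illustrative implementations.)

def quicksort_functional(xs):
    """Quicksort realized with new lists at each step."""
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    return (quicksort_functional([x for x in rest if x < pivot])
            + [pivot]
            + quicksort_functional([x for x in rest if x >= pivot]))

def quicksort_inplace(arr, lo=0, hi=None):
    """Quicksort realized by swapping elements in place (Lomuto partition)."""
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        pivot, i = arr[hi], lo
        for j in range(lo, hi):
            if arr[j] < pivot:
                arr[i], arr[j] = arr[j], arr[i]
                i += 1
        arr[i], arr[hi] = arr[hi], arr[i]
        quicksort_inplace(arr, lo, i - 1)
        quicksort_inplace(arr, i + 1, hi)
    return arr

data = [5, 2, 9, 1, 5, 6]
print(quicksort_functional(data))          # -> [1, 2, 5, 5, 6, 9]
print(quicksort_inplace(list(data)))       # -> [1, 2, 5, 5, 6, 9]
```

Swap in relays, gears, or neurons for the Python runtime and nothing changes: as long as the constraints are satisfied, the sorted output is guaranteed by the abstract description alone.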
This isn't limited to computer science; it applies to anything. If you find a system or build a system that satisfies the constraints for what we call "running," the system will propel itself forward on the ground. Plain and simple. Otherwise, it would simply not satisfy the constraints.
Likewise, if you build a system or find a system that satisfies the constraints for what we call "spatial and temporal summation" (or whatever the agreed-upon "thing" a neuron does is called), it will do what a neuron does. Plain and simple. Otherwise, the system wouldn't satisfy the constraints.
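For readers unfamiliar with the term, here is a rough sketch of spatial and temporal summation. The dynamics are schematic (a decaying potential plus a threshold), not a real biophysical model; the leak factor and threshold values are made up for illustration.

```python
# A schematic sketch of spatial and temporal summation. Not a biophysical
# model; the leak and threshold values are arbitrary illustrations.

def neuron_step(potential, synaptic_inputs, leak=0.9, threshold=1.0):
    """One time step: sum simultaneous inputs (spatial summation), add them
    to the decayed potential (temporal summation), fire and reset if the
    threshold is reached."""
    potential = potential * leak + sum(synaptic_inputs)
    if potential >= threshold:
        return 0.0, True   # spike emitted, potential reset
    return potential, False

# Inputs that are individually sub-threshold can still make the unit fire
# if they arrive close together in time:
p, fired = 0.0, False
for inputs in [[0.4], [0.4], [0.4]]:
    p, fired = neuron_step(p, inputs)
print(fired)  # -> True
```

A system has to integrate inputs over space and time with something like this structure before it satisfies the constraints; most physical systems plainly don't.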
The argument that any system can run, or that any system can do what a neuron does, is just wrong. So also is the argument that any system can instantiate all of the complex algorithms of computer science.
That just doesn't happen and, as has been said like 100 times, that is why Westprog is typing on a computer instead of a block of cheese.