That is by far the biggest nail in the coffin of whether LLMs are "sentient". Philosophical sophistry about "what is the difference between real and simulated" aside, the barest fact of the matter is that a program like ChatGPT starts working when you ask it a question, and once it produces an answer, it stops...period. It stops doing anything. It doesn't think, it doesn't dream. It doesn't wonder what you're going to ask it next. If you were running it in a console, there would be no output, just a blinking cursor waiting for a prompt.
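A minimal sketch of the lifecycle being described (the function names here are hypothetical stand-ins, not any real model API): the program only executes while computing a reply, then blocks at the prompt with nothing running in between.

```python
def generate_reply(prompt: str) -> str:
    # Hypothetical stand-in for a model's forward pass.
    # A real LLM would run inference here; this just echoes.
    return f"Echo: {prompt}"

def repl() -> None:
    """Console loop: between turns the process is blocked on input(),
    executing no model code at all -- just the blinking cursor."""
    while True:
        prompt = input("> ")  # process sleeps here until you type
        if not prompt:
            break
        print(generate_reply(prompt))
```

Each call to `generate_reply` is independent; nothing "thinks" between the `input()` call returning and the next one being reached.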
1) Why is that such an important point?
2) That "stopped" state is not as clear-cut as you seem to think. For one, if the prompt is there, it *is* doing something: waiting for input. And even if it weren't waiting for input, as long as the computer is switched on there are all the "autonomic" processes maintaining its environment.
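The point about "autonomic" processes can be sketched like this (a toy illustration, not how any real system is implemented): a background thread keeps doing maintenance work while the foreground side sits "waiting" for input.

```python
import threading
import time
import queue

tick_count = 0  # how much "autonomic" work happened while we waited

def autonomic_loop(stop: threading.Event) -> None:
    # Stands in for the background housekeeping a live machine
    # performs even when the foreground program is idle.
    global tick_count
    while not stop.is_set():
        tick_count += 1
        time.sleep(0.01)

prompts: queue.Queue = queue.Queue()
stop = threading.Event()
worker = threading.Thread(target=autonomic_loop, args=(stop,), daemon=True)
worker.start()

prompts.put("hello")        # simulated user input arriving
msg = prompts.get()         # the "waiting" side: blocks until input exists
time.sleep(0.05)            # let the background loop run a little
stop.set()
worker.join()
```

After this runs, `tick_count` is nonzero: the machine was never truly inert while the foreground side was "stopped" at the prompt.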