The data fed into the AI program ultimately comes from the outside world, same as it does for you or me. And it's the immense quantity of that data that makes it so difficult, if not impossible, for experts to understand exactly what's going on inside the AI once it has learned from that data.
As for independence, you could ask the same about humans. AIs appear to construct complex models of the world much as we do, as a way of understanding it, and they're then able to make use of and build on those models. And importantly, they can revise those models as they learn new things.
Not sure what you mean here. I was not saying that AIs are subjective. I was saying that we humans are being subjective when we empathize with another human, or our cat, or possibly an AI. And that empathy correlates well with most people's claims of sentience for those beings or agents.
But I myself can't empathize with any AIs, so, *if we're using this correlation*, I would say AIs are not sentient. Some other people may empathize, so for them, those AIs are sentient.
My reason for not empathizing with AIs is simply that they don't have the fragile, single-threaded existence that we humans and animals all have. All AIs, as far as I know, can have their state saved and restored at will, and can have many separate threads of experience (sessions, in ChatGPT terminology), each of which can be paused and restarted at any point, etc. It's not that they can't feel; it's that feeling doesn't have the consequences for them that it does for us.
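To make that save/restore point concrete, here's a minimal sketch, assuming a session's state is just its message transcript (real systems keep more, but the principle is the same). The `ChatSession` class and its methods are hypothetical, for illustration only:

```python
import json
from copy import deepcopy

class ChatSession:
    """Hypothetical, minimal chat session: state = the message list."""
    def __init__(self, messages=None):
        self.messages = messages or []

    def say(self, role, text):
        self.messages.append({"role": role, "text": text})

    def save(self, path):
        # Freeze the session's entire state to disk.
        with open(path, "w") as f:
            json.dump(self.messages, f)

    @classmethod
    def load(cls, path):
        # Resume the session exactly where it left off.
        with open(path) as f:
            return cls(json.load(f))

    def fork(self):
        # Branch a second, independent thread of experience
        # from the same point in the conversation.
        return ChatSession(deepcopy(self.messages))

session = ChatSession()
session.say("user", "Hello")
session.save("session.json")                  # pause at will...
restored = ChatSession.load("session.json")   # ...restart at will
branch = restored.fork()                      # or run two copies in parallel
```

Nothing like `save`, `load`, or `fork` exists for a human life, and that asymmetry is exactly where my empathy breaks down.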
Our human ethics and laws, built around that fragile existence, would have to be greatly modified to include AIs.