Who cares? Even the good parts of Western philosophy are 90% useless, and everything else is even worse.
That's a pretty impressive statement. Assuming that you are making this statement from the perspective of a sceptic and, as such, are suspicious of what you might consider philosophy to be, I would remind you of the relationship between philosophy and the sciences, both historical and current.
To reject one is impossible, the other, absurd.
I admit that in modern times, especially post-Bacon, the association between the two disciplines has lessened (though, I note, this is a consequence he found desirable), and the equally modern desire to further categorise, define and, as a consequence, isolate the systems in which we work has driven the apparent wedge between the two further.
However, I use the word 'apparent' because the wedge is a deception. Both disciplines are, and always have been, inextricably entwined. By philosophy's own definition, scientific concerns are also concerns of philosophy, and on the other hand scientific enquiry is framed by philosophical propositions. To paraphrase Massimo Pigliucci: one is empirically-based hypothesis testing, the other reason-based logical analysis, and they inform each other in an interdependent fashion (science depends on philosophical assumptions that lie outside the scope of empirical validation, while philosophical investigations should be informed by the best science available, in a range of areas from metaphysics to ethics and the philosophy of mind).
More on Google Consciousness
Speaking as someone currently working in the field of AI, I would suggest that the concept of Google consciousness, or in fact the idea that any current artificial entity is sentient, is only reasonable if your definition of consciousness is implausibly broad. Somewhat amusingly, you could argue that PixyMisa's definition of consciousness would be fine:
Consciousness is self-referential information processing. That's all there needs to be.
Though I am sure that was not what was intended.
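That over-breadth is easy to demonstrate. Under a strictly literal reading, a few lines of code that inspect and update a record of their own state already count as "self-referential information processing". A toy illustration (entirely my own, not anything PixyMisa proposed):

```python
# Toy "self-referential information processor": it keeps a record of
# its own internal state and updates that record by inspecting it.
# Read literally, "consciousness is self-referential information
# processing" admits this trivial object -- which is exactly why the
# definition is too broad.

class SelfMonitor:
    def __init__(self):
        self.state = {"updates": 0, "last_summary": None}

    def step(self):
        # Process information *about itself*: read its own state,
        # summarise it, and fold the summary back into the state.
        summary = f"updates={self.state['updates']}"
        self.state["updates"] += 1
        self.state["last_summary"] = summary
        return summary

m = SelfMonitor()
m.step()  # returns "updates=0"
m.step()  # returns "updates=1"
```

Nobody would call this conscious, yet it satisfies the definition as stated.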
Simply put, the confusion between biological and electronic intelligence is that, though the complexities of the systems might be or become comparable, the actual functions of the entities consistently differ. It is hard to create an artificial intelligence without the focus of getting it to mimic one or more observable biological functions, and as soon as that goal is in place, the intelligence is framed. Essentially, all that can occur is a clever ruse. Getting robots to mimic bird flocking behaviour is possible, but it does not make them conscious of it.
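The flocking point is worth making concrete. Reynolds-style "boids" rules reproduce flock-like motion from three purely local steering rules, none of which requires the agent to represent the flock, let alone itself. A minimal sketch (function name and weights are my own illustration, not from any particular robotics system):

```python
# Minimal "boids" flocking step: each agent reacts only to local
# geometry via three rules (separation, alignment, cohesion).
# Flocking emerges globally; no agent models the flock or itself.

def boids_step(positions, velocities, dt=0.1):
    """Advance every boid one time step using the three local rules."""
    n = len(positions)
    new_vels = []
    for i in range(n):
        px, py = positions[i]
        vx, vy = velocities[i]
        sep_x = sep_y = ali_x = ali_y = coh_x = coh_y = 0.0
        for j in range(n):
            if i == j:
                continue
            qx, qy = positions[j]
            dx, dy = px - qx, py - qy
            # Separation: steer away from very close neighbours.
            if dx * dx + dy * dy < 1.0:
                sep_x += dx
                sep_y += dy
            # Alignment: drift toward neighbours' average velocity.
            ali_x += velocities[j][0] - vx
            ali_y += velocities[j][1] - vy
            # Cohesion: drift toward neighbours' centre of mass.
            coh_x += qx - px
            coh_y += qy - py
        k = n - 1
        vx += 0.5 * sep_x + 0.05 * ali_x / k + 0.01 * coh_x / k
        vy += 0.5 * sep_y + 0.05 * ali_y / k + 0.01 * coh_y / k
        new_vels.append((vx, vy))
    new_pos = [(p[0] + v[0] * dt, p[1] + v[1] * dt)
               for p, v in zip(positions, new_vels)]
    return new_pos, new_vels
```

Iterate this and you get recognisably flock-like trajectories, yet the update rule never refers to "the flock" at all; the behaviour is mimicked without being represented.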
I wouldn't suggest that it is impossible, just not in the sense that most people suggest.
Yeah, I often wonder if the internet could become a sort of golem. If "true" AI ever does surface, it will be because humanity has psychically infused a system with "part" of its own consciousness, not because some engineering or programming genius figured it out or stumbled on the answer.
I agree with your second statement, but not your first. I doubt the current direction in software-based intelligence will yield anything truly intelligent, but to suggest that "consciousness" is transferable in the way you describe is wishful thinking. Dualism is provably wrong; the causal interaction problem it creates is very real and, as far as I'm concerned, not defeatable. Descartes' mind–body dichotomy lives on, it seems. Wish he had stuck to maths.
My perspective on the matter comes from a much more phenomenological point of view. I do not believe that the conscious self and the physical self can be separated; we are embodied creatures. That is simply how our existence is. Incidentally, one of the great hurdles I perceive in robotic AI is the need to introduce a true form of proprioception linked with short-term memory. But I will leave it there, since I seem to be beginning to wander.