Any recording device can "fake having had experiences" by playing back the recording. Such faking is easy to detect, though: a recording can only be replayed, never summarized.
Imagine a video of a woman cooking in a kitchen: she needs an ingredient that's in a canister on a high shelf out of reach, so she stands on a chair to reach it, but unexpectedly the chair breaks, causing her to fall and knock a soup pot over, spilling soup all over the floor.
Do you think there will soon be an AI that can examine that video file and output "A woman cooking in a kitchen needs an ingredient that's in a canister on a high shelf out of reach. She stands on a chair to reach it, but unexpectedly the chair breaks, causing her to fall and accidentally knock a soup pot over, spilling soup all over the floor."?
I don't think so, although I don't think the task is inherently impossible. In the 50+ years since Marvin Minsky assigned visual object recognition as a summer project for some undergraduate students, that problem has been partially cracked; we probably wouldn't have too much trouble getting the AI to recognize the woman, chair, soup pot, and canister in the video frames and track their movements over time. But imagine the processing and real-world knowledge integration needed to recognize that the woman's actions represent an attempt to reach the canister, and that the reason she does so is that she is cooking and it contains a cooking ingredient; and the agent modeling needed to recognize that the woman didn't expect the chair to break and that knocking the soup pot over was therefore unintentional. Imagine the sophistication of the module that decides that the parts of the video in which the woman stirs the soup, or puts down a measuring cup before moving the chair into position to stand on, are less relevant to the main narrative and therefore should not be mentioned in a brief summary.
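To make concrete which half of the problem is the cracked half, here is a minimal sketch of per-frame object recognition using an off-the-shelf pretrained detector from torchvision; the filename "kitchen.mp4", the frame-sampling rate, and the confidence threshold are assumptions for illustration. Note that the detector's fixed COCO vocabulary includes "person", "chair", and "bowl" but nothing like "canister", and it says nothing about goals or expectations; everything in the paragraph above beyond this step is the uncracked half.

```python
import torch
from torchvision.io import read_video
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

# Pretrained COCO detector: it can label objects like "person" and
# "chair" in a frame, but has no concept of intentions or causes.
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]

# "kitchen.mp4" is a placeholder for the hypothetical video.
frames, _, _ = read_video("kitchen.mp4", output_format="TCHW", pts_unit="sec")

with torch.inference_mode():
    # Sample every 30th frame (~1 per second at 30 fps, an assumption).
    for t, frame in enumerate(frames[::30]):
        pred = model([preprocess(frame)])[0]
        for label, score, box in zip(pred["labels"], pred["scores"], pred["boxes"]):
            if score > 0.8:  # keep only confident detections
                print(f"frame {t * 30}: {categories[label]} at {box.tolist()}")
```

Linking those per-frame boxes into trajectories over time is a further, also largely solved, step; the goal inference, agent modeling, and relevance judgments described above are where any such sketch ends.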
That's all just passive observation and analysis, but one can then imagine extending such an AI to include an agent model of itself (just as it already models the people, animals, and moving objects it analyzes in videos) and to pursue goals of its own by participating in actual events. (Biological evolution, of course, would never produce a purely passive observer, as that would confer no adaptive advantage, so participating in events would coexist with the cognitive processing from the start.) At that point we'd almost surely have a conscious being, not merely an AI version of a p-zombie.
Depending on what capabilities turn out to be needed to make them functional, we might get rudimentary consciousness arising in self-driving vehicles, although we might have no way to recognize it.