As pointed out before, we can "replicate" humans, so that's not the problem. One important question Randfan raised is whether sentience and intelligence are "substrate neutral". Essentially, it's the hardware problem, and that question seems to get handwaved in these discussions. Take life, for example: there shouldn't be anything "magical" about it, yet we still can't make a living cell from scratch out of purely lifeless chemicals, and I'm not willing to predict when we'll achieve that, if ever. The same goes for intelligent machines. You can't expect some singularity to magically emerge from playing with software on the current type of hardware and architecture. Now, if there were some major discovery or complete paradigm shift (in our concept of intelligence and awareness, in our understanding of the human brain, or in computer architecture and hardware), then maybe we could start asking "when" we'll see intelligent (non-human) machines.