I don't think you've justified this prerequisite. Strictly speaking, it doesn't seem necessary: we already have AI with little more intelligence than an insect that literally teaches itself to perform sophisticated tasks, such as walking upright, attacking an opponent, and defending against or avoiding attacks. We've also seen unintelligent processes find innovative solutions to engineering problems, such as designing efficient wind turbines. (Detailed description here.)
Your other objection is that a machine might be too cavalier about destroying itself, but I don't think you've justified that presumption either. Some of the AI in the link above, particularly in
this video, independently discover how to avoid being hit by an opponent's attack, without being told whether or how to do so in the first place. Interestingly, the AI doesn't "know" what is happening; the fitness function for these systems simply has the objective of maximizing wins, and the genetic process which constructs the AI's neural net tends toward configurations that cause the creature to veer away from oncoming opponent attacks.
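To make the mechanism concrete, here's a minimal sketch of that kind of genetic process. Everything specific in it is an assumption for illustration, not taken from the videos: the genome is just a flat list of weights standing in for a neural-net configuration, and the fitness function is a toy stand-in for "maximize wins" that rewards genomes close to an assumed ideal "veer away" response. The point is that nothing in the loop "knows" about dodging; selection pressure on fitness alone drives the population toward avoidance behavior.

```python
import random

random.seed(0)

GENOME_LEN = 8      # hypothetical: weights of a tiny avoidance policy
POP_SIZE = 30
GENERATIONS = 40
MUTATION_RATE = 0.1

def fitness(genome):
    # Toy stand-in for "maximize wins": in the real systems this would be
    # won/lost matches; here we just reward closeness to an assumed ideal
    # "veer away from the attack" weight vector.
    target = [1.0] * GENOME_LEN
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome):
    # Perturb each weight with small probability.
    return [g + random.gauss(0, 0.5) if random.random() < MUTATION_RATE else g
            for g in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def evolve():
    pop = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP_SIZE // 2]   # truncation selection: keep fittest half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

After a few dozen generations the best genome's fitness climbs well above that of a random initial genome, even though no individual step encodes anything about the task; that blind-but-effective hill-climbing is all the "discovery" in those videos amounts to.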
Hawking's particular worry regards AI with human-like intelligence. Given plausible developments in the field, I believe very sophisticated AI will approach human levels of intelligence within my lifetime, for reasons described
in another thread.