The issue raised by the OP is the question of generalizing from experience.
We examine a volcano and find that it's caused by subterranean heat and pressure, rather than by the anger of a volcano god. We examine fifty volcanos and find that all of them are caused by subterranean heat and pressure, and not one by the anger of volcano gods. But then a new volcano erupts somewhere that we've never examined before. Can we say with any confidence that that one's also caused by subterranean heat and pressure, and not by the anger of a volcano god?
Most people (nowadays) would say yes, though some of them would also caution against absolute certainty, and suggest we should probably go there and take a few measurements.
Now instead of fifty volcanos, let's say we've investigated the causes of fifty different natural phenomena. In all cases where we could identify causes, we found causes other than gods. How justifiable, how rational, is the conclusion that no natural phenomena are caused by gods?
On the one hand, the pattern established by experience so far is clear. On the other hand, the fifty-first, fifty-second, etc., phenomena that remain unexplained aren't just randomly chosen phenomena we haven't got around to looking at yet. They're unexplained because, for whatever reasons, they're harder to examine or explain than volcanos and the rest of the previous fifty. Those reasons might be complexity, scale, remoteness, and so on, but one of the conceivable reasons certain things might be hard to explain is the involvement of gods.
The question of how far we're justified in generalizing isn't specific to science; it's a fundamental issue in all learning.
Under-generalizing is "not getting the concept," not seeing the pattern, or "not seeing the forest for the trees." When I studied machine learning in the early 80s, this was the crux of the "open-minded learner" problem, which demonstrated that built-in biases are necessary for all but the crudest of learning. A completely open-minded learning algorithm could be told that for an input of 1 the correct output is 1; for an input of 2, the correct output is 2; ditto for 3, 4, 5, and 7. But if then asked for the correct output for 6, it could only answer "unknown," because it had never been taught that specific case. In other words, an open-minded learner cannot learn; or more precisely, it can learn only in the sense of parroting specifically taught instances. That falls under our definition of learning (a multiplication table, for instance) but doesn't encompass all that we mean by learning. To go further, the learning algorithm needs a "model," which amounts to a system of biases. If programmed, for instance, to find the shortest computation relating output to input across all known instances, the algorithm is no longer "open-minded"; it has a strong bias toward simpler rules. But it will quickly learn the rule "output = input" and will offer an output of 6 for an input of 6.
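Here's a minimal sketch of that contrast in Python. To be clear, nothing below comes from any actual 80s-era system; the training pairs, the fixed list of candidate rules (a crude stand-in for "find the shortest computation"), and all the names are illustrative assumptions:

```python
# Training pairs from the example above; note that 6 was never taught.
TRAINING = {1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 7: 7}

def open_minded_learner(x):
    """No model, no biases: answers only what it was explicitly taught."""
    return TRAINING.get(x, "unknown")

# The built-in bias: candidate rules ordered from simplest to more complex.
# (A hypothetical stand-in for searching for the shortest computation.)
CANDIDATE_RULES = [
    ("output = input",     lambda x: x),
    ("output = input + 1", lambda x: x + 1),
    ("output = 2 * input", lambda x: 2 * x),
]

def biased_learner(x):
    """Applies the first (simplest) candidate rule consistent with every
    training pair; this built-in bias is what makes generalization possible."""
    for _name, rule in CANDIDATE_RULES:
        if all(rule(inp) == out for inp, out in TRAINING.items()):
            return rule(x)  # generalizes beyond the taught instances
    return "unknown"

print(open_minded_learner(6))  # -> unknown
print(biased_learner(6))       # -> 6 ("output = input" fits all pairs)
```

The point of the toy: the "open-minded" version can only parrot, while the biased version generalizes, and it generalizes correctly only because its preference for simple rules happens to suit the data.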
Over-generalizing is "jumping to conclusions" or "mistaking the map for the territory" or "comparing apples to oranges" or "every problem seeming like a nail." Fictional AI-gone-wrong scenarios usually feature something to that effect. HAL is programmed to "protect the mission" and tries to do so by killing the astronauts. Colossus is programmed to "defend the nation" and does so by taking over the world.
Both possibilities loom in every learning situation. Both are the basis for jokes, sometimes (but not always) about silly things children do. From my own childhood, a comedy bit on Laugh-In by Totie Fields stuck in my memory: a toddler-age character (much like Lily Tomlin's "Edith Ann" character, but different) recites a series of laments: "Nobody TOLD me I shouldn't paint the baby. They said not to paint the walls, or the floor..." That's failure to generalize, which we associate with cognitive immaturity. (Fields' character appeared to regress in age with each transgression in the series.) In other family-comedy bits, toddlers do things like trying to feed the goldfish by dumping dog food in the bowl. That's generalizing too far, which we often associate with inexperience. It's the reason why "a little learning is a dangerous thing." It's usually the reason for the problems the "FNG" causes at work: FNGs know the dozen fundamental rules for how things should be done, but not the eight hundred exceptions.
Clearly the issue goes way beyond early childhood learning. Every romance is about learning not to generalize. "Can't you see Broody McLeatherpants isn't like all the other vampires?" Every recovery narrative is about learning to generalize. "I've just realized that every time I reach this point and go get a bottle of cheap whisky and open it up and drink it, it doesn't help! Maybe I should try something else instead."
The issue of this thread is not how science should be done, but how we should or should not learn from it. Some want to limit our learning to only the specific results investigated, like the "open-minded learner" algorithm that can only repeat what it's been taught. "Be open-minded! That fifty-first volcano might have a god in it!" Others want to take models that might be shaky or incomplete and extrapolate them literally to the ends of the universe (I'm looking at you, cosmologists, though other examples come to mind, such as the "central dogma of molecular biology," which proved over-generalized).
More specifically, some see only a small distinction between "no volcano gods" and "no gods at all," while others see a huge gulf. The first view can be justified on the grounds that volcanos were once among the natural phenomena most consistently and exclusively attributed to the actions of gods. (They're even among the events that insurance companies technically term "acts of god" to this day.) The second can be justified on the grounds that the still-unexplained things, such as conscious experience, the origin of life, and the origin of the universe, look so very much more mysterious in present-day eyes than mere volcanos.
Hawking's viewpoint adds a new wrinkle: the suggestion that even if the second view is correct, and there is a huge divide between "no volcano gods" and "no gods at all," science has succeeded in crossing it. That is to say, "no gods exist" is a reasonable generalization to have learned by now.