If your glue turns to dust, you're going to have a bad day. But trying to sensationalize the surface area of the glue layer doesn't really impress me. We've been laying up aerospace structures using epoxy for many decades without issue. The wing spars on a Boeing 777 routinely exceed 500 sq ft of glue area. The shocking fact is not that so much glue was used, but that the engineering process did not include any direct-examination exercise to determine the effectiveness of their layup plan. They simply inferred success from final strain testing.
They ground down (by hand) each layer of fibre as they added new ones. This surely broke some of the fibres.
It necessarily did. The point was to smooth out imperfections in the fiber layup so that each new fiber layer would have a perfectly cylindrical base to adhere to. In theory the layers were glued together and the glue was then baked to fuse it all into one integral hull. But the video shows that this process did not work, and the engineers relied on tests that did not have the capacity to reveal that.
The micrographs in the video did not explicitly link the glue layer's failure to adhere to the ground areas. The voids seem fairly evenly distributed.
I might be inclined to argue that you can still "move fast and break things," in the sense of doing a lot of all-up prototyping and test flights, flushing out as many bugs as you can as quickly as possible.
I don't disagree. I build a lot of prototypes. But "move fast and break things" is not about learning more from prototypes than from whiteboard analysis or simulation. It's about a culture that rewards being first to market and can tolerate a high failure rate. The Silicon Valley software model emphasized a minimum viable product (MVP). Whoever shipped that first enjoyed an essentially winner-take-all payout. And as you conclude, that's acceptable for low-stakes stuff where failures are cheap.
But I bet it takes time and care to get to the root cause of each component failure.
Also, in the engineering world, it probably requires a lot more money to do all that breaking and analyzing quickly.
Yes, for engineering but not so much for software. Software cost is just people sitting in chairs. This is not to make light of the effort and skill involved. It's simply to say that the cost of iteration is considerably lower than building physical prototypes of actual objects. Either way, however, you have to be prepared to sacrifice economy for speed. You're trying to get to market first, or be the first to roll out a viable disruption. There really are few prizes for second place.
The whole point of the maxim is to find "good enough" solutions quickly and cheaply.
Exactly. Good enough in consumer or commodity engineering is generally a much lower bar than good enough in something that people are going to trust their lives to. Your tolerance for failure in that case has to be much, much lower. That's when the procedure has to be, "Go slow and check each step."
So maybe it's best left to things like commodity software development and simpler manufacturing challenges.
I would agree. My manufacturing processes involve some pretty straightforward steps that we can prototype effectively using 3D printers before we get into injection molding, chem milling, and 5-axis machining. We don't need a lot of whiteboard time, and we do a lot of physical modeling to shake out bugs in manufacture and assembly. This used to take a whole DFM/DFA cycle that was its own department.
But we also have some epoxy encapsulation processes that we invented. We're putting sensitive instruments into highly hostile environments rife with radiation, chemical contamination, high heat, and the absence of an atmosphere. At least half of our engineering effort there goes into figuring out ways to test the manufacture to assure ourselves that our design assumptions are being met. These involve x-ray and ultrasound methods. But we also just sacrifice a few units to saw apart and verify the epoxy layup.