Precautionary and Proactionary Principles
There’s a concept in the world of public policy-making often called Chesterton's Fence, which says that it's important to understand the reasons behind the default way of doing things before changing that default.
This heuristic and its moniker stem from the idea that you might encounter a fence with no obvious utility and decide to get rid of it, thinking it's useless.
But then, later, a pack of feral hogs (or some other destructive force) rolls through that previously fenced-off, protected area, demolishing everything in its path.
There was no way to know, when you removed the fence, that these hogs passed through that region every few months or years, but the person who put up the fence knew.
So the general theory here is that if something exists—especially some piece of infrastructure or other asset—it probably exists for a reason. Someone invested time and money in that fence (or whatever else), and that means they probably did so for a purpose.
It's generally prudent, then, to figure out and understand that purpose before removing the fence.
It may be that the original intention is no longer relevant: perhaps those hogs are gone for good.
But it may also be that there's a variable you don't yet perceive, and it's important—according to this heuristic—to check for and grasp those variables before removing potentially vital fences.
Another term we might use for a somewhat more extreme version of this way of thinking is the Precautionary Principle.
This principle says, in essence, that if we can't say for certain what the impact of some new thing or change will be, we should resist it—just in case.
The theory is that unknowns can be incredibly dangerous, and in some cases devastating. The rapid development of artificial intelligence could result in all matter in the universe being converted into paperclips by superior, artificial beings; the introduction of genetically modified foods into supply chains could devastate ecologies and wreak havoc on human digestive systems. Such outcomes remain conceivable even if we don't have any reason to think they will happen in any given case.
Thus, considering the scale of those potential risks, it makes sense (according to this principle) to just not follow those paths; keep things as they are, because we understand the risks associated with our current status quo, but not the risks associated with wherever those paths might take us.
An opposing, more recently developed heuristic is sometimes called the Proactionary Principle, and it basically says that imposing restrictions on iteration and evolution exacts substantial enough costs on society that we'll almost always suffer more from not taking (not-obviously-dangerous) risks than from taking them.
In other words, the opportunity cost of holding still and not trying new things is high enough to outweigh the benefits of sticking with what we know.
These oppositional ideas are wielded by folks on various sides of many important conversations, and some ideological positions are heavily reliant on the baseline arguments they offer ("stay put if there's any doubt about where a path leads" and "ever-forward, because the potential rewards justify the risks," respectively).
Strict adherence to either extreme, however—constant movement or enforced stasis—can amplify the risks associated with both, arguably meaning the most rational approach is somewhere in the middle: intentional forward-movement informed by an understanding of the risks we face and bulwarked by careful preparation for those risks.