How is this possible in the year of our AI overlords 2023, when our inventions are smart enough to write sonnets? More to the point, given that we humans are never going to stop mucking up, is there any way of spotting mistakes before they do damage?
Part of the problem is that our technology will do what we tell it, and the difference between very useful and existentially threatening can be wafer thin. Take the infamous Unix/Linux command rm -rf *. For those of you whose palms aren't sweating instinctively at the sight of this little beauty, it means "Remove all files in this directory and all directories beneath it."
Don't try this at home? You absolutely should - just spin up a disposable virtual Linux machine and have at it. Nobody who has seen this happen ever forgets it, or repeats it. This principle, of making your mistakes in a place that mercilessly demonstrates their consequences without them being consequential, is the gold standard in safety nets. In aviation, those places are called flight simulators. In electronics, circuit simulators. In humans, Ibiza.
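If you want the lesson without the losses, a throwaway container does nicely. A minimal sketch, assuming Docker is installed (any disposable VM works just as well):

  # Spin up a disposable Linux world; --rm deletes the container on exit.
  docker run --rm -it alpine sh

  # Inside the container, as root, the little beauty itself:
  rm -rf /*

  # Watch the shell flounder as its world disappears, then type exit.
  # The wreckage evaporates with the container; your real machine never
  # notices a thing.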
Yet production software gets no such place to fail safely, as if live systems were somehow beyond simulation. This sort of thinking is a failure of imagination and engineering. Go back to flight simulators, which for regulatory reasons have to be developed alongside the aircraft they train pilots for. In DevOps heaven, test scripts and protocols are developed alongside the actual software - well, maybe. Once the software's out in the wild and interacting with other systems, all that falls away.
All software comes from a functional specification - or at least, let's pretend it does. That same spec is used in testing and validation. Why not use it further, to create a simulated model of the software that can run in a virtual environment? It can pretend to do the work that would otherwise consume serious physical resources, modeling the real thing's behavior so that its logic can be tested.
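To make that concrete, here is a minimal sketch of such a stand-in. Every name in it is hypothetical - the script, its arguments, its exit codes - but it shows the shape: a twin built to the same spec as a real purge job, reporting what the real thing would do instead of doing it:

  #!/bin/sh
  # sim_purge.sh - hypothetical stand-in for a real purge job, written to
  # the same functional spec. It models behavior without touching data.

  TARGET_DIR="$1"
  RETENTION_DAYS="${2:-30}"

  # Spec rule: refuse to run without an explicit target.
  if [ -z "$TARGET_DIR" ]; then
      echo "sim_purge: no target directory given" >&2
      exit 64
  fi

  # Spec rule: never operate on the filesystem root.
  if [ "$TARGET_DIR" = "/" ]; then
      echo "sim_purge: refusing to purge /" >&2
      exit 64
  fi

  # Instead of deleting anything, report what the real job would do.
  echo "sim_purge: would remove files in $TARGET_DIR older than $RETENTION_DAYS days"
  exit 0

Point the scripts, schedulers, and monitoring that surround the real tool at this twin instead, and the whole dance can be rehearsed - wrong arguments, bad targets, rm -rf moments included - before anything real is at stake.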