A significant part of the work of software engineers on automated testing, continuous delivery, service integration and monitoring, and SDLC infrastructure aims to reduce the operational risks bearing on the systems they build. Estimating the economic value of these efforts requires a risk model. As a first approach, it seems reasonable to propose an ad hoc model where each risk occurrence is governed by a Bernoulli variable (roughly speaking, a coin that can be weighted to fall more or less often on a given side). But how does one find sensible parameters for such a model?
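To make the Bernoulli model concrete, here is a minimal sketch of how it would value a risk-reduction effort: the expected annual loss of a Bernoulli risk is simply its probability times its impact, and the value of a mitigation is the reduction in that expectation. All the numbers here are hypothetical placeholders, not real data.

```python
def expected_annual_loss(p_event: float, loss: float) -> float:
    """Expected loss for a Bernoulli(p_event) risk with a fixed impact:
    E[Bernoulli(p) * loss] = p * loss."""
    return p_event * loss

def mitigation_value(p_before: float, p_after: float, loss: float) -> float:
    """Value of a risk-reduction effort: the drop in expected annual loss."""
    return expected_annual_loss(p_before, loss) - expected_annual_loss(p_after, loss)

# Hypothetical parameters: a 2% annual chance of an unrecoverable
# disaster costing $10M, reduced to 0.5% by monthly recovery drills.
print(mitigation_value(0.02, 0.005, 10_000_000))  # → 150000.0
```

The hard part, of course, is not this arithmetic but justifying `p_before` and `p_after`, which is exactly the question below.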
To take a concrete example: if I develop a disaster recovery procedure and run monthly drills to prove that it works, I would need to estimate the odds of my company going bankrupt because it cannot recover quickly enough from a disaster, and I have been unable to find any sensible data on this.
Is there a sensible risk model, together with its parameters, that can be used to estimate the value of efforts invested in risk reduction?