In uncertainty quantification (UQ), as in every other engineering discipline, it's never enough to just "get a number"; that number needs to connect to reality in a meaningful and reliable way. Unlike most engineering disciplines, though, UQ is lousy with methods and practitioners that routinely deliver results that are neither reliable nor grounded in reality. To put it bluntly: UQ is an immature field.
The enduring Bayesian-frequentist debate in statistics is a telling symptom of that immaturity. It boils down to a simple question: Does it make sense to compute the "probability" of a non-random event? The Bayesians say it does, and the frequentists say it doesn't.
As the scare quotes around "probability" indicate, at AVCLLC, we take the frequentist position on this question. At the end of the day, engineering notions of reliability are frequentist notions of reliability. And those notions of reliability are fundamentally and provably incompatible with Bayesian notions of "coherence," which require that all uncertainty be represented via probability theory. In fact, Dr. Balch helped to prove that incompatibility. Forced to choose between a practical notion like reliability and a philosophical notion like "coherence," we at AVCLLC choose reliability every time.
That being said, we do think that the core Bayesian goal – to deliver a belief or plausibility function that describes what the data say about a given situation – is essentially correct. That is the goal of uncertainty quantification. It's basically in the name! But that goal, taken on its own terms, is not generally achievable via pure probability theory.
Fortunately, we live in the 21st century, and probability theory is not the only game in town. It is, of course, still the appropriate framework for representing aleatory uncertainty (i.e., genuine randomness), but epistemic uncertainty (i.e., incomplete knowledge) requires a different approach. In particular, possibility theory appears to be an adequate framework for representing most forms of epistemic uncertainty, including uncertainties arising from statistical inference. Moreover, problems involving a mix of aleatory and epistemic uncertainty require a framework that subsumes both possibility theory and probability theory, like Dempster-Shafer theory or credal set theory.
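To make that a little more concrete, here is a minimal Python sketch of one standard possibilistic technique: propagating an epistemic uncertainty through a model via alpha-cuts (the possibilistic extension principle). The triangular possibility distribution, the toy model, and the brute-force grid optimization are all illustrative assumptions chosen for brevity; they are not any particular AVCLLC method.

```python
import numpy as np

def possibility(theta, lo=-1.0, mode=0.0, hi=1.0):
    """Hypothetical triangular possibility distribution on an epistemic
    parameter theta: pi(mode) = 1, falling to 0 outside [lo, hi]."""
    theta = np.asarray(theta, dtype=float)
    left = (theta - lo) / (mode - lo)
    right = (hi - theta) / (hi - mode)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

def model(theta):
    """Stand-in for an engineering model (any black-box map will do)."""
    return theta**2 + 0.5 * theta

def propagate_alpha_cut(alpha, grid):
    """The alpha-cut of the output possibility is the image of the input
    alpha-cut under the model; here we approximate it on a dense grid."""
    cut = grid[possibility(grid) >= alpha]  # input alpha-cut
    outputs = model(cut)
    return outputs.min(), outputs.max()     # output alpha-cut (an interval)

grid = np.linspace(-1.0, 1.0, 2001)
for alpha in (0.1, 0.5, 0.9):
    lo, hi = propagate_alpha_cut(alpha, grid)
    print(f"alpha = {alpha:.1f}: output interval [{lo:+.3f}, {hi:+.3f}]")
```

Higher alpha levels yield tighter intervals, and the resulting stack of nested intervals, read together, is the output possibility distribution: exactly the kind of belief/plausibility summary described above, delivered without forcing epistemic uncertainty into a single probability measure.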
At AVCLLC, we specialize in novel UQ methods rooted in these frameworks. Our methods run the gamut from strategies for computing nuanced epistemic uncertainties on model inputs, to numerical tools for propagating those uncertainties through models of varying complexity, to heuristics for diagnosing failures in more traditional methods. That last item may seem a little esoteric, but it's vitally necessary in a field as immature as UQ. It's one thing to know instinctively that a UQ result is wrong; it's quite another to be able to explain why it's wrong, especially when the methods by which it was derived have been uncritically adopted in your technical community.
So, if you're picky about things like the reliability of your UQ analysis, then AVCLLC is likely the consulting shop for you, because we're picky about those things too. We have the full suite of capabilities that you would expect from any VVUQ shop, plus a whole other suite of skills that you won't find anywhere else. It's that second skill set that allows us to obtain reliable and intuitively sensible results where more "traditionally" oriented VVUQ practitioners have failed.