Traffic was rather heavy as I was driving home from work today. At some point, I noticed the lane to my right was clear, whereas a few feet ahead my lane was jammed. I started changing lanes, but then the car ahead of me (which was fully stopped) attempted to do the same. As I had more room, I pressed a bit harder on the gas, hoping the other driver would notice and let me pass on the right. It worked.
As I pulled away from the jam, I pondered my rather trivial feat. Unconsciously, I had performed a flawed risk/reward analysis: for the perceived benefit of pulling into my driveway a few seconds earlier, I had risked getting into a car crash. Even a fender bender is annoying enough to wipe out any real or perceived time benefit.
Obvious, right? Yet we do it all the time with much more critical things. I’m not talking about miscalculated probabilities or delusional rewards, though those are serious problems in their own right; I’m talking about risks and rewards that aren’t really exchangeable at all, because they are measured in different units or dimensions. For the prospect of winning an argument, we risk a long-term relationship. For the reward of reaching production a couple of days earlier, we risk data integrity, customer satisfaction, and architectural quality. For the sake of familiarity and transferred responsibility, we take on unacceptable risk when we plan and execute projects using known-flawed waterfall methodologies, with vendors that should know better.
There doesn’t seem to be much written about this, which makes sense: risk/reward analysis originates in the financial industry, where the one ruthless unit for all measures is money. We are supposed to do the same (build a business case, or otherwise monetize our IT project decisions), but all too often we lack method, discipline, or both. And yet we plow ahead based on questionable proxies for actual business risk and value.
Next time I carry out a risk/reward analysis, I’ll try to make sure that both sides are measured in the same units. I hope you will, too!