How do you hold forecasters accountable when the forecast is only a probability? The answer appears tricky, then simple, then tricky again, and ends up being simple enough to work out in Google Spreadsheets.
It’s a journey worth taking, because building better forecasts is invaluable for businesses:
Take lead scoring — putting a value on a new sales lead by predicting the ultimate value of that lead after 9 months have passed and it has either converted or not. The forecast is the chance this lead has of converting, or the dollar value it will turn out to have. Like the weather, the lead will convert or it won’t, and if it does, it has a definite dollar value.
If you could predict the chance that a given customer might churn in the next thirty days, you could be proactive and perhaps avert the loss.
If you could predict the chance that a given customer would be amenable to an upgrade, you could focus your internal messaging efforts accordingly.
But how do you measure the accuracy of a prediction that is itself expressed only as a probability?
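One standard yardstick for this problem is the Brier score: the mean squared difference between each forecast probability and the 0-or-1 outcome. Here is a minimal sketch; the function name and the sample lead data are illustrative, not from the article:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities (0..1) and
    actual outcomes (0 or 1). Lower is better: 0.0 is a perfect
    forecaster, while always guessing 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Three leads: forecast conversion probabilities vs. what actually happened.
forecasts = [0.9, 0.2, 0.6]   # hypothetical predicted chances of converting
outcomes  = [1,   0,   0]     # 1 = converted, 0 = did not
score = brier_score(forecasts, outcomes)  # (0.01 + 0.04 + 0.36) / 3
```

Note that the score rewards both calibration and confidence: a forecaster who says 90% on leads that convert nine times out of ten beats one who hedges everything at 50%.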