I spent most of my twenties managing direct marketing campaigns for the financial services industry, where we would release several multi-million-dollar programs per year in the hope that they would generate enough customers to justify our salaries. Measuring and predicting the performance of these campaigns was a simple statistics problem: given an expected result X and an observed result Y at Z effective days since delivery into the campaign, estimate the difference between actual results and expected results.
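As a rough sketch of that estimate, you can model the expected response as a cumulative curve over effective days since delivery and compare it to what has actually come in. The curve values and campaign numbers below are purely illustrative (the original data is long gone), and `expected_response` is a hypothetical helper, not anything from a real system:

```python
def expected_response(total_expected, curve, day):
    """Expected cumulative responses by `day`, given a response curve
    expressed as the fraction of total response seen by each day."""
    frac = curve[min(day, len(curve) - 1)]
    return total_expected * frac

# Illustrative response curve: fraction of total responses received
# by each effective day since delivery (day 0 through day 6+).
curve = [0.0, 0.05, 0.15, 0.35, 0.60, 0.80, 0.95]

total_expected = 10_000   # responses forecast for the whole campaign (X)
actual_so_far = 2_800     # responses actually received so far (Y)
effective_day = 3         # effective days since delivery (Z)

expected_so_far = expected_response(total_expected, curve, effective_day)
difference = actual_so_far - expected_so_far
print(f"expected: {expected_so_far:.0f}, actual: {actual_so_far}, "
      f"difference: {difference:+.0f}")
```

The hard part, as the next paragraph explains, is that Z itself is uncertain early in the campaign, so the curve lookup is only as good as your estimate of when the mail actually landed.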
The tricky part was getting Z (effective days since delivery) right early in the campaign, since you were trying to predict how fast a slug of bulk mail moves through the postal service system. This was subject to all kinds of minor disturbances such as holidays, weekend timing, and potentially even how your letter shop sorted the file for production. This was before the Internet-based tracking systems we have today, so we were very old school and did a lot of estimation.
As the program manager, I cultivated a Zen state for the first couple of weeks of a major program. We knew there was a zone of uncertainty: the most rational response to any development was "wait and see what happens next". While we could make good predictions about the outcome several weeks into a campaign, it was actually better not to look at anything from a strategic perspective for the first couple of weeks. There was just too much noise to get a good read on the signal.
Flash forward to the present: we put up our first website (a little AJAX-based word game solver) a couple of months ago and had to keep tabs on what was happening.