A couple of smart guys working for a giant company everybody has heard of are given an impossible task: Build a model that generates usable economic numbers reflecting behaviors for which there are no data.
Sorry if I’m being too vague here – it’s just business. The model is fairly straightforward: a few user inputs, a bunch of basic arithmetic, a tiny bit of algebra – and, bada-boom, you’ve got some numbers that look reasonable.
Except for one thing: one critical piece of information comes from a Monte Carlo simulation of expected behavior for which they have no data. The philosophical question: what does it even mean to do a Monte Carlo simulation on data you don’t have?
Now, these guys are smart – they know they don’t have the data. But the boss wanted a model, so they built one, right on down to an embedded Monte Carlo simulator. What does it simulate? Why, whatever you want it to! They thoughtfully built an interface that allows four different methods by which a user could input (made up) data to drive the simulation, which dutifully provides a range of possible outcomes, each with an estimated likelihood expressed as percentages out to 2 decimal places – an outcome of X is 13.72% likely to obtain, for example. These values, when plugged into the rest of the model, generate numbers upon which economic decisions will be made. Real money is intended to change hands as a result of the output.
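Since the actual model wasn’t shared with me, here’s a minimal sketch of what that kind of embedded simulator amounts to: the user supplies guessed distribution parameters (because there’s no real data to fit), the simulator draws thousands of samples, and it reports each outcome’s likelihood as a percentage to two decimal places. The distribution, parameters, and function names here are all my own assumptions, not theirs.

```python
import random
from collections import Counter

def monte_carlo(sampler, n_trials=10_000, seed=42):
    """Draw n_trials samples from `sampler`, tally rounded outcomes,
    and report each outcome's likelihood as a percentage (2 decimals)."""
    rng = random.Random(seed)
    tallies = Counter(round(sampler(rng)) for _ in range(n_trials))
    return {outcome: round(100 * count / n_trials, 2)
            for outcome, count in sorted(tallies.items())}

# Hypothetical user inputs: a guessed mean and spread, standing in
# for the behavioral data that doesn't exist yet.
guessed_mean, guessed_sd = 100.0, 15.0
likelihoods = monte_carlo(lambda rng: rng.gauss(guessed_mean, guessed_sd))
```

The precision is real – the percentages do sum to about 100 – but it’s precision about the guessed inputs, not about the world. Garbage in, 13.72%-likely garbage out.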
It’s all very beautiful and accurate sounding. My role in all this: to give an opinion as to whether the model as explained could be implemented as part of a project I’m working on. Sure! It’s just a bunch of math. We can implement math all week long and twice on Sundays! (Whatever that means.) Of course, I did venture that actually using it would in fact generate the sort of data needed to make it useful – that the feedback from real use would supply the pertinent information they currently lack. Therefore, the strategy should be to try hard not to lose their collective shirts before they have enough real data to populate the model well enough for it to generate meaningful results. Then, lose the model, use the hard-won data, and Bingo! everybody makes a lot of dough. In theory, at least.
We’ll see how this goes. The model is expected to be refined over time. No plan was offered for the key refinement – getting the actual data upon which to build the simulations. Maybe they have one they didn’t share, who knows.