Discussion about this post

WillWorker:

Utilizing imperfect information properly also requires a certain precision of thought. One has to be aware of all the contextual elements that go into relying on that data. While it varies amongst the population, everyone has a maximum number of variables they can simultaneously hold in their mind before the complexity falls apart into chaos. When people hit their personal limit, it is always interesting (and sometimes disappointing) to see who defaults to "This is too complex for me to properly understand" versus "This cannot be understood." My general preference is to avoid the latter.

Another element I find is that, in their quest for certainty, people seem to heavily discount how much fun imperfect information is. Combining a variety of reasons into a hunch and finding out whether or not one got it right is far more entertaining than adding two plus two and always getting four.

Or the fact that the more one practices making and refining decisions with imperfect information, the more one hones their instincts to navigate its blind spots. People who scoff at thinking about a situation in probabilistic terms and updating those probabilities based on new information are usually more likely to dismiss someone as simply lucky or blessed with good fortune (source: anecdotal inkling).

Which is partly true. An intuited probability model still involves chance. But, for similar reasons to those you raised in your post, it unfairly discounts the fact that someone intentionally (if imperfectly) positioned themselves to heighten their odds. It is a simplistic framing, but over time a small edge derived from imperfect information can compound into a completely different life experience compared to those who simply shrug and say, "It can't be known."
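As a back-of-the-envelope illustration of that compounding claim (the numbers here are invented, not the commenter's), here is a minimal sketch: treat each decision as an even-odds bet of 5% of a bankroll, repeated 1,000 times, and compare a 50% hit rate with a 55% one.

```python
# Back-of-the-envelope compounding of a small edge (illustrative numbers only):
# even-odds bets of 5% of the bankroll, repeated 1,000 times. The only
# difference between the two runs is the hit rate.

def typical_growth(win_prob: float, decisions: int, stake: float = 0.05) -> float:
    """Typical (geometric-mean) growth factor after `decisions` repeated bets."""
    per_bet = (1 + stake) ** win_prob * (1 - stake) ** (1 - win_prob)
    return per_bet ** decisions

print(round(typical_growth(0.50, 1_000), 2))  # ~0.29: no edge; volatility alone grinds the bankroll down
print(round(typical_growth(0.55, 1_000), 2))  # ~42.6: a five-point edge, compounded
```

The exact figures are arbitrary; the point is only that a modest, repeatable edge behaves multiplicatively rather than additively.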

bpanak:

Good post. You are very good at survey methods, and you will get to multivariate regression and factor analysis in due time. There is really nothing to critique in your essay, so here is my main observation: if a critique is so broad and general that anyone could say “the sample is too [large / small / restrictive / biased / …],” and the critic does not explain why the critique is relevant to the specific analysis or interpretation you are working on, then put little weight on that feedback. When the critique is targeted, backed up with a citation or a thought experiment that fits the situation, and paired with recommendations on how to improve your game, weigh that feedback more. Anyone who has taken one course in research methods can come up with 15 different “rival hypotheses”; the good critic is the one who is discerning and helpful and who critiques the most relevant issue with tact, like a good teacher would.

