
My notes from:

How to Measure Anything: Finding the Value of “Intangibles” in Business

The premise of the book: If something can be observed, it can be measured.

A measurement is a quantitatively expressed reduction of uncertainty based on one or more observations.

The illusion of intangibles

Some examples of what people call intangibles that can’t be measured:

  • management effectiveness
  • the forecasted revenues of a new product
  • the public health impact of a new government environmental policy
  • the productivity of research
  • the flexibility to create new products
  • the value of information
  • the risk of bankruptcy
  • the chance of a given party winning the White House
  • the risk of failure of an IT project
  • quality
  • mentorship
  • security
  • public image

It’s helpful to ask, “What do you mean by [intangible term]?” When it’s clearly defined, it doesn’t look so immeasurable. The quote, “A problem well stated is half solved” is usually attributed to Charles Kettering, who was head of research at General Motors from 1920 to 1947.

A clarification chain:

  1. If we care about something, it means that we can detect it in some way because it has desirable or undesirable effects.
  2. If we can detect it, then it can be detected relatively (more or less than something else) or absolutely (by a specific amount).
  3. If it can be detected absolutely, it can be measured.

Methods for measuring

People often don’t know a method exists, so they say “it can’t be measured” even though “I don’t know how to measure it” is usually closer to the truth.

Here is an example: the rule of five

The median is the middle point in an ordered set. Half of all values are smaller and half are bigger than the median of the set.

[Figure: a horizontal line representing a range of values, with the letter M in the middle denoting the median of the set.]

If you take one random sample and measure it, there is a 50/50 chance it will be smaller or larger than the median, the same as a coin flip. If you take five samples, the probability that all five fall below the median is 0.5^5 = 0.03125, or about 3.1%, and the same for all five falling above it.

[Figure: all five measurement crosses fall to the right of the median on the horizontal line.]

[Figure: all five measurement crosses fall to the left of the median on the horizontal line.]

So the chance that some fall below and some above, or in other words, that the median lies between the smallest and largest value of a random sample of five, is 1 - (2 x 0.03125) = 0.9375, or 93.75%. It's not perfect, but it dramatically lowers uncertainty with a tiny sample.

[Figure: of the five measurement crosses, three fall to the left of the median and two to the right.]

Four useful measurement assumptions

They are not always correct, but they help move the conversation forward.

  1. Your problem is not as unique as you think
  2. You have much more data than you think
  3. You need far less data than you think
  4. New data is more accessible than you think

Before you measure

The most important question before measuring anything is, “What are you trying to change?” or “What decision will this measurement support?” Before committing to measuring, one should estimate the value and cost of getting information:

  1. The expected value of (imperfect) information: what a measurement that merely reduces uncertainty is worth
  2. The expected value of perfect information: the upper bound, what eliminating the uncertainty entirely would be worth
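
As a sketch of the perfect-information idea (all numbers below are hypothetical, not from the book): the value of perfect information is the gap between the expected value of deciding with it and the expected value of the best bet without it.

```python
# Hypothetical launch/don't-launch decision under uncertainty.
p_good = 0.6               # assumed chance the market is favorable
payoff_good = 1_000_000    # profit if we launch and the market is good
payoff_bad = -500_000      # loss if we launch and the market is bad

# Without more information: pick the action with the higher expected value.
ev_launch = p_good * payoff_good + (1 - p_good) * payoff_bad
ev_skip = 0
ev_without_info = max(ev_launch, ev_skip)

# With perfect information we would launch only in the good state.
ev_with_info = p_good * max(payoff_good, 0) + (1 - p_good) * max(payoff_bad, 0)

# Upper bound on what any measurement of this uncertainty is worth.
evpi = ev_with_info - ev_without_info
print(evpi)
```

No real study achieves perfect information, so this bound caps what it is rational to spend on measuring.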

The McNamara fallacy: deciding that what is hard to measure doesn't matter, or even doesn't exist, and focusing only on easily available numbers.

We can make better bets and decisions if we can reduce uncertainty. Don’t get stuck in thinking that high uncertainty is an impenetrable barrier. Methods like Fermi decomposition uncover where the highest uncertainty is.
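
A Fermi decomposition breaks one hard estimate into several easier ones. A toy version of the classic "piano tuners in Chicago" question (every input below is a rough, made-up estimate):

```python
# Each factor is easier to estimate than the whole question,
# and each can be refined separately if it dominates the uncertainty.
population = 3_000_000               # people in the city (rough)
people_per_household = 2.5
households_with_piano = 1 / 20
tunings_per_piano_per_year = 1
tunings_per_tuner_per_year = 2 * 5 * 50   # 2 a day, 5 days a week, 50 weeks

pianos = population / people_per_household * households_with_piano
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
print(round(tuners))
```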

Some definitions:

  • Uncertainty is the lack of complete certainty: the existence of more than one possibility. The true state, outcome, or value is not known.
  • Measurement of uncertainty is a set of probabilities assigned to a set of possibilities.
  • Risk is a state of uncertainty that involves a loss or some other undesirable outcome.

Well-calibrated people make great estimates and significantly reduce uncertainty. When someone is calibrated, their forecasts match the actual outcomes.

Estimates and calibration

The author describes a process of calibrating someone. An example exercise is asking people—who are not experts in aviation—to provide a range that they are 90% certain contains the wingspan of a big passenger aircraft. If people say they have no idea, the author offers, “Is it somewhere between 1 and 1000 meters?” Everyone agrees with 100% certainty. The author then guides people through a reasoning process to narrow the initial range down to 90% certainty. “The fuselage of the aircraft is at least 4m across because that’s two adults lying down; the wingspan has to be wider than that. At the same time, it’s not wider than a football field.” That brings the estimate to the range from 4m to 100m. Further steps narrow the range even more.

Don’t provide estimates as “high” or “low”, or on a normalized scale like one to five. Those don’t help the decision maker.
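
Calibration can be scored mechanically: over many questions, roughly 9 out of 10 of someone's stated 90% ranges should contain the true value. A sketch (the questions and bounds below are illustrative placeholders):

```python
estimates = [
    # (stated lower bound, stated upper bound, actual value)
    (4, 100, 68),          # wingspan of a Boeing 747 in meters (~68 m)
    (1000, 10000, 6371),   # Earth's mean radius in km
    (1800, 1900, 1876),    # year of some historical event
]

# Count how many stated intervals actually contain the truth.
hits = sum(lo <= actual <= hi for lo, hi, actual in estimates)
hit_rate = hits / len(estimates)
print(hit_rate)
```

A hit rate well below 0.9 means overconfidence (ranges too narrow); well above it, underconfidence.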

A section in the book talks about measuring risk through modeling (chapter 8), mentioning these topics:

  • Monte Carlo simulations
  • Random sampling
  • Normal, binary (Bernoulli), uniform distributions
  • Adding and subtracting ranges
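
For instance, "adding ranges" is done by sampling rather than by naively adding endpoints. A minimal Monte Carlo sketch (the two cost intervals are made up, and normal distributions are assumed):

```python
import random

def sample_from_ci(lo, hi):
    """Draw from a normal distribution whose 90% CI is (lo, hi)."""
    mean = (lo + hi) / 2
    sd = (hi - lo) / 3.29  # a 90% normal interval spans ~3.29 standard deviations
    return random.gauss(mean, sd)

# Sum two independent uncertain costs, each given as a 90% CI.
totals = sorted(sample_from_ci(100, 200) + sample_from_ci(50, 150)
                for _ in range(100_000))

# The 90% CI of the sum is narrower than the naive endpoint sum (150, 350).
print(totals[5_000], totals[95_000])
```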

Transitioning from what to measure to how to measure.

Measurement methods

  • How much do we really need to measure it?
  • What are the sources of error?
  • What measurement instrument do we select?
  • Can we leverage research done by others (secondary research)?

Measure just enough. Keep the information value in mind because it’ll define the upper limit of what and how much you should measure.

Decompose an uncertain variable into its constituent parts so they can be measured more easily. In practice, most of those parts turn out to be straightforward; only a small subset contributes significantly to the uncertainty and needs to be explored deeper.

Instruments can have some of these benefits:

  1. Detect what people can’t detect
  2. More consistent than people
  3. Can be calibrated for error
  4. Not biased, they don’t deliberately see something that’s not there
  5. Objective recording, not spotty human memory
  6. Often faster and cheaper

Biases and systemic errors:

  • Expectancy bias - researchers might see things that are not there
  • Selection bias - is it a truly random sample?
  • Observer bias - people might perform differently when they are observed

Some repeated tips to help select an instrument:

  • Work through the consequences: if the value is surprisingly high/low, what should you see?
  • Be iterative: start the observations and recalculate the information value as you go; don’t do one big observation
  • Consider multiple approaches: one approach might not be feasible or won’t work
  • What’s the really simple question that makes the measurement moot: sometimes a simple measurement can make a complex measurement irrelevant
  • Just do it

Some specific observation approaches:

  1. Traditional
  2. Bayesian

Sampling reality

One can’t count or track everything in most cases. Just take a sample.

The author goes into various topics related to sampling:

  • Student’s t-statistic for small samples
  • For bigger samples, it converges to the normal distribution
  • Average and variance
  • Sample sizes and statistical significance (and how they’re abused by people who don’t understand them)
  • Population proportion sampling
  • Bayes
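
A small-sample confidence interval using Student's t can be computed with the standard library (the data are made up; the critical value 2.776 is the standard t-table entry for 4 degrees of freedom at 95% two-sided confidence):

```python
import math
import statistics

sample = [12.0, 15.5, 9.8, 14.1, 11.6]   # e.g. minutes spent per task

n = len(sample)
mean = statistics.mean(sample)
sd = statistics.stdev(sample)   # sample standard deviation (divides by n - 1)
t_crit = 2.776                  # t table: df = n - 1 = 4, 95% confidence

margin = t_crit * sd / math.sqrt(n)
print(f"95% CI: {mean - margin:.2f} to {mean + margin:.2f}")
```

With only five observations the interval is wide, but it is honest about that width, which is the point.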

Beyond the basics

Is there a clear delineation between objective and subjective measurements? For example, the price of gold. Is it objective because there is a market price? The price is just a collection of people’s subjective judgments.

Sometimes human preference is the only available measure, like for quality of certain things or services, or brand perception. Similarly, one can compare willingness to pay or valuation by trade-off.

An additional challenge in measuring those is the difference between stated and revealed preferences. Stated preferences are those that people say to others or in a survey; revealed preferences are those that manifest when people actually do something. It’s not uncommon for what people say and what they do to be different.

Surveys are a common measurement instrument. They suffer from response bias and can contain loaded terms and leading questions; something to be aware of.

The Lens model (Brunswik). The Lens model was one of the first models to use a probabilistic approach to decision making, doing so through the use of linear regression. Its basic premise is that a finite set of cues can be mapped onto a decision object through a weighting scheme.

The Lens model was mentioned in the book, but the above definition comes from: Humphrey, Stephen & Hollenbeck, John & Meyer, Christopher & Ilgen, Daniel. (2002). Hierarchical team decision making. Research in Personnel and Human Resources Management. 21. 175-213. 10.1016/S0742-7301(02)21004-X.

New measurement instruments for management

  • RFID
  • Internet (Google Trends, scraping, activity logging)
  • GPS
  • Prediction markets

Applied information economics

  1. Preparation (planning and secondary research reading)
  2. Define the decision and the variables that matter to it
  3. Model the current state of uncertainty for those variables
  4. Compute the value of additional measurements
  5. Measure the high value uncertainties in a way that’s economically justifiable
  6. Make a risk-return decision after the economically justified amount of uncertainty is reduced
