Diosh Lequiron
Governance · 10 min read

Decision Matrices That Actually Get Used

Decision matrices are built and then ignored because they are designed to document decisions already made, not to produce decisions. Four structural conditions determine whether a matrix will actually be used.

Decision matrices are among the most frequently built and most rarely used governance artifacts in organizational life. Organizations commission them, consultants design them, workshop participants populate them, and then — with a regularity that should prompt serious reflection — they sit in shared drives until the next restructuring renders them obsolete.

This is not a failure of analytical rigor. The matrices themselves are often technically correct. The problem is structural: most decision matrices are designed to produce a document, not to produce a decision. And an artifact optimized for documentation will fail the test of use, because the conditions that make a document compelling are almost exactly opposite to the conditions that make a decision-making tool reliable.

Understanding why matrices get built and then ignored is the necessary first step. The second step is designing matrices that actually work — ones that survive disagreement, that accommodate new information, and that produce decisions rather than providing cover for decisions already made.

Why Decision Matrices Get Built and Then Ignored

The failure modes cluster into four categories, each of which has a different cause and a different remedy.

The first failure mode is criteria selection theater. Organizations frequently select evaluation criteria through a process that feels participatory but is actually backward-engineered from a preferred conclusion. The team assembles, someone with authority or strong conviction steers the criteria list toward dimensions where the preferred option excels, and the matrix is populated in a way that surfaces that option as the winner. No one in the room says this out loud. Often no one in the room is consciously aware it is happening. But the result is a decision matrix that documents a decision rather than producing one.

When the matrix is built this way, it cannot survive the first disagreement. Someone who preferred a different option will immediately identify that the criteria selection was loaded. They will reject the matrix — often correctly — as a post-hoc justification rather than a genuine analytical tool. The matrix is discarded not because matrices are useless but because this particular matrix was designed to create the appearance of analysis rather than to produce it.

The second failure mode is option incomparability. A matrix requires that its options be evaluated on the same dimensions. But organizations routinely construct matrices where the options differ in kind, not just in degree. Comparing a build decision to a buy decision on a cost axis seems straightforward until you realize that the build cost is amortized across multiple uses, the buy cost includes ongoing licensing, and the build option carries ongoing maintenance that no one has costed. The numbers in the cells are not wrong exactly, but they are not measuring the same thing. Evaluators implicitly apply different assumptions to different cells, the matrix becomes internally inconsistent, and the result is a number that no one trusts.

The third failure mode is no bias audit. Every person who contributes to a decision matrix brings assumptions, preferences, and blind spots to the exercise. Some of these are visible and can be accounted for. Most are not. The matrix's apparent objectivity — its rows and columns, its numerical scores — creates the impression that the results are free of the bias that infected the inputs. This is the most dangerous failure mode because it is the hardest to detect from inside the process. The matrix looks rigorous. The participants feel they have done careful work. The result is a false confidence in a biased output.

The fourth failure mode is no update protocol. Decision matrices are built at a point in time. The information they encode reflects what was known, what was valued, and what options existed at that moment. Organizations change. Priorities shift. New options emerge. Constraints dissolve or harden. A matrix without an update protocol becomes stale — but because it was built through a structured process, it carries a residual authority that discourages challenge. People reference the matrix as if it reflects current conditions when it reflects the conditions of six months ago. Decisions made on stale matrices are often wrong in ways that are attributed to bad luck rather than to the governance failure of using outdated analysis.

The Difference Between Producing a Decision and Documenting One

This distinction is operational, not philosophical, and it has specific design implications.

A matrix designed to produce a decision must be built before the preferred option is identified. This means the criteria must be selected and weighted before options are evaluated — not simultaneously, and certainly not after. This sequencing prevents backward engineering. If the evaluation team cannot change the criteria after seeing how options score, the criteria selection process is protected from the confirmation bias that corrupts most matrix-building exercises.

A matrix designed to produce a decision must have explicit rules for how it handles ties, near-ties, and cases where quantitative scores conflict with qualitative judgment. Most matrices are silent on this. They produce a ranked list and assume the top-ranked option will be selected. But if the top-ranked option is second in three criteria and first in none, or if the matrix winner is strongly opposed by a key stakeholder for a reason the matrix does not capture, the matrix fails to bridge analysis and action. The decision-producing matrix anticipates these cases and specifies how they will be handled.
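
To make this concrete, here is a minimal sketch of what an explicit tie rule might look like, assuming a weighted-score matrix; the 5% near-tie margin and the escalation path are illustrative assumptions, not prescriptions.

```python
def resolve(ranked: list[tuple[str, float]], near_tie_margin: float = 0.05):
    """Apply an explicit tie rule to an (option, score) list sorted best-first.

    Within the margin, the matrix declines to decide: it escalates to the
    decision authority instead of silently promoting the top-ranked option.
    """
    (best, best_score), (runner_up, runner_score) = ranked[0], ranked[1]
    if best_score - runner_score <= near_tie_margin * best_score:
        return "escalate", [best, runner_up]  # authority chooses and documents rationale
    return "recommend", [best]

resolve([("Option B", 7.8), ("Option A", 7.6)])  # ('escalate', ...): a near-tie
```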

A matrix designed to produce a decision must specify who can use it, under what conditions, and what happens after it is used. If the matrix is a governance tool rather than an analytical display, it needs to be integrated into the decision process itself — not as optional background material but as a required step with defined consequences. This means specifying who is authorized to invoke the matrix, who reviews the results, and what the relationship is between the matrix output and the final decision.

Decision Matrix Usability Conditions

Through working with organizations that have built and used decision matrices across a range of contexts — technology selection, resource allocation, organizational restructuring, program prioritization — I have observed four conditions that determine whether a matrix will actually be used.

The first condition is criteria weight transparency. Every matrix involves implicit or explicit weighting of criteria. An unweighted matrix treats all criteria as equally important, which is almost never the organizational reality. A matrix with hidden or implicit weights cannot be challenged, revised, or audited. Criteria weight transparency means that weights are explicit, that the rationale for each weight is documented, and that the weights are set before options are scored. When weights are explicit and their rationale is documented, disagreement about the weights becomes a productive conversation about priorities rather than a covert conflict about the conclusion.
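
As one way to ground this, the sketch below pairs each weight with its rationale and commits both before any option is scored; the criteria, weights, and rationale strings are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Criterion:
    name: str
    weight: float   # explicit, committed before any option is scored
    rationale: str  # the documented reason for the weight, open to challenge

# Hypothetical criteria for a technology-selection decision.
CRITERIA = [
    Criterion("total_cost", 0.40, "Budget is fixed for two fiscal years."),
    Criterion("time_to_deploy", 0.35, "Contract deadline dominates this cycle."),
    Criterion("vendor_risk", 0.25, "Mitigable through escrow, so weighted lowest."),
]
assert abs(sum(c.weight for c in CRITERIA) - 1.0) < 1e-9, "weights must sum to 1"

def weighted_score(option_scores: dict[str, float]) -> float:
    """Combine per-criterion scores (e.g. 0-10) using the pre-committed weights."""
    return sum(c.weight * option_scores[c.name] for c in CRITERIA)
```

Because the weights and rationales live in one committed structure, a challenge to any weight is a challenge to a documented priority, not to an invisible assumption.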

The second condition is option comparability. Before a matrix is populated, each option must be evaluated against a comparability checklist: Are we measuring the same thing across all options on each criterion? Are the time horizons consistent? Are the assumptions about resource availability, risk tolerance, and organizational capacity the same for each option? If comparability fails on any criterion, that criterion must be redesigned or the matrix will produce unreliable results regardless of how carefully the scoring is done.
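
A minimal version of such a checklist, with illustrative questions, might look like this; the point is that scoring is structurally blocked, not merely discouraged, until comparability holds.

```python
# Illustrative comparability questions, asked per criterion before scoring.
COMPARABILITY_QUESTIONS = [
    "Is the same quantity being measured for every option on this criterion?",
    "Is the time horizon the same for every option?",
    "Are resource, risk-tolerance, and capacity assumptions the same for every option?",
]

def comparability_gate(criterion: str, answers: list[bool]) -> None:
    """Block scoring of a criterion until every comparability check passes."""
    if len(answers) != len(COMPARABILITY_QUESTIONS):
        raise ValueError("answer every checklist question before scoring")
    failed = [q for q, ok in zip(COMPARABILITY_QUESTIONS, answers) if not ok]
    if failed:
        raise ValueError(f"'{criterion}' is not comparable; redesign it: {failed}")
```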

The third condition is a bias audit mechanism. This does not require a psychologist or a formal debiasing protocol, though either can help. At minimum, a bias audit mechanism means that the team conducting the evaluation identifies, before scoring, the preferences and prior positions of each evaluator. These are recorded. After scoring is complete, the evaluators compare their scores to their stated preferences and flag any cases where their scores tracked their preferences across all criteria. This is not about accusing evaluators of bad faith — it is about creating a structural check that makes systematic bias visible before the results are accepted.
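
Here is one possible shape for that check, assuming scores are recorded per evaluator, per criterion, per option; the nesting and names are assumptions for illustration.

```python
def bias_audit(stated_preferences: dict[str, str],
               scores: dict[str, dict[str, dict[str, float]]]) -> list[str]:
    """Flag evaluators whose stated preferred option received their top score
    on every criterion. A flag starts a conversation; it is not an accusation.

    scores[evaluator][criterion][option] -> numeric score
    """
    flagged = []
    for evaluator, preferred in stated_preferences.items():
        by_criterion = scores[evaluator].values()
        if all(max(opts, key=opts.get) == preferred for opts in by_criterion):
            flagged.append(evaluator)
    return flagged
```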

The fourth condition is an update protocol. Every decision matrix should specify: what events trigger a review of the matrix (time elapsed, organizational change, new information about a scored option), who is responsible for initiating the review, what the review process looks like, and how the matrix output changes if the review produces different scores. A matrix with an explicit update protocol is a living governance tool. A matrix without one is a historical document that will be misused as a current one.
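
A sketch of what an explicit protocol can look like as a data structure follows; the owner, cadence, and trigger events shown are placeholders, not recommendations.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class UpdateProtocol:
    owner: str                  # who is responsible for initiating the review
    review_interval: timedelta  # scheduled cadence
    last_reviewed: date
    trigger_events: list[str] = field(default_factory=list)

    def needs_review(self, today: date, observed_events: set[str]) -> bool:
        """A review is due on schedule or when any named trigger fires."""
        overdue = today - self.last_reviewed >= self.review_interval
        triggered = bool(observed_events & set(self.trigger_events))
        return overdue or triggered

# Placeholder protocol for a quarterly-reviewed matrix.
protocol = UpdateProtocol(
    owner="governance_lead",
    review_interval=timedelta(days=90),
    last_reviewed=date(2024, 1, 8),
    trigger_events=["reorg", "new_vendor_option", "budget_change"],
)
protocol.needs_review(date(2024, 2, 1), {"reorg"})  # True: a trigger fired
```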

Designing for the First Disagreement

The test of a decision matrix is not whether it produces a result under conditions of agreement. Under conditions of agreement, any moderately structured process will produce a result. The test is whether the matrix survives the first serious disagreement — when a participant rejects a criterion weight, questions the scoring methodology, or argues that the winning option is wrong despite the matrix result.

A matrix that is not designed for disagreement will collapse under disagreement. The result will be that the matrix is abandoned and the decision is made through political rather than analytical processes — which is often fine, but which defeats the purpose of having built the matrix.

Designing for the first disagreement requires three specific structural choices. First, the disagreement resolution protocol must be specified before the matrix is used. When a participant contests a criterion weight, the matrix must have a defined pathway: who has authority to change weights, what evidence is required, and what happens to all scores if a weight changes. Without this, weight disputes devolve into authority contests.
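
One way to encode that pathway, assuming weights are held in a simple mapping and every change is logged, is sketched below; renormalizing forces every total score to be recomputed, which is the point.

```python
def change_weight(weights: dict[str, float], criterion: str, new_weight: float,
                  authority: str, evidence: str, audit_log: list[str]) -> dict[str, float]:
    """Apply a contested weight change under a defined pathway: record who
    changed what on what evidence, then renormalize so that every other
    weight, and therefore every option's total score, must be recomputed."""
    if criterion not in weights:
        raise KeyError(criterion)
    audit_log.append(f"{authority} set {criterion} -> {new_weight}: {evidence}")
    updated = dict(weights, **{criterion: new_weight})
    total = sum(updated.values())
    return {name: w / total for name, w in updated.items()}
```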

Second, the distinction between matrix output and final decision must be explicit. A decision matrix does not make decisions — it informs them. This is obvious when stated, but most matrix-building processes obscure it by presenting the matrix output as though it were determinative. When the matrix output and the preferred decision diverge, the disconnect produces cognitive dissonance and often results in the matrix being discarded rather than the decision being reconsidered. Explicit separation of output from decision — "the matrix recommends Option B; the decision authority will decide whether to follow the recommendation and must document the rationale if not" — removes the cognitive pressure that causes matrices to be abandoned under disagreement.

Third, the matrix must have a defined lifespan. A matrix built for a one-time decision should be retired after that decision is made. A matrix built for recurring decisions should have a scheduled review cycle. The lifespan boundary prevents the matrix from accumulating inappropriate authority — from being cited years later as a rationale for decisions in a context the matrix was never designed for.

Making It Work in Practice

The failure modes described above are systemic, not accidental. They emerge from the same underlying dynamic: decision matrices are more often used to provide the appearance of analytical rigor than to produce it. Changing this requires changing the process by which matrices are commissioned and built, not just improving the matrices themselves.

In practice, this means starting with a small number of criteria — usually three to five — rather than comprehensive lists. More criteria do not produce more rigorous analysis; they produce analysis paralysis, contested weightings, and matrices so complex that no one will actually use them. The discipline of restricting criteria forces clarity about what actually matters in the decision.

It means separating the criteria selection and weighting session from the option scoring session, with at minimum a day between them. This temporal separation is not theatrical — it structurally prevents the backward engineering of criteria toward preferred conclusions, because the people setting weights cannot simultaneously be computing how each option will score under different weight configurations.

It means treating the bias audit as a closing step before the matrix results are presented to decision authority, not as an optional afterthought. The audit takes fifteen minutes. The protection it provides against systematically biased results is substantial.

And it means building the update protocol into the matrix itself — as a named section, not as a note in a covering memo. The update protocol should specify dates, not just conditions. A matrix that says "review if organizational context changes" will never be reviewed. A matrix that says "review on the first Monday of each quarter or within two weeks of any of the following trigger events" will be reviewed.

Decision matrices earn their place in governance by producing decisions that would not otherwise be as clearly reasoned or as broadly accepted. When they fail — when they produce conclusions no one trusts, when they are built and ignored, when they provide cover for choices already made — the failure is usually traceable to one or more of the four usability conditions being absent. The remedy is structural, not motivational. Build the structure correctly, and the matrix will do what it was designed to do.
