Diosh Lequiron
Systems Thinking · 12 min read

The Real Cost of Moving Too Fast in Complex Systems

In complex systems, excess speed does not just produce errors — it produces errors that compound before they can be detected. A first-person account of how this plays out across technology implementation and organizational change.

Speed is treated as an unambiguous virtue in almost every organizational context I have encountered. Move fast. Ship early. Iterate quickly. The implicit model behind this advice is that faster is better, that the costs of speed are either negligible or recoverable through subsequent iteration, and that the risks of moving too slowly outweigh the risks of moving too fast.

In simple systems — ones with few interacting parts, short feedback loops, and bounded consequences for any individual decision — this model is roughly correct. In complex systems, it is wrong in a specific and predictable way, and the wrongness compounds.

This is not an argument for moving slowly. It is an argument for calibrating speed to the complexity of the system you are operating in, and for understanding the specific mechanism by which excess speed in complex systems produces costs that exceed the value of the speed itself.

What Makes a System Complex

The distinction between complicated and complex matters here. A complicated system has many parts and requires expertise to understand, but its behavior is predictable from its components. A jet engine is complicated. An agricultural cooperative network is complex.

A complex system has parts that interact with each other in ways that produce emergent behavior — behavior that was not predictable from understanding the parts individually. The interactions create feedback loops, where the output of one process becomes the input of another, and the system's state at any given moment is partly the consequence of its history. In a complex system, decisions have downstream effects that take time to propagate, and those effects change the conditions under which future decisions are made.

This is the structural property that makes speed dangerous. In a complicated system, a decision made with incomplete information is a local error — it affects the component it touches, and the error can be corrected by correcting that component. In a complex system, a decision made with incomplete information is a perturbation to a network of interdependencies, and the correction required is not local — it requires unwinding the downstream effects of the original decision across all the parts of the system it touched before those effects compounded further.

The Compounding Mechanism

The specific mechanism by which speed creates compounding errors in complex systems:

When you make a decision in a complex system, that decision becomes a condition that subsequent decisions are made in. If the first decision was made without full feedback from the state of the system at the time — because you moved before the feedback from the last decision had propagated — the second decision inherits the distortion introduced by the first. The third decision inherits the distortions introduced by both.

This is not iteration. Iteration is a feedback loop: you make a decision, you observe the consequences, you make the next decision informed by what you observed. Iteration requires a pause — long enough for the consequences of the prior decision to become observable. When you move faster than that pause allows, you are not iterating. You are accumulating decisions made in conditions increasingly distorted by decisions whose consequences you have not yet seen.
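
To make the compounding concrete, here is a deliberately toy sketch of the two pacing regimes in Python. Every number in it is hypothetical (the per-decision error rate, the correction factor, the delays); it is meant to show the shape of the effect, not to measure anything.

```python
def simulate(num_decisions, cadence_days, feedback_delay_days, base_error=0.05):
    """Toy model of decision pacing in a complex system.

    Each decision adds a small error, amplified by whatever unobserved
    distortion it inherits from prior decisions. If the pause between
    decisions is at least as long as the feedback delay, most of the
    prior distortion is observed and corrected before the next decision.
    """
    distortion = 0.0
    for _ in range(num_decisions):
        if cadence_days >= feedback_delay_days:
            distortion *= 0.2  # feedback arrived in time; most error is caught
        distortion += base_error * (1.0 + distortion)  # new error inherits the old
    return distortion

# The same ten decisions; only the pacing differs.
print(simulate(10, cadence_days=2, feedback_delay_days=14))   # ~0.63: compounding
print(simulate(10, cadence_days=21, feedback_delay_days=14))  # ~0.06: contained
```

Run it and the fast sequence ends up roughly ten times more distorted than the paced one — not because any single decision was worse, but because each decision inherited uncorrected distortion from the last.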

The concrete version of this pattern that I encountered in Bayanihan Harvest's early implementation phase: we were moving through cooperative onboarding at a pace that the governance side of the team could not track. Technical implementation decisions were being made — about how modules were connected, how data flowed between harvest tracking and financial reconciliation, how compliance reporting was structured — before the cooperatives had demonstrated how they would actually use the system in practice.

Each technical decision was, individually, reasonable given the information available at the time it was made. But the decisions were made faster than the cooperatives could provide feedback on the prior decision, which meant each new decision was built on an untested assumption about how the prior decision would work in field conditions.

The consequences showed up four months into deployment. The module interconnections we had built assumed cooperative members would enter data at the frequency we had designed for. They entered data at a much lower frequency, in batches, at irregular intervals. The financial reconciliation logic, which depended on harvest tracking data being reasonably current, produced outputs that were systematically off because the input data was systematically less current than the design assumed. The compliance reporting, which pulled from both, compounded both errors.

None of these were individually large errors. The compounding made the correction cost large. To fix the financial reconciliation logic, we had to fix the data frequency assumption, which required redesigning the harvest tracking module's tolerance for irregular input, which required updating the compliance reporting logic, which required re-validating with the cooperatives that the revised outputs matched their actual operational picture. The correction touched five modules and required field re-training for cooperative officers who had already been trained on the prior version.

The field re-training cost was about twelve weeks of reduced operational adoption as officers worked with a system that was different from what they had been trained on. The adoption curve, which had been climbing, flattened and partially reversed during that period. This cost was not in any way recoverable through subsequent speed. The twelve weeks were gone.

How This Shows Up in Organizational Change

The same compounding mechanism operates in organizational change contexts, with longer time constants.

When organizations change faster than the people in them can adapt and provide feedback — when new structures, new processes, new roles, and new reporting relationships are implemented in a sequence too rapid for the organization to demonstrate how each change is working before the next change is made — the organization's behavior increasingly diverges from what the redesign intended.

This is a version of the same compounding problem. Each structural change is made in conditions that include the unprocessed consequences of the prior structural change. The organization's actual behavior — how people interpret their new roles, how informal networks re-form around the new formal structure, where friction accumulates — is feedback that the subsequent change needs to account for. If the subsequent change is made before that feedback is available, it is made in conditions that include distortions from the prior change that will shape how the new change lands in ways the change designer cannot predict.

At WonderScape, during a period when we were restructuring the curriculum delivery team and simultaneously revising the curriculum itself, the interaction between the two changes produced effects neither change alone would have produced. The team restructuring changed who had authority over curriculum decisions in ways that the people making the curriculum revision did not fully understand yet. The curriculum revision required collaboration patterns that the team restructuring had inadvertently disrupted. The resulting friction was not attributable to either change individually — it was an emergent property of making two significant changes to an interconnected system faster than the system could demonstrate how the first change was working before the second was introduced.

The outcome: a curriculum version that was technically well-designed but implemented by a team that had not yet stabilized around the new structure, delivered to students in a semester where the uncertainty was visible in the quality of instruction. The next semester was better. But that semester's students did not get the curriculum quality the design intended, and they did not get it because of a speed decision made upstream of their experience.

Calibrating Speed to Complexity

The question "how fast should we move?" is not answerable in the abstract. It is answerable relative to a specific system's feedback-loop timescale and error-compounding rate.

In a simple system with short feedback loops, you can move as fast as execution allows. The feedback catches errors quickly enough that the next decision can correct for them. In a complex system with long feedback loops, the appropriate speed is slower — calibrated to how long the system takes to demonstrate the consequences of the prior decision before the next decision is made.

The practical approach I have landed on: before beginning a significant implementation phase in any of my ventures, I identify the longest feedback loop in the relevant subsystem — the longest time between making a decision and being able to observe its actual consequence in the part of the system I care about most — and use that as the minimum pause between significant decisions that affect that subsystem.
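
Expressed as code, the rule is almost trivially simple — which is part of the point: the hard work is naming the loops, not applying the rule. A minimal sketch, with hypothetical loop names and durations:

```python
from dataclasses import dataclass

@dataclass
class FeedbackLoop:
    name: str
    days_to_observe: int  # time from making a decision to observing its consequence

def minimum_decision_pause(loops: list[FeedbackLoop]) -> FeedbackLoop:
    """The pacing rule: the minimum pause between significant decisions
    in a subsystem is that subsystem's longest feedback loop."""
    return max(loops, key=lambda loop: loop.days_to_observe)

# Hypothetical loops for a cooperative-onboarding subsystem.
pause = minimum_decision_pause([
    FeedbackLoop("developer integration test", 1),
    FeedbackLoop("officer field consultation", 17),
    FeedbackLoop("cooperative deployment cycle", 28),
])
print(f"pace decisions to: {pause.name} ({pause.days_to_observe} days)")
```

The sketch returns the loop itself rather than just the number, because in practice the name of the binding loop is what you end up explaining to stakeholders.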

For Bayanihan Harvest's cooperative onboarding, the relevant feedback loop is the cooperative's deployment cycle — how long it takes a cooperative to work through a module and demonstrate, in actual usage, whether the module's design assumptions held. This is measured in weeks, not days. An implementation pace that makes multiple significant module interconnection decisions per week is too fast to allow field feedback to inform the next decision.

This is a slower pace than the development team's capability would dictate. Development can build faster than the cooperatives can test. The constraint is not development capacity. It is the complex system's feedback loop, and development velocity needs to be calibrated to it, not the other way around.

The Human Capacity Constraint

There is a third dimension to the cost of excess speed in complex systems that the compounding-error framing alone does not capture: human context exhaustion.

When decisions in a complex system are made faster than the people involved can rebuild context between them, the quality of decision-making degrades in ways that are not visible in individual decisions. Each decision, taken alone, may look reasonable. The problem is that good decision-making in complex systems requires holding context — the history of prior decisions, the current state of interdependencies, the trajectory of how things are moving — and that context takes time to rebuild after it has been disrupted or lost.

Context exhaustion is distinct from fatigue. Fatigue affects the capacity to process information and generate options. Context exhaustion affects the quality of the frame within which decisions are made. A well-rested person who has lost context for a situation will make worse decisions in that situation than a tired person who is still fully context-loaded, because the frame is more important than the processing speed.

During a dense implementation phase for the Bayanihan Harvest compliance reporting module, I was making significant decisions about reporting logic on a roughly two-day cadence. Each decision was made after reviewing the available technical documentation and consulting with the lead developer. What I was not accounting for: the cooperative officers who would use the reports — and whose understanding of what a "compliant" submission looked like was the real-world constraint the logic had to satisfy — were participating in field consultations once every two to three weeks. My decision cadence was running at seven to ten times the rate of the consultation cadence.

The result was a reporting logic that was technically coherent but operationally misaligned with how cooperative officers actually understood compliance. Not wrong in any specific decision, but progressively misaligned because each decision was made without the field feedback that should have been informing it. The human beings whose behavior the system was designed to govern were not in the decision loop at the rate the decision loop required them to be.

Correcting for human context exhaustion is different from correcting for decision pacing. It requires identifying the people whose understanding is a real-world constraint on the system — not just the technical stakeholders but the human end-users whose behavior the system depends on — and ensuring their feedback loop is included in the decision pacing calculation. If they can provide meaningful feedback at a rate of once per two weeks, the decision cadence for anything that affects them cannot be faster than once per two weeks without accepting a context gap.

What Appropriate Urgency Actually Looks Like

There is a real cost to moving too slowly in complex systems, and it is worth being clear about what it is. Markets move. Competitive windows close. Funding opportunities are time-limited. Partners who will wait six months will not wait twelve. Some urgency is genuine.

The question is not whether to have urgency — it is what appropriate urgency looks like when you know you are operating in a complex system.

What I have found: appropriate urgency in complex systems is applied to the inputs, not the outputs. The pace of research, design, and planning can be fast. The pace of decisions that introduce conditions that subsequent decisions will be made in — implementation decisions, structural decisions, deployment decisions — needs to be calibrated to the system's feedback loop, regardless of the external time pressure.

The distinction matters because most time pressure in complex systems is applied at the output level — "we need this to be done by date X" — but the cost of compounding errors is incurred at the decision level. Moving fast through the decision sequence to meet the output deadline is the pattern that produces compounding errors. Moving fast through the upstream work — research, planning, design — while pacing decisions to allow real feedback between them is a different approach that can meet the same deadline with a materially different error profile.

The practical test I use before accelerating implementation pace in any of my ventures: can I name the longest feedback loop that the next set of decisions depends on, and is the planned decision cadence slower than that feedback loop? If I cannot name the longest feedback loop, I do not have enough system understanding to accelerate safely. If the decision cadence is faster than the feedback loop, the acceleration is producing compounding exposure regardless of how confident I feel about each individual decision.
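
As a sketch, that test reduces to a guard condition. The function below is illustrative, not a tool: its inputs are judgments, and passing None for the feedback loop stands for "I cannot name it yet."

```python
from typing import Optional

def safe_to_accelerate(planned_cadence_days: float,
                       longest_feedback_loop_days: Optional[float]) -> bool:
    """Pre-acceleration test.

    If the longest feedback loop cannot be named, there is not enough
    system understanding to accelerate safely. If it can, the planned
    pause between decisions must be at least that long.
    """
    if longest_feedback_loop_days is None:
        return False  # loop unknown: do not accelerate
    return planned_cadence_days >= longest_feedback_loop_days

print(safe_to_accelerate(2, 14))    # False: cadence outruns the feedback
print(safe_to_accelerate(21, 14))   # True: feedback lands between decisions
print(safe_to_accelerate(2, None))  # False: loop not yet named
```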

This paced approach is also harder to explain to stakeholders who are accustomed to equating speed with progress. A fast decision sequence looks like progress. A deliberate pause to observe feedback before the next decision looks like delay. The difference between them — in terms of what the system looks like eighteen months later — is substantial. I have not always made that case effectively. I am making it here, specifically, so that the next time I am in the conversation, the argument is clearer.

The cost of moving too fast in complex systems is not just the cost of individual errors. It is the cost of the compounding — of errors that multiply before they can be detected, of corrections that require unwinding a chain of decisions rather than fixing a single one, and of the human and operational capacity consumed by that unwinding. That cost, consistently underestimated, is how well-designed ventures end up spending significant time and resources on problems that better-calibrated speed would never have produced.
