Diosh Lequiron
Leadership · 12 min read

The Decision I Keep Revisiting

Some decisions get made and keep returning — not from regret, but because they were made at the edge of available information. A first-person account of one such decision and what revisiting it is actually for.

Most decisions stay made. You make them, you live with the consequences, and eventually the decision recedes into the background — it becomes part of the condition you are operating in rather than something you are still actively holding.

Some decisions do not do this. They get made and they keep coming back, not as regret exactly, but as an open question that subsequent evidence never quite closes. The more I have paid attention to this pattern, the more I have noticed that the decisions that stay live tend to share a specific characteristic: they were made at the edge of available information, in situations where the right answer genuinely depended on facts no one had at the time, and where the subsequent evidence never resolved the underlying question cleanly enough to stop asking it.

This article is about one such decision — a specific one, not a type. And about what I have come to think the right framing for a revisited decision actually is.

The Decision

In 2023, during a critical phase of Bayanihan Harvest's cooperative rollout, I made a staffing decision that I am still examining. We had a team member — call her M — who was central to cooperative relations and field onboarding. Her domain knowledge was substantial. The cooperatives trusted her in a way that took real time to build and was not immediately transferable to someone else. Her work quality was high. Her relationship with the rest of the team was increasingly strained.

The strain was not about performance. It was about pace, method, and fundamentally different ideas about how decisions should be made on the team. M believed that field implementation decisions should be made by the people closest to the field — that the people doing cooperative onboarding should have final authority on how onboarding was sequenced, paced, and communicated, without requiring sign-off from the technical and governance layers of the team. This was not an unreasonable position. There were field conditions she understood that the rest of the team did not.

The problem was that the decisions she was making unilaterally were affecting downstream systems — the reporting structure, the compliance architecture, the version of the platform that cooperatives had been shown — in ways that created technical debt and confusion across the team and partner network, costs that were still showing up months later. The team was not dysfunctional, but it was increasingly operating in two tracks, and the gap between the tracks was widening.

I had to make a choice between two things that both had genuine value: maintaining the cooperative relationship capital that M had built — relationships that would take twelve to eighteen months for anyone else to rebuild at equivalent depth — and restoring a team structure that could make decisions coherently and maintain consistency between what the platform promised and what it delivered.

I chose the system. M left within two months of the decision. Several of the cooperative relationships she had built required significant re-investment to maintain, and two did not survive the transition at all.

What I Knew and Did Not Know at the Time

The information I had when I made the decision: the team conflict was real and worsening; the technical and governance inconsistencies were documented; the cost of two-track operation, projected forward twelve months, was higher than the cost of losing M's relationship capital if I could re-invest in those relationships immediately. I also knew that the team structure problem was, in some sense, my responsibility — I had let the two-track operation persist too long by not making the call earlier, and the longer it persisted, the more the solution would cost.

What I did not know: how many of the cooperative relationships were actually transferable to another team member with time and investment, versus how many were specifically personal to M in ways that would not survive her departure. I had an estimate. The estimate was optimistic.

I also did not know — could not know — whether the team conflict would have resolved through a different intervention. I had tried a direct conversation about decision rights. I had tried structural changes to the team governance. I had not tried removing M from the field-facing role while keeping her in a different capacity. Whether that option was viable — whether she would have accepted it, whether it would have addressed the underlying dynamic — is a question I cannot answer now.

What Subsequent Evidence Showed

The system problem resolved. The team, after M's departure, operated with better coherence. The decisions that had been made inconsistently because of the two-track structure became, over the following six months, much more consistent. The technical debt that had accumulated was addressed. The governance architecture stabilized.

The cooperative relationship cost was higher than I had estimated. Two cooperatives did not continue with Bayanihan Harvest. A third required eighteen months of relationship reconstruction — a real investment of time and credibility — before they re-engaged fully. The optimistic estimate was wrong by enough that, had I known the correct number at the time of the decision, I might have chosen differently. Or I might not have — the system problem was real and worsening, and the two-track operation was generating a different kind of cost that also had a number.

The subsequent evidence does not tell me whether the decision was right. It tells me that the costs were higher than I estimated on one dimension and about what I expected on another. But the question I am left asking is not "was the decision right?" It is a different question.

What Makes Information Non-Observable

The specific bias at work — underestimating the cost of losing relationship capital I could not directly observe — raises a prior question: why was that relationship capital non-observable in the first place?

In Bayanihan Harvest, the answer is structural. The relationships that M had built were built through field presence I did not have. The cooperative onboarding process involved extended time at the farm level — showing up for meetings that started late, working through process questions that required patience and familiarity with how cooperative officers think, building the kind of informal rapport that comes from repeated contact in the same person's context. I was not present for most of this. I knew M was doing it. I observed the output — cooperatives that engaged with the platform, officers who raised concerns through appropriate channels — but I did not have direct knowledge of how deep the underlying relationships were or how much of the engagement depended on M as a person versus the platform as a value proposition.

This is a common structure in ventures that operate across organizational boundaries: the most important relationships are built by people who are not the decision-maker, which means the decision-maker's information about those relationships is indirect and systematically optimistic. The field team knows things about cooperative trust that do not appear in the metrics. The customer-facing team knows things about client relationships that do not appear in retention rates. These are things you can ask about and receive reports about, but reports about relationship depth are not the same as direct knowledge of it.

I have tried to address this by building explicit questions about relationship depth into review conversations — not "how is the cooperative engagement?" but "what would happen to this cooperative's participation if this person left the project?" The second question is harder to answer, but it is the relevant one when the decision is about removing a person from a role.

The Decisions I Have Not Yet Made

There is a second category of decision that keeps surfacing, which is different from the 2023 decision in an important way: I have not yet made it.

These are situations where I can see the same structure — a team conflict between system coherence and relationship capital, a governance issue that would be resolved more cleanly by ending a relationship than by continuing to manage the tension — but I am not yet at the point where the decision is unavoidable. In a few cases I have been watching these situations for months. I know what I would do if pushed. I am not pushed yet.

The way I think about these: they are not versions of the 2023 decision. They are different situations with some similar features. The fact that the 2023 decision had higher relationship-capital costs than I estimated does not mean the right answer in the current situations is to wait longer or weigh relationship capital more heavily in all cases. The right answer depends on the specific structure of each situation, including the specific relationship capital at stake and how observable it is.

What the 2023 decision taught me is the importance of doing a more rigorous investigation of the relationship capital dimension before making the call. Not a different conclusion — a more rigorous investigation. In some cases, that investigation will produce information that shifts the decision. In others, it will confirm the original analysis with higher confidence. The investment in better information is worth making regardless of which way it resolves.

The Role of Time in Closing or Opening a Revisited Decision

One thing I have observed: the revisiting does not always come at the same rate. There are stretches where the 2023 decision is genuinely quiet — where the current work is different enough in character that the question does not surface. Then there are moments where I am facing a structurally similar situation and the earlier decision becomes present again, partly as precedent and partly as caution.

When I am working through a current team situation that involves a conflict between system coherence and a specific person''s role, the 2023 decision shows up as data. It shows up in the specific form of a warning about my own estimation patterns: I underestimated the cooperative relationship cost, so in this current situation, I should assume the equivalent cost is higher than my current estimate. This is the useful form of revisiting — the prior decision functioning as a correction factor for a current decision that has the same structure.

What does not help is when the revisiting is triggered not by a structurally similar current situation but by an update about the prior situation itself. When I learned, a year after M''s departure, that one of the cooperatives that left had not rejoined any comparable platform — that they had simply contracted their operations rather than adopting a substitute — the 2023 decision became more present again. Not because the new information resolved the question. It did not. It added one more data point in the direction of "the cost was higher than you thought," without changing the counterfactual question about whether the decision was correct.

This is the pattern I find most difficult to manage: the arrival of new information that is relevant to the prior decision but does not close the question, only adds weight to one side of an argument that was never going to be settled. In those moments, the useful response is to take the information, update the estimation bias correction accordingly, and set the retroactive question down. The decision was made. The information is useful for future decisions. It is not useful for re-litigating the past one.

Time does not close revisited decisions, in my experience. The information keeps arriving, slowly, and the counterfactual never becomes available. What changes is the capacity to distinguish between the information that is actionable — that should update how I make future decisions — and the information that is purely retrospective and serves mainly to keep the question open without moving it toward resolution.

The Wrong Frame and the Right One

For a long time, I revisited this decision as a correctness question. Given what I knew, was the decision right? Given what I now know, was it right? This framing kept generating an answer of "maybe" — the costs and benefits were genuinely close, the counterfactuals were genuinely uncertain, and no new information has arrived that would shift the analysis cleanly in one direction.

The more useful question is about the conditions under which I make decisions, not about the correctness of any particular decision made in those conditions.

What I understand now that I did not fully understand in 2023: I consistently underestimate the cost of losing relationship capital that took time to build in ways I did not observe closely. I observe technical debt and operational inconsistency in real time — they are visible in the systems I monitor. I do not observe relationship depth in the same way. Cooperative trust, in particular, is built through field presence and accumulated interaction that I am not personally doing — it lives in other people's relationships, which means I have lower information about its depth and transferability than I do about almost anything else in the venture.

This is a systematic bias in my decision-making, not a one-time error. The decision I keep revisiting is, in part, a record of that bias operating under conditions of genuine uncertainty. The lesson is not "value relationships more" — that is too abstract to be actionable. The lesson is specific: before making a decision that eliminates a person who holds relationship capital I cannot directly observe, assume the relationship capital is deeper and less transferable than my estimate, and adjust the decision threshold accordingly.

That does not tell me whether the 2023 decision was correct. It tells me something about how I should make the next decision that has the same structure.

Why Certain Decisions Stay Live

The class of decisions that keep surfacing — for me, and I think in general — are the ones where you chose the system over the person, or the person over the system, at a moment when the choice was genuinely close. Not the ones where one option was clearly right. The close ones.

They stay live not because they were necessarily wrong, but because the information that would confirm them was not available at the time and has still not fully arrived. The two cooperatives that left — would they have left eventually anyway, under different circumstances? Was the team coherence problem solvable through an intervention I did not try? These questions are not answerable, and they are not supposed to be. The decision was made in irreducible uncertainty, and some of that uncertainty is permanent.

What I have found useful is distinguishing between two different reasons to revisit a decision. The first is diagnostic: something about the decision revealed a systematic pattern in how I make decisions that is worth correcting. This is productive. It changes future decisions. The second is resolution-seeking: reviewing the decision in the hope that new framing or new information will finally close the question. This is almost never productive, because the question is structurally uncloseable — the counterfactual will never be available.

The 2023 decision is worth revisiting for the first reason. It revealed the systematic undervaluation of non-observable relationship capital in my decision-making. That is a finding I can act on. The question of whether the decision was correct, in some final sense, is worth setting down.

I have not fully set it down. But naming the distinction between the useful revisiting and the resolution-seeking one has made the revisiting more productive and the unanswerable question slightly less consuming.

Some decisions are not resolved by time or evidence. They are resolved by changing how you decide, so that the next version of the same situation produces a better answer — whatever better means when the full picture is never available.
