
Twelve Initiatives, No Framework: Building Technology Portfolio Governance

By Diosh Lequiron · Global Professional Services · April 2026
Key Outcomes

12 initiatives assessed

3 non-aligned projects identified

Portfolio governance framework adopted

Resource reallocation enabled

A CTO asked which of twelve technology initiatives the firm should continue funding. Nobody could answer — not because the data was missing, but because there was no framework to compare initiatives to each other. Three weeks of assessment later, three non-aligned initiatives were identified, two sunsetted, one restructured, and the four-dimension framework that produced those answers became the funding gate for every new initiative proposal.

The starting state: a global professional services firm running twelve technology initiatives simultaneously. Each had its own stack, its own project management approach, and its own definition of done. Each had a sponsor who believed in it. The firm was spending real money on all of them and had no structural way to decide which ones were worth continuing.

The challenge: build a portfolio governance framework that could answer the CTO's question defensibly, survive the political conversations that would follow, and continue to govern the portfolio after the initial assessment was finished.


Starting Conditions

The firm sold expertise to other organizations and treated its own internal technology spend as a supporting cost rather than a product. This framing matters. In a product company, the technology portfolio is the business and misalignment surfaces quickly. In a professional services firm, the technology portfolio is overhead, and misalignment can persist for years before anyone with authority notices.

Portfolio scale. Twelve initiatives running simultaneously, started at different times by different sponsors in response to different perceived needs — infrastructure modernization, productivity tools, experiments in adjacent product categories, partner-driven work. No common origin story; no common success metric.

Governance absence. Each initiative had its own project management methodology — some ran Agile, some waterfall, some a hybrid nobody had named. Each had its own definition of done. Each had its own reporting cadence, if it had one at all. When leadership asked "how is the portfolio doing?" the answer had to be assembled from twelve different status reports in twelve different formats with twelve different interpretations of "on track." The assembly itself was costly, and the answer was never comparable across initiatives because the underlying reports were not measuring the same things.

Political constraint. Every initiative had a senior sponsor — partners, practice leads, business-unit heads — and the initiatives had become part of their identity. Asking "should this project continue?" was structurally indistinguishable from asking "should your judgment be trusted?" The CTO had good reason to suspect the portfolio was carrying work the firm did not need, but "good reason to suspect" is not "evidence that can defend a termination decision." The absence of a comparison framework meant the suspicion had no way to become a conversation.

What had been tried. Previous attempts to assess the portfolio had produced executive-summary documents that listed the initiatives, described their current status, and stopped. The documents were accurate and useless — accurate because they correctly reported what each initiative was doing, useless because they offered no basis on which to compare initiatives or make funding decisions. The firm's own diagnosis was that it needed better project reporting. My diagnosis was different — this was not a reporting problem. The reports existed. The firm was missing the framework against which the reports could be interpreted.


Structural Diagnosis

Three architectural problems explained why twelve initiatives could run simultaneously while remaining structurally invisible to portfolio-level decision-making.

Strategic alignment was implicit, not explicit. Initiatives had been approved based on sponsor advocacy rather than documented alignment to stated business objectives. When each initiative was started, the reasoning was captured in a conversation or an email or a PowerPoint deck that nobody kept, and after a few months the institutional memory of why the initiative existed had evaporated. The initiative continued running because it was running, which is a governance pattern that produces portfolios of legacy commitments rather than portfolios of strategic bets. Conventional fixes — asking teams to "justify their work" — fail because the sponsors are the same people being asked to justify, and sponsors asked to justify their own work produce justifications. The assessment has to be structural and external to the sponsor or it becomes a performance of accountability rather than accountability itself.

Delivery signal was being read as team activity, not as shipped outcomes. Leadership knew the teams were busy. The engineers showed up, the standups happened, the Jira boards filled up with tickets. But "busy" is not "delivering." Some of the initiatives had not shipped working software to users in months; some had shipped regularly but what they shipped did not advance the initiative's stated goal; some had pivoted quietly so often that the original goal no longer described the work. Conventional fixes — demanding more frequent updates — make the problem worse, because they equate visible activity with progress, and the initiatives that were struggling most were also the best at producing visible activity reports to compensate for their lack of delivery.

Portfolio cost was hidden behind initiative-level budgets. Each initiative had its own budget, and each budget looked individually defensible. The aggregate cost was never surfaced in a form that let leadership reason about opportunity cost — "we are spending X on these three initiatives that could be spent on that one higher-value initiative instead." The initiative-level budget framing makes every project look affordable in isolation and makes the question "are we spending our engineering capacity on the right things?" impossible to ask, because the framing contains no concept of "wrong things." The problem was not that costs were too high. It was that the cost-to-value relationship was not visible at the portfolio level, which is the only level at which the right question can be answered.


The Intervention

The assessment took three weeks. The framework had to be designed before the assessment could begin, applied to all twelve initiatives in parallel, then translated into conversations that leadership could actually have without rupturing the political fabric of the firm. Each phase depended on the one before it.

Phase 1: Designing the Four-Dimension Framework

What was built: A portfolio governance framework with exactly four assessment dimensions — Strategic Alignment, Delivery Health, Architectural Integrity, and Cost-to-Value Ratio. Strategic Alignment asked whether the initiative connected to a stated business objective, and if not, why it existed. Delivery Health asked whether the team was shipping working software on a predictable cadence — not whether they were busy, but whether they were delivering. Architectural Integrity asked whether the initiative was creating technical debt that other teams would pay for later, or whether it was contributing to shared infrastructure that other teams would benefit from. Cost-to-Value Ratio asked about the total cost, including opportunity cost, against the measurable business value delivered.

Why this came first: Without a framework, the assessment would have produced twelve separate narratives and no comparison. The dimensions had to be chosen before any initiative was examined, because dimensions chosen after examination are dimensions tuned to produce a preferred conclusion, and the framework's defensibility in the political conversations that followed depended on leadership being able to trust that it had not been designed backwards from the answer.

The mechanism: Four dimensions was a deliberate limit. Three would have been too few to capture the distinct failure modes of a technology initiative. Five or more would have invited debate about which mattered most — exactly the debate the framework was designed to prevent. Four orthogonal axes meant every initiative could be scored on the same grid, narrow enough that sponsors could not argue the instrument instead of the outcome.

First-phase outcome: A written framework, reviewed with the CTO before any assessment began, with explicit scoring guidance for each dimension. The framework was the structural artifact that made every later step possible.
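To make the shape of that artifact concrete, here is a minimal sketch of the pre-committed instrument expressed as a data structure. The actual deliverable was a written document with prose scoring guidance; the 0-3 scale, field names, and code structure below are illustrative assumptions, not the engagement's artifact.

```python
from dataclasses import dataclass, field

# The four dimensions, fixed in writing before any initiative is examined.
# The guiding question per dimension paraphrases the framework described above.
FRAMEWORK = {
    "strategic_alignment": "Does the initiative connect to a stated business objective? If not, why does it exist?",
    "delivery_health": "Is the team shipping working software on a predictable cadence, not merely staying busy?",
    "architectural_integrity": "Is it contributing to shared infrastructure, or creating debt other teams will pay for?",
    "cost_to_value": "What is the total cost, including opportunity cost, against measurable business value delivered?",
}

@dataclass
class DimensionScore:
    score: int       # 0-3, per written scoring guidance (illustrative scale)
    evidence: str    # narrative justification attached to every score

@dataclass
class InitiativeAssessment:
    name: str
    scores: dict = field(default_factory=dict)

    def record(self, dimension: str, score: int, evidence: str) -> None:
        # Only the four pre-committed dimensions may be scored; no per-initiative custom axes.
        if dimension not in FRAMEWORK:
            raise ValueError(f"{dimension!r} is not one of the four pre-committed dimensions")
        self.scores[dimension] = DimensionScore(score, evidence)
```

The point of the sketch is the constraint it encodes: the dimensions are closed, the evidence requirement is attached to every score, and nothing about the instrument can be adjusted per initiative.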

Phase 2: Assessing All Twelve Initiatives in Parallel

What was built: A systematic assessment of all twelve initiatives against the four-dimension framework, executed over three weeks. Each initiative was evaluated through a combination of artifact review (project documentation, shipped code, delivery history, budget records) and conversations with the teams running the work. The assessment was structured to produce a score per dimension per initiative, with narrative justification attached to each score.

Why this phase depended on Phase 1: Without the framework, each assessment would have been shaped by whoever conducted it and whoever was interviewed, and the results would have been incommensurable. With the framework, every initiative got the same questions, the same evidence requirements, and the same scoring rubric. The comparability across initiatives was the whole point, and comparability only exists when the instrument is identical.

The mechanism: Parallel assessment mattered. Sequential assessment — doing one initiative at a time — would have leaked information between assessments, because the scoring of initiative three would be influenced by what had been found in initiatives one and two, and initiative twelve would be graded against a ruler that had shifted during the process. Parallel assessment kept the ruler fixed. It also compressed the political exposure — the assessment was brief enough that sponsors could not mobilize defensive responses before the full picture was ready to present.

Second-phase outcome: The uncomfortable numbers. Four initiatives scored high on all four dimensions — these were the clear keep-and-fund projects. Five scored partially and needed restructuring — the work was worth doing but the current shape of the initiative was not producing it. Three had no clear strategic connection and were consuming resources that could be reallocated to better purposes. The distribution was not dramatic; it was representative of what happens in any large portfolio that has accumulated initiatives without structural review. The numbers were the instrument. The conversation was the outcome.
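To illustrate what comparability buys, here is a self-contained sketch of the portfolio view such an assessment produces. The initiative names, scores, and classification thresholds are invented for illustration; the real rubric used written guidance rather than hard numeric cut-offs.

```python
# Every initiative scored on the same four dimensions (0-3); illustrative data only.
portfolio = {
    "infra_modernization":  {"strategic_alignment": 3, "delivery_health": 3, "architectural_integrity": 3, "cost_to_value": 2},
    "partner_integration":  {"strategic_alignment": 2, "delivery_health": 1, "architectural_integrity": 2, "cost_to_value": 2},
    "adjacent_product_bet": {"strategic_alignment": 0, "delivery_health": 2, "architectural_integrity": 1, "cost_to_value": 1},
    # ...nine more, all scored against the identical rubric
}

def classify(scores: dict) -> str:
    """Illustrative thresholds, not the engagement's actual decision rules."""
    if scores["strategic_alignment"] == 0:
        return "no clear strategic connection"
    if all(v >= 2 for v in scores.values()):
        return "keep-and-fund"
    return "restructure"

# Because every row carries the same four keys, ranking and grouping are trivial.
for name, scores in sorted(portfolio.items(), key=lambda kv: sum(kv[1].values()), reverse=True):
    print(f"{name:24s} total={sum(scores.values()):2d}  {classify(scores)}")
```

The sorting and grouping are trivial precisely because the instrument is identical across rows; that triviality is what twelve incommensurable status reports could never provide.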

Phase 3: The Hard Conversation

What was built: A single presentation to the leadership team. The framework, the scoring rubric, the evidence gathered per initiative, and the conclusions. Explicit acknowledgment that the three non-aligned initiatives each had a sponsor who believed in the project, and that those beliefs were sincere and were not the target of the discussion. The target was the structural question of whether the firm should continue funding work that the framework could not connect to stated business objectives.

Why this phase depended on Phases 1-2: Arriving at the hard conversation without the framework and the assessment in hand would have made the conversation a debate about opinions. Arriving with the framework and the assessment made it a debate about evidence. The difference is structural — opinions are defended by authority and tenure, evidence is defended or refuted on its own merits. The framework did not make the conversation easy. It made the conversation possible.

The mechanism: Presenting the framework before presenting the scores let sponsors evaluate the instrument before they knew how their initiative had scored against it. This is the structural move that takes the debate off the initiative level and onto the framework level. A sponsor who agrees with the framework in the abstract cannot reject it when their initiative scores poorly, because rejecting it at that point is visibly opportunistic. Getting the framework accepted before revealing the scores is how you make the scores stick.

Phase 4: Adoption as Ongoing Governance

What was built: The four-dimension framework became the standard for all new initiative proposals in the portfolio. Before any project received funding, it had to demonstrate alignment on all four dimensions — strategic connection, delivery plan with a predictable cadence, architectural contribution or justified debt, and a cost-to-value case including opportunity cost. The framework moved from being an assessment tool to being a funding gate.
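As a sketch of what the funding gate amounts to in practice: before any funding conversation, a proposal must carry an explicit case on all four dimensions. The field names and the all-or-nothing rule below are assumptions based on the description above, not the firm's actual proposal template.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InitiativeProposal:
    name: str
    stated_objective: Optional[str]      # which business objective the work serves
    delivery_plan: Optional[str]         # a predictable shipping cadence, not an activity plan
    architecture_case: Optional[str]     # shared-infrastructure contribution or explicitly justified debt
    cost_to_value_case: Optional[str]    # total cost, including opportunity cost, against expected value

def passes_funding_gate(p: InitiativeProposal) -> bool:
    # No funding conversation happens until all four dimensions carry an explicit case.
    return all([p.stated_objective, p.delivery_plan, p.architecture_case, p.cost_to_value_case])
```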

Why this phase came last: Adopting the framework as ongoing governance only makes sense after it has survived its first use. A framework adopted before it has been tested is a framework that gets quietly dropped when the first inconvenient case arrives. A framework that has already survived a termination decision and a restructure is a framework that has proven it can carry the weight of the decisions leadership needs it to make, which is the precondition for incorporating it into permanent process.

Constraint and tradeoff: The framework introduced ongoing governance overhead. Every new proposal now required a defensible four-dimension case before funding, which is slower than the previous sponsor-advocacy path. The firm traded approval speed for portfolio discipline — the right trade, but not free. Some good ideas were delayed because their sponsors had not yet built the cross-dimensional case, and the framework could not always distinguish delay for structural reasons from delay because the thinking had not yet happened.


Results

Twelve initiatives assessed. All twelve received a complete four-dimension score against the same instrument, producing the first comparable portfolio view the firm had ever had. This was the foundational result — every subsequent decision flowed from the comparability, and comparability had not existed before the framework was built.

Three non-aligned initiatives identified. The framework found three initiatives with no defensible connection to stated business objectives. The sponsors had each had genuine reasons for starting the work, but the reasons had been contextual and had not survived the test of explicit alignment. Identifying them was the specific output the CTO's original question required.

Portfolio governance framework adopted. The four-dimension framework became the funding gate for new initiative proposals going forward. This is the durable result. The assessment of the twelve existing initiatives was a one-time exercise; the adoption of the framework as ongoing governance is what prevented the firm from drifting back into the same pattern a year later. Governance built into process persists. Governance that relies on a one-time assessment does not.

Resource reallocation. Two of the three non-aligned initiatives were sunsetted. One was restructured with a clearer mandate and a sponsor who agreed to re-run it against the framework in six months. The engineering capacity freed by the two sunsets was reallocated to higher-scoring work. The specific business value of the reallocation depended on what the freed capacity was directed to, but the structural outcome — "capacity is now being deployed against portfolio-level priorities rather than against whichever sponsor advocated first" — was itself the result.

Counterfactual. Without the framework, the twelve initiatives would have continued running. None of them would have failed dramatically enough to force a termination decision on its own; each one individually looked defensible enough to keep funding for another quarter. The portfolio would have continued consuming engineering capacity at its current rate, and the opportunity cost — the higher-value work the firm could have done instead — would have remained invisible. The CTO's original question would have recurred periodically, and each recurrence would have produced the same outcome: no defensible basis on which to answer it, and therefore no answer. This is the quiet failure mode of technology portfolios at large firms. The framework was not the only possible instrument, but some instrument was required, and the absence of any instrument was the structural problem the firm had been living with.


The Diagnostic Pattern

The firm did not have a project management problem. It did not have a budget problem. It did not have twelve individual delivery problems. It had one portfolio-level problem: the absence of a shared instrument for comparing initiatives to each other on the dimensions that mattered.

The transferable principle is that portfolio questions cannot be answered with initiative-level data. Asking "which of these projects should we continue funding?" requires cross-project comparability, and comparability requires an instrument that was designed before any specific project was examined. Firms almost always try to answer portfolio questions by aggregating initiative-level reports, and the aggregation almost always fails because the reports were not built on a common frame. The first move is never "get better reports." It is "build the comparison framework, then use it to read the reports you already have."

The diagnostic pattern to look for is the moment when a leader asks a portfolio-level question and receives a set of initiative-level answers. This is the structural signal that the comparison framework is missing. The number of dimensions matters less than the existence of any fixed set of dimensions — four worked in this case because the firm's context called for strategic, delivery, architectural, and economic orthogonality, but the specific four are not the principle. The principle is a fixed, written, pre-committed instrument against which every initiative is evaluated the same way, and which becomes part of the funding process rather than a one-time assessment.

The same pattern has recurred across other engagements. Different industries, different scales, same structural absence. The question to ask is always the same: is there a written framework, pre-committed, against which the portfolio is assessed? If not, the portfolio is being governed by sponsor advocacy — which scales to exactly as many initiatives as the firm's political tolerance allows, and that is almost always more than the firm can afford.

Related Service

This engagement falls under my PMO & Governance practice.

View advisory engagement models
