Three facilities were coordinating through phone calls, spreadsheets, and warehouse walk-throughs. After a three-phase information architecture redesign — gated by governance checkpoints between each phase — the facilities operated against shared data systems, inter-facility coordination moved off phone trees, and the group gained the structural capacity to run predictive operations. The outcome the executive team had asked for was "digital transformation." The outcome the business actually needed was a redesign of how information traveled before decisions were made.
The starting state: a manufacturing group operating three facilities with capable plant managers, functional production, and a decision architecture that assumed everyone could see each other. At two facilities, that assumption had held. At three, it had started to break. The challenge: transform the operating model without buying software that would codify the wrong architecture.
Starting Conditions
The executive team's framing of the problem was "we need to go digital." Their operating definition of digital transformation was software procurement — a manufacturing execution system, an ERP upgrade, a quality management platform. The budget conversation had already started. The vendor shortlist was already forming. What had not happened was a diagnosis of why the current state was producing the outcomes the executives were unhappy with.
Operational state across the three facilities. Production scheduling lived in spreadsheets, updated manually by each plant's scheduling lead and emailed between facilities on a daily cadence. Inventory levels were verified by physically walking the warehouse floor — a practice that worked when the warehouse was small and the person walking it knew every SKU, and broke silently as inventory expanded and the walker changed. Quality control data was being captured, but in a system that did not talk to production scheduling, which meant quality signals could not trigger production adjustments without a human noticing the pattern and making a phone call. Inter-facility coordination — the question of which facility should run which order when demand shifted — was handled by plant managers calling each other, negotiating in real time, and recording the outcome in whichever system each of them trusted.
Scale constraint. Three facilities is the scale where informal coordination breaks. At two facilities, two plant managers can maintain a complete shared mental model through a daily call. At three, the coordination graph has three edges, which means each plant manager is holding two parallel conversations and reconciling them mentally. At four facilities, the graph has six edges and the mental reconciliation becomes structurally impossible. The group was sitting at the exact threshold where the current architecture was about to fail.
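The threshold argument is plain combinatorics: with n sites, the number of pairwise coordination channels is n(n-1)/2. A minimal sketch (the function name is mine, not from the engagement):

```python
def coordination_edges(n_facilities: int) -> int:
    """Pairwise coordination channels: edges in a complete graph on n nodes."""
    return n_facilities * (n_facilities - 1) // 2

# 2 sites -> 1 channel, 3 -> 3, 4 -> 6: each added site increases the
# reconciliation load on every existing manager, not just the new one.
```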
Budget constraint. Capital for the transformation had been approved against a software line item. Redirecting it to information architecture work required demonstrating that the software spend would have solved the wrong problem. This was not a technical argument. It was a governance argument, and it had to be made before the first purchase order.
What had been tried. A previous attempt had piloted a scheduling tool at one facility. The tool worked at that facility in isolation. It did not integrate with the other two facilities' spreadsheets, which meant the pilot facility now had two sources of truth — the new tool and the email chain — and the plant manager had quietly reverted to the email chain because it was the one the other facilities could see. The pilot was not a failure of the tool. It was a failure of sequencing: automation applied to a disconnected system automates the disconnection.
Structural Diagnosis
Three architectural problems explained why the executive team's "digital" framing was about to produce an expensive version of the current state.
The organization did not have a tooling problem. It had an information-visibility problem. The information the business needed to run existed. Production schedules were written down. Inventory counts were taken. Quality metrics were recorded. The failure was not missing data. It was that the data was captive to the facility that produced it. A plant manager at Facility A could not see the inventory position at Facility B without picking up the phone. The structural feature that made this persist was that each facility's internal rhythms were designed around the data it owned, and nobody in the group had been responsible for the flow of information between facilities. Buying a platform to replace the spreadsheets would have moved the same captive data into a prettier container. The visibility would not have changed because the architecture of who-can-see-what had not been addressed.
Decisions were being made by people who could not see the inputs those decisions depended on. Production scheduling at each facility ignored the inventory position across the group because the scheduler could not see it without a phone call, and phone calls are expensive in attention terms. Quality signals did not trigger production adjustments because the person reading the quality report and the person running the production line were not the same person and did not share a view. Inter-facility load balancing was happening through plant-manager negotiation, which meant the decision quality was capped at whatever the two managers on the call could hold in their heads. This is not a personnel problem. Capable people making decisions without the inputs those decisions require will reliably produce suboptimal outcomes. The structural fix is not to train the people harder. It is to change what the people can see before they decide.
The executive framing of "buy software" would have locked in the wrong sequence. The classic manufacturing transformation failure is to procure a predictive operations platform — the AI-driven demand forecasting, the condition-based maintenance, the integrated planning suite — before the data foundation exists. The platform requires clean, connected, trustworthy historical data as input. In a group where inventory is verified by walking the warehouse, the historical data does not exist in a form the predictive tool can consume. The tool then either delivers outputs based on bad data (worse than no tool) or sits unused while the consultants assigned to the deployment try to manufacture the clean inputs retroactively. Either way, the capital is spent and the transformation has not happened. Conventional "start with the biggest tool" approaches miss this because the vendor selection process rewards sophistication, not sequence.
The Intervention
The redesign was structured in three phases with governance gates between them. Each phase depended on the previous phase being operational — not just installed, but demonstrably producing the behavior the next phase required as input.
Phase 1: Data Foundation
What was built: Shared data systems for information that already existed. Production schedules moved from per-facility spreadsheets into a single shared system visible to all three plants. Inventory counts moved from walk-the-floor verification into a real-time tracker. Quality metrics moved from standalone reports into a dashboard accessible to production scheduling. No new processes were introduced. No new data was required. The phase was a visibility transformation, not an operational transformation.
Why this phase came first: Every subsequent phase depended on the information being visible before decisions were made. Integrating decisions across data sources requires the data sources to exist in a shared form. Predicting future state requires trustworthy historical state. Skipping to either of those phases without the data foundation would have been building on sand — the same failure mode as the earlier scheduling-tool pilot, executed at larger scale.
The mechanism: Deliberately boring. The phase did not change what anyone did. It changed what they could see. A plant manager at Facility B could now look at Facility A's inventory position without a phone call. The scheduling lead at Facility C could see the production schedule Facility A was running against without waiting for the morning email. Visibility precedes coordination, and coordination precedes optimization. The phase built the first of those three.
First-phase outcome: Within the phase's operational window, every facility was entering its data into the shared systems. The behavior that mattered — not "is the system installed" but "are people using it as the source of truth" — was the entry criterion for Phase 2.
Phase 2: Decision Integration
What was built: The connections between data sources that turned visibility into decision inputs. Production scheduling could now consume inventory levels automatically, which meant the scheduler was no longer making a decision while blind to the input that determined whether the decision was feasible. Quality metrics were wired to trigger production adjustment alerts when pattern thresholds were crossed. Inter-facility coordination moved from phone calls to shared dashboards where the state each facility needed to see was available without a synchronous conversation.
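The quality-to-production wiring described above can be sketched as a rolling-threshold trigger. Everything here (class name, window size, threshold value) is an illustrative assumption, not the system that was actually built:

```python
from collections import deque

class QualityAlertTrigger:
    """Fire a production-adjustment alert when the rolling defect rate
    crosses a threshold -- the pattern a human previously had to notice
    and act on by phone. Window and threshold are hypothetical values."""

    def __init__(self, window: int = 50, threshold: float = 0.05):
        self.samples = deque(maxlen=window)  # most recent inspection results
        self.threshold = threshold           # max tolerable defect rate

    def record(self, defective: bool) -> bool:
        """Record one inspection result; True means the alert should fire."""
        self.samples.append(defective)
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough history to trust the rate yet
        return sum(self.samples) / len(self.samples) > self.threshold
```

The design choice worth noting: the trigger stays silent until the window fills, so a single early defect cannot misroute production, which matters given the blast-radius tradeoff discussed under Phase 2.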
Why this phase depended on Phase 1: Integrating decisions across disconnected systems is impossible — there is nothing to integrate. The systems must first be visible as a single surface before the decision logic can be written against them. Running Phase 2 in parallel with Phase 1 would have meant building decision integration against a target that was still moving, which is the category of project that consumes capital and produces nothing.
The mechanism: The governance gate between Phase 1 and Phase 2 required that all three facilities be using the shared data systems for a minimum window before the integration work began. The window was not arbitrary — it was the period required to verify that the shared systems were stable enough to be the source of truth for automated decision inputs. A decision integration built against unstable data would propagate data quality problems into decision quality problems, and the group would learn to distrust the new system the same way it had distrusted the earlier pilot tool.
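The gate's entry criterion — every facility using the shared systems as source of truth for a sustained window — can be expressed as a check. The criterion, names, and window length below are my illustration; the engagement's actual gate definition is not specified at this level of detail:

```python
def phase2_gate_passed(daily_entry_log: dict[str, list[bool]],
                       required_days: int = 30) -> bool:
    """Pass the Phase 1 -> Phase 2 gate only if every facility has entered
    data into the shared systems on each of the last `required_days` days.
    Illustrative criterion; a real gate would also verify data quality."""
    return all(
        len(entries) >= required_days and all(entries[-required_days:])
        for entries in daily_entry_log.values()
    )
```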
Tradeoff introduced: Decision integration meant that errors in upstream data now had downstream consequences. A miscounted inventory item was previously contained to one plant manager's mental model. Now, a miscounted item could misroute a production order. The phase traded isolated errors for connected errors, and the net improvement depended on the error rate falling fast enough to offset the increase in error blast radius. This is why the governance gate existed — without Phase 1 data quality being verifiable, Phase 2 would have made things worse.
Phase 3: Predictive Operations
What was built: Forecasting and scheduling logic that consumed the historical data accumulated during Phases 1 and 2. Maintenance scheduling shifted from calendar-based — run this service every N weeks regardless of machine state — to runtime-based, triggered by accumulated operating hours and condition signals. Production planning shifted from gut-feel capacity estimates to demand-pattern projections. The tools and techniques that the executive team had originally wanted to buy on day one became feasible because the inputs the tools required now existed.
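The shift from calendar-based to runtime/condition-based maintenance reduces to changing the trigger predicate. A minimal sketch, with hypothetical parameter names and limits:

```python
def maintenance_due_calendar(weeks_since_service: int,
                             service_interval_weeks: int) -> bool:
    """Calendar-based: service every N weeks regardless of machine state."""
    return weeks_since_service >= service_interval_weeks

def maintenance_due_runtime(operating_hours_since_service: float,
                            service_interval_hours: float,
                            vibration_rms: float,
                            vibration_limit: float) -> bool:
    """Runtime/condition-based: service when accumulated operating hours
    exceed the interval, or a condition signal (vibration, here) crosses
    its limit. An idle machine no longer triggers unnecessary service."""
    return (operating_hours_since_service >= service_interval_hours
            or vibration_rms >= vibration_limit)
```

The second predicate only becomes computable once Phases 1 and 2 have made operating hours and condition signals part of the shared, trusted record — which is the sequencing argument of this section in one line of code.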
Why this phase came last: Predictive operations require historical data that is clean, connected, and trusted. In Phase 1, the data became visible. In Phase 2, the decisions became integrated, which meant the historical record of decisions and outcomes became coherent. Only at Phase 3 was there enough signal in the historical record to train forecasts against. Starting the transformation at Phase 3 — the "buy the predictive platform first" path the executive team had been considering — would have produced forecasts trained on data that did not yet exist in usable form.
The mechanism: The second governance gate required that Phase 2 decision integration be demonstrably reducing coordination overhead. Demonstrable meant measurable — fewer phone calls between plant managers, faster cross-facility order routing, quality adjustments happening without a human pattern-matching the report. Without that demonstration, Phase 3 would have been a continuation of the original "buy sophistication" framing. With it, Phase 3 was the natural next layer on a foundation that was already load-bearing.
Constraint and tradeoff: The governance gates slowed the transformation relative to what a big-bang software rollout would have claimed on the procurement timeline. The executive team had to accept that the transformation was sequenced against behavior change, not vendor delivery milestones. This was not free politically. Vendors promise fast deployment. Governance gates promise correct deployment. The decision to prioritize correctness was the decision that made the outcome survivable.
Results
Information visibility across facilities. After Phase 1, the three facilities were operating against shared data systems instead of private spreadsheets and walk-the-floor verification. The mechanism was not sophistication. It was deliberate de-privatization of data that had been captive to the facility that produced it.
Inter-facility coordination moved off the phone tree. After Phase 2, decisions that had previously required a plant-manager-to-plant-manager call — order routing, inventory rebalancing, quality-driven production adjustments — were being made against a shared view. The plant managers did not stop talking to each other. They stopped having to talk to each other in order for the business to function. Conversation became optional enrichment, not load-bearing infrastructure.
Governance gates prevented the predictive-first failure. The most important result is not visible in any dashboard, because it is an outcome that never happened. The executive team did not spend the original capital on a predictive platform deployed against data that could not yet support it. The gate between Phase 2 and Phase 3 explicitly blocked that path, and the gate between Phase 1 and Phase 2 had built the justification for why the blocking was necessary.
Counterfactual. Without the phased, gated approach, the most likely outcome was the platform-first procurement the executive team had originally planned. Based on the earlier pilot's failure mode — automation applied to disconnected systems automates the disconnection — the platform would have been deployed, under-used, and eventually bypassed by the plant managers who still had to get production out the door. The group would then have been in the same structural position as before the transformation, minus the capital spent on the platform. The phased approach did not just produce better outcomes than the platform-first path. It prevented the platform-first path from consuming the budget that the information architecture work required.
Framework longevity. The tiered phase structure — Data Foundation, Decision Integration, Predictive Operations, each gated by governance checkpoints — has since become a reusable template for other manufacturing and operations engagements in the portfolio. The structural principle holds across industries because the underlying failure mode it addresses is universal: organizations buying sophisticated tools before the foundational data those tools require exists.
The Diagnostic Pattern
The manufacturing group did not have a software problem. It had an information architecture problem dressed in the language of software procurement. The executives were asking "which platform should we buy?" when the question that would have unlocked the answer was "what information is captive to one facility that other facilities need in order to make good decisions?"
Digital transformation is misnamed. The digital part — the tools, the platforms, the dashboards — is commodity. The architecture that connects the tools to the decisions they inform is the transformation. When the language of a transformation project is the language of vendor selection, the project is almost always about to solve the wrong problem.
The diagnostic pattern transfers to any multi-site operation whose coordination depends on informal relationships between site leaders. The question to ask is not "what software should we standardize on?" It is: what information would each site leader need to see before deciding, and where is that information currently captive? Once the captive-information map exists, the phasing writes itself — visibility first, integration second, prediction last. The tools are secondary. The sequencing is the transformation.
Related Service
This engagement falls under my PMO & Governance practice.