
Three Acquisitions, Three Surprises: Building a Technology Due Diligence Framework

By Diosh Lequiron, Private Investment Group (Anonymized), May 2026
Key Outcomes

Zero post-acquisition technology surprises in first 2 acquisitions under framework

1 acquisition price adjusted 8% based on framework findings

2 of 3 flagged evaluations declined — prevented ~$2.1M in unplanned capex trajectory

A private investment group that had completed three acquisitions of technology-enabled businesses in eighteen months had been surprised by material technology risk in all three. Not by fraud or misrepresentation — the sellers had provided accurate information about their technology in response to questions asked. The questions asked had been the wrong questions. The due diligence process had assessed financial performance, customer concentration, market position, and management quality with systematic rigor. It had assessed technology infrastructure through a checklist that had been designed for software companies and applied to businesses where technology was operationally critical but not the primary product. The checklist had missed the category of risk — operational technology debt — that had produced the post-acquisition surprises in all three cases.

The three post-acquisition surprises were structurally similar: technology systems that appeared functional during due diligence because they were functional for their current scale and operational pattern, but that could not support the growth investment thesis without significant unplanned capital expenditure. In one case, a logistics technology system that had been adequate for 400 daily transactions could not handle the 1,200 daily transactions the investment thesis required without a platform replacement. In a second, a customer data architecture that was compliant with existing regulations could not be adapted for the expanded geographic markets the thesis assumed without a complete data layer rebuild. In a third, a technology team that looked adequate by headcount was carrying a codebase with architectural decisions that required six months of foundational work before any new feature development could proceed at a commercially viable pace.

The challenge: design a technology due diligence framework that would identify operational technology debt — the gap between a system's current capability and its capability at the scale the investment thesis required — before acquisition, across businesses where technology was operationally important but not the primary product.


Starting Conditions

The investment group evaluated five to eight acquisition targets per year and completed one to two acquisitions. The evaluation process was led by two general partners with financial and operational expertise and supported by external advisors for specific domains — legal, accounting, and commercial due diligence. Technology assessment had been conducted internally, using a checklist that the partners had assembled from their prior experience and from publicly available due diligence frameworks.

The checklist at engagement start. The existing checklist covered twelve categories: technology stack and architecture, security posture, IP ownership, development team composition and retention, software licensing, infrastructure and hosting, data backup and recovery, regulatory compliance, customer-facing system uptime history, vendor dependencies, open-source license management, and technology spend as a percentage of revenue. All twelve categories were important. None of them directly assessed the gap between the technology's current capability and the capability required by the investment thesis.

The three prior acquisitions. Analysis of the three post-acquisition technology surprises produced a consistent finding: the existing checklist assessed what the technology was, not what the technology could become. The logistics company's technology system had clean IP, strong security posture, good uptime history, and a competent technology team. All twelve checklist categories had passed. The platform's capacity ceiling at 400 transactions per day was not a category on the checklist.

The investment thesis dependency on technology scale. The group's acquisition thesis consistently involved operational improvement and market expansion — buying companies with strong unit economics and growing them by improving operations and entering adjacent markets. Both growth vectors frequently required the target company's technology to perform at a scale it had never previously been asked to reach. A due diligence process that assessed technology at current performance could not assess whether the technology was fit for the scale the thesis required.

What the sellers had disclosed. Post-acquisition analysis of the three cases confirmed that sellers had not misrepresented their technology. They had answered the questions asked. The logistics seller had disclosed the transaction volume their system currently handled without being asked what the ceiling was or what would be required to exceed it. The threshold of disclosure had been set by the questions, and the questions had not asked about scale ceilings.


Structural Diagnosis

Three structural problems explained why a rigorous due diligence process had failed to identify material technology risk in three consecutive acquisitions.

Due diligence assessed current state, not thesis-state capability gap. The existing checklist produced a comprehensive picture of the technology as it existed at the time of evaluation. The investment thesis described the technology as it would need to exist at a defined future state — higher transaction volume, expanded geographic reach, new feature requirements. The gap between current state and thesis state was the material risk, and the checklist had no mechanism for identifying it. This is the fundamental structural failure: assessing what exists rather than assessing the delta between what exists and what the thesis requires.

Scale assumptions were financial, not technical. The investment thesis was built on financial projections — revenue growth, margin improvement, multiple expansion. The financial projections contained implicit technology assumptions — that the current technology could handle projected transaction volumes, that the current data architecture could support expanded regulatory compliance, that the current development team could deliver new features at a commercially competitive pace. These assumptions were not tested during due diligence; they were inherited from the financial model without technical validation.

Technology assessment lacked an evaluator who spoke the thesis language. The technology checklist was completed by internal partners using their operational expertise. The checklist categories that had been missed — scale ceiling, thesis-state architecture gaps, development velocity at current codebase complexity — required a specific type of technical evaluator: someone who could read a technology architecture and assess its capability at a future scale, not just at current scale. The partners could identify whether a technology team was competent; they could not independently assess whether a system architecture could support 3x transaction volume without structural changes.


The Intervention

The engagement produced a technology due diligence framework over eight months — four months of framework design and testing against historical cases, and four months of deployment on live acquisition evaluations.

Phase 1: Framework Design — Thesis-State Assessment (Months 1-3)

What was built: A technology due diligence framework organized around a central question: "What does this technology need to be able to do at the scale and configuration required by the investment thesis, and what is the gap between current capability and thesis-state capability?" The framework added five assessment dimensions to the existing twelve:

Transaction scale ceiling: what is the maximum throughput the current architecture supports without structural modification?

Geographic expansion capacity: what architectural changes are required to operate in new markets or jurisdictions?

Feature development velocity: at what rate can the current team and codebase deliver new features, and what is constraining that rate?

Third-party dependency risk at scale: which dependencies work at current scale but create risk at thesis scale?

Technology team succession depth: how many team members have critical knowledge, and what is the risk of departure?

The mechanism: Each dimension was assessed not as a current-state question but as a thesis-delta question — the answer required both the current state and the thesis-state requirement, and the gap between them was the finding. A system with a transaction ceiling of 400 per day was not a finding; a system with a transaction ceiling of 400 per day against a thesis that required 1,200 per day was a finding with a quantified gap and an estimated remediation cost.
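To make the thesis-delta mechanism concrete, here is a minimal sketch in Python. The names and structure are hypothetical, and the remediation figure is illustrative; only the 400-versus-1,200 transaction numbers come from the logistics case above.

```python
from dataclasses import dataclass

@dataclass
class DimensionAssessment:
    """One framework dimension, scored as a thesis-delta rather than a current-state check."""
    dimension: str
    current_capability: float   # what the system does today, measured during diligence
    thesis_requirement: float   # what the investment thesis assumes it can do
    unit: str
    remediation_cost: float     # estimated cost to close the gap, if any

    @property
    def gap(self) -> float:
        # The finding is the delta, not the current state.
        return max(0.0, self.thesis_requirement - self.current_capability)

    @property
    def is_material(self) -> bool:
        return self.gap > 0

# The logistics case: adequate at current scale, inadequate at thesis scale.
scale_ceiling = DimensionAssessment(
    dimension="transaction scale ceiling",
    current_capability=400,      # transactions/day today (from the case)
    thesis_requirement=1200,     # transactions/day the thesis requires (from the case)
    unit="transactions/day",
    remediation_cost=750_000,    # hypothetical figure for illustration
)

if scale_ceiling.is_material:
    print(f"Finding: {scale_ceiling.dimension} short by {scale_ceiling.gap:.0f} "
          f"{scale_ceiling.unit}; est. remediation ${scale_ceiling.remediation_cost:,.0f}")
```

The point of the structure is that a current-state number alone can never produce a finding; a finding exists only relative to a thesis requirement.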

Why this came first: The framework design phase tested the dimensions against the three prior acquisitions — applying the new framework retrospectively to assess whether it would have identified the risks that were discovered post-acquisition. All three material post-acquisition surprises were captured by the new framework dimensions; none were captured by the original twelve. This retrospective validation gave the group confidence to deploy the framework on live evaluations even though it had not yet been proven through a complete acquisition cycle.

Phase 2: Evaluator Protocol and Engagement Structure (Months 2-5)

What was built: An evaluator protocol specifying which framework dimensions could be assessed by the internal partners and which required an external technical evaluator with specific expertise. Transaction scale ceiling and feature development velocity were identified as dimensions requiring a technical evaluator — they required reading architecture documentation and code in ways that could not be reliably done without software engineering expertise. Geographic expansion capacity and regulatory architecture required a combination of technical and legal expertise. The protocol specified the evaluator profile, the document requests, the interview structure, and the output format for each dimension.
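As an illustration of the protocol's shape, the mapping below sketches dimension-to-evaluator assignments in Python. The external and joint assignments follow the text above; assigning the remaining two dimensions to the internal partners is an assumption for illustration, as are the specific input and output labels.

```python
# Hypothetical sketch of the evaluator protocol: required expertise,
# document requests, and output format per framework dimension.
EVALUATOR_PROTOCOL = {
    "transaction scale ceiling": {
        "evaluator": "external technical evaluator",
        "inputs": ["architecture documentation", "code review", "load history"],
        "output": "quantified ceiling plus remediation cost estimate",
    },
    "feature development velocity": {
        "evaluator": "external technical evaluator",
        "inputs": ["codebase review", "release history", "team interviews"],
        "output": "delivery rate plus constraint analysis",
    },
    "geographic expansion capacity": {
        "evaluator": "technical and legal (joint)",
        "inputs": ["data architecture review", "regulatory mapping"],
        "output": "required architectural changes per target market",
    },
    "third-party dependency risk at scale": {
        "evaluator": "internal partners",   # assumed assignment
        "inputs": ["vendor contracts", "dependency inventory"],
        "output": "dependencies that fail at thesis scale",
    },
    "technology team succession depth": {
        "evaluator": "internal partners",   # assumed assignment
        "inputs": ["org chart", "knowledge-holder interviews"],
        "output": "critical-knowledge concentration and departure risk",
    },
}

# A per-deal engagement plan falls out of the mapping directly.
for dimension, spec in EVALUATOR_PROTOCOL.items():
    print(f"{dimension}: {spec['evaluator']} -> {spec['output']}")
```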

Why this depended on Phase 1: The evaluator protocol could not be designed until the dimensions were specified. Designing the engagement structure before the dimensions were defined would have produced a generic technology assessment structure rather than one calibrated to the specific risk categories the framework was designed to identify.

Constraint introduced: The external technical evaluator requirement increased the cost of technology assessment per evaluation and added two to three weeks to the evaluation timeline. For the two to three evaluations per year that proceeded to full due diligence, this cost was material relative to total due diligence expense. The group accepted the cost increase in exchange for the risk reduction, with the calculation that the cost of one post-acquisition technology surprise exceeded the cost of multiple evaluations.

Phase 3: Deployment and Framework Calibration (Months 5-8)

What was built: Deployment of the framework on six live acquisition evaluations over four months. After each evaluation, a debrief assessed: what the framework found, whether the findings were consistent with what post-close discovery (where applicable) confirmed, and whether any dimensions had produced false positives or missed material risks. The framework was calibrated against the first four completed evaluations before the two evaluations that proceeded to acquisition closed.

What this unlocked: The group completed two acquisitions during the deployment period — both using the new framework. In one case, the framework identified a feature development velocity constraint that would have delayed a key product initiative by approximately nine months. The finding changed the acquisition price and the terms of the management retention package. In the second case, the framework assessment was clean — no material thesis-state gap — and the group completed the acquisition with higher confidence than any previous transaction.


Results

Zero post-acquisition technology surprises in first two acquisitions under new framework. Both acquisitions completed under the framework produced technology performance consistent with the investment thesis. The framework identified material gaps in three of six evaluations; two of those three did not proceed to acquisition.

One acquisition price adjusted for identified technology debt. The feature development velocity finding in the first deployment-period acquisition produced a price adjustment of approximately 8 percent of enterprise value. The estimated cost of the identified architectural remediation was the basis for the adjustment. This was the first time the group had been able to use technology assessment findings as a specific input to acquisition price.

Decision improvement in declined evaluations. Three evaluations identified material thesis-state technology gaps. Two of those three did not proceed to acquisition — decisions that could not have been made on technology grounds alone under the prior framework. The third was modified to include a detailed remediation plan and budget before close, with seller warranties against the cost. Whether the declined targets would have produced post-acquisition surprises is counterfactual; what is demonstrable is that the framework made possible decisions the prior framework could not support.

Counterfactual. The three pre-framework acquisitions collectively required approximately $2.1 million in unplanned technology capital expenditure in the eighteen months post-close. At the group's average acquisition size, this represented a material EBITDA drag that had not been reflected in the acquisition models. A framework that prevents one such surprise pays for itself over a full investment cycle.


The Transferable Lesson

The investment group did not have a due diligence thoroughness problem. It had a due diligence frame problem — its technology assessment was built around the right questions for software product companies, applied to businesses where technology was operationally critical but not the primary value driver.

The diagnostic pattern: when due diligence processes consistently produce post-close surprises in a specific domain, the surprises are almost always in the gap between what the due diligence framework assessed and what the investment thesis assumed. The framework assessed current state; the thesis assumed future state; the gap was the risk. This gap is structural, not informational — adding more information about current state does not close a gap that is fundamentally about the distance between current capability and required capability.

The design principle for any due diligence framework: start with the investment thesis, map the specific operational capabilities the thesis requires, then assess whether the target has those capabilities at the scale the thesis assumes. This is a different analytical frame from "is this business well-run?" — which is what most due diligence frameworks assess. "Is this business well-run?" is a necessary but insufficient frame when the investment thesis requires the business to be well-run at a materially different scale or configuration from the one at which it currently operates.
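A minimal sketch of that frame, with entirely hypothetical thesis assumptions: the assessment plan is derived from the thesis rather than from a generic checklist.

```python
# Hypothetical sketch: derive the assessment plan from the investment thesis,
# not from a generic "is this business well-run?" checklist.

# Step 1: the thesis, stated as quantified operating assumptions (illustrative).
thesis = {
    "volume growth": "3x daily transaction volume within 24 months",
    "market expansion": "operate in two new regulatory jurisdictions",
    "product roadmap": "sustained new-feature delivery at a competitive pace",
}

# Step 2: the operational capability each assumption implies at thesis scale.
implied_capability = {
    "volume growth": "architecture sustains 3x throughput without replacement",
    "market expansion": "data layer adaptable to new compliance regimes",
    "product roadmap": "codebase and team support viable feature velocity",
}

# Step 3: assess the target against the implied capability, not the current state.
for assumption, target in thesis.items():
    print(f"Thesis: {target}\n  Test: {implied_capability[assumption]}\n")
```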
