
HavenWizards: A 60-Module Technology Platform That Teaches by Building

By Diosh Lequiron · HavenWizards (Internal Product) · April 2026
Key Outcomes

60+ production-grade modules built

Three-tier progression with structural gates between tiers

Milestone-based pricing model replaced failed subscription

More than sixty production-grade modules, a three-tier progression system with structural gates between tiers, and a milestone-based pricing model that charges the learner only when they advance. HavenWizards is not an education platform. It is a technology company that teaches through building — and the distinction matters because it determines every architectural decision the product makes.

The starting state: a Southeast Asian developer talent pool that had been trained on syntax and frameworks but not on systems, and a first-iteration subscription product that had failed for reasons that were structural, not marketing.

The challenge: redesign the platform so the architecture it teaches and the architecture it runs on are the same system, and so the business model aligns payment with learner progress rather than time spent stuck.


Starting Conditions

HavenWizards is a venture I own and operate. The client is the product itself, which means the constraints, the budget, and the honest diagnoses all sit on my side of the table. That does not lower the rigor required; if anything, it raises the bar, because there is no one else to blame if the architecture is wrong.

The problem I kept seeing. Every developer I worked with across Southeast Asia hit the same ceiling. Tutorials taught them syntax. Bootcamps taught them frameworks. Nobody taught them systems. The gap was visible in a consistent set of failure patterns: engineers who could build a feature but could not design a migration strategy, teams that could ship a prototype but could not build a deployment pipeline, organizations that had working software but no operational governance. None of these gaps were about effort or intelligence. They were about the absence of a learning vehicle that operated at the system level.

The first-iteration failure. The first version of HavenWizards was a conventional subscription product. Monthly fee, curriculum access, self-paced modules. It failed inside the first cohort for a reason that was structural rather than cosmetic: learners who got stuck felt they were paying for stagnation. Churn concentrated at the exact points where the curriculum was working hardest — the transitions where a learner had to move from component-level thinking to integration-level thinking. The pricing model punished the learner for hitting the real learning boundary and rewarded the product for keeping learners just engaged enough not to cancel. That is the wrong incentive shape for a product that claims to teach systems.

Scale and resource constraints. This is an internal product. There is no venture capital funding a transformation. Every architectural decision had to justify itself against the same question I apply to client engagements: does this produce measurable operational improvement without proportional cost scaling? The product had to be built and run on commodity infrastructure, with a stack I could maintain as the solo architect while the learner-facing surface grew.

What a conventional rebuild would have looked like. Add more courses, lower the price, improve the marketing, run cohort-based drops. All of these treat HavenWizards as an education product and all of them leave the underlying architecture untouched. The subscription iteration had failed because of structure, not execution. A better execution of the same structure would have failed in the same place.


Structural Diagnosis

Three architectural problems explained why the first iteration had failed and why most developer education products hit the same ceiling.

The content-over-systems failure. Education platforms are built to deliver content — lessons, videos, exercises — and measure success by content consumed. Systems are not content. A developer does not learn to design a deployment pipeline by watching a video about deployment pipelines. They learn by designing one, failing, and designing a better one. Content delivery is a fundamentally different product category from systems practice, and treating them as the same produces platforms that look educational but do not produce systems thinkers. Conventional fixes — better videos, more examples, hands-on exercises — do not close the gap, because exercises are not systems either. An exercise has a defined answer. A system has tradeoffs.

The progression-without-gates failure. Self-paced curricula assume learners will advance when they are ready. In practice, learners advance when the course unlocks the next module, regardless of whether they have internalized the previous one. This produces the characteristic developer-bootcamp outcome: a graduate who has touched every topic and mastered none. The missing structural element is a gate — a point at which advancement requires demonstrated competency rather than completion of content. Without gates, progression is an illusion, and the illusion is what creates the later operational failure when the graduate encounters real-world systems that do not accept "I completed the module" as evidence of capability.

The misaligned pricing model. A subscription charges the learner for time. A product that teaches systems succeeds only if the learner progresses. Those two outcomes are not aligned. When a learner is stuck, the subscription product earns revenue while the learner experiences stagnation. The product has a financial incentive to optimize for engagement rather than progression — because engagement pays even when progression stalls. This is not a marketing problem that can be fixed with better copy. It is a structural problem in the incentive architecture. The pricing is part of the system.


The Intervention

The redesign followed a dependency sequence across four architectural layers. Each layer depended on the previous one being stable before it could be added.

Phase 1: Module Architecture — Systems, Not Exercises

What was built: Sixty-plus production-grade modules. Each module is a self-contained project with a real-world application. Not exercises. Deployed systems with production concerns: error handling, data validation, performance budgets, security boundaries. Every module has to survive being treated as if it were a client deliverable, because teaching systems thinking requires the learner to operate under the same constraints real systems operate under.

Why this came first: Nothing else in the platform works if the units of learning are exercises rather than systems. The progression tiers, the structural gates, the pricing model — all of these are downstream of the module definition. If the modules are content, the platform is an education product regardless of what the other layers do. Building the progression system before the module architecture would have produced a more elaborate scaffolding around the same underlying mistake.

The mechanism: Each module is specified as a deployable unit with explicit production concerns. The learner does not mark it complete when they have read the material. They mark it complete when the system runs, handles error cases, passes its own quality gates, and produces the measurable outcome the module was defined to produce. The learning happens in the gap between "it works on my machine" and "it is a system" — which is exactly the gap bootcamp graduates fall into when they reach a real job.
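The completion rule above can be sketched as a pure function. The type and field names here are hypothetical stand-ins, not the platform's real API; the point is the shape — completion is the conjunction of every production concern, and there is no "mark as read" path:

```typescript
// Hypothetical shape of a module's completion checks. Each field maps to
// one of the production concerns named above.
interface ModuleChecks {
  deployed: boolean;           // the system actually runs, not just "on my machine"
  errorCasesHandled: boolean;  // failure paths are designed, not ignored
  qualityGatePassed: boolean;  // the module's own quality gates (tests, budgets)
  outcomeMeasured: boolean;    // the measurable outcome the module was defined to produce
}

// A module is complete only when every check passes. Reading the material
// does not appear anywhere in this function — by construction.
function isModuleComplete(checks: ModuleChecks): boolean {
  return (
    checks.deployed &&
    checks.errorCasesHandled &&
    checks.qualityGatePassed &&
    checks.outcomeMeasured
  );
}
```

Because completion is a conjunction, a learner who skips any production concern is blocked by the same rule regardless of which concern it was.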

Phase 1 outcome: A catalog of modules that each function as a small operational system in its own right. This catalog became the substrate everything else in the platform was built on.

Phase 2: Three-Tier Progression with Structural Gates

What was built: Three tiers — Foundation (individual component mastery), Integration (connecting systems), Architecture (designing systems from scratch). The tiers are not difficulty levels. They are different kinds of thinking. Foundation teaches the learner to make one thing work. Integration teaches them to make multiple things work together under constraint. Architecture teaches them to decide what things should exist at all. Between each tier there is a structural gate: advancement requires demonstrated competency through deployed work, not through content completion.

Why this phase depended on Phase 1: Tiers assume the underlying unit of work is substantive enough to carry a progression. Tiered progression over exercises is theater — any exercise can be labeled "advanced" without becoming one. Tiered progression over systems is load-bearing, because the difference between a Foundation-tier system and an Architecture-tier system is a real difference in the design decisions the learner had to make.

The mechanism: The gate between tiers is the same mechanism I use in enterprise governance engagements — a structural check that cannot be bypassed by effort or negotiation. The learner cannot advance to Integration until the Foundation modules they have built pass the tier-exit criteria. The gate is the mechanism, not the assessment. Assessments can be negotiated around. Gates that block the next tier cannot.
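A minimal sketch of that gate, assuming a pass-count threshold as the tier-exit criterion. Every name here — the types, the `REQUIRED_PASSES` constant, the tier ordering — is an illustrative assumption, not the platform's real threshold. What the sketch shows is the structural property: advancement is a pure function of demonstrated work, and the function has no parameter through which effort or negotiation can enter:

```typescript
type Tier = "foundation" | "integration" | "architecture";

// One built module and whether it passed its tier-exit criteria.
interface ModuleResult {
  tier: Tier;
  passedExitCriteria: boolean;
}

// The fixed tier ordering; Architecture is terminal.
const NEXT_TIER: Record<Tier, Tier | null> = {
  foundation: "integration",
  integration: "architecture",
  architecture: null,
};

// Assumed number of passing modules required to exit a tier (illustrative).
const REQUIRED_PASSES = 3;

// The gate: returns the next tier if enough current-tier modules have
// passed their exit criteria, otherwise null. There is no override path.
function gateOpen(current: Tier, results: ModuleResult[]): Tier | null {
  const passes = results.filter(
    (r) => r.tier === current && r.passedExitCriteria
  ).length;
  return passes >= REQUIRED_PASSES ? NEXT_TIER[current] : null;
}
```

The design choice is that the gate is the only way forward: there is no admin flag, no manual promotion, no "assessment score" a learner can argue about — exactly the property that distinguishes a gate from an assessment.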

Phase 2 outcome: A learner path that produces the characteristic pattern I had been trying to manufacture: a developer who can demonstrate the difference between component work, integration work, and architecture work, because they have actually done each kind under real constraint.

Phase 3: The Stack as Its Own Lesson

What was built: The platform runs on a Turborepo monorepo, Remix on the frontend, Supabase for data and authentication, Tailwind CSS with a design token system, and GitHub Actions for CI/CD. The stack is not a deployment detail. It is deliberately the same kind of stack the learner will encounter in production environments, and the platform runs on the same governance system it teaches.

Why this phase depended on Phases 1-2: A production-grade stack underneath a content-only product is wasted infrastructure. The stack only becomes pedagogically load-bearing once the modules and the progression system demand real operational behavior from the platform. The learner observing the platform is meant to see a working example of the same patterns the modules teach. That only has value when the modules are teaching those patterns in the first place.

The mechanism: Every architectural decision I make for clients — tiered governance, structural gates, progressive complexity — I had to implement in the platform itself before I could justify teaching it. The governance layer that prevents learners from skipping tiers is the same gate-based system I design for enterprise operations. The module structure follows the same bounded-service pattern I use for SaaS architecture. The CI/CD pipeline enforces the same quality standards I build for client teams. The platform is both a product and a proof of concept. If the principles do not work for HavenWizards, I do not teach them.

Tradeoff introduced: A production-grade stack for a learning product carries maintenance cost that a lighter stack would not. I accepted this cost explicitly, because the alternative — a simpler stack that could not model the patterns the modules teach — would have broken the coherence between what the platform teaches and what the platform is.

Phase 4: Milestone-Based Pricing — Incentive as Architecture

What was built: The subscription model was retired. Pricing became aligned with progression. The learner pays when they advance through a tier gate. If they are stuck, they do not pay while they are stuck. The platform only earns when the learner makes the kind of progress the platform exists to produce.

Why this phase came last: Milestone-based pricing is only honest when the milestones mean something. Applying this pricing model to the content-and-completion platform the first iteration had been would have produced a worse product — learners paying for the illusion of advancement. Only after tiers and gates were real could pricing be tied to them.

The mechanism: The pricing model is itself a systems architecture decision. The incentive structure IS the system. By aligning the moment of payment with the moment of genuine progression, the platform and the learner have the same goal: advancement. This eliminates the failure mode of the subscription model, where the product earns when the learner stalls. It also eliminates the marketing failure mode, where the product has to work harder to keep a stalled learner from canceling than to help them progress.
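The incentive alignment can be made concrete with a hedged sketch: billing is derived only from gate-passage events, never from elapsed time. The event shape and the per-tier prices below are illustrative assumptions, not HavenWizards' actual price points:

```typescript
type Tier = "foundation" | "integration" | "architecture";

// A progression event: a learner passed a gate and entered a tier.
interface GatePassed {
  learnerId: string;
  enteredTier: Tier;
}

// Assumed per-gate prices, for illustration only.
const TIER_PRICE: Record<Tier, number> = {
  foundation: 0,      // entry tier: no charge to begin
  integration: 149,   // charged on passing the Foundation gate
  architecture: 249,  // charged on passing the Integration gate
};

// Revenue is a pure function of progression events. A stalled learner
// produces no events and therefore no charges — the subscription failure
// mode ("earn while the learner stalls") is unrepresentable here.
function billableAmount(events: GatePassed[]): number {
  return events.reduce((sum, e) => sum + TIER_PRICE[e.enteredTier], 0);
}
```

Note what is absent: there is no timestamp anywhere in the billing path. Time spent stuck cannot generate revenue because it cannot generate an event.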

Constraint and tradeoff: Milestone pricing requires the tier gates to be genuinely difficult to pass. If the gates soften under commercial pressure — if the product is tempted to let more learners through to increase revenue — the pricing model stops working as an architecture and becomes a marketing label. The gates have to be defended against the platform's own short-term revenue interest. This is the same discipline I apply to client governance frameworks: the gate only works if it is willing to block.


Results

Module catalog. More than sixty production-grade modules built and maintained. Each is a self-contained deployable system with its own production concerns rather than an exercise. The volume matters less than the coherence — every module is specified, built, and gated to the same standard, which is what makes the tier progression structurally meaningful.

Three-tier progression in production. Foundation, Integration, Architecture — live and enforced with structural gates between tiers. The gates are the mechanism that produces the characteristic learner pattern the platform was built to produce: developers who can demonstrate the difference between the three levels of system thinking because they have been required to operate at each.

Pricing model validated. Milestone-based pricing replaced the failed subscription model. The revenue and the learner outcome are now pointed in the same direction. The product earns when the learner advances, and does not earn when the learner is stuck. The alignment removed the perverse incentive that had defined the first iteration and that defines most developer education products.

The platform runs on its own governance. This is the result that matters most internally. The same stack decisions, the same structural gates, the same tiered governance I apply in client engagements are the same decisions the platform itself runs on. If those decisions fail in production, I am the first person to see it — which means the platform functions as a continuous pressure test against its own curriculum.

Counterfactual. Without the rebuild, HavenWizards would have continued as a subscription developer-education product in a market already saturated with subscription developer-education products. The first-iteration churn pattern was not fixable with better marketing. It was structural: the pricing model and the progression model were at war with each other, and the product had a financial reason to let the war continue. Left alone, the platform would have slowly fragmented into the same failure mode every bootcamp eventually reaches — graduates who touched every topic and mastered none, operating under the illusion that touching and mastering are the same thing. The rebuild did not make HavenWizards a better education product. It moved HavenWizards out of the education category into a different one, which is the only move that would have worked.


The Diagnostic Pattern

HavenWizards did not have a content problem. The first iteration had content. More content would not have fixed it. The platform had an architecture problem in which three layers — modules, progression, and pricing — were operating on assumptions that contradicted each other, and the subscription pricing was loudest about hiding the contradiction.

The transferable insight is structural: when a product teaches a skill, the product's own architecture has to demonstrate the skill. A platform that teaches systems thinking cannot be built on content delivery and hope that the content carries the message. The platform is the message. If the platform is architected like an education product, the learner internalizes education-product patterns regardless of what the modules say. If the platform is architected like a system — with gates, tiers, deployable units, and an incentive structure that aligns payment with progression — the learner internalizes systems patterns whether or not they read the accompanying text.

The same logic applies in any product category where the product's claim about itself is a claim about how it is built. The question to ask is not "what do we teach?" It is: does the architecture of the product contradict the architecture we are trying to teach? If it does, no amount of curriculum will close the gap. If it does not, the curriculum becomes the second-loudest thing the product is saying, and the loudest thing becomes the product itself.
