
Tiered Competency Architecture for a 4-Program Graduate School

By Diosh Lequiron, PCU Graduate School, April 2026
Key Outcomes

12 core competencies standardized

40% content overlap elevated to deeper core

Assessment rubrics aligned across 4 programs

Instructor satisfaction increased

Sixteen instructors across four graduate programs were each designing courses independently, and roughly forty percent of the content across programs was overlapping without anyone having intended it. Graduates from different programs were receiving incomparable credentials under the same institutional name. After a tiered competency framework was introduced — twelve canonical core competencies, program-specific guided competencies, and explicitly autonomous electives — the four programs shared a standardized competency foundation, assessment rubrics aligned across programs, and the duplicated content was consolidated into a deeper core. The outcome the administration had asked for was "better curriculum quality." The outcome the school actually needed was governance over the boundary between what should be uniform and what should be free.

The starting state: PCU Graduate School, offering professional development programs across four distinct tracks, with capable instructors, functional individual courses, and no shared competency architecture connecting them. The challenge: bring coherence across programs without collapsing the academic autonomy that made the individual programs distinct in the first place.


Starting Conditions

The curriculum problem at PCU Graduate School was not a content quality problem. The courses were being taught by qualified instructors to engaged students, and individual course feedback was generally positive. The failure mode was at the level above individual courses — the program level, where graduates from different tracks were being awarded equivalent institutional credentials despite receiving substantially different educations.

Scale and structure. Sixteen instructors across four programs, each instructor designing courses with the autonomy that graduate-level teaching traditionally grants. Each program had its own coordinator, its own learning outcomes, and its own approach to assessment. These were not misaligned through carelessness. They were misaligned because the institutional architecture had never required alignment, and absent a requirement, every rational local actor had designed their piece of the curriculum against local considerations.

The competency invisibility problem. A student completing Program A could demonstrate mastery of a set of competencies that had been shaped by that program's instructors. A student completing Program B could demonstrate mastery of a different set, shaped by different instructors. Both students were receiving the same institutional credential. Whether the two students could be expected to have comparable baseline capabilities was an open question — not because the school had decided the answer, but because the school had never been in a structural position to ask it. The question had been implicit, and implicit questions in education accumulate into credential ambiguity over time.

The previous attempts at alignment. There had been standardization efforts. A program-level learning-outcomes document existed for each of the four programs. The documents had been written at different times by different coordinators, they used different vocabulary for overlapping concepts, and they had never been reconciled against each other. The documents were not wrong. They were parallel descriptions of parallel systems that nobody had been asked to integrate. Prior attempts to harmonize them had been framed as "update the outcomes documents," which produced new documents without producing new structure.

Political constraint. Graduate-level instruction carries a strong culture of instructor autonomy. Any intervention that read as centralized curriculum control would be perceived as encroaching on academic freedom, and would trigger the same protective response that standardization attempts trigger in any domain where autonomy is a core professional value. The framework had to earn trust by preserving the autonomy that mattered rather than overriding it.

Regional and institutional context. PCU operates in the Philippine higher education environment, which carries its own accreditation expectations, its own faculty governance norms, and its own student expectations about how graduate programs differentiate themselves. Any framework that ignored these contextual constraints would have been correct in theory and unworkable in practice.


Structural Diagnosis

Three architectural problems explained why the four programs had drifted into incomparability.

The school had been applying the same governance intensity to two different categories of work. Some of the work — the foundational competencies that every graduate should possess regardless of program — required uniformity, because allowing variation at that layer produced graduates who were institutionally equivalent and operationally different. Other work — the specific topics a program chose to emphasize, the electives that reflected instructor expertise, the experimental modules that let the faculty explore new ideas — required variation, because collapsing it would have destroyed the distinctness that justified having four programs in the first place. Governance that treated both categories uniformly produced the worst of both: uniformity on things that needed variation, variation on things that needed uniformity. The structural fix was not more governance or less governance. It was governance with the right level of differentiation.

The forty-percent overlap was a symptom of duplicated effort, not a symptom of waste. When instructors across four programs were each independently designing courses on the same foundational concepts, the overlap was evidence that the concepts were genuinely foundational — they kept recurring because every program needed them. The failure was not that the overlap existed. It was that each overlapping instance was being taught at the depth of a program-specific elective rather than at the depth of a shared canonical foundation. Students were learning the same concept four times at shallow program-specific depth, instead of once at canonical depth with four times the teaching hours consolidated into a deeper treatment. The overlap was not a cost problem. It was a depth problem disguised as a duplication problem. Conventional fixes — eliminate the overlap by assigning each topic to one program — would have made things worse, because the topics were needed by all four programs. The right move was to shift the overlapping content to the canonical tier where every graduate would encounter it, and free the program tier for content that actually differed.

Assessment rubrics varied because there was no shared reference point against which to calibrate them. An "A" grade in one program meant "the instructor judged this work to be in the top tier of submissions within that program." An "A" in another program meant the same thing relative to that other program's submissions. Neither was wrong, but they were not measuring the same thing, and the institutional credential they both contributed to was doing work neither of them could individually support. The structural cause was not grade inflation or grade variation. It was the absence of a shared competency framework against which performance could be calibrated. Rubric alignment without a shared competency foundation is rearranging labels on differently-shaped measuring instruments.


The Intervention

The redesign applied a three-tier competency architecture — directly inspired by the tiered governance pattern used in operations engagements, adapted for the specific constraints of graduate-level education. The tiers were implemented in sequence across one academic term, with each tier depending on the previous tier being operational.

Phase 1: Core Competencies (Canonical Tier)

What was built: Twelve core competencies that every graduate of the school must demonstrate regardless of which program they completed. These were the foundational capabilities — critical analysis, professional ethics, research methodology, communication, and the other structural skills that define what a graduate credential from PCU Graduate School means. Each core competency came with a standardized assessment rubric that every program used identically. No variation allowed at this layer.

Why this phase came first: The canonical tier is the load-bearing wall of the framework. Every other tier depends on it being stable, because the distinction between "content that belongs in the core" and "content that belongs in the program tier" cannot be drawn until the core itself exists. Building the program tier first would have forced each program to designate its own content without knowing what the shared baseline was going to be, which would have produced four different versions of the problem the framework was designed to solve.

The mechanism: The twelve competencies were not invented. They were extracted from the existing course catalog by identifying the topics that were already being taught across all four programs — the forty percent overlap from the diagnosis. The overlap was not discarded. It was elevated. Content that had previously been taught four times at shallow program-specific depth was consolidated into core modules taught at canonical depth, with the recovered teaching hours freed for program-specific material at the next tier. This was not a standardization initiative that added new work. It was a reorganization that moved existing work into the right structural layer.

First-phase outcome: By the end of the first phase rollout, every new student entering any of the four programs would encounter the twelve core competencies under standardized rubrics. Graduates from different programs were now meeting the same foundational bar, which was the condition the institutional credential needed in order to mean something consistent across programs.

Phase 2: Program Competencies (Guided Tier)

What was built: Between eight and ten competencies per program, specific to the discipline and focus of that program, sharing a common assessment framework but retaining program-specific content and context. Each program's competencies were defined by its coordinator and instructors within the shared framework. The framework specified the required structure — how competencies were named, how they were assessed, how they related to the core tier — without specifying the content.

Why this phase depended on Phase 1: The guided tier only works when the canonical tier is stable. If the core competencies were still drifting, the program tier would have no fixed reference to differentiate itself against, and each program would end up redefining its own quasi-core in addition to its program-specific material. With the core tier locked, the program tier could focus on what made each program distinct, because the foundation all four programs shared was already handled at the tier below.

The mechanism: Instructor workshops brought the sixteen instructors together across programs to align on the shared framework while each program maintained authority over its own content decisions. The workshops were structured to make the distinction explicit: here is where every program would look the same (framework, structure, rubric shape), and here is where every program would look different (actual content, examples, specialized focus). The workshops did not ask instructors to surrender autonomy. They asked instructors to exercise autonomy at the right layer, which turned out to be a more defensible request than the usual "please standardize everything" framing that had failed in earlier attempts.

Tradeoff introduced: The guided tier required ongoing coordination cadence — the instructor workshops had to continue beyond the initial rollout to keep the program competencies aligned with the shared framework as content evolved. The framework traded a one-time standardization cost for an ongoing governance cost. This cost was not free, and it would need to be carried by the program coordinators in subsequent terms.

Phase 3: Elective Competencies (Autonomous Tier)

What was built: Explicit recognition that a class of competencies — the emerging topics, the experimental modules, the instructor-driven specializations — did not need cross-program governance at all. These were formalized as autonomous. No institutional rubric applied. No cross-program review touched them. Instructors had full creative freedom within the broader competency architecture.

Why this phase came last: Naming what does not need governance is only possible after the work that does has been settled. Declaring electives autonomous while the core tier was still unstable would have been read as the framework giving up on alignment. Declaring electives autonomous after the core and program tiers were functioning was read as the framework deliberately stepping back — a signal of trust, not of abdication.

The mechanism: Instructors gained back explicit, documented creative authority over their elective modules. The autonomous designation was not the absence of governance. It was a governance decision about where governance should not reach. The morale dividend from the autonomous tier compensated for the discipline cost that the core tier had imposed. Instructors accepted constraint in one layer because they received real creative freedom in another, and the exchange felt fair because both sides of it had been made explicit.

Constraint and tradeoff: The three-tier model required ongoing classification decisions. When a new topic emerged in the field — a new methodology, a new regulatory framework, a new technological capability — someone had to decide which tier it belonged in. Was this new topic foundational enough to enter the core? Important enough to become a program competency? Or still exploratory enough to live in the autonomous tier? The classification decision became its own continuing governance responsibility. If classification lagged, new content would default to whatever tier was most convenient for whoever proposed it first, and the framework would silently erode.
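The classification decision above can be sketched as a simple routing rule. This is a hypothetical illustration only — the predicate names (`needed_by_all_programs`, `affects_credential`, `still_experimental`) are invented for the sketch, not drawn from the school's actual process; only the three tier names come from the framework itself.

```python
from enum import Enum


class Tier(Enum):
    CANONICAL = "core"        # uniform across all four programs
    GUIDED = "program"        # shared structure, program-specific content
    AUTONOMOUS = "elective"   # full instructor discretion


def classify(needed_by_all_programs: bool,
             affects_credential: bool,
             still_experimental: bool) -> Tier:
    """Route a newly emerging topic to a governance tier.

    Hypothetical decision rule mirroring the framework's questions:
    still exploratory -> autonomous tier; foundational and
    credential-bearing -> canonical tier; otherwise -> guided tier.
    """
    if still_experimental:
        return Tier.AUTONOMOUS
    if needed_by_all_programs and affects_credential:
        return Tier.CANONICAL
    return Tier.GUIDED


# Example: a new research-ethics regulation that every graduate must
# know routes to the canonical tier.
print(classify(needed_by_all_programs=True,
               affects_credential=True,
               still_experimental=False))
```

The point of making the rule explicit, even in toy form, is the last sentence of the paragraph above: when no such rule exists, new content defaults to whichever tier is most convenient for whoever proposes it first.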


Results

Twelve core competencies standardized across four programs. The canonical tier was operating by the end of the rollout term. Every graduate of any program was meeting the same foundational bar under the same rubric. The institutional credential now had a structural basis — not just a policy basis — for meaning something consistent across programs.

Forty percent content overlap eliminated by elevation, not subtraction. The overlap that had been a symptom of the problem became the material for the solution. Content previously taught four times at shallow depth was consolidated into the core tier and taught once at canonical depth, with the recovered teaching hours redirected to program-specific material that could now be genuinely distinct. The mechanism was structural reorganization, not curriculum reduction.

Assessment rubrics aligned across four programs. An "A" in one program now calibrated to the same competency bar as an "A" in another. This was not a grade normalization initiative. It was a consequence of the shared rubric at the core tier, which gave every program a common reference point against which to calibrate the rest of its assessment. The alignment emerged from the framework; it was not imposed on top of it.

Graduate feedback improved. Students now understood what each program offered and why the programs were distinct. The differentiation was explicit rather than implicit, which meant students could make informed choices about which program fit their goals. This was the outcome the administration had originally been asking for when they said "better curriculum quality" — coherence across the institution, distinctness across the programs, comprehensibility for the people enrolling in them.

Instructor satisfaction increased. This was the counterintuitive result. A framework that imposed new standardization at the core tier nonetheless produced higher instructor satisfaction, because it clarified expectations rather than constraining creativity. The autonomous tier gave instructors real creative freedom for elective work. The guided tier gave them a shared structural language for program-specific work. The canonical tier took the foundational content off their individual plates and consolidated it into shared teaching load. The discipline cost was real, and the autonomy dividend was larger.

Counterfactual. Without the tiered framework, the most likely trajectory was continued drift. Each program would have continued designing courses independently, the forty-percent overlap would have continued being taught four times at shallow depth, assessment rubrics would have continued varying, and the institutional credential would have continued meaning different things depending on which program had issued it. Over enough time, this trajectory reaches accreditation concerns and reputational risk. The framework did not just improve the curriculum. It prevented the accumulating incoherence from reaching a point where external parties would begin asking hard questions about what the credential guaranteed.


The Diagnostic Pattern

The school did not have a curriculum content problem. It did not have a teacher quality problem. It had a governance-tier problem — the same structural problem that shows up in multi-site operations, software engineering organizations, and any federation where local autonomy and institutional coherence must coexist.

The insight is that "more standardization" and "less standardization" are not the right axis. The right axis is which work belongs in which tier. Canonical tier: the work where divergence directly damages the institutional credential or output. Guided tier: the work where shared structure matters but local content must vary to serve local context. Autonomous tier: the work where governance applied at all would be pure overhead and would destroy the creativity that justifies having local autonomy in the first place. Treating these three categories with the same governance approach is the error that produces the oscillation between over-governance and chaos — in graduate schools, in operations teams, in software platforms, and in every federated system that has ever tried to balance coherence with autonomy.

The diagnostic pattern transfers to any institution where multiple local units share an institutional credential that is supposed to mean something consistent. The question to ask is not "how do we standardize more?" or "how do we preserve autonomy?" It is: which elements of what we do require uniformity to keep the credential meaningful, which elements need shared structure with local content, and which elements should be explicitly protected from cross-unit governance? Once those three categories are separated, the framework designs itself. Until they are, every alignment initiative will reproduce the drift it was intended to prevent.

Related Service

This engagement falls under my PMO & Governance practice.
