Articles

Practical thinking for delivery leaders.

Short-form perspectives on delivery discipline, governance, vendor control, and what tends to go wrong in live programs.


Why Most PMOs Fail in Year Two (And What to Do About It)

PMOs that survive the first year often collapse under their own bureaucracy by year two. The problem isn't process - it's purpose drift. Here's how to keep your PMO delivering value beyond the honeymoon phase.

The first year of a PMO is usually the easiest. There is a mandate, there is energy, and there is a clear problem the PMO was built to solve. Templates get designed, governance structures get established, and the organization begins to feel the benefit of having someone watching delivery from a broader vantage point. Then year two arrives.

What changes is rarely visible at first. The PMO is still operating. Reports are still going out. Steering committees are still running. But something has shifted. The conversations that used to happen informally - where a PM would call the PMO lead to think through a decision - stop happening. Project teams begin treating PMO processes as checkbox exercises rather than decision-support tools. Senior leaders start asking, in more polite language, what the PMO is actually for.

The Purpose Drift Problem

Purpose drift happens when a PMO is so focused on sustaining its own processes that it loses sight of the delivery outcomes it was designed to protect. This is not a malicious shift. It is usually driven by the natural tendency of any organizational function to expand its scope, standardize its outputs, and protect its position in the governance structure.

The signals are recognizable once you know what to look for:

  • Status reports are produced on time but no one acts on them
  • Risk logs are maintained but escalations rarely happen
  • The PMO knows about problems that executives have not heard about
  • Template compliance becomes a measure of PMO success
  • PMs view PMO reviews as administrative friction, not delivery support

"The PMO that outlasts its usefulness is often indistinguishable from one doing good work. The difference only becomes visible when a delivery goes sideways and the PMO had no early warning system that worked."

What a Year-Two Reset Looks Like

The organizations that arrest purpose drift do three things consistently. First, they return to the delivery outcomes that originally justified the PMO - on-time delivery rates, escalation speed, decision turnaround, go-live predictability - and measure the PMO's contribution to those outcomes directly.

Second, they differentiate between compliance and contribution. A PMO that enforces templates is doing governance administration. A PMO that identifies a scheduling conflict before it becomes a delay is delivering value. These are different activities, and most PMOs inadvertently drift from the second toward the first.

Third, they give the PMO lead genuine authority to escalate and be heard - not just the process authority to send a report, but the organizational credibility to raise a concern and have a decision-maker act on it within a defined window. Without that credibility, the PMO is just documentation overhead.

The Right Questions at the Two-Year Mark

If your PMO is approaching or past the two-year mark, the diagnostic questions are straightforward: Can the PMO lead name the three highest-risk active deliveries right now without checking a report? In the last quarter, how many times did the PMO flag a concern that was acted on before it became a crisis? Are project teams asking the PMO for help, or are they treating it as a process obligation?

The answers to those three questions will tell you more about PMO health than any governance audit. A PMO that is drifting will struggle to answer them confidently. A PMO that is still delivering value will have specific examples at the ready.

Year two is not where PMOs go to die. But it is where the ones without intentional governance of their own purpose begin the slow drift toward irrelevance. The antidote is simpler than most leaders expect: stay close to the delivery floor, stay relevant to the people trying to ship, and never let template production become the primary measure of success.

Salesforce Is Not a Strategy: What Government Clients Get Wrong

After delivering Salesforce programs across three provincial government ministries, the pattern is clear: technology is rarely the problem. Change management, data governance, and stakeholder alignment are where programs live or die.

When a government ministry decides to modernize a legacy system using Salesforce, the conversation usually starts in the right place - with a clear description of what is broken and what outcomes better technology could enable. The conversation often ends there too, with the assumption that selecting the platform is the hardest decision and the rest is execution.

It is not. The platform decision is usually the easiest part. What follows it - the behavioral change, the data cleanup, the political navigation, the adoption work - is where most government Salesforce programs quietly fall apart.

The Configuration Trap

Government Salesforce implementations tend to get captured by configuration scope early. A business analyst documents a process. A developer builds it. Then a stakeholder in a different branch says the process is slightly different for their cases. Then another. Before long, the system is a highly customized mirror of the manual processes it was supposed to replace - with all of their complexity, but now embedded in software that is expensive to change.

The root cause is almost always the same: requirements were gathered from individual contributors who described their current workflow, not from program leads who could describe the intended future state. Salesforce becomes a digital replica of old behavior rather than an enabler of new behavior.

What this looks like in practice

  • Dozens of custom fields that replicate data already captured elsewhere
  • Approval workflows that match legacy authority matrices no one has reviewed in five years
  • Reports built to mirror existing spreadsheet structures rather than decision needs
  • Data entry screens laid out to match paper forms, not to support efficient digital processing

The Data Governance Gap

The second failure pattern is less visible but more consequential. Government programs run on data. Case records, client histories, decision audit trails, compliance logs - this information is the operational foundation of what a ministry does. When a Salesforce program launches without a clear data governance framework, the system becomes a new container for old data chaos.

I have seen programs go live with duplicate client records in the thousands, with source-of-truth disputes between the new Salesforce system and legacy platforms that were supposed to have been decommissioned, and with reporting outputs that could not be reconciled because field definitions had not been standardized before migration. These are not minor technical issues. They are program credibility problems that surface in audit environments and legislative review processes.

"A Salesforce go-live is not a transformation milestone. It is an infrastructure event. The transformation milestone is the first reporting cycle where decision-makers trust the data without checking it manually."

Where Change Management Gets Underestimated

Government organizations are not resistant to change because of individual stubbornness. They are resistant because the incentive structures that govern how people are evaluated, what counts as a compliant decision, and what constitutes documentation of record are slow to update. When a new system launches, the old behaviors persist not because users reject the tool but because the accountability framework has not shifted.

Effective change management in a government Salesforce program requires more than training sessions and user guides. It requires visible sponsorship at the director or ADM level that signals behavioral expectations have changed. It requires supervisors who understand the new system well enough to hold their teams accountable to it. And it requires a hypercare period that is genuinely resourced - not a 30-day window where the implementation team is already rolling off to the next project.

Making the Program Work

The programs that deliver durable outcomes share a few characteristics that are visible before go-live. The executive sponsor is engaged beyond governance attendance - they understand the change they are sponsoring and they actively communicate it to their organization. The business process owner has signed off on future-state designs, not just current-state documentation. Data migration has been validated by end users, not just by technical teams running row counts. And the change management workstream has a dedicated resource who is not doubling as a requirements analyst.

Salesforce is a powerful platform. In the right conditions, it genuinely does transform how government services are delivered and tracked. But the platform is not the program. The program is everything around the platform - the process design, the data strategy, the adoption work, the governance. Get those right, and Salesforce delivers. Get them wrong, and you will be back in two years discussing why the modernization program needs to be modernized.

The Fractional CIO Model Is Growing - Here's Why CFOs Love It

Organizations across Canada are discovering that full-time CIO salaries do not always make sense when senior IT strategy can be engaged on a fractional basis - with higher accountability and lower overhead.

The fractional executive model has been well established in the CFO space for years. Small and mid-sized organizations have long recognized that hiring a full-time Chief Financial Officer for a growth-stage business or a project-intensive phase is often more expensive than the value it delivers. The same logic is now arriving in technology leadership - and the uptake in Canada's mid-market and public sector is accelerating.

What the Fractional CIO Actually Does

A fractional CIO is not a consultant who makes recommendations and leaves. The distinction matters. Fractional CIO arrangements are structured around ongoing accountability - typically a defined number of days per month over a multi-month engagement - where the individual sits inside the leadership function, not outside it.

The scope covers the work that an in-house CIO would carry: technology roadmap oversight, vendor governance, digital transformation program leadership, IT budget review, and the strategic advisory work that ties technology decisions to business outcomes. The difference is that this capacity is engaged at a fraction of the cost, with the flexibility to scale up or down as program cycles require.

Why CFOs Respond to This Model

The CFO interest in fractional CIO arrangements is not primarily about cost savings, though that is part of it. It is about accountability structure. A fractional engagement comes with a defined scope, defined deliverables, and a clear offboarding point. There is no tenure risk, no severance exposure, and none of the organizational politics that accumulate around a permanent executive hire.

  • Salary plus benefits for a senior CIO in Canada ranges from $180,000 to $280,000 annually
  • A fractional arrangement for comparable strategic coverage typically ranges from $60,000 to $120,000 per year
  • The fractional model is fully variable - it can be suspended, reduced, or expanded as business conditions change
  • Fractional CIOs typically bring cross-sector exposure that a single-organization hire cannot match

"The CFOs I have spoken with are not primarily motivated by the hourly rate difference. They are motivated by the ability to have a clearly scoped technology leadership function that can be held to outcome-based accountability from day one."
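The cost argument above reduces to simple arithmetic. A minimal sketch, using only the ranges cited in the list (illustrative figures, not a quote for any specific engagement):

```python
# Annual cost ranges cited above, in CAD per year.
fulltime_cio = (180_000, 280_000)   # salary plus benefits, full-time hire
fractional_cio = (60_000, 120_000)  # comparable strategic coverage, fractional

# Savings at the low and high ends of the two ranges.
low_savings = fulltime_cio[0] - fractional_cio[0]    # 120,000
high_savings = fulltime_cio[1] - fractional_cio[1]   # 160,000

print(f"Annual savings range: ${low_savings:,} to ${high_savings:,}")
```

Even at the conservative end, the delta funds a full diagnostic engagement several times over, which is why the conversation with finance tends to be short.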

Where the Model Works Best

Fractional CIO engagements deliver the most value in three organizational contexts. The first is the growth-stage organization that has outgrown its IT function but is not yet ready to build a full technology leadership team. The second is the organization going through a defined transformation - a platform modernization, a major vendor transition, or a systems integration initiative - that requires elevated technology leadership for 12 to 24 months. The third is the organization that has experienced a technology leadership departure and needs interim coverage while it defines what the permanent role should look like.

Government-adjacent organizations and regulated industries are also discovering that a fractional CIO arrangement can provide the governance credibility required for board and audit committee oversight without the full cost of a permanent executive. In environments where IT spend is scrutinized closely, having a named senior leader accountable for technology strategy is often required - and fractional arrangements satisfy that requirement efficiently.

What to Look for in a Fractional CIO

The qualities that matter most in a fractional arrangement are different from what drives a permanent CIO hire. Domain depth in the organization's specific technology environment matters less than the ability to establish credibility quickly, navigate executive relationships efficiently, and build functional oversight of delivery across multiple concurrent initiatives. The fractional CIO who succeeds is usually someone who has held delivery accountability at the program level before moving into strategic roles - which means they understand how decisions made at the leadership table land in the delivery environment.

For organizations evaluating this model, the practical starting point is a scoped diagnostic - a 30-to-60-day engagement that assesses current technology posture, identifies the highest-priority governance and strategy gaps, and produces a recommendation for the ongoing engagement structure. This allows both parties to establish fit before committing to a longer arrangement, and it gives the organization a clear artifact of value from the very first engagement.

Project Governance That Works - Balancing Structure with Agility

A grounded view of how to build enough control to protect delivery without creating ceremony that slows teams down.

Most discussions about project governance start with a framework. PRINCE2, PMBOK, SAFe - the methodology vocabulary is rich and well-documented. What rarely gets discussed is the implementation reality: that governance frameworks, when applied without judgment, produce the exact delivery problems they are supposed to prevent.

The organizations with the strongest delivery track records are not the ones with the most rigorous governance documentation. They are the ones whose governance structures are calibrated to the actual risk profile of their programs - lighter where risk is manageable, heavier where the consequences of a missed decision are significant.

The Problem with Symmetric Governance

Symmetric governance applies the same level of control to every project regardless of its complexity, scale, or strategic importance. This is the natural output of a PMO that is trying to be fair and consistent - but it produces distorted incentives. Small projects get buried in documentation requirements that consume more time than the work itself. Large, high-risk programs get the same standard template coverage as a ten-person internal initiative, which means they are systematically under-governed relative to the risks they carry.

The practical fix is tiered governance: a lightweight framework for low-complexity initiatives, a standard framework for mid-complexity programs, and an enhanced framework - with more frequent executive touchpoints, more rigorous risk escalation protocols, and more structured decision tracking - for high-stakes transformation work. Most organizations already have an intuitive version of this in practice. What they rarely have is the formal documentation that makes it defensible and consistently applied.
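Formalizing the tiering can be as simple as a documented lookup plus an assignment rule. A minimal sketch, where the tier names, controls, and thresholds are all hypothetical placeholders an organization would calibrate to its own risk appetite:

```python
# Illustrative tier definitions: each tier names the controls it carries.
GOVERNANCE_TIERS = {
    "light":    {"steering_cadence_weeks": 8, "risk_review": "monthly",  "decision_log": False},
    "standard": {"steering_cadence_weeks": 4, "risk_review": "biweekly", "decision_log": True},
    "enhanced": {"steering_cadence_weeks": 2, "risk_review": "weekly",   "decision_log": True},
}

def assign_tier(budget: int, strategic: bool) -> str:
    """Illustrative assignment rule: tier by budget, with strategically
    significant work always governed at the enhanced level."""
    if strategic or budget > 5_000_000:
        return "enhanced"
    return "standard" if budget > 500_000 else "light"

print(assign_tier(200_000, strategic=False))   # light
print(assign_tier(2_000_000, strategic=True))  # enhanced
```

The value is not in the code itself but in the fact that the rule is written down: a documented assignment rule is what makes tiering defensible and consistently applied.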

Decision Rights as the Core of Governance

At its most functional, project governance is a decision rights system. Who can authorize what? At what threshold does a decision require escalation? How quickly does an escalation need to resolve before it becomes a delivery risk? These are the questions that governance must answer clearly, and they are the questions that most governance frameworks answer vaguely.

A governance structure without clear decision rights is a reporting structure with committee names attached to it. Reporting without decision accountability produces information that no one acts on. Decision rights without reporting produces decisions that no one can track. The two must be designed together, and the design must be specific enough that a project manager in week six of a delivery knows exactly who needs to be in the room for a given type of decision and how long they have to get one.

Practical anchors for governance design

  • Define a decision taxonomy before the project kicks off - list the categories of decisions that will arise and map them to an accountable role
  • Set escalation windows explicitly - not just who escalates, but how many business days before no decision becomes a default decision
  • Distinguish between information reporting and decision reporting in steering committee design - executives should not be sitting through status updates when they need to be making calls
  • Build a governance calendar that is integrated with delivery milestones, not separate from them
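The first two anchors above combine naturally into a single check: map each decision category to an accountable role and an escalation window, then flag anything that has sat open past its window. A minimal sketch, with hypothetical categories and windows, simplified to calendar days rather than business days:

```python
from datetime import date, timedelta

# Illustrative decision taxonomy: category -> accountable role and window.
DECISION_TAXONOMY = {
    "scope_change": {"owner": "Steering Committee", "window_days": 5},
    "vendor_escalation": {"owner": "Program Director", "window_days": 3},
    "resource_reallocation": {"owner": "PMO Lead", "window_days": 2},
}

def is_overdue(category: str, raised_on: date, today: date) -> bool:
    """True if the decision has sat open longer than its escalation window."""
    window = DECISION_TAXONOMY[category]["window_days"]
    return today > raised_on + timedelta(days=window)

# A scope change raised seven days ago has blown its five-day window.
print(is_overdue("scope_change", date(2024, 3, 1), date(2024, 3, 8)))  # True
```

Run against an open-decisions list each morning, a check like this is the "default decision" clock made operational: when it fires, the escalation path activates automatically rather than waiting for someone to notice.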

Where Agility Fits

Agile delivery does not require the abandonment of governance - it requires governance that operates at agile speed. The steering committee that meets monthly cannot govern a two-week sprint cycle. The risk process that routes through three levels of approval cannot respond to a delivery signal that emerges on Tuesday and needs a decision by Thursday.

In practice, this means building a fast-track path for specific categories of decisions - scope adjustments below a defined threshold, vendor escalations within an approved protocol, resource reallocation within an approved budget envelope. These fast-track paths are not governance bypasses. They are governance design that matches the tempo of modern delivery.

The organizations that get this right have governance structures that PMs describe as useful, not burdensome. That is the signal. When the people closest to the delivery work are engaging voluntarily with governance tools - using them to accelerate decisions, surface risks, and protect scope - the governance is working. When they are filling out forms to satisfy an audit trail, it is not.

Vendor Accountability - How to Enforce SOWs Without Burning the Relationship

What disciplined vendor management looks like when quality is slipping, timelines are moving, and nobody wants a formal escalation.

The most common vendor management failure in enterprise delivery is not the absence of a contract. It is the reluctance to use one. Statement of Work documents get negotiated carefully, milestone definitions get reviewed by legal, payment terms get structured to incentivize delivery - and then, when performance slips, none of that structure gets activated because the relationship feels too important to risk.

This reluctance is understandable. Vendor relationships are genuinely complex. A large implementation vendor typically has resources that cannot be replaced mid-project. An escalation that triggers defensive behavior can consume more delivery capacity than the original problem. And there is always the hope, usually unwarranted, that the next sprint will fix what the last three have not.

The Early Warning Window

Vendor performance problems have early signals that are visible well before they become unmanageable. The first is communication pattern changes - a vendor team that was responsive begins taking longer to answer routine questions, or escalates internally before responding to direct requests from the client team. The second is delivery artifact quality shifts - not missing deadlines yet, but submitting work that requires more revision rounds than early in the engagement. The third is resourcing turnover - not always visible on the surface, but detectable when key contacts change and institutional knowledge has to be rebuilt.

None of these signals individually constitutes a performance issue. Together, they constitute a performance trend - and a trend is far easier to address than a crisis. The project manager who raises a quality concern after the first round of incomplete deliverables is having a very different conversation than the one who raises it after the fourth.

"The point of a SOW is not to enable termination. It is to enable performance conversations. The best vendor managers use contract language to anchor expectations, not to threaten consequences."

A Framework for the Difficult Conversation

When performance requires a formal conversation, the structure of that conversation matters as much as its content. Starting with the contract framing - citing clause numbers, invoking remedy provisions - immediately positions the conversation as adversarial. Starting with the delivery problem, documented specifically and without editorializing, positions it as a shared problem requiring a joint solution.

The sequence that tends to work: describe the specific delivery gap in terms of the agreed scope definition, share the downstream impact on the broader program, ask the vendor to describe what has changed in their delivery capacity or approach, and then move together to a documented recovery plan with defined checkpoints. The contract remains in the background - an implicit framework that both parties know is there - but the conversation itself stays focused on the delivery outcome.

Elements of an effective vendor recovery plan

  • A specific root cause identification, agreed by both parties
  • Named resource commitments from the vendor, not just capacity promises
  • Revised milestone dates with clear acceptance criteria
  • A weekly check-in cadence for the recovery period with a named owner on each side
  • A defined escalation trigger - if the revised milestones slip again by a defined threshold, what happens next
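The last element, the defined escalation trigger, is the one most often left vague. A minimal sketch of making it explicit, with a hypothetical slip threshold and milestone data:

```python
# Illustrative trigger: if any revised milestone slips again beyond the
# agreed threshold, the formal escalation path activates.
SLIP_THRESHOLD_DAYS = 5

revised_milestones = [
    {"name": "UAT entry", "committed_day": 40, "forecast_day": 43},
    {"name": "Go-live",   "committed_day": 60, "forecast_day": 68},
]

def needs_formal_escalation(milestones, threshold=SLIP_THRESHOLD_DAYS) -> bool:
    """True if any milestone's forecast has slipped past the threshold."""
    return any(m["forecast_day"] - m["committed_day"] > threshold for m in milestones)

print(needs_formal_escalation(revised_milestones))  # True: go-live slips 8 days
```

Writing the trigger down in the recovery plan removes the judgment call later: both parties agreed in advance what slippage activates the next step, so invoking it is not a relationship decision.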

When Formal Escalation Is Actually Required

Formal escalation - invoking remedy provisions, engaging procurement, moving toward a performance improvement notice - becomes necessary when a vendor has failed to produce an agreed recovery plan, when a recovery plan has been produced and subsequently missed, or when the vendor's behavior in performance conversations has shifted from collaborative to defensive or evasive.

At this point, the relationship frame no longer serves the delivery. The contract frame takes over, and the project manager's role is to document carefully, escalate to the appropriate organizational authority, and ensure the project's interests are protected regardless of what happens next with the vendor relationship. In these situations, the documentation from earlier performance conversations is not just useful - it is essential. The project manager who has been carefully noting what was committed, what was delivered, and what the gap was will navigate a formal escalation far more effectively than the one who has been managing performance informally.

Most vendor relationships do not reach this point. Most performance issues resolve in the early conversation window. But the ones that do not are often the ones where someone was hoping the problem would self-correct - and the cost of that hope, in schedule, in budget, and in organizational credibility, is almost always higher than the cost of the difficult conversation would have been.

Why Good Requirements Still Fail in Delivery

Requirements can be complete on paper and still collapse in execution when decision latency, weak ownership, and hidden dependencies are ignored.

Requirements documentation has become increasingly sophisticated. Modern business analysis practices produce detailed user stories, acceptance criteria, process flows, data dictionaries, and traceability matrices. On many projects, the requirements phase is the most rigorously managed stage of the entire delivery lifecycle. And yet requirements failures remain one of the most consistent sources of project overruns, scope disputes, and go-live delays.

The explanation is not that the requirements documentation is wrong. It is that requirements documentation answers a different question than delivery teams think it does. A requirements document answers the question: what was agreed to be built? It does not answer the questions that actually determine delivery outcomes: who has the authority to change what was agreed, how quickly that change can be made, and what happens to the plan when it does.

Decision Latency as a Requirements Risk

Decision latency - the gap between when a decision is needed and when it is made - is the single most under-documented requirements risk in enterprise programs. A requirements document can be complete in every formal sense while still containing dozens of decision points that have not been assigned to a named owner with a defined resolution timeline.

In a typical enterprise program, requirements decisions that require cross-functional agreement will take two to four times longer to resolve than single-owner decisions. When those cross-functional decisions land on the critical path of a development sprint, the schedule impact compounds quickly. A decision that should take three days and takes twelve creates a nine-day schedule exposure - multiplied across the number of cross-functional decisions in the program.
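The compounding is easy to make concrete. A minimal sketch of the arithmetic above, with a hypothetical set of cross-functional decisions:

```python
# Each decision that overruns its expected turnaround adds its overrun to
# the program's schedule exposure. Names and figures are illustrative.
decisions = [
    {"name": "data retention rule",   "expected_days": 3, "actual_days": 12},
    {"name": "approval threshold",    "expected_days": 3, "actual_days": 8},
    {"name": "integration contract",  "expected_days": 5, "actual_days": 5},
]

exposure = sum(max(0, d["actual_days"] - d["expected_days"]) for d in decisions)
print(f"Aggregate schedule exposure: {exposure} days")  # 9 + 5 + 0 = 14 days
```

Three decisions produce two weeks of exposure; a program with thirty cross-functional decision points on the critical path can lose a quarter this way without any single decision looking like a crisis.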

The Hidden Dependency Problem

Hidden dependencies are requirements for capabilities, data, or decisions that are not called out explicitly in the requirements documentation because they were assumed to be either obvious or someone else's problem. They surface in delivery as blockers - usually at the worst possible moment, which is typically during integration testing or user acceptance testing.

  • An integration requirement that assumes a legacy system has a capability it does not actually have
  • A data requirement that assumes clean, structured data that is actually inconsistent or incomplete
  • A workflow requirement that assumes a user role exists that has not yet been defined in the HR or access governance framework
  • A reporting requirement that assumes an organizational data standard that different business units have implemented differently

The discovery of a hidden dependency during delivery is not primarily a technical problem. It is a scope and timeline problem - because the work required to resolve it was not in the plan, and the discovery typically triggers a requirements change process that consumes capacity at exactly the point when the team is under the most delivery pressure.

Ownership as the Missing Requirement

The requirement that most projects fail to document is ownership. For each functional requirement, who is the business owner - the person who can confirm that a built feature satisfies the need it was designed to meet? For each data requirement, who is the data steward - the person who can authorize a data definition, a migration decision, or a data quality exception?

Requirements documentation without named ownership is a description of what needs to be built attached to an implicit assumption that someone will take accountability when questions arise. In practice, when those questions arise in the middle of a sprint, accountability disputes are some of the most time-consuming delivery problems a PM can face.

The fix is not complex but it is consistently under-implemented: every requirement above a defined complexity threshold should have a named business owner, a named technical owner, and a defined escalation path if those two cannot agree. This is not additional bureaucracy. It is the minimum governance structure required to protect the requirements investment the organization has already made.

The Real Cost of Scope Creep Is Not What You Think

Scope creep's financial cost is visible. The organizational cost - eroded confidence, decision fatigue, and team burnout - is what actually kills programs.

When scope creep gets discussed in project retrospectives, the conversation almost always focuses on budget overrun and schedule extension. These are the measurable consequences - the ones that appear in variance reports and get cited in post-implementation reviews. They are real, and they are significant. But they are not the most damaging effects of uncontrolled scope change.

The most damaging effects are organizational, and they accumulate quietly in ways that do not show up in project tracking tools until they reach a tipping point.

The Confidence Erosion Curve

Executive sponsors commit to programs based on a combination of the business case, the delivery team's track record, and their own judgment about the organization's capacity to absorb change. That confidence is not static. Every scope change that extends a timeline or increases a budget consumes a portion of it. The first one is usually accepted. The second is questioned. By the third, the sponsor is not asking about the specific change - they are asking whether the team has the program under control.

Once a sponsor reaches the loss-of-confidence threshold, the delivery environment changes materially. Reporting frequency increases. Governance becomes more intensive. The PM's autonomy decreases as the sponsor begins to involve themselves in decisions that would normally be delegated. All of this adds overhead precisely when the team needs to be focused on delivering, and the overhead itself becomes a drag on the delivery it was designed to protect.

Decision Fatigue at the Program Level

Scope change requests require decisions. Each decision requires someone's time and cognitive capacity - the PM to document and analyze the request, the technical lead to assess the impact, the business owner to evaluate the need, the steering committee to approve or deny. When scope changes arrive at a high enough frequency, the decision-making capacity of the program governance structure begins to saturate.

Saturated decision capacity produces predictable failure modes: decisions get deferred because no one has bandwidth to assess them properly, decisions get made without the full stakeholder input that good governance would require, or decisions get made correctly but slowly, allowing the scope change request to sit unresolved long enough that the team begins working on the change anyway in anticipation of approval.

The Team Dimension

Delivery teams on programs with chronic scope creep show recognizable patterns. Work gets started and then redirected before it is completed. Effort goes into features that are subsequently deprioritized or redesigned. Team members stop volunteering for initiatives because past experience has taught them the initiative may not survive to completion. This is not a morale problem in the soft sense - it is a rational response to working in an environment where completion rates are low and the rules keep changing.

"The highest-performing delivery teams I have worked with are not the ones with the most talent. They are the ones whose program environments gave them clear scope, clear decisions, and the reasonable expectation that what they shipped in week one would still be in the product in week twelve."

The Practical Response

Effective scope control is not about preventing change - programs operating in real organizations face legitimate change needs throughout their lifecycle. It is about ensuring that change arrives through a managed channel, gets assessed consistently, and produces a formal adjustment to the plan before work begins. The disciplines that support this are well known: a defined change control process, a named change authority at each governance tier, a scope baseline that the team can reference, and a change log that gives stakeholders visibility into the aggregate impact of all approved changes.
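The change log's value comes from aggregation: individual changes each look reasonable, and only the running total reveals the cumulative drift. A minimal sketch, with hypothetical entries:

```python
# Each approved change records its budget and schedule impact so stakeholders
# see the aggregate effect, not each change in isolation. Entries are illustrative.
change_log = [
    {"id": "CR-001", "approved": True,  "budget_delta": 45_000, "schedule_delta_days": 10},
    {"id": "CR-002", "approved": False, "budget_delta": 80_000, "schedule_delta_days": 20},
    {"id": "CR-003", "approved": True,  "budget_delta": 15_000, "schedule_delta_days": 0},
]

approved = [c for c in change_log if c["approved"]]
total_budget = sum(c["budget_delta"] for c in approved)        # 60,000
total_days = sum(c["schedule_delta_days"] for c in approved)   # 10

print(f"{len(approved)} approved changes: +${total_budget:,}, +{total_days} days")
```

Reported at every steering committee, the running total is what lets a sponsor judge the fourth change request against the full cost of the first three, rather than against zero.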

What distinguishes programs that control scope from programs that do not is usually not the existence of these tools - most programs have them. It is whether the PM uses them consistently and whether the governance structure supports their use. A PM who allows informal scope requests to bypass change control because the requestor is a senior stakeholder is eroding the very structure that protects the program. The most valuable thing a PM can do for an executive sponsor who wants to add scope informally is to walk them through the change control process - not as resistance, but as evidence that the program is being run with the discipline the sponsor's investment deserves.

Change Management Is Not Communications - And the Difference Matters

Organizations that treat change management as a communications plan are consistently surprised when adoption fails. The gap between informing people and actually changing behavior is where programs lose their return on investment.

In most enterprise programs, change management gets a line in the project plan, a resource allocation, and a set of deliverables that consists primarily of stakeholder communications, user training, and a go-live announcement. These activities are necessary. They are not sufficient. And the organizations that conflate them with change management are setting up their programs for post-go-live adoption failures that are expensive, visible, and almost entirely preventable.

What Change Management Actually Addresses

The behavioral science underpinning change management is not complicated. People adopt new behaviors when four conditions are met: they understand why the change is happening and why it matters to their work specifically; they want the change to succeed (which requires that the organizational incentives align with the desired behavior); they know how to perform in the new environment; and they have the ability to apply that knowledge under real working conditions. Miss any one of these conditions and adoption fails - regardless of how good the training was or how many communications went out.

Communications address the first condition. Training addresses the third. Most programs stop there and then wonder why the system is underutilized six months after go-live, or why staff are finding workarounds to avoid the new process, or why supervisors are still accepting work product from the old system rather than enforcing the new one.

The Desire Gap

The desire gap - the distance between users knowing about a change and actually wanting it to succeed - is where most programs lose adoption momentum. It is also the most politically sensitive gap to address, because closing it requires organizational leaders to examine whether the change they are sponsoring is one that their teams perceive as beneficial or as an imposition.

In government and regulated environments, this gap appears most often when a digital transformation is driven by efficiency targets or compliance requirements rather than by a clearly articulated service improvement. Staff who experience the new system as a constraint rather than an enabler will comply with it minimally and find every workaround available. The communications may have been excellent. The desire to adopt was never built.

Building desire requires visible leadership behavior - executives who use the new system in front of their teams, who ask for data from the new platform in meetings rather than requesting manual reports from staff. It requires early adopter programs that give influential users a sense of ownership over the new environment. And it requires an honest conversation, usually before launch, about what the change means for how work gets done and what support is available during the transition.

Capability Beyond Training

Training delivers knowledge. Capability requires knowledge applied under realistic conditions - and the gap between the two is where go-live failures most often occur. A training session that walks users through a process using test data in a controlled environment does not reliably produce staff who can process real cases, under time pressure, with unusual scenarios, the week after go-live. Closing that gap requires support mechanisms that extend beyond the classroom:

  • Shadow sessions before go-live where experienced users process real cases alongside trainers
  • Floor support during the first two to four weeks where subject matter experts are physically present or available in real time
  • Escalation paths that are shorter during the hypercare period than they will be in steady state
  • Post-go-live tracking of error rates and processing times, not just system usage statistics

The Investment That Protects Every Other Investment

The business case for every enterprise transformation program is built on an assumption about adoption - that the users who will operate the new system will actually use it, in the way it was designed to be used, within a reasonable time after go-live. Change management is the discipline that makes that assumption true. When it is treated as a communications add-on rather than a core delivery workstream, the assumption remains just that - an assumption, with an uncertain probability of being realized.

Programs that invest properly in change management - with dedicated resources, a structured approach, and clear accountability for adoption outcomes - consistently demonstrate better post-go-live utilization rates, faster time-to-proficiency, and lower hypercare costs than programs that do not. The investment is not large relative to the overall program budget. The return, measured in avoided re-training, avoided workaround management, and avoided post-go-live remediation, almost always exceeds the cost.

Why Programs Drift: The Early Signals Most PMs Miss

A program rarely fails suddenly. The signals that it is heading off course are usually visible three to six weeks before the schedule starts slipping — if you know where to look.

Post-implementation reviews of struggling programs share a consistent finding: the delivery team had access to information that, in hindsight, clearly indicated the program was in trouble. The information was there. What was missing was a structured way to interpret it and act on it before the problem compounded.

This is not a failure of intelligence or effort. It is a failure of signal recognition — the ability to distinguish between noise and the early indicators that a program's trajectory is changing.

The First Signal: Meeting Behavior

The earliest drift signal is almost never a missed milestone. It is a change in meeting behavior. When a delivery team that was engaged and direct starts becoming evasive in status meetings — giving answers that are technically accurate but incomplete, deferring questions to written follow-ups that arrive late or never, or escalating discussions that should be resolved at working level — something has changed in the delivery environment.

This pattern most often indicates that the team is aware of a problem it has not yet quantified, or is hoping to resolve a problem quietly before it surfaces. Either way, the PM who notices this shift and asks direct questions early will spend far less time managing the consequences than the PM who waits for the written evidence.

The Second Signal: Dependency Slippage

Before milestones slip, dependencies slip. A deliverable that was supposed to arrive from another team on Tuesday gets pushed to Thursday with no explanation. An approval that was scheduled for this week gets moved to next week because the approver is unavailable. A data extract that was supposed to support testing gets delayed by two days.

Each of these individually looks like a minor scheduling adjustment. Together, they constitute a dependency chain under stress — and dependency chains under stress have a predictable downstream effect on milestone dates. The PM tracking dependencies at the individual level, not just the milestone level, will see this coming two to three weeks before the schedule report shows it.
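Tracking dependencies at the individual level can be as lightweight as recording every promised date a dependency has had and counting how often it moved later. The sketch below is illustrative only - the class and threshold are assumptions, not a standard tool:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Dependency:
    """One inbound dependency and every date it has been promised for."""
    name: str
    promised_dates: list[date] = field(default_factory=list)

    def reschedule(self, new_date: date) -> None:
        """Record a revised promised date."""
        self.promised_dates.append(new_date)

    @property
    def slips(self) -> int:
        """How many times the promised date moved later."""
        pairs = zip(self.promised_dates, self.promised_dates[1:])
        return sum(1 for old, new in pairs if new > old)

def under_stress(deps: list[Dependency], threshold: int = 1) -> list[str]:
    """Dependencies that have slipped more than `threshold` times:
    the chain-under-stress signal, visible before any milestone moves."""
    return [d.name for d in deps if d.slips > threshold]
```

A log like this surfaces the pattern the milestone report hides: no single two-day push is alarming, but a dependency that has been rescheduled three times in a month is.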

The Third Signal: Scope Conversation Volume

An increase in the volume of scope questions — from the business, from developers, from testers — is a reliable leading indicator of a requirements gap that is about to become a delivery problem. Scope questions in the requirements phase are healthy. The same volume of scope questions in the build phase means the requirements did not fully transfer from paper to working understanding, and the gap is being discovered in the most expensive place possible.

Tracking scope question volume by week and by source gives a PM a measurable signal that the team is encountering ambiguity at a rate the plan did not anticipate. Addressing it through rapid clarification cycles and targeted requirements review is far cheaper than discovering it during system integration testing.
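A tally by week and source need not be more sophisticated than a counter over (week, source) pairs. The sketch below is one possible shape for that tally; the source labels are assumptions for illustration:

```python
from collections import Counter
from datetime import date

def scope_question_volume(questions: list[tuple[date, str]]) -> Counter:
    """Tally scope questions by ISO week and source.

    `questions` is a list of (date_raised, source) pairs, where a
    source might be "business", "dev", or "test" - labels are
    illustrative, not prescribed.
    """
    return Counter(
        (raised.isocalendar()[1], source) for raised, source in questions
    )
```

Even a tally this crude answers the two questions that matter: is the weekly volume rising, and is one source (say, testers in the build phase) generating most of it - which points directly at where the requirements gap sits.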

Building Signal Recognition into Delivery Practice

The PMs who catch drift early are not necessarily more experienced than those who miss it. They are more deliberate about what they pay attention to. A weekly five-minute review of three leading indicators — meeting behavior, dependency status, and scope question volume — gives a delivery lead far more useful information about program health than a standard status report. The status report tells you where you are. The leading indicators tell you where you are heading.

Procurement Is a Delivery Risk: How Government PMs Lose Control Before the Project Starts

By the time a government IT project is approved and resourced, many of its most consequential delivery decisions have already been made — in the procurement process. Most PMs have no visibility into that process and inherit its consequences on day one.

Government procurement is designed to protect the public interest. It ensures competitive pricing, supplier diversity, and accountability for how public funds are spent. These are legitimate objectives and the framework that supports them is appropriate. The problem for delivery is that procurement optimization and delivery optimization are different things — and when they conflict, procurement wins every time.

What Gets Decided Before the PM Arrives

By the time a project manager is assigned to a government IT initiative, a significant portion of the delivery environment has already been fixed. The vendor has been selected through a process the PM had no input into. The contract scope has been defined — often at a level of abstraction that leaves room for interpretation disputes. The commercial terms have been set, including payment milestones that may or may not align with delivery milestones. The timeline has been established based on budget cycles and approval processes rather than delivery capacity.

None of these decisions are wrong in the procurement sense. But they create a delivery environment that the PM must work within rather than design. The vendor whose delivery methodology the PM would not have chosen. The timeline that is two months shorter than the work requires. The payment structure that incentivizes vendor behavior that does not always align with client outcomes.

The SOW That Does Not Protect You

Government Statements of Work are detailed documents. They describe deliverables, acceptance criteria, timelines, and penalty provisions. What they rarely describe precisely enough is the process for resolving ambiguity — and in complex IT programs, ambiguity is the normal condition, not the exception.

When a requirement turns out to mean something different to the vendor than it does to the client, the SOW becomes the battleground. The PM who was not involved in writing it must now interpret it, negotiate its application, and manage the relationship while the legal interpretation is being contested. This is the most draining kind of vendor management, and it is largely preventable if the PM has input into SOW development before signature.

What Government PMs Can Do

The most effective government PMs I have worked with have developed a consistent practice of engaging with procurement before contracts are awarded, not after. They build relationships with procurement officers, participate in technical evaluation panels where possible, and provide written delivery risk input on draft SOW language when they can access it. This is not standard practice and it is not always possible — but the PMs who do it consistently inherit delivery environments that are more workable than those who wait for the handoff.

Where early procurement engagement is not possible, the priority on day one is a structured contract review — not the legal review, which procurement has already done, but a delivery review. What has been committed to, in what timeline, with what acceptance criteria, and where are the gaps between what the contract says and what delivery actually requires? This review, done in the first two weeks, gives a PM the baseline they need to manage the delivery environment they have inherited.