Insights

Patterns worth watching from the delivery floor.

Sharper observations pulled from complex delivery environments, stakeholder dynamics, and enterprise transformation programs.

Insight March 3, 2026
Change Management · Adoption · Transformation

Unifying Digital and Delivery Practices - Bridging Go-Lives with Behavior Change

Why transformation programs fail at adoption even when the technology goes live as planned, and what to fix before launch.

There is a persistent and costly assumption embedded in how many organizations plan digital transformation programs: that a successful go-live is a successful transformation. The technology is live. The migration is complete. The project is closed. The transformation, presumably, has occurred.

It has not. What has occurred is an infrastructure event. The transformation - the actual change in how people work, how decisions get made, and how services are delivered - is either beginning or failing at the point when the project plan shows it as complete.

The Gap Between Deployment and Change

Digital delivery practices and organizational change practices are managed as separate disciplines in most enterprise programs. The delivery team tracks milestones, manages scope, coordinates technical workstreams, and drives toward go-live. The change management stream runs in parallel - usually smaller, usually less funded, and almost always subordinate in the governance structure to the delivery workstream.

This separation produces a structural problem. The delivery team has a clear finish line: go-live. The change team's actual finish line - durable adoption at target proficiency - is weeks or months after go-live. When the delivery team exits post-go-live, the change work is often still in its most critical phase. The organization transitions from a project-supported environment to a steady-state environment at exactly the moment when users are most dependent on active support.

The Adoption Signals to Track Before Launch

  • Supervisor readiness rate: are managers prepared to reinforce new behaviors or are they still absorbing the system themselves?
  • Early adopter confidence: does the first cohort of users describe the system as intuitive or as something to be endured?
  • Workaround detection: are users already developing informal processes to avoid parts of the new system during pilots?
  • Escalation path clarity: do users know exactly who to call and how fast they will get an answer in the first two weeks after launch?

What Bridging Actually Requires

Bridging go-live with behavior change requires treating adoption as a delivery milestone with the same governance rigor applied to technical milestones. This means defining a measurable adoption target - not usage statistics, but proficiency indicators - and tracking progress toward that target through the hypercare period with the same frequency that technical delivery milestones are tracked.

It also requires that the individuals accountable for adoption outcomes have organizational authority that matches their accountability. A change manager who can report on adoption metrics but cannot direct the remediation activities needed to improve them is carrying accountability without authority - a governance design flaw that produces frustrated change managers and unresolved adoption gaps.

"The programs that achieve durable adoption are not the ones with better training decks. They are the ones where the executive sponsor is still publicly invested in the change three months after go-live."

The Sponsor Accountability Cliff

Most executive sponsors are engaged and visible during the delivery phase. They attend steering committees, they communicate the transformation vision to their organizations, they make decisions when the project needs them. Then the system goes live and the project closes - and sponsor visibility drops sharply at exactly the moment when their organizational authority is most needed to drive adoption.

The organizations that successfully bridge go-live with sustained behavior change are the ones that have built a post-go-live sponsor engagement plan as explicitly as they built a pre-go-live communication plan. What will the sponsor do in the first month after launch? What messages will they send? What behaviors will they model? What adoption metrics will they request in their executive reports? The answers to these questions determine whether the technology investment delivers its intended return or sits in the gap between deployment and change.

Insight February 2, 2026
Risk · Signals · Early Warning

Delivery Risk Usually Shows Up Weeks Before the Status Turns Red

An early-warning view of the small signals that usually appear before schedule failure, sponsor frustration, and credibility loss.

Status reports are backward-looking documents. By the time a project is reported as red, the conditions that produced the red status have typically been present for weeks - sometimes months. The signals were there. They were often visible. They were either not recognized as signals or not acted upon because no single indicator was severe enough to trigger a formal response.

Pattern recognition in delivery risk is a learnable skill. The signals that precede most delivery failures are not random - they are consistent across programs, industries, and organization types. Learning to read them early is one of the most valuable capabilities a delivery leader can develop.

The Communication Pattern Shift

One of the earliest and most reliable signals of delivery deterioration is a change in how the team communicates. This shows up in several forms: project managers who were previously proactive in raising issues begin waiting to be asked; technical leads who used to flag risks informally start holding concerns until formal review sessions; stakeholders who were engaged and responsive begin to respond more slowly or defer to written communication rather than direct conversation.

These shifts happen because people in delivery environments naturally modulate their communication behavior in response to stress - specifically, the stress of knowing that progress is slower than planned and not yet knowing how to close the gap. The instinct is to buy time before reporting a problem. The practical effect is that the PM is the last person to know about the problem, rather than the first.

The Velocity Plateau

In sprint-based delivery environments, the second reliable early signal is velocity flattening. A team that has been completing a consistent volume of work per sprint begins to slow - but the slowdown does not immediately show up in the milestone plan because carry-over work gets absorbed into the next sprint rather than being reported as a slip.

Velocity plateaus rarely come from reduced effort. They usually come from increased decision latency (work is stopped waiting for an answer), rising defect rates (completed work is returning for rework), or scope expansion within stories (items that were scoped as straightforward are proving more complex than estimated). Each of these causes has a different remediation path, and identifying which one is causing the plateau early is the difference between a schedule correction and a schedule crisis.
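The carry-over pattern described above can be made visible with very little tooling. The sketch below is illustrative only: the field names and sample numbers are assumptions, not the schema of any particular tracking tool, and a real program would pull this from its sprint records.

```python
def flag_velocity_signals(sprints):
    """Return early-warning flags from a list of sprint dicts.

    Each dict is assumed to have:
      committed  - points planned at sprint start
      completed  - points actually finished
    Carry-over is committed work that did not finish in the sprint.
    """
    flags = []
    consecutive_carryover = 0
    for s in sprints:
        carryover = s["committed"] - s["completed"]
        # Reset the streak when a sprint finishes clean; otherwise extend it.
        consecutive_carryover = consecutive_carryover + 1 if carryover > 0 else 0
        if consecutive_carryover >= 2:
            flags.append(f"{s['name']}: {consecutive_carryover} consecutive sprints with carry-over")
    return flags

# Hypothetical history: velocity flattening while the milestone plan still looks fine.
history = [
    {"name": "Sprint 7", "committed": 40, "completed": 40},
    {"name": "Sprint 8", "committed": 40, "completed": 34},
    {"name": "Sprint 9", "committed": 40, "completed": 33},
]
print(flag_velocity_signals(history))  # → ['Sprint 9: 2 consecutive sprints with carry-over']
```

The point is not the code; it is that the "two or more consecutive sprints with carry-over" threshold from the checklist below this section is checkable in seconds once the data is written down.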

Early Warning Signal Checklist

  • Informal communications from technical leads have dropped in frequency
  • Two or more consecutive sprints with carry-over from the previous sprint
  • Defect rates in integration testing are rising week over week
  • Dependencies on another workstream or vendor are unresolved past their due date
  • Steering committee attendance has decreased or key sponsors are sending delegates
  • The PM has started producing more optimistic milestone projections without a documented basis
  • Stakeholder questions in reviews are shifting from forward-looking to backward-looking

Steering Committee Attendance as a Signal

Senior stakeholder attendance patterns in governance forums are a remarkably accurate leading indicator of program health. When a sponsor who previously attended every steering committee begins sending their chief of staff or a direct report, one of two things has typically happened: their confidence in the program has declined to the point where they feel their time is not well spent in the forum, or the program has become politically sensitive and they are creating distance as a protective measure.

Neither scenario is a good signal. But both are recoverable if identified early. The PM who recognizes the attendance shift and proactively engages the sponsor outside the formal governance structure - to understand their concerns, clarify the program's status, and re-establish the confidence that drives attendance - is managing the situation. The PM who notes the attendance shift in the log and moves on is missing the most important early warning signal available to them.

"Programs do not fail on the day the status turns red. They fail in the weeks when small signals are seen and not acted upon, because no single signal was alarming enough to trigger a response."

Acting on Signals Before They Compound

The practical discipline is straightforward in concept and difficult in practice: build a lightweight signal-monitoring habit into weekly delivery management. Not a formal risk review - a five-minute pattern check. Are communications flowing normally? Are velocity indicators stable? Are dependency resolutions arriving on time? Are key stakeholders engaged? Any change in the answer to these questions is worth ten minutes of investigation before it becomes the subject of a steering committee recovery briefing.

Insight December 29, 2025
Executive Reporting · Governance · Leadership

Executive Confidence Is Built by Rhythm, Not by Better Slide Design

Consistent decision cycles, transparent reporting, and real issue ownership do more for sponsor confidence than polished presentations.

A significant amount of PM time on complex programs gets directed toward steering committee decks. Formatting, narrative, color-coding, the sequencing of information. The implicit belief driving this investment is that a better presentation will produce a better reaction from the executive audience - more confidence, more trust, more willingness to support the program when it needs something.

The belief is not entirely wrong. Presentations that are clear and well-organized are easier for executives to process than ones that are not. But the returns on presentation quality diminish rapidly. An executive sponsor who has been receiving transparent, consistent, accurate information for six months does not become meaningfully more confident because the September deck looks better than the August deck.

What Confidence Is Actually Built On

Executive confidence in a delivery program is built on predictability. Can the team tell me what is going to happen next and then deliver what they said would happen? When something goes wrong, do I hear about it from the PM before I hear about it from someone else in my organization? When I ask for a decision, does the team come with a recommendation or do they come with a set of options and wait for me to choose?

These are not presentation questions. They are governance and communication cadence questions. The PM who delivers the same quality of information at the same frequency, who surfaces issues before they become visible to the executive through other channels, and who arrives at every governance forum with a clear ask rather than an update is building confidence systematically. The presentation format is largely irrelevant.

The Issue Ownership Test

Nothing erodes executive confidence faster than the discovery that the PM has been managing an issue informally that should have been escalated. This is the central accountability question for delivery governance: at what threshold does an issue leave the PM's direct management and require executive visibility?

Programs with strong governance have a defined answer. Programs without one operate on the PM's judgment - which varies by individual, changes under stress, and inevitably produces at least one instance where an executive learns about a significant problem from a source other than the PM. That instance, once it happens, resets the confidence baseline and requires months of consistent, transparent reporting to recover.

"I have seen programs with beautiful governance decks that executives had stopped trusting. I have seen programs with plain-text weekly updates that executives defended when challenged. The difference was always in whether the information was reliable, not in whether it was well-designed."

Building the Rhythm

The reporting rhythm that builds confidence has three characteristics. It is consistent - the same cadence, the same format, at the same frequency regardless of whether the news is good or difficult. It is complete - it includes issues and risks that are actively managed, not just completed milestones. And it is forward-looking - it tells the executive what is coming, what decisions they will be needed for, and what the PM is watching most closely in the next period.

Executives who receive reporting that has these characteristics begin to trust the PM's judgment, not just the PM's output. That trust is what produces the organizational support - faster decisions, more active sponsorship, willingness to go to bat for the program in budget or resource conversations - that complex programs depend on. No slide template produces that. Consistent, honest, forward-looking rhythm does.

Insight November 18, 2025
Stakeholders · Change · Delivery

The Middle Management Blind Spot in Digital Transformation

Most transformation programs invest heavily in executive sponsorship and frontline user readiness. The layer in between is where adoption programs most often break.

Executive sponsors set the vision and signal organizational commitment. Frontline users are trained, supported, and measured for adoption. The group that consistently receives the least deliberate attention in digital transformation programs is the middle management layer - the supervisors, team leads, and operational managers whose daily behavior does more to shape frontline adoption than any training program can.

Why Middle Managers Are Structurally Under-Supported

Middle managers in transformation programs face a compressed set of competing demands. They are expected to keep their teams productive during the transition, absorb the governance burden of the new system themselves, translate executive communications into operational reality for their direct reports, and manage the performance anxiety that almost always accompanies significant process change. They typically receive training that is identical to what their frontline staff receives, without any additional preparation for the supervisory challenges that are unique to their role.

The result is predictable. Middle managers who are not ready to operate confidently in the new environment cannot reinforce new behaviors in their teams. When staff come to them with questions, they either escalate to the project team (creating volume the project team cannot sustain post-go-live) or they give informal guidance that may not match the designed process. In the worst cases, they quietly permit team members to use old methods because the new method is taking longer and the team's performance metrics are suffering.

"A transformation program's adoption ceiling is set by the readiness of the middle management layer. You can have the best platform, the best training, and the most engaged executive sponsor - and still fail if supervisors are not prepared to reinforce the change in their daily management."

What Deliberate Middle Manager Readiness Looks Like

Programs that successfully navigate the middle management challenge do three things that most programs do not. First, they design a separate readiness track for managers that addresses the supervisory skills required in the new environment - how to coach a staff member who is struggling, how to interpret performance data from the new system, what to do when a team member reports a process gap. Second, they engage middle managers before go-live in a way that gives them system confidence - not just a training session, but enough hands-on time that they can answer their team's questions without referring up the chain. Third, they involve middle managers in the design of early hypercare processes - who is the floor support contact, what is the escalation path, what constitutes a process exception that needs project team involvement versus a management call.

These investments are not large in budget terms. They are large in program planning attention. The programs that make them routinely outperform the ones that don't on the metrics that actually matter post-go-live: proficiency rate, workaround frequency, and time to steady-state operation.

Insight October 21, 2025
Governance · Steering Committee · Leadership

When Steering Committees Stop Working - and How to Reset Them

A steering committee that meets regularly but doesn't make decisions is not a governance mechanism. It is a reporting ceremony that creates the illusion of oversight.

Most enterprise programs have a steering committee. Fewer have one that is genuinely functional as a governance body. The difference between a steering committee that works and one that doesn't is usually not the seniority of the members, the frequency of the meetings, or the quality of the materials presented. It is whether real decisions come out of the room.

How Steering Committees Drift into Ceremony

The drift is gradual and usually begins with success. Early in a program, steering committees are energetic - there are foundational decisions to make, scope is being defined, and the executive interest is high. As the program moves into execution, the decisions become more operational and less strategic. Steering committee agendas shift toward status updates, milestone confirmations, and risk reporting. The committee receives information but is not regularly asked to act on it.

After several months of this pattern, a subtle dynamic sets in. Executives who are not being asked to make decisions stop preparing for the meeting in the same way they would for a decision-making forum. Attendance becomes more variable. Pre-reads receive less careful review. The meeting becomes something to be managed rather than something to contribute to - and the PM, sensing this, produces increasingly polished presentations in an attempt to sustain engagement.

The Reset Approach

Resetting a steering committee that has drifted requires addressing the structure before the content. The first intervention is to redesign the agenda so that every session has at least one item requiring a named decision with a defined consequence for non-decision. This is not manufactured urgency - it is an honest representation of what delivery programs actually need from their governance bodies.

The second intervention is to change how issues are presented. Moving from a status summary to a recommendation format - here is the situation, here is the proposed response, here is what we need from this committee - repositions the executives from audience to decision-maker. It respects their time by coming with analysis complete and a specific request, rather than asking them to synthesize a status update and determine an action independently.

The third is pre-meeting engagement with key sponsors. A steering committee where the two most important attendees are seeing information for the first time in the room is a steering committee where decisions get deferred. A brief pre-meeting conversation with key decision-makers - not to pre-determine the outcome, but to ensure they have context and can engage confidently - is one of the highest-return investments a PM can make in governance effectiveness.

Insight September 16, 2025
Technology · Enterprise IT · Strategy

The Platform Trap: When Technology Strategy Gets Captured by Vendor Roadmaps

Organizations that allow a single platform vendor to define their technology strategy rarely get the outcomes that drove the original platform selection.

Platform consolidation has been a dominant theme in enterprise IT strategy for the better part of a decade. The logic is compelling: fewer platforms mean lower integration complexity, lower licensing overhead, better data coherence, and simpler governance. In practice, organizations that consolidate heavily onto a single major platform often find themselves in a different position than they anticipated - not simpler, but more constrained.

How the Trap Forms

The platform trap forms slowly. An organization selects a major platform - a CRM, an ERP, a cloud infrastructure provider, a digital service platform - based on an evaluation of current needs against available options. The selection is usually well-reasoned at the time it is made. What changes is not the quality of the decision but the conditions that follow it.

As the platform becomes embedded in core operations, switching costs rise sharply. Data is in the platform. Processes are built around it. Staff are trained on it. The organization's technology planning increasingly begins with the question of what the platform supports rather than what the business needs. The vendor's annual roadmap becomes a significant input to the organization's digital strategy - a dependency that was not visible at the time of selection.

The Government Context

In government environments, the platform trap has a particular dimension that is worth understanding. Procurement constraints and multi-year contract cycles mean that once a platform is selected and deployed, it typically occupies its position for seven to ten years regardless of how the market evolves. A platform that was leading-edge at the time of procurement may be trailing its category five years later, but replacement is a procurement and budget exercise that takes years to initiate, approve, and execute.

The organizations that navigate this most effectively are the ones that build platform governance into their technology strategy explicitly - defining at the point of procurement what the platform will and will not be used for, what the evaluation criteria are for extending or limiting its scope over time, and what organizational ownership structure ensures the platform serves the business rather than the reverse. These are not technical decisions. They are strategic governance decisions that belong in the business leadership layer, not in the IT function alone.

"The most expensive platform decision an organization makes is usually not the initial purchase. It is the ten subsequent decisions made inside a vendor relationship that the organization did not fully account for when it signed the contract."

Insight April 7, 2026
Delivery Patterns · Governance · Risk

The Approval Bottleneck Nobody Talks About

On most enterprise programs, the thing that slows delivery is not the work itself. It is the queue of decisions waiting for sign-off from people who are not available, not briefed, or not sure they have the authority to decide.

I have tracked delivery blockages across a number of programs now — government, logistics, telecom — and the pattern is consistent. When a program slips, the root cause is almost never technical complexity. It is decision latency. A change request sits for eight days waiting for a director who is traveling. An architecture decision gets deferred because two stakeholders disagree about who owns it. A procurement approval waits in a queue because the correct approver threshold was never defined clearly in the governance framework.

Each of these individually looks like a minor admin issue. Together, they account for two to three weeks of lost delivery capacity across a typical quarter — capacity that the program budget paid for and never received.

Why It Keeps Happening

Governance frameworks define escalation paths and approval authorities, but they rarely define response time expectations. An escalation that reaches the right person on day one but does not receive a decision until day nine is technically compliant with the governance framework. It is also a delivery failure.

The programs that avoid this pattern share a common practice: they define decision SLAs alongside decision authorities. Who needs to decide is documented. How long they have to decide is also documented. What happens if the decision does not arrive in time — whether a default decision applies, whether authority delegates down, whether the PM escalates further — is known in advance by everyone in the governance structure.
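A decision register with the time dimension made explicit is simple to sketch. Everything below is an assumption for illustration: the field names, the SLA values, and the IDs are hypothetical, not drawn from any specific program's governance framework.

```python
from datetime import date, timedelta

def overdue_decisions(register, today):
    """Return (decision id, days overdue) for undecided items past their SLA."""
    result = []
    for d in register:
        due = d["requested"] + timedelta(days=d["sla_days"])
        if d["decided"] is None and today > due:
            result.append((d["id"], (today - due).days))
    return result

# Hypothetical register: who decides is documented elsewhere; here we
# document how long they have, which is the piece most programs omit.
register = [
    {"id": "CR-014",  "requested": date(2026, 3, 2), "sla_days": 5,  "decided": None},
    {"id": "ARCH-03", "requested": date(2026, 3, 9), "sla_days": 10, "decided": None},
]
print(overdue_decisions(register, today=date(2026, 3, 16)))  # → [('CR-014', 9)]
```

An eight-day-old change request stops being invisible the moment it appears in a list like this at the top of the weekly delivery review.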

Questions to ask before your next steering committee

  • How many decisions are currently sitting in someone's queue awaiting approval?
  • What is the average time from decision request to decision received on this program?
  • Which single individual is most often the bottleneck — and does the governance structure provide a backup path?

Fixing approval bottlenecks does not require restructuring governance. It requires making the time dimension of governance explicit — which most programs have simply never done.

Insight March 25, 2026
Stakeholder Dynamics · Leadership · Communication

What Executives Actually Hear When You Present Project Status

The information gap between what a PM presents in a steering committee and what the executive sponsor actually takes away from it is larger than most delivery teams realize — and it explains a lot of governance failures.

Most project status reports are written by people who know a great deal about the project and read by people who are managing ten other organizational priorities. The mismatch in context between writer and reader is enormous — and it means that even well-constructed status reports routinely fail to convey the most important information.

I have sat in enough steering committees to observe this dynamic directly. A PM presents a status deck with a green RAG for the past four weeks. The executive sponsor leaves the meeting confident that the program is on track. The PM knows that the green is technically accurate but that three dependencies in the next six weeks are highly uncertain. The sponsor does not know this because uncertainty does not have a RAG status in the standard template.

The Confidence Gap

Executive sponsors are not passive consumers of status information. They are actively constructing a mental model of the program's health from the signals they receive — not just from formal reports, but from the PM's tone in conversation, the questions that come up in meetings, the issues that appear and then quietly disappear. When the formal status report says green and the informal signals suggest stress, the executive's confidence erodes even if they cannot articulate why.

The PMs who maintain executive confidence through difficult periods are not the ones who manage the RAG status most carefully. They are the ones who give executives a clear, honest picture of where things stand, what the risks are, and what they need from the governance structure to keep delivery moving. This approach takes more courage than a polished status deck. It also produces sponsors who trust the PM's judgment when it matters most — which is during the periods when the program genuinely needs their support.

One Practical Change

Add a single field to your status report: "What I need from the steering committee this week." Not a list of informational items, but a specific ask. A decision on a change request. An escalation to a Ministry contact who has gone quiet. Authorization to proceed past a stage gate. This one change shifts the steering committee from a reporting forum to a decision forum — and it gives executives a reason to engage with the status material rather than receive it passively.
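One way to make that field non-optional is to build it into the report template itself. This is a minimal sketch under stated assumptions: the field names and the sample report are hypothetical, and the only design point being shown is that rendering fails without a specific ask.

```python
from dataclasses import dataclass, field

@dataclass
class WeeklyStatus:
    period: str
    rag: str                              # overall RAG, e.g. "Green"
    highlights: list = field(default_factory=list)
    ask: str = ""                         # "What I need from the steering committee this week"

    def render(self):
        # Refuse to produce a report that is purely informational.
        if not self.ask:
            raise ValueError("Status needs a specific ask, not just an update.")
        return f"[{self.rag}] {self.period} | ASK: {self.ask}"

report = WeeklyStatus(
    period="Week of 13 Apr",
    rag="Green",
    ask="Decision on CR-014 scope change by Friday.",
)
print(report.render())  # → [Green] Week of 13 Apr | ASK: Decision on CR-014 scope change by Friday.
```

The mechanism is trivial; the discipline it enforces is not. A report that cannot be generated without an ask forces the PM to decide, every week, what the governance forum is actually for.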

Insight April 16, 2026
Digital Fluency · AI · Delivery

5 AI Skills Every Project Manager Needs in 2026

Delivery teams are automating tasks that once consumed hours of a project manager's week. The gap between teams who have made this shift and those still working manually is widening fast.

Across government and enterprise engagements, status reporting, risk logging, meeting documentation, and schedule management are increasingly being handled by tools - not people. A recurring pattern emerges: teams have the licenses, but the training budget prioritized onboarding over fluency. Paid-for tools sit largely unused because no one has demonstrated what effective use looks like in practice.

At the delivery level, AI fluency is not one skill. It is five.

The five skills

  • Prompt engineering for project artifacts — generating risk registers, RACI matrices, status reports, and stakeholder communications using structured, repeatable prompts
  • AI-assisted schedule and risk modelling — modelling what-if scenarios, flagging resource conflicts, and surfacing early warning signs before they reach the sponsor
  • Automated reporting and documentation — AI transcribes meetings, extracts action items, and produces board-ready summaries without a PM spending an evening writing them up
  • Workflow automation across your PM stack — connecting tools so task completions, approvals, and escalations trigger automatically with no manual handoffs
  • Critical evaluation of AI output — knowing where to interrogate the output before it goes upstream
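The first skill on this list, structured and repeatable prompts, can be sketched as a simple template. The wording, column set, and function names below are illustrative assumptions, not a prescribed format; the design point is that every run uses the same structure, so output quality becomes comparable week over week.

```python
# Hypothetical prompt template for a draft risk register.
RISK_PROMPT = """You are assisting a project manager on: {program}.
Context: {context}
Produce a draft risk register with columns:
risk description | likelihood (H/M/L) | impact (H/M/L) | owner | mitigation.
List no more than {max_risks} risks. Flag any risk you inferred rather
than found in the context, so a human can verify it."""

def build_risk_prompt(program, context, max_risks=8):
    """Fill the template so every run asks for the same structure."""
    return RISK_PROMPT.format(program=program, context=context, max_risks=max_risks)

prompt = build_risk_prompt(
    "Claims platform migration",
    "Legacy system retires Q3; vendor API contract unsigned; 12-week window.",
)
print(prompt.splitlines()[0])  # → You are assisting a project manager on: Claims platform migration.
```

Note the final instruction in the template: asking the model to flag inferred risks is a small hook for the fifth skill, critical evaluation, because it tells the reviewer exactly where to interrogate the output.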

That last skill is what separates a senior project manager from someone who forwards AI-generated slides. AI forecasts rely on historical data and may not account for changing stakeholder priorities or the political dynamics that can undermine a recommendation before it reaches the board. The PM who knows how to scrutinize that output is invaluable.

These skills are practical and teachable. Any AI tool introduced into a delivery environment must first be evaluated against your organization's data governance policies, vendor terms of service, and applicable privacy legislation. In government and regulated environments, this step is not optional.

Insight April 13, 2026
Change Management · Innovation · Leadership

Kodak Had the Answer. They Just Could Not Afford to Believe It.

In 1975, a 25-year-old engineer at Kodak built the world's first digital camera. It was the size of a toaster. It stored images on a cassette tape. It took 23 seconds to save a single photo. And it was the future.

He walked into the boardroom, took pictures of the people in the room, played them back on a TV screen, and watched everyone go quiet. Then someone asked: "Why would anyone want to take a picture this way when there's nothing wrong with conventional photography?"

That was the last serious conversation Kodak had about it. They patented it, buried it, and told Steve Sasson to stop talking about it publicly. They had film to sell. They had a business to protect. They could not afford to believe in what their own engineer had just shown them.

Kodak filed for bankruptcy in 2012. The very technology their own hands built was what buried them.

"Innovation without the organizational readiness to embrace it is just an invention gathering dust."

This is what happens when innovation arrives and the organization is not ready to receive it. Not because the idea was wrong. Because the culture, the leadership, the internal conversations — none of it had been prepared for the disruption already sitting in the room.

The question worth asking in your organization today is not "are we innovating?" It is "are we actually ready for what we say we want?"

Digital transformation programs fail at exactly this point. The technology goes live. The platform is deployed. But the organization was never made ready to receive it — and the gap between deployment and genuine adoption is where the return on investment disappears.