The lifecycle
Six phases. Twenty stages. Typically eighteen to twenty-four months end-to-end. Every stage has a name, a duration range, named owners, and exit criteria. The shape of the lifecycle reflects an opinion: programmes succeed or fail at the front end, so the front end gets twelve weeks of structured work before anyone touches procurement.
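The phase boundaries and durations above can be sketched as data — a minimal model for illustration only, using the per-phase week ranges quoted in the sections below (the `Phase` type and the rough weeks-to-months conversion in the comments are assumptions, not part of the methodology).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Phase:
    name: str
    stages: range           # stage numbers covered (S0..S19)
    weeks: tuple[int, int]  # duration range in weeks

# Phase boundaries and week ranges as quoted in the lifecycle text.
PHASES = [
    Phase("Pre-Programme",  range(0, 6),   (12, 12)),
    Phase("Selection",      range(6, 10),  (17, 24)),
    Phase("Setup & Design", range(10, 13), (12, 18)),
    Phase("Build & Test",   range(13, 15), (14, 24)),
    Phase("Deploy",         range(15, 18), (8, 14)),
    Phase("Post-Programme", range(18, 20), (12, 26)),  # formal window; then ongoing
]

# Sanity checks: six phases, twenty contiguous stages, no gaps or overlaps.
assert len(PHASES) == 6
covered = [s for p in PHASES for s in p.stages]
assert covered == list(range(20))

# Summing the bands gives 75-118 weeks, roughly 17-27 months, which
# brackets the "typically eighteen to twenty-four months" claim.
lo = sum(p.weeks[0] for p in PHASES)
hi = sum(p.weeks[1] for p in PHASES)
```

A structure like this is also a cheap way to keep stage references honest: every "S\<n\>" cited in a plan or a RAID log can be validated against `covered`.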
Phase 1 — Pre-Programme · Stages 0–5 · ~12 weeks
The question this phase answers: should we do this, why, and on what evidence?
Pre-Programme is twelve structured weeks before any vendor is contacted. Most ERP failures are decided here, in the absence of structure. By the end of Pre-Programme, the organisation has a problem statement everyone agrees on, a vision aligned to strategy, a benefits map with baselines and named owners, governance constituted, and a Change Lead in post.
Frames the trigger. Why now, what's broken or being missed, how big the prize. One-page Problem & Opportunity statement, sponsor-signed. The page is short on purpose — if you can't articulate the problem in a page, you don't yet know what it is.
Translates the problem into a target state. Vision document. Strategic alignment matrix — every objective in the vision maps to an objective in the organisation's wider strategy. AS-IS architecture review starts here, feeding the Case for Change.
The benefits side of the business case is established here, not at Solution Design & Full Business Case (S12). Benefits Map. ROI Driver Matrix. KPI baselines measured before any system change. Benefit Owners named — one per business benefit, accountable through to Benefits Realisation & Review (S18). Data Owners named — one per master-data domain, accountable for hygiene from S2 onward.
Constitutes the Executive Sponsor Group from Phase 1 / Week 1. Terms of Reference signed. Sponsorship cascade documented. Stakeholder map first cut. The governance structure is in place before the programme has anything to govern, on purpose — establishing it later is hard.
Identifies Process Owners (one per process stream). Scopes the resourcing model. Plans the readiness work for Benefits & Continuous Alignment (S5) onward. Produces the Pre-Programme Risk Register — the risks identified now are the ones the programme will live with through Selection.
Benefits measurement framework signed off. Baselines documented. Measurement plan attached to each benefit, with owners and cadence. Change Lead named from S5 onward — stakeholder analysis and comms strategy begin. Programme Charter signed. Phase exit checkpoint: Programme Charter Signed.
Phase 2 — Selection · Stages 6–9 · 17–24 weeks
The question this phase answers: what are we buying, from whom, at what cost?
Selection is where the methodology meets the market. Funding gets approved by the Board (the first real Gate). The market is engaged, software is selected, an SI is selected, and contracts are signed.
Benchmark costs developed. Funding envelope sized at ±30–40% variance. Board Gate 1: the Board approves the envelope and authorises programme initiation. This is a binary go/no-go. If it fails, the programme stops here. If it passes, the programme is real.
Long list of vendors. RFI issued — the structured one Keystone provides as a template, not a free-form questionnaire. Responses scored. Shortlist confirmed. Shortlist Checkpoint at the end.
Scripted demos. Reference calls. Software platform selected. Preferred Vendor Checkpoint at the end. Solution Architect (Client side) named from SI Selection (S9) — joins now to influence the SI selection.
SI selected against ROM Pricing at ±30% variance. Contracts signed. Design Authority forms at the end of this stage and runs through hypercare. SI Solution Architect joins at S9 to co-chair the Design Authority. Phase exit checkpoint: SI Contracts Signed.
Phase 3 — Setup & Design · Stages 10–12 · 12–18 weeks
The question this phase answers: what exactly are we building, and is the business case still good?
Setup & Design takes the funding envelope and turns it into a Full Business Case at ±10–15% variance — the firm number that the Board commits to at Gate 2. Workstreams form. Process Owners and the Change Lead embed. The Steering Committee starts running monthly from Phase 3 / Week 12.
Programme team mobilises. Project Managers join — per workstream — from Programme Setup & Mobilisation (S10) onward, not earlier. They are not present in Pre-Programme or Selection because there are no workstreams yet to run; staffing them earlier is one of the most common methodology errors. Environments architected and provisioned. SI Functional, Technical and Data Migration Leads join. Training Lead named.
Discovery workshops against the already-cleansed master data from the Data Hygiene workstream. Source data quality scorecard. In-scope object catalogue. Security architecture and role design. Client Test Manager named from Discovery (S11) onward. Discovery Sign-Off at the end.
Solution Design Document. Integration design. Migration design with cleansing rules. Full Business Case confirmed at ±10–15% variance — firm/capped SI build & test pricing. Design Sign-Off (Design Authority approves all FDDs, integration, data migration design). Board Gate 2: the final investment decision before build begins. If it passes, build starts. If it fails, the programme returns to design or is paused.
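The narrowing variance from Gate 1 through ROM pricing to the Full Business Case is simple arithmetic, and worth making visible. A sketch, with an illustrative £100m midpoint that is not from the methodology; the worst-case end of each quoted band is shown.

```python
def estimate_range(midpoint: float, variance: float) -> tuple[float, float]:
    """Cost band implied by a symmetric +/- variance around a midpoint estimate."""
    return midpoint * (1 - variance), midpoint * (1 + variance)

midpoint = 100.0  # in £m, purely illustrative

# The estimate funnel: each checkpoint narrows the band.
funnel = [
    ("Gate 1 funding envelope (S6)", 0.40),  # quoted at +/-30-40%
    ("SI ROM pricing (S9)",          0.30),  # quoted at +/-30%
    ("Full Business Case (S12)",     0.15),  # quoted at +/-10-15%
]

for label, variance in funnel:
    low, high = estimate_range(midpoint, variance)
    print(f"{label}: £{low:.0f}m – £{high:.0f}m")
```

On these illustrative numbers the Gate 1 band is £60m–£140m and the Full Business Case band £85m–£115m: the point of the funnel is that the Board's firm commitment at Gate 2 is made on a band less than half the width of the one it authorised at Gate 1.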
Phase 4 — Build & Test · Stages 13–14 · 14–24 weeks
The question this phase answers: does it work?
Build & Test is where the methodology earns its name. Sprints build the system. The eight-level testing framework runs in sequence with NFT in parallel. Dry Runs validate the migration. By the end of Testing (S14), every test level has exited and the cutover is ready to be planned.
Sprint-based build of functional configuration. FAT runs every sprint, accepted by Process Owners at sprint review. Mini-BAT every second or third sprint. Integration build and unit test. Migration build with Dry Run 1. Build Complete checkpoint at the end.
The full sequential test pipeline runs: SAT (SI Test Lead) → SIT (Client Test Manager owns, joint with SI) → Pre-UAT → UAT (executed by users, NOT Process Owners) → BAT (scenario-based, signed off by Benefit Owners + Exec Sponsor). NFT runs in parallel with BAT. Dry Run 2 at full volume. Test Exit / Cutover Readiness checkpoint at the end.
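The sequencing rule in that pipeline — strictly sequential levels, with NFT released in parallel once UAT exits — can be encoded and checked. A sketch only: the level names and sign-off owners follow the text above, while the Pre-UAT owner and the scheduling helper are assumptions.

```python
# Test levels in execution order: (level, sign-off owner, runs in parallel with prior).
PIPELINE = [
    ("SAT",     "SI Test Lead",                        False),
    ("SIT",     "Client Test Manager (joint with SI)", False),
    ("Pre-UAT", "Client Test Manager",                 False),  # owner assumed, not stated
    ("UAT",     "Users (not Process Owners)",          False),
    ("BAT",     "Benefit Owners + Exec Sponsor",       False),
    ("NFT",     "Client/SI per deployment model",      True),   # parallel with BAT
]

def next_level(completed: set[str]) -> list[str]:
    """Levels eligible to start: all strictly-sequential predecessors have exited."""
    ready = []
    for i, (name, _owner, parallel) in enumerate(PIPELINE):
        if name in completed:
            continue
        prior = [p for p, _, _ in PIPELINE[:i]]
        if parallel:
            prior = prior[:-1]  # a parallel level does not wait on its neighbour
        if all(p in completed for p in prior):
            ready.append(name)
    return ready

# BAT and NFT become eligible together once UAT has exited.
print(next_level({"SAT", "SIT", "Pre-UAT", "UAT"}))  # ['BAT', 'NFT']
```

Test Exit / Cutover Readiness then corresponds to `next_level(...)` returning an empty list: every level has exited and nothing is still eligible to start.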
Phase 5 — Deploy · Stages 15–17 · 8–14 weeks
The question this phase answers: can we cut over and stay live?
Deploy is the most operationally intense phase. Cutover is rehearsed, executed, and stabilised. The Executive Sponsor's cutover Go/No-Go sits between Cutover Planning (S15) and Deployment & Go-Live (S16) — without it, cutover does not start.
Cutover runbook finalised. Final data migration dress rehearsal. NFT exit certificate. Communications cascade rehearsed. The Executive Sponsor's cutover Go/No-Go at the end of Cutover Planning (S15): the Sponsor's binary call on whether cutover proceeds. Not a Board Gate — Sponsor's call — but binding.
Cutover execution. Migrated data live in production. Go-Live event within Deployment & Go-Live (S16) — the gold star on the timeline, not a gate. System live. Hypercare begins. KPI measurement against the Value Definition & Case for Change (S2) baselines starts here.
Defect resolution. Post-go-live data validation. Training reinforcement. The Design Authority remains active through hypercare. Platform / Application Owner takes handover for BAU optimisation. Hypercare Exit checkpoint at the end — by criteria, not by date.
Phase 6 — Post-Programme · Stages 18–19 · 12–26 weeks formal, then ongoing
The question this phase answers: did we get what we paid for, and what's next?
Most methodologies stop at hypercare. Keystone treats Post-Programme as a governed phase because that's where the benefits actually land.
Benefit Owners report against the Value Definition & Case for Change (S2) baselines using the Benefits & Continuous Alignment (S5) measurement framework. Benefits Realisation Review at the end — formal evidence-based assessment. Steering Committee transitions to BAU at Optimisation & Maturity (S19). Programme Closure checkpoint.
Platform / Application Owner runs the optimisation backlog. Continuous improvement against measured KPIs. Lessons learned fed back into the methodology and into the next programme. The programme is closed; the platform isn't.
Multi-wave delivery — when waves repeat
Most PLC-scale programmes are multi-wave: Wave 1 deploys to part of the estate, Waves 2 onwards follow against the global template. The lifecycle bends explicitly to handle this.
Run once for the whole programme
Pre-Programme (S0–S5), Selection (S6–S9), and Programme Setup & Mobilisation (S10) run once. The vision, the case for change, the platform decision, the SI choice and the programme-wide team are all set up for the whole estate, not just Wave 1.
Run once, but wave-aware
Discovery (S11) surveys the whole estate, not just Wave 1 sites. This is critical. If Discovery only covers Wave 1, the global template is built blind to the factories, brands or BUs that come later — and Wave 2 then either compromises the template or blows the budget on rework. Discovery is also a brilliant vehicle for cross-estate surveys, gemba walks and "what's special about your site" capture; use it that way deliberately.
Solution Design & Full Business Case (S12) is built once, but with two pricing positions: Wave 1 firm at ±10–15%, Waves 2 onwards indicative, plus explicit per-wave re-baseline gates. The Board commits to Wave 1 firm; subsequent waves come back for re-baselining as the actual scope, scale and learnings firm up.
Repeat per wave
Build & Configuration (S13) through Hypercare & Stabilisation (S17) repeat per wave. Wave 1 is the heaviest — full S13–S17, because it builds the template, pilots the deployment, and runs the first hypercare. Waves 2 onwards are abbreviated: config delta, regression test, wave UAT, cutover, hypercare. Mobilisation is lighter, Discovery is local-only against the global template, Build is delta config not net-new, Test is regression-led.
Programme-wide but wave-aware
Benefits Realisation & Review (S18) accumulates across waves; baselines reset per BU; Wave 1 carries a disproportionate share of fixed cost unless you're disciplined about wave attribution. Optimisation & Maturity (S19) runs continuously across the wave train.
What "wave" can mean
Geography (regions), brands or business units, functions (Finance → Logistics → CRM), pilot-then-scale, or hybrid combinations. The lifecycle pattern is the same regardless of what defines a wave.
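The run-once / wave-aware / per-wave split can be written down as a small plan generator, which also makes the Wave 1 asymmetry explicit. A sketch only; the stage groupings follow the text, the labels are invented.

```python
RUN_ONCE        = range(0, 11)   # S0-S10: Pre-Programme, Selection, Mobilisation
ONCE_WAVE_AWARE = range(11, 13)  # S11-S12: estate-wide Discovery, dual-priced FBC
PER_WAVE        = range(13, 18)  # S13-S17: Build through Hypercare, repeated
PROGRAMME_WIDE  = range(18, 20)  # S18-S19: benefits accumulate, optimisation runs on

def wave_plan(n_waves: int) -> list[str]:
    """Ordered list of stage executions for an n-wave programme."""
    plan = [f"S{s}" for s in RUN_ONCE]
    plan += [f"S{s} (estate-wide)" for s in ONCE_WAVE_AWARE]
    for w in range(1, n_waves + 1):
        scope = "full" if w == 1 else "delta"  # Wave 1 builds the template
        plan += [f"S{s} W{w} ({scope})" for s in PER_WAVE]
    plan += [f"S{s} (programme-wide)" for s in PROGRAMME_WIDE]
    return plan

plan = wave_plan(3)
# Three waves: S13-S17 appear three times each, everything else once.
assert sum(p.startswith("S13") for p in plan) == 3
```

For a single-wave programme the plan is exactly the twenty stages run once; each additional wave adds five more S13–S17 executions, and nothing else.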
Failure modes specific to multi-wave
- Template fragmentation on Wave 2. Without a ruthless Design Authority, the global template degrades into a different solution per business unit — defeating the entire point of building a template.
- Wave 1 Discovery blindness. Discovery scope must be the full estate, not Wave 1 sites only.
- Solution Design & Full Business Case (S12) numbers only firm for Wave 1. The Board commits to "the programme", but the ROM beyond Wave 1 is indicative. By Wave 3 the cost has crept and no one has re-papered the Board. Lock Wave 1 firm and Waves 2 onwards indicative, with explicit re-baseline gates per wave.
- Sponsor fatigue / turnover between waves. A 24-month programme spans two ESG iterations and probably an Exec Sponsor change. Wave 2 starts with weaker air cover than Wave 1 unless you build sponsorship handover into the programme deliberately.
- SI commercial gymnastics. "Wave 1 SOW priced firm; Wave 2 will be re-scoped." Watch for this. Lock at least Wave 2 ROM at S12.
- Wave 1 hypercare bleeds into Wave 2 build. The same SI team cannot focus on both. Resource allocation must be explicit between waves.
Vendor deployment services & on-prem vs cloud
Keystone is written from the Client's perspective and historically frames programmes as a Client–SI relationship. Modern cloud ERP introduces a third commercial party — the platform vendor (Microsoft, SAP, Oracle, Workday) — with its own constraints, prerequisites and paid service tiers. The lifecycle bends to handle that reality.
On a cloud deployment, the vendor — not the SI — owns the production environment lifecycle. Microsoft's Lifecycle Services (LCS) and FastTrack for Dynamics 365, SAP Cloud ALM, Oracle Cloud Customer Connect and Workday Tenant Management each control their own platform's environment provisioning, code package validation, refresh windows and cutover slot timing. On-prem deployments keep that ownership inside the Client/SI boundary; cloud does not. Treating a cloud programme as if the SI controls cutover is a common mistake.
Two decision points matter. The cloud-versus-on-prem decision is taken at Software Selection (S8), with implications cascading through every subsequent stage. The vendor service tier — standard, accelerated (Microsoft FastTrack and equivalents), or paid premium support (Microsoft Premier / Unified Support, SAP Enterprise Support, Oracle Premier) — is locked at Programme Setup & Mobilisation (S10). If the tier question only surfaces at Cutover Planning (S15), the programme is already committed to a go-live date the vendor's standard SLA can't support.
The structural differences are substantial. On cloud, cutover windows are partly the vendor's call; code packages must pass vendor validation before deployment; the Cutover Lead has partial control rather than full. On-prem, the Cutover Lead owns the runbook end-to-end and the Client carries full NFT scope, including capacity, DR and security architecture. Service tier costs feed directly into the Full Business Case at Solution Design & Full Business Case (S12): paid tiers are line items; free tiers such as FastTrack and its equivalents must be qualified for and exercised, not assumed.
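The cloud versus on-prem split in that paragraph reduces to a responsibility lookup. A sketch only: the field names are invented, and the cloud NFT entry is an assumption drawn from the contrast above, not stated in the text.

```python
# Who owns what, by deployment model -- per the lifecycle text above.
RESPONSIBILITIES = {
    "cloud": {
        "production_environment": "platform vendor",
        "cutover_windows":        "partly the vendor's call",
        "code_validation":        "vendor gate before deployment",
        "cutover_lead_control":   "partial",
        "nft_scope":              "shared with vendor (assumed)",
    },
    "on_prem": {
        "production_environment": "Client/SI",
        "cutover_windows":        "Cutover Lead owns the runbook end-to-end",
        "code_validation":        "Client/SI",
        "cutover_lead_control":   "full",
        "nft_scope":              "Client in full: capacity, DR, security",
    },
}

def cutover_control(model: str) -> str:
    """Degree of runbook control the Cutover Lead has under this model."""
    return RESPONSIBILITIES[model]["cutover_lead_control"]

# Treating a cloud programme as if the SI controls cutover is the mistake
# the text warns against; the lookup makes the gap explicit.
assert cutover_control("cloud") == "partial"
```

Writing the split down this way, per programme, is a cheap Cutover Planning (S15) artefact: every "partial" entry on the cloud side is a dependency on the vendor's calendar, not the SI's.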
The full table of vendor services per platform, the prerequisites each vendor sets, and the "why pay twice" framing for Clients sit in the canonical reference. The lifecycle mechanics above are the operational summary.
Run the lifecycle against your own programme.
The Command Centre walks each stage interactively, with readiness checks and links to the artefacts you'll need. No sign-up, no gate.
Open the Command Centre
Or talk through where your programme sits. Book a 30-minute call →