2016-2019. A major composite insurer. One million personal lines policies on a legacy mainframe. A troubled migration programme that blew its timeline and budget. This is where it started.
The insurer was one of the UK's largest composite carriers, writing personal lines Home and Motor, with over one million policies sitting on a legacy mainframe. The target was Duck Creek, a modern policy administration system (PAS). The programme was ambitious: migrate a million policies, preserve every endorsement chain, satisfy regulatory reporting requirements, and do it within the approved timeline and budget.
It didn't go to plan. The programme was troubled from the start. The mainframe's data structures were decades old — built for batch processing, not for extraction. Field definitions had drifted from their documentation over fifteen years of incremental changes. The data dictionary described what the system was designed to hold. The actual data told a different story.
Timeline slipped. Budget expanded. The programme brought in additional resources, but the fundamental problem wasn't headcount — it was methodology. The tools being used treated every policy as an independent record. But personal lines at this volume have deep interdependencies: household-level groupings, multi-policy discounts, renewal chains that span years, endorsements that reference terms from previous endorsements. Breaking those chains during migration meant breaking the data.
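To make that concrete, here is a minimal sketch of the kind of chain check the tooling never performed. It is illustrative only: the field names (policy_id, endorsement_id, prior_endorsement_id) are hypothetical, not the insurer's schema and not a KeystoneMigrate implementation.

```python
# Illustrative sketch: flag endorsements whose referenced prior endorsement
# never made it into the extract. All field names are hypothetical.
from collections import defaultdict


def find_broken_chains(endorsements):
    """Return (policy_id, endorsement_id) pairs whose prior endorsement is missing."""
    by_policy = defaultdict(set)
    for e in endorsements:
        by_policy[e["policy_id"]].add(e["endorsement_id"])

    broken = []
    for e in endorsements:
        prior = e["prior_endorsement_id"]
        if prior is not None and prior not in by_policy[e["policy_id"]]:
            # Migrating this record on its own orphans it from the terms it amends.
            broken.append((e["policy_id"], e["endorsement_id"]))
    return broken


sample = [
    {"policy_id": "H-001", "endorsement_id": "E1", "prior_endorsement_id": None},
    {"policy_id": "H-001", "endorsement_id": "E3", "prior_endorsement_id": "E2"},  # E2 never extracted
]
print(find_broken_chains(sample))  # [('H-001', 'E3')]
```

A record-by-record migration passes this kind of check only by accident; a chain-aware one makes it a gate.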
One million policies across Home and Motor: 650,000 Home policies and 350,000 Motor policies, each with their own endorsement patterns, renewal histories, and regulatory reporting requirements. The scale amplified every methodological weakness. An error rate that might be manageable on a 10,000-policy book becomes catastrophic at a million: a one per cent exception rate is 100 policies to fix by hand on the small book, and 10,000 on this one.
Tom Richardson was the customer-side data lead at the insurer. He saw first-hand what happens when a migration programme doesn't understand the business reality of the data it's moving. The technical team mapped schemas. But nobody mapped the business logic — the underwriting rules embedded in decades of mainframe batch jobs, the implicit relationships between policy records that weren't in any documentation.
The gap between the data dictionary and the actual data was the root cause of most failures. Fields that were 'not in use' according to documentation contained critical business logic. Batch processes that ran overnight handled edge cases that nobody had formally specified. The migration team discovered these gaps one at a time, each one adding weeks to the timeline.
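The sketch below shows what discovery against the actual data, rather than the dictionary, might look like in its simplest form. The CSV layouts and the 'not in use' status label are assumptions made for illustration, not the insurer's real data dictionary or any KeystoneMigrate internals.

```python
# Illustrative sketch: find fields the documentation marks as dead but the
# extract shows are populated. File layouts and status labels are assumptions.
import csv
from collections import Counter


def dead_fields_that_are_alive(extract_csv, dictionary_csv):
    # Documented status per field, e.g. {"CLAUSE_CD_7": "not in use"}
    with open(dictionary_csv, newline="") as f:
        documented = {row["field"]: row["status"].lower() for row in csv.DictReader(f)}

    populated = Counter()
    total = 0
    with open(extract_csv, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            for field, value in row.items():
                if value and value.strip():
                    populated[field] += 1

    # Fields the dictionary calls unused but the data says otherwise,
    # with the share of records in which they actually carry a value.
    return {
        field: populated[field] / total
        for field, status in documented.items()
        if status == "not in use" and populated[field] > 0
    }
```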
Evidence produced
A deep understanding of why documentation-dependent migration methodologies fail on legacy systems — and why the discovery phase must analyse actual data, not rely on what people think the data contains.
Dan Pears was the vendor-side counterpart — working for the consultancy delivering the migration. He saw the programme from the delivery side: the pressure to commit to timelines before the book was properly understood, the disconnect between sales estimates and engineering reality, the compounding effect of early methodology decisions on downstream delivery.
The vendor's tooling was designed for generic data migration — not insurance-specific migration. It could move records between schemas, but it didn't understand endorsement chains, policy groupings, or regulatory reporting dependencies. Every insurance-specific requirement had to be custom-built, which is why the timeline and budget kept expanding.
Evidence produced
A clear view of why generic ETL and data migration tools fail on insurance books — and why domain-aware tooling purpose-built for insurance migration is not a luxury but a necessity.
Between them, Tom and Dan had seen the same migration fail from both sides of the table. The insurer needed tooling that understood insurance data — not just schemas, but business logic, endorsement chains, regulatory dependencies, and the reality of what legacy systems actually contain versus what they're documented to contain. That tooling didn't exist. So they built it.
KeystoneMigrate was designed from day one to address what Tom and Dan saw fail at the major insurer: discovery that analyses actual data rather than relying on documentation, domain-aware migration that understands insurance-specific data structures, and evidence-first validation that proves the migration worked before cutover — not after.
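As one illustration of what evidence-first validation can mean in practice, the sketch below reconciles a source extract against a target load before cutover. It is a deliberately simplified example with assumed inputs, not KeystoneMigrate's validation engine.

```python
# Illustrative sketch: a pre-cutover reconciliation that produces evidence.
from collections import Counter


def reconcile(source_policies, target_policies):
    """Each input is an iterable of (policy_number, product) pairs, e.g. ('H-000123', 'Home')."""
    src, tgt = dict(source_policies), dict(target_policies)
    return {
        "source_counts_by_product": Counter(src.values()),
        "target_counts_by_product": Counter(tgt.values()),
        "missing_in_target": sorted(set(src) - set(tgt)),
        "unexpected_in_target": sorted(set(tgt) - set(src)),
    }


evidence = reconcile(
    [("H-000123", "Home"), ("M-000456", "Motor")],
    [("H-000123", "Home")],
)
print(evidence["missing_in_target"])  # ['M-000456']
```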
Evidence produced
The founding thesis of KeystoneMigrate: insurance PAS migration requires purpose-built, domain-aware tooling — not generic data migration adapted for insurance.
Policies in scope
Historical fact: the programme scope covered over one million personal lines policies — approximately 650,000 Home and 350,000 Motor.
Programme duration
Historical fact: the programme ran from 2016 to 2019. The original timeline was significantly shorter. This is the experience that revealed the gap in migration tooling.
Source system
Historical fact: the source system was a legacy mainframe with decades of accumulated batch processing logic and data structure drift.
Target platform
Historical fact: the target PAS was Duck Creek. The migration required translating mainframe data structures into Duck Creek's modern policy model.
This case study describes the migration programme that inspired KeystoneMigrate. It is not a Keystone product engagement.
We lived this migration from both sides — customer and vendor. We saw what failed and why. KeystoneMigrate was built to solve the problems we experienced first-hand.