Legacy Modernization

Replacing an aging core banking system with no customer-facing downtime


Zero hours of customer-facing downtime

26 months total migration duration

100% of accounts verified before cutover

A mid-sized Canadian bank had been running its core banking system on a platform that the vendor was retiring. Extended support was available but expensive and time-limited. The bank's technology leadership had assessed three migration paths: a vendor-recommended big-bang cutover, a phased migration over five years using the extended support period, and a strangler fig approach that would migrate functionality incrementally while keeping both systems operational. The vendor's recommended cutover had been modelled at 72 hours of system unavailability. The bank's customer agreement commitments made that window impossible.

The outcome

We implemented the strangler fig migration — running both systems in parallel, incrementally shifting customer accounts to the new system, and decommissioning the old system only after every account had been verified on the new platform. The entire migration took 26 months. Customers experienced no downtime.

01

Why the vendor's recommended approach wasn't an option

The vendor's big-bang cutover plan had been designed for customers with more operational flexibility than a retail bank. The plan required taking the core system offline for 72 hours to migrate all customer data, run reconciliation, and bring up the new system. The bank's service level agreements with its commercial customers — corporate clients with real-time payment requirements, payroll processing obligations, and cash management operations — made 72 hours of system unavailability a contract breach, not an operational inconvenience. The vendor's plan also assumed that the migration would be clean: that every account record would migrate without error, that every balance would reconcile, and that no customer would need to access their account during the cutover window. The bank's technology team had significant doubts about all three assumptions. The existing core system had accumulated thirty years of schema variations, exception handling patches, and data quality issues that had never been fully catalogued. A clean migration of a 30-year-old core banking database in 72 hours was not a realistic expectation.

02

The strangler fig pattern at banking scale

The strangler fig migration pattern takes its name from the fig that germinates on a host tree, grows around it, and gradually replaces it while the host continues to stand. Applied to banking, the approach works as follows: the new core banking system is deployed and operates in parallel with the existing system. Customer accounts are migrated in cohorts — groups of accounts moved from the old system to the new system in a planned sequence. After migration, each cohort's accounts are served by the new system; the old system continues to serve the accounts not yet migrated. The two systems exchange data to maintain consistency on accounts that interact across the boundary — a payment from a migrated account to an unmigrated account, for example. Over time, the proportion of accounts on the new system grows until the last cohort has been migrated, at which point the old system is decommissioned. The routing decision this requires is sketched below; the real complexity of the approach is in the data synchronization layer that keeps both systems consistent while the migration is in progress.
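A minimal sketch of that routing decision, assuming a hypothetical migration-status lookup keyed by account number (the names CoreSystem, MIGRATION_STATUS, and route_transaction are illustrative, not the bank's actual components):

```python
from enum import Enum

class CoreSystem(Enum):
    LEGACY = "legacy"
    NEW = "new"

# Hypothetical migration-status store: maps each account number to the
# system of record for that account. In production this would be a
# replicated lookup table consulted on every request.
MIGRATION_STATUS: dict[str, CoreSystem] = {}

def route_transaction(account_number: str) -> CoreSystem:
    """Route a request to whichever core currently owns the account.

    Accounts default to the legacy core until their cohort has been
    migrated and verified, at which point the status flips to NEW.
    """
    return MIGRATION_STATUS.get(account_number, CoreSystem.LEGACY)

def mark_cohort_migrated(cohort: list[str]) -> None:
    """Flip routing for a verified cohort over to the new core."""
    for account_number in cohort:
        MIGRATION_STATUS[account_number] = CoreSystem.NEW
```

The important property is the default: any account the routing layer does not recognize stays on the legacy core, so a gap in the status table fails safe rather than sending traffic to a system that does not yet own the account.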

03

Building the synchronization layer

The synchronization layer was the most technically demanding part of the migration. Every transaction that touched the boundary between the two systems — a payment, a transfer, a fee — had to be recorded in both systems without double-counting. We built the synchronization layer as an event-driven system: every transaction on either core produced an event, and the synchronization layer consumed those events and applied the corresponding update to the other system. The central challenge was idempotency: network failures and system restarts meant that events could be delivered more than once, and applying a transaction twice would corrupt account balances. We implemented idempotency keys on every synchronization event — a unique identifier that let each system detect and discard duplicate deliveries — and ran a daily reconciliation process that compared account balances across both systems for accounts with cross-boundary activity. Any discrepancy in the daily reconciliation triggered an immediate review before the next business day. There were forty-one reconciliation discrepancies across the entire 26-month migration period; each was identified, investigated, and resolved internally, and none reached a customer.
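A minimal sketch of both mechanisms, assuming a relational store with a uniqueness constraint on the idempotency key; the schema and function names are illustrative, and SQLite stands in for whatever database the synchronization layer actually used:

```python
import sqlite3
from typing import Callable, Iterable

# Illustrative schema: a table of seen idempotency keys and a table of
# account balances in integer cents.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE applied_events (idempotency_key TEXT PRIMARY KEY);
    CREATE TABLE balances (account TEXT PRIMARY KEY, cents INTEGER NOT NULL);
""")

def apply_event(idempotency_key: str, account: str, delta_cents: int) -> bool:
    """Apply a cross-boundary transaction exactly once.

    The primary-key insert and the balance update share one database
    transaction, so a redelivered event is rejected atomically and a
    crash between the two statements cannot half-apply the event.
    Returns False when the event is a duplicate.
    """
    try:
        with conn:  # one atomic transaction: commit on success, rollback on error
            conn.execute(
                "INSERT INTO applied_events (idempotency_key) VALUES (?)",
                (idempotency_key,),
            )
            conn.execute(
                "UPDATE balances SET cents = cents + ? WHERE account = ?",
                (delta_cents, account),
            )
        return True
    except sqlite3.IntegrityError:
        return False  # duplicate delivery: discard silently

def reconcile(accounts: Iterable[str],
              other_balance: Callable[[str], int]) -> list[str]:
    """Daily reconciliation: return the accounts whose balances diverge
    between this system and the other core (queried via other_balance)."""
    mismatched = []
    for account in accounts:
        row = conn.execute(
            "SELECT cents FROM balances WHERE account = ?", (account,)
        ).fetchone()
        local = row[0] if row else 0
        if local != other_balance(account):
            mismatched.append(account)
    return mismatched
```

Anchoring deduplication in a database constraint rather than in consumer memory is what lets the exactly-once guarantee survive restarts and redeliveries.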

04

The account migration sequence and verification process

We divided the bank's 840,000 customer accounts into migration cohorts based on account complexity: simpler accounts first, more complex accounts later. The first cohorts contained individual savings and chequing accounts with straightforward transaction histories. The final cohorts contained commercial accounts with complex product structures, automated payment arrangements, and covenant monitoring requirements. Each cohort was migrated on a Sunday morning, when transaction volume was lowest. The migration process for each cohort consisted of five steps:

1. Extract the account data from the old system.
2. Transform it to the new system's data model.
3. Load it into the new system.
4. Run a suite of automated verification checks against both systems to confirm that balances, transaction histories, and product configurations match.
5. Switch the routing for those accounts to the new system.

The automated verification suite in step four ran more than 200 checks per account; a sketch of their shape follows below. Any account that failed a verification check was not migrated that Sunday — it was flagged for manual review and included in a later cohort after the discrepancy was resolved.
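A sketch of the shape of those verification checks, assuming hypothetical account snapshots already extracted from each core; the two checks shown (check_balance and check_transaction_count) stand in for the 200-plus the real suite ran:

```python
from decimal import Decimal
from typing import Any, Callable

# Each check compares one facet of an account across the two cores and
# returns a failure message, or None if that facet matches.
Check = Callable[[dict[str, Any], dict[str, Any]], str | None]

def check_balance(old: dict[str, Any], new: dict[str, Any]) -> str | None:
    # Compare as exact decimals, never floats, for monetary amounts.
    if Decimal(old["balance"]) != Decimal(new["balance"]):
        return f"balance mismatch: {old['balance']} vs {new['balance']}"
    return None

def check_transaction_count(old: dict[str, Any], new: dict[str, Any]) -> str | None:
    if len(old["transactions"]) != len(new["transactions"]):
        return "transaction history length mismatch"
    return None

CHECKS: list[Check] = [check_balance, check_transaction_count]

def verify_account(old: dict[str, Any], new: dict[str, Any]) -> list[str]:
    """Run every check; any failure excludes the account from the cohort."""
    return [msg for check in CHECKS if (msg := check(old, new)) is not None]
```

Returning a list of failure messages rather than a single boolean preserves the detail needed for the manual review that moved a failing account into a later cohort.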

05

Decommissioning the old system

When the last cohort of accounts had been migrated and verified, the old system was kept running in read-only mode for 90 days before decommissioning. During those 90 days, the synchronization layer was disabled — no new transactions were written to the old system — but it remained accessible for audit queries, regulatory reporting that referenced historical data, and any disputes that required access to transaction records predating the migration. At the end of the 90-day period, the old system was archived: a complete snapshot of the database was taken and stored in long-term archival storage for the seven-year retention period required by the bank's regulatory obligations, and the live environment was decommissioned. The total infrastructure cost of the parallel-running period — operating two core banking systems simultaneously for 26 months — was $4.2 million more than a big-bang cutover would have cost. The bank's risk management committee had approved that premium as appropriate given the alternative: a migration approach with a meaningful probability of a customer-facing incident during one of the most operationally sensitive periods in the bank's recent history.

Facing a similar infrastructure challenge?

We're happy to have a technical conversation about your specific environment — no commitment required.