IBM i Core Banking Data Replication Pipeline
Forming the foundation for real-time data integration initiatives including analytics support, fraud detection, and AI
Industry
Financial Services
Company Size
Multinational
Services
Banking and financial services
Key Factors of Success
- Drop-in tools enabling IBM i DB2 data to be streamed in real time to external systems and platforms
- Cross-platform team with experience in IBM i (AS400) development, DB2, and Kafka platform
Outcomes
- A scalable, event-driven data streaming framework ready for future data sources, platforms, enhancements, and infrastructure
- Functional IBM i (AS400) DB2 to Kafka replication pipeline
- Operational test and production environments
- Processing over 1,000 records per second from DB2 to Kafka
The Customer
A leading full-service banking operation with a network of over 80 branches across 18 provinces.


The Challenge
The customer relied on a legacy IBM i based core banking application, along with several other services and databases, for critical business functions. Driven by an emerging business need, customer development and IT leadership needed a way to make data from both the IBM i and other sources available in real time for use cases such as fraud detection, data analytics, and AI. With tight deadlines and a thinly stretched in-house team, the customer urgently needed assistance from a vendor with a strong background in legacy modernization and integration.
The Solution
An engagement leveraging InfoConnect products and Kafka middleware. The solution included standing up the Kafka platform, configuring InfoCDC and Kafka connectors for IBM i systems, and establishing a replication pipeline for a pre-selected set of DB2 tables.


The Design
To enable real-time data replication, InfoCDC was installed on the IBM i system to capture changes as they occurred, without major rework of the customer's existing IBM i systems. These changes were streamed via Kafka Connect nodes into Kafka topics, enabling seamless data flow. Sink connectors were configured to route the data into the target database as well as other target platforms. A hybrid delivery model, blending agile flexibility with the structure of waterfall, was used to align with architectural standards while maintaining timely project delivery.
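To illustrate the sink side of such a pipeline, the sketch below shows what a Kafka Connect sink configuration routing replicated DB2 change topics into a relational target could look like. This is an assumption-laden example, not the customer's actual setup: the topic names, connection details, and target database are placeholders, and only the Confluent JDBC sink connector class is a real, documented component.

```properties
# Illustrative Kafka Connect JDBC sink configuration.
# Topic names, connection URL, and credentials are hypothetical placeholders.
name=db2-change-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=4
# Topics populated by the CDC source connector (names are assumed)
topics=core.accounts,core.transactions
connection.url=jdbc:postgresql://target-db:5432/analytics
connection.user=replication
connection.password=********
# Upsert keyed on the record key so replayed change events stay idempotent
insert.mode=upsert
pk.mode=record_key
auto.create=true
```

A configuration like this is typically submitted to the Kafka Connect REST API, with `tasks.max` tuned to parallelize consumption across topic partitions.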
Results
The solution successfully delivered a real-time IBM i DB2 to Kafka data replication pipeline capable of streaming over 1,000 records per second. Fully functional test and production environments were established to ensure both stability and scalability. The architecture introduced a flexible, event-driven framework designed to evolve with changing business needs, all with minimal disruption to existing IBM i systems.
