infoCDC
A lightweight, simple-to-implement change data capture tool that detects and streams relevant data changes in real time, with no manual development required
Data Replication
Automatically stream DB2 updates to external systems
- Real-time streaming with no polling intervals
- No manual development required
- Exclusively for IBM i (AS400, iSeries) systems
- Focused purely on replication at a fraction of the cost of competitor tools
Complementary Professional Services
Architectural advisory
Middleware platform implementation
Platform and Application Support
infoConnect product implementation
System integration
Custom connector development
Supports leading integration platforms
Real-time bi-directional integrations
- Configure tables and individual columns to replicate
- Auto-detect primary keys and unique keys
- Define custom keys for legacy files
- Snapshot support for initial data loads
- Acknowledge messages
infoCDC configuration menu
Main Menu
Select one of the following:
1. Work with Table Configuration
2. Work with Replication Flows
3. Update INFOCDC License
Selection or command: 1
Feature Comparisons at a Glance
| infoCDC by Infoview Systems | Other market solutions |
|---|---|
| Supports data replication, API enablement, and green screen automation functionality | Offers only replication and/or ETL functionality |
| Supported by a team of IBM i (AS/400) experts and built exclusively for IBM i systems | May be compatible with systems other than the IBM i |
| Requires no additional infrastructure to leverage | May require additional infrastructure |
| Enables real-time event processing without polling intervals | Polls DB2 journals for changes, resulting in time lag and additional IBM i system impact |
| Minimal implementation timelines, with little to no training for the end customer team that will own product operations | Can take longer to implement and require heavy training for the end owners of product operations (mapping and training) |
| Supports leading integration platforms including MuleSoft, Kafka (Confluent), and AWS | May not support API- or event-based middleware applications |
| Priced fairly for mid-market to enterprise firms, with no restrictions on transaction volumes or endpoints | Often priced for enterprise-sized firms or based on transaction volumes |
Flexible Proof of Concept Models
Letting companies evaluate on their own terms
- Upon receipt of the serial numbers of the IBM i servers planned for use in the POC, a 30-day trial license will be provided along with installation documentation
- In-house teams will design and implement the Kafka and MuleSoft components and the IBM i configuration for their desired use case
- As always, the Infoview team will be available to answer any questions and assist with configuration or troubleshooting upon request
Variation one
- For a short period of time (40 hours), a consultant will be allocated to review the POC scope and assist the involved teams.
- The scope would be small and include 1-2 simplified scenarios working end to end in a non-production environment
Variation two
- Entails the creation of the desired use case in our own sandbox environment
- Once configured, results will be demoed to applicable teams with knowledge transfer sessions
Our team will
- Take complete ownership of the implementation of the involved components and corresponding architecture
- Kafka:
- Configure data queue (DQ) listeners and a target database sync connector for a major standard database
- Create an initial snapshot of the source IBM i tables, manually import it into the target database (a minimal sketch of such a load follows this section), and then turn on replication
- Load test and transition to the customer IT team for operations and expansion
- MuleSoft:
- Configure the connector with the IBM i and create flows within MuleSoft Studio
- Initiate integration for a select number of product use cases
- Align integrations with organizational standards
- Host knowledge transfer sessions with team members taking ownership of the implemented components and integrations
Typically, this engagement lasts 2-3 months and requires a formal SOW
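For reference, the manual initial load mentioned above can be as simple as a small JDBC copy job. The sketch below is illustrative only: it assumes the jt400 JDBC driver for DB2 for i and a PostgreSQL target, and the host names, credentials, table, and column names are hypothetical placeholders rather than part of infoCDC itself.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

// Hypothetical one-time snapshot load: copy rows from a DB2 for i table into a
// target database before replication is switched on. All names are placeholders.
// Assumes the jt400 (DB2 for i) and PostgreSQL JDBC drivers are on the classpath.
public class InitialSnapshotLoad {
    public static void main(String[] args) throws Exception {
        try (Connection source = DriverManager.getConnection(
                     "jdbc:as400://ibmi.example.com", "APPUSER", "password");
             Connection target = DriverManager.getConnection(
                     "jdbc:postgresql://db.example.com/replica", "replica", "password");
             Statement read = source.createStatement();
             ResultSet rows = read.executeQuery(
                     "SELECT ORDER_ID, CUSTOMER_ID, ORDER_TOTAL FROM APPLIB.ORDERS");
             PreparedStatement write = target.prepareStatement(
                     "INSERT INTO orders (order_id, customer_id, order_total) VALUES (?, ?, ?)")) {

            // Copy every row from the source table into the target table.
            // A single batch is used here for brevity; chunk it for large tables.
            while (rows.next()) {
                write.setInt(1, rows.getInt("ORDER_ID"));
                write.setInt(2, rows.getInt("CUSTOMER_ID"));
                write.setBigDecimal(3, rows.getBigDecimal("ORDER_TOTAL"));
                write.addBatch();
            }
            write.executeBatch();
        }
    }
}
```

Once the snapshot has been loaded and verified, replication can be switched on so that only subsequent changes flow through infoCDC.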
Common Integration Use Cases
Frequently Asked Questions
Looking to increase communication from our team to yours
- Is infoCDC installed and run on the IBM i?
  Yes, infoCDC is installed and runs on the IBM i, because it is a journal-based solution.
- Once the product is configured, is there any manual coding required to leverage the product?
  The only manual process involved with infoCDC is the configuration of the product itself. Once configured, the product monitors DB2 changes automatically and streams them to target APIs as defined by the user.
- How does the product capture changes on the selected DB2 tables?
  Via journal files on the IBM i.
- Once infoCDC is installed on an IBM i server, is auto-discovery enabled on the server?
  At this point it is a manual process. The focus is not on replicating an entire database or server, but on replicating a subset of tables, and possibly a subset of the data in those tables, that makes the most sense from an application perspective.
- Does the Infoview team offer POCs to customers?
  Absolutely! Our services team is here to assist with product configuration and first use case implementation, and can also help with defining tables, full installation of the product in a sandbox/non-production environment, and the corresponding connector configurations to ensure everything works from the IBM i perspective.
- Does infoCDC require timestamps on the table?
  Timestamps are not required, because the product does not poll periodically based on timestamps. infoCDC connects directly into DB2 journals, which persistently log data and events much like Kafka does. Whenever a change happens, one or more entries are pushed into a journal and immediately made available to our process.
- How much CPU or memory does infoCDC need to run on the IBM i?
  Overall, the implementation is a relatively light touch on IBM i system performance. Listening happens in a dedicated job per journal on the IBM i, which is very lightweight compared to a trigger-based solution; the job applies the configured filtering criteria and then sends matching changes to a data queue for external systems. In summary, the exact footprint largely depends on the other components, jobs, and processes in the system.
- Why did Infoview choose to invest in a CDC-based product?
  There are several products on the market focused on mass-scale replication, but they are often heavy, expensive, and not easy to implement. We envisioned this product as a lightweight, easy-to-use alternative that is more accessible and requires less time to implement.
- Must infoCDC be leveraged with an existing Infoview connector?
  infoCDC was developed to expect a consumer listening to its outbound interface (currently a data queue with an associated format table). When bundled together, Infoview's products are highly complementary and supported by our in-house team members.
- Are there limits on the number of tables you can capture?
  The product supports multiple tables and multiple journals. infoCDC runs one listener job per journal, and multiple tables can be configured per journal and processed by a single job, making it lightweight and easy on the system. Generally, there is no hard limit on the number of tables or journals processed simultaneously.
- Is it easy to recover from failures that may occur due to outages?
  infoCDC keeps track of changes using the "next journal sequence" position, which can be reset to reprocess changes if necessary, as long as the journal receivers that contain those changes are still on the system. For scenarios where journal receivers are auto-managed, we recommend setting a wide enough retention period (a few days or weeks) to allow changes to be replayed if needed.
infoCDC Demonstration
Subscription Models, Delivery, and Support
- Standard: During the entire subscription term, support will be provided covering product deployment, error/bug resolution, best practice advice, and subsequent product releases.
- Priority: Expedited incident resolution, bug fixes, and small enhancements. Unused support hours can be rolled over to the following month.
- Priority 24×7: Support engineer available for incident resolution during standard business hours and on-call rotations for all nights/weekends/holidays. Unused support hours are rolled over to the following month.
- Dedicated 24×7: A dedicated support engineer is online and ready to jump in at any point in time, day or night. Time outside incident resolution can be used for any additional project work.
Our Offering
Ease of Use
Eliminate the need to capture and replicate database updates manually, while leveraging a user-friendly interface that does not require extensive training.
Data monitoring and copying rules
Adding a new data monitor can be done by simply creating a new control consisting of table-level, row-level, and column-level metadata. Furthermore, when a custom rule is created on the IBM i server, infoCDC maintains the rule when replicating data to another application.
Transferring Data from tables
Hassle-free replication that delivers only the desired information
To increase efficiency, you can pick the desired columns on a table to transfer data from, eliminating the need to transfer a whole table when only a subset of the data is needed.
Cross-product Compatibility
infoCDC was designed to work alongside a listener. Paired with our suite of infoConnect connector products, including the MuleSoft or Kafka connectors, the product listens for database changes and streams them to their final destination in near real-time, enabling complete integration solutions with zero coding required.
An agile application
infoCDC was designed as a lean system with a light footprint on system resources when deployed on an IBM i server.
Integral part of a package
infoCDC was designed to work alongside a listener, such as MuleSoft or Kafka. A standalone CDC component could be developed, but it would still require an external listener to make a complete product; this could be a data queue reader residing on an external IBM i system, either homegrown by the client or developed by a third party. A minimal sketch of such a reader follows.
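As an illustration of what a homegrown data queue reader might look like, the sketch below uses IBM's Toolbox for Java (jt400) to block on a data queue and print each change entry. The host, credentials, library, and queue path shown are hypothetical placeholders; the actual outbound queue name and entry layout come from the infoCDC configuration and its associated format table.

```java
import com.ibm.as400.access.AS400;
import com.ibm.as400.access.DataQueue;
import com.ibm.as400.access.DataQueueEntry;

// Hypothetical "homegrown" data queue reader. Assumes the jt400 (IBM Toolbox
// for Java) library on the classpath; all connection details and the queue
// path are placeholders for whatever the infoCDC configuration defines.
public class CdcQueueReader {
    public static void main(String[] args) throws Exception {
        AS400 system = new AS400("ibmi.example.com", "APPUSER", "password");
        DataQueue queue = new DataQueue(system, "/QSYS.LIB/MYLIB.LIB/CDCEVTQ.DTAQ");

        while (true) {
            // Block (-1 = wait indefinitely) until the next change entry arrives.
            DataQueueEntry entry = queue.read(-1);
            if (entry == null) {
                continue;
            }
            // The payload layout is described by infoCDC's associated format table;
            // for illustration we simply print the raw entry text.
            System.out.println("Change event: " + entry.getString());
        }
    }
}
```

In practice, the reader would parse each entry according to the format table and hand it to the downstream system, for example by publishing it to a Kafka topic or calling a target API.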
New Data Monitoring
Adding a new data monitor can be done by simply creating a new control consisting of table-level, row-level, and column-level metadata, allowing users to monitor exactly what they want to monitor.
Formatting data copying rules
When a custom rule is created on the IBM i server, infoCDC maintains this rule when replicating data to another application.