Frequently Asked Questions: The IBM i (AS/400) Connector for Kafka

As the one-year anniversary of the infoConnect for Kafka product launch quickly approaches, we have noticed recurring questions from customers. The goal of this blog is to share the most common questions the Infoview team has received about infoConnect for Kafka, to better equip you and your team with the information you need.


Does the Kafka Connector feature source and sink capabilities?

Yes. Infoview’s Kafka Connector is actually a product suite composed of three separate connectors. Because most teams require more than one, depending on their use case, we bundled them together.


How does the connector product suite work?

The connectors communicate with the IBM i via low-level socket-based connections, facilitated by the IBM Toolbox for Java library. Each connector runs as a plug-in within the Kafka Connect runtime, whether deployed on-premise, in the cloud, or installed from Confluent Hub.
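
For illustration, here is a minimal sketch of what registering one of the connectors in Kafka Connect standalone mode might look like. The connector class and the ibmi.* property names are hypothetical placeholders invented for this example, not the product's actual configuration keys; consult the product documentation for the real ones.

```properties
# Hypothetical standalone Kafka Connect configuration for an IBM i source connector.
# name, connector.class, and tasks.max are standard Kafka Connect keys;
# the connector class and every ibmi.* key are illustrative placeholders only.
name=ibmi-source-example
connector.class=com.infoview.connect.ibmi.IbmiSourceConnector
tasks.max=1

# Connection details for the IBM i system (placeholders).
ibmi.host=myas400.example.com
ibmi.user=KAFKAUSR
ibmi.password=********

# Destination topic for records produced from the IBM i
# (connector-specific placeholder key).
topic=ibmi.events
```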


Can the Infoview team assist with product-related POCs?

Absolutely! We offer multiple options:

  • A cost-free license (self-service approach) that allows interested teams to evaluate the product on their own
  • A short-term allocation of an Infoview consultant to assist with configuration, implementation, and the first use case
  • Re-creation of the desired use case within Infoview’s internal environments
  • Complete Infoview ownership of the product implementation, configuration, and overarching infrastructure

How does infoConnect for Kafka compare to JDBC connectors?

When leveraging a JDBC connector, changes cannot be captured in real time, leading to an inevitable time lag. infoConnect for Kafka enables real-time, bidirectional integration between Kafka and the IBM i without any additional application servers or code generation. Furthermore, some database models are complex and require additional rules to be applied; in those cases, our product’s ability to call back-end programs instead of replicating business logic in the integration layer is a feature customers have found useful.
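
As a hedged illustration of the program-call idea, the sketch below shows what a sink-side configuration mapping a Kafka topic to an existing RPG program might look like. The connector class and all program.* keys are hypothetical placeholders, not the product's published property names.

```properties
# Hypothetical sink configuration: each record consumed from the topic drives
# a call to an existing IBM i program, so business logic stays on the IBM i
# instead of being re-implemented in the integration layer.
# name, connector.class, tasks.max, and topics are standard Kafka Connect keys;
# the connector class and the program.* keys are illustrative placeholders only.
name=ibmi-program-call-example
connector.class=com.infoview.connect.ibmi.IbmiSinkConnector
tasks.max=1
topics=orders.incoming

# Placeholder mapping from the consumed record to program parameters.
program.library=MYLIB
program.name=ORDUPD
program.parameters=ORDER_ID,QUANTITY,STATUS
```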


Does Infoview offer a change data capture product to leverage with the Kafka connector suite?

  • Yes, our team recently finished development of infoCDC, a journal-based solution that resides on the IBM i and monitors for DB2 changes. Once a change is detected, it is streamed to the Kafka Connector, and then on to Kafka, with no manual coding required.

What was the intent for the creation of the Kafka product suite?

  • We wanted to provide Kafka development teams with an easy way to connect their IBM i-based systems to Kafka without any special knowledge of IBM i, and without the need to implement and operate another piece of middleware.

How is the product suite priced?

  • Pricing is based on the number of physical IBM i servers used with the product suite, regardless of server type (production, non-production, DR, etc.). All three connectors are bundled together and included in that total.
  • For additional pricing and subscription information, please reach out to our team.  

Is the connector compatible with commercial off-the-shelf IBM i backend systems?

  • Yes. The connector can be used to execute business logic or exchange messages with IBM i-based commercial applications.

Is the product suite compatible with Confluent?

  • The Kafka Connector Suite can indeed be leveraged on both Confluent and Apache Kafka stacks; the only difference is that the implementation and configuration steps differ slightly on the Confluent side (see the sketch below).
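
As a rough sketch of that difference: on a plain Apache Kafka stack the connector jars are copied onto the worker's plugin.path, while on Confluent Platform the plugin can also be installed with the confluent-hub CLI. The directory and connector coordinates below are placeholders, not the product's published artifact names.

```properties
# Kafka Connect worker configuration (plain Apache Kafka stack).
# plugin.path is a standard Connect worker property; the directory is a
# placeholder for wherever the connector archive is unpacked.
plugin.path=/opt/kafka/connect-plugins

# On Confluent Platform the same plugin can instead be installed with the
# confluent-hub CLI (the connector coordinates here are placeholders):
#   confluent-hub install infoview/infoconnect-kafka:latest
```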

For more information regarding infoConnect for Confluent product compatibility, prerequisites, and operations, check out the links below:

🌐