Create a Custom Kafka CDR Consumer Extension
You can create your own custom Kafka CDR Consumer extensions for charging implementations that are not covered by out-of-box extensions delivered with the Kafka CDR Consumer.
Before you begin
- Create a custom MATRIXX Data Container (MDC) that extends MtxChrgMsg to add the additional fields required by your charging solution. For more information, see the discussion about MDC and MDC field extension configuration in MATRIXX Configuration.
- Determine the input message format (for example, CSV, JSON, compact MDC (CMDC), or any other format) and the schema of the Kafka topic to be consumed by the extension. This is the most important prerequisite.
- To create a new project for your extension using Archetype (the first step of the following procedure), you must first have the following in place:
  - JDK 11 or later
  - Maven
  - A runtime deliverable for the software you are using to create images (for example, Docker)
Procedure
- Create a new project for your extension using Archetype. Use the following Maven command to create the project:
mvn -B archetype:generate -DarchetypeGroupId=com.matrixx.kcc -DarchetypeArtifactId=kafka-cdr-consumer-archetype -DarchetypeVersion=<Archetype version number> -DgroupId=<your project group ID> -DartifactId=<your project artifact ID> -Dversion=<your project version> -DcustomPrefix=<your project prefix> -DcustomPackage=<your project package>
Where:
- <Archetype version number> is the version of Archetype you are using, for example, 1.0.9.
- <your project group ID> is the ID of your project group, for example, com.matrixx.acme.
- <your project artifact ID> is the ID of your project artifact, for example, acme-json-consumer.
- <your project version> is the version of your project, for example, 1.0.0-SNAPSHOT.
- <your project prefix> is the project name prefix you are using, for example, acme-json-consumer.
- <your project package> is the package name for your project, for example, acme.json.
The following is an example:
mvn -B archetype:generate -DarchetypeGroupId=com.matrixx.kcc -DarchetypeArtifactId=kafka-cdr-consumer-archetype -DarchetypeVersion=1.0.9 -DgroupId=com.matrixx.acme -DartifactId=acme-json-consumer -Dversion=1.0.0-SNAPSHOT -DcustomPrefix=acme-json-consumer -DcustomPackage=acme.json
This step creates a directory structure like the following:
$ tree -L 3 acme-json/
acme-json/
├── acme-json-config
│   ├── acme-json-engine-config
│   │   ├── pom.xml
│   │   └── src
│   ├── acme-json-mdc-config
│   │   ├── Dockerfile
│   │   ├── gen.sh
│   │   └── pom.xml
│   └── pom.xml
├── acme-json-consumer
│   ├── Dockerfile
│   ├── pom.xml
│   └── src
├── acme-json-engine-simulator
│   ├── Dockerfile
│   ├── pom.xml
│   └── src
├── acme-json-integration-test
│   ├── pom.xml
│   └── src
├── acme-json-mdd-containers
│   └── pom.xml
└── pom.xml
- Update the create_config.info file after the new project is created or when you change an existing project. This is necessary so that the custom fields are recognized by the deserializer and mapping logic. This file is located at src/main/resources/00Default.app/create_config.info in the <prefix>-config/<prefix>-engine-config sub-project, where <prefix> is the prefix for your sub-project, for example, acme-json as in the preceding example.
- Write custom code to parse or deserialize the incoming data format (for example, CSV, JSON, or CMDC) into readable Java classes. Implement the deserializer in the <prefix>-consumer sub-project. For the general shape of such a deserializer, see the first sketch after this procedure.
- Create mappings that map the deserialized Java classes to MtxChrgMsg subtypes (for example, Mtx5GMsg or MtxDiamRoMsg), the MATRIXX chargeable message objects that are sent to MATRIXX Engine for rating. Implement the mappings in the <prefix>-consumer sub-project. For the general shape of such a mapping, see the second sketch after this procedure. After you complete this step, you can build a new image for the new custom extension deliverable.
- Build the project and image by running the following command at the root of the new project:
mvn clean verify
The image is built with a name similar to the following:
{docker.repo.url}/matrixx-kcc/<your project artifact ID>:<your project version>
For example:
docker images | grep acme-json
harbor.matrixx-services.com/matrixx-kcc/acme-json-consumer   1.0.0-SNAPSHOT   aba29c72c641   3 minutes ago   526MB
The built image is a standalone image that does the following when it is deployed:
- Validates connections to both the Kafka brokers and the MATRIXX Engine defined in the Helm chart values file.
- Consumes messages from the configured Kafka topic.
- Deserializes and maps each message using your custom implementation.
- Sends the mapped message to MATRIXX Engine for rating.
A conceptual illustration of this flow is provided as the third sketch after this procedure.
You can change docker.repo.url by editing the root pom.xml. You can push the image you built to the registry defined by this docker.repo.url value.
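The Kafka CDR Consumer archetype generates the interfaces that your deserializer plugs into in the <prefix>-consumer sub-project; those generated interfaces are not reproduced here. The following is a minimal sketch of the parsing step only, assuming a JSON input format, the standard Kafka Deserializer interface, Jackson for JSON binding, and a hypothetical AcmeJsonRecord payload class. Adapt it to the interfaces and field names in your generated project.
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.common.errors.SerializationException;
import org.apache.kafka.common.serialization.Deserializer;

/**
 * Hypothetical payload class for the incoming JSON records.
 * Replace the fields with those your charging solution actually sends.
 */
class AcmeJsonRecord {
    public String sessionId;
    public String subscriberId;
    public long usedBytes;
    public String eventTime;
}

/** Parses each consumed Kafka message (JSON bytes) into an AcmeJsonRecord. */
public class AcmeJsonDeserializer implements Deserializer<AcmeJsonRecord> {

    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    public AcmeJsonRecord deserialize(String topic, byte[] data) {
        if (data == null) {
            return null;
        }
        try {
            return mapper.readValue(data, AcmeJsonRecord.class);
        } catch (IOException e) {
            // Throwing lets the caller treat the record as invalid,
            // for example by routing it to a reject topic.
            throw new SerializationException(
                "Cannot parse record from topic " + topic + ": "
                + new String(data, StandardCharsets.UTF_8), e);
        }
    }
}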
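The mapping step depends on the MDC-generated Java classes in your project, which are not shown here. The following sketch illustrates only the shape of a mapping: ChargingMessageFields is a stand-in for the real MtxChrgMsg subtype (for example, Mtx5GMsg or MtxDiamRoMsg), and all field names on both sides are assumptions. It builds on the hypothetical AcmeJsonRecord from the previous sketch; in your project, set the accessors generated from your MDC configuration instead.
import java.time.Instant;

/**
 * Stand-in for the MDC-generated chargeable message class (an MtxChrgMsg
 * subtype such as Mtx5GMsg or MtxDiamRoMsg). In the real project, use the
 * Java classes generated from your MDC configuration instead of this class.
 */
class ChargingMessageFields {
    String sessionId;
    String subscriberId;
    long usedServiceUnits;
    Instant eventTime;
}

/** Maps a deserialized AcmeJsonRecord to the chargeable message fields. */
public class AcmeJsonChargingMapper {

    public ChargingMessageFields map(AcmeJsonRecord record) {
        ChargingMessageFields msg = new ChargingMessageFields();
        // Field names on both sides are illustrative; align them with your
        // input schema and your MDC field definitions.
        msg.sessionId = record.sessionId;
        msg.subscriberId = record.subscriberId;
        msg.usedServiceUnits = record.usedBytes;
        msg.eventTime = Instant.parse(record.eventTime); // expects ISO-8601
        return msg;
    }
}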
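The consume, deserialize, map, and send loop listed above is implemented by the Kafka CDR Consumer framework itself; you supply only the deserializer and the mappings. The following sketch is a conceptual illustration of that flow using the standard Kafka consumer API and the hypothetical classes from the previous sketches. It is not code you need to write, and the final send to MATRIXX Engine is shown only as a placeholder.
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

/** Conceptual illustration of the flow performed by the deployed image. */
public class ConsumeFlowIllustration {

    public static void main(String[] args) {
        // Connection settings mirror the example Helm values shown later.
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka1:9092,kafka2:9092,kafka3:9092");
        props.put("group.id", "cg1");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        AcmeJsonDeserializer deserializer = new AcmeJsonDeserializer();
        AcmeJsonChargingMapper mapper = new AcmeJsonChargingMapper();

        try (KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("rerater_topic"));
            while (true) {
                ConsumerRecords<String, byte[]> records =
                    consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, byte[]> record : records) {
                    // Deserialize and map each message with the custom code.
                    AcmeJsonRecord parsed =
                        deserializer.deserialize(record.topic(), record.value());
                    ChargingMessageFields msg = mapper.map(parsed);
                    // The framework sends the mapped chargeable message to
                    // MATRIXX Engine for rating; shown here as a placeholder.
                    System.out.println("Would send to engine: " + msg.sessionId);
                }
            }
        }
    }
}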
What to do next
Deploy the built image with the kafka-cdr-consumer Helm chart and a values override file. The following is an example override file in which mtx-tap3json-consumer is the image name:
global:
  image:
    registry:
      name: harbor.matrixx-services.com
image:
  name: "matrixx-kcc/mtx-tap3json-consumer"
  version: "5250"
# Kafka CDR Consumer App configurations
configuration:
  engine:
    host: "engine.host"
    port: 4060
  consumer:
    kafka:
      topic: rerater_topic
      config:
        bootstrap.servers: kafka1:9092,kafka2:9092,kafka3:9092
        group.id: "cg1"
        max.poll.records: 100
  producer:
    kafka:
      topic: reject_topic
      config:
        bootstrap.servers: kafka1:9092,kafka2:9092,kafka3:9092
Point the image name and version to the built image. Make sure your Kubernetes environment can pull the image from the registry. Point the engine host and port to the MATRIXX Engine address, or to the TRA-LB address in the case of a multiple-domain deployment.
Install the Helm chart with the override values file, for example:
helm install acme-json-kafka-consumer kafka-cdr-consumer-1.0.0.tgz -f acme_override.yaml
After the chart is deployed successfully, send a message to the source Kafka topic to trigger acme-json-kafka-consumer to start processing. The kafka-cdr-consumer Helm chart is the same for all Kafka CDR Consumer extensions; only the image name and version in the values file change. After you finish testing successfully, you can finalize your new image (that is, remove SNAPSHOT from the version in the root pom.xml) and rebuild your custom Kafka CDR Consumer extension for release.