CDR Aggregation Function Configuration Examples

Configuring the CDR Aggregation Function (CAF) application consists of updating the necessary properties in the application.yaml file.

The configuration file is located in the /opt/mtx/caf/conf directory. It has two sections:

  • Application configuration — properties used by the CAF business logic layer to control aggregation for the various service types and context types.
  • Apache Kafka (using Spring Cloud Stream) configuration — the CAF input and output topic names, the Apache Kafka client application ID, the number of stream threads, and other properties.

The following is an example application.yaml file:

mtx:
  aggregator:
    globalSettings:
      #rnfId: "pny9-chf-cgf-012-agg-01"
      invocationTsPattern: "yyyy-MM-dd'T'HH:mm:ssXXX"
      # APPEND or UPDATE. In APPEND mode, MUU blocks from the 5G requests are appended
      # to the stored ChargingDataRequest message. In UPDATE mode, the MUU block in the
      # stored ChargingDataRequest message is replaced.
      chargingInformationAdditionMode: APPEND
      passThrough: false # If true, no aggregation is performed and messages are passed straight to the output topic
      useStrictOrdering: false # If true, an ordered map indexed by invocationSequenceNumber is created for every session
    serviceList:
      - type: DATA # Valid values are DATA|VOICE|SMS
        sessionReleaseAggregationEnabled: true # Indicates if the CDR should be aggregated upon a Release
        sessionAggregationSettings: # Aggregation is performed across all rating groups for the session.
          timeBasedAggregationPeriod: 4h
          endOfDayAggregationEnabled: false
          thresholds:
            volume:
              enabled: false
              value: 200MB
            interactions:
              enabled: true
              value: 5
        triggerTypes:
          - RAT_CHANGE
          - UE_TIMEZONE_CHANGE
spring:
  cloud:
    stream:
      function:
        definition: dataAggregator
      bindings:
        dataAggregator-in-0:
          destination: aggregator-input-topic
        dataAggregator-out-0:
          destination: aggregator-output-topic
      kafka:
        streams:
          bindings:
            dataAggregator-in-0:
              consumer:
                dlqName: aggregator-input-topic-dlq
          binder:
            deserializationExceptionHandler: sendToDlq
            configuration:
              num.stream.threads: 3
              metrics.recording.level: DEBUG
            brokers:
              - pny9-chf-kafka-08:9092
              - pny9-chf-kafka-09:9092
              - pny9-chf-kafka-10:9092
              - pny9-chf-kafka-11:9092
management:
  endpoints:
    web:
      exposure:
        include:
        - metrics
        - health
  endpoint:
    health:
      show-details: always
  health:
    binders:
      enabled: true
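
The management section exposes the Spring Boot Actuator metrics and health endpoints; because the binder health indicator is enabled, the health report includes the state of the Kafka binder. Once the CAF is running, these endpoints can be queried over plain HTTP. The following minimal Java sketch is an illustration only; the host and port (localhost:8080 here) are assumptions and depend on the actual deployment:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CafHealthCheck {
    public static void main(String[] args) throws Exception {
        // Assumption: the CAF Actuator is served on localhost:8080; adjust for the deployment.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/actuator/health"))
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        // With show-details: always, the body lists each component, including the Kafka binder.
        System.out.println(response.statusCode() + " " + response.body());
    }
}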

The following application.yaml excerpt shows how the spring.cloud.stream.bindings property can be configured so that the CAF consumes from multiple input Kafka topics and produces each topic's aggregated output to its own output topic:

spring:
  cloud:
    stream:
      function:
        definition: dataAggregator
      bindings:
        dataAggregator-in-0:
          # The input topics must be listed in the same order as the output bindings below.
          destination: aggregator-input-topic-1,aggregator-input-topic-2,aggregator-input-topic-3
        dataAggregator-out-0:
          destination: aggregator-output-topic-1 # output topic for the first input topic
        dataAggregator-out-1:
          destination: aggregator-output-topic-2 # output topic for the second input topic
        dataAggregator-out-2:
          destination: aggregator-output-topic-3 # output topic for the third input topic
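
On the application side, the binding names in this excerpt follow the Spring Cloud Stream convention <functionName>-in-<index> and <functionName>-out-<index>, where the function name comes from spring.cloud.stream.function.definition. With the Kafka Streams binder, a function with several output bindings is declared as a bean returning an array of KStream objects: element i of the array is published on the dataAggregator-out-i binding. The following Java sketch shows only this binding pattern; the String key/value types and the routeIndex helper are hypothetical placeholders, since the CAF's actual routing (each record goes to the output that corresponds to its input topic) and message types are internal to the product:

import java.util.function.Function;

import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AggregatorTopologySketch {

    // Hypothetical helper standing in for the CAF's internal routing decision.
    private static int routeIndex(String key, String value) {
        return Math.floorMod(key.hashCode(), 3);
    }

    // The bean name must match spring.cloud.stream.function.definition.
    // Element i of the returned array is written to binding dataAggregator-out-i.
    @Bean
    @SuppressWarnings("unchecked")
    public Function<KStream<String, String>, KStream<String, String>[]> dataAggregator() {
        return input -> new KStream[] {
                input.filter((key, value) -> routeIndex(key, value) == 0),
                input.filter((key, value) -> routeIndex(key, value) == 1),
                input.filter((key, value) -> routeIndex(key, value) == 2)
        };
    }
}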