Configure Network Enablers

Make configuration changes to enable Call Control Framework (CCF) in the Kubernetes cluster.

About this task

Configure CCF and Network Enablers (NEs) by answering questions in the resource_create_config.info file.

Procedure

  1. Increase shared memory to 1 GB by answering the following question in the resource_create_config.info file.
    What is the shared memory size in MB to use?1024
    Note: This is the minimum memory required to run CCF. In a production system, more memory is required.
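    To confirm the setting after the pods are running, you can check the size of /dev/shm inside the pod that runs CCF. This is only a sketch: the pod name is a placeholder, and it assumes the container image provides a shell with the df utility.
    # Check the shared memory size inside the pod (replace <ccf-pod> and the namespace with your own values).
    kubectl exec -n matrixx <ccf-pod> -- df -h /dev/shm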
  2. Ensure that the NE is enabled and configured in the Helm values file, for example:
    # Network Enabler
    network-enabler:
      enabled: true
      replicaCount: 2
      sideloadImages:
        - name: config-sideloader
          version: "release_version_number"
     
      # Topology information. These values are used by the NE and engine pods to generate configuration such as the remote port.
      topology:
        networkEnabler:
          # Single Network Enabler defined for the Domain
          totalPodCount: 2
          portRangeStart: 29052
          internalLinkCount: 4
          cni:
            - master: eth0
              subnet: 192.168.2.0/24
              rangeStart: 192.168.2.201
              rangeEnd: 192.168.2.250
              gateway: 192.168.2.1
        domains:
    For more information about NE topology properties, see the discussion about Network Enabler topology configuration.
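    If you keep these settings in a values file, one way to apply a change is a standard Helm upgrade of the existing release. The release name and chart reference below are placeholders for your deployment:
    # Apply the updated values to the existing release (names are placeholders).
    helm upgrade <release-name> <chart-reference> -n matrixx -f values.yaml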
    The out-of-the-box NE pod generates the default configuration based on the values supplied in the Helm values file. You can override the configuration values using configuration sources; the image name and version must be provided in the Helm values file. Each NE pod needs its own configuration. To use the same configuration source for all pods of the NE while keeping the configuration files distinct, add files to the configuration source using the following naming convention (see the sketch after this list):
    • If the NE pod name is ne-ag1-0, include the file in the configuration source using the following naming pattern:
      • xxx1_create_config.info.ne-ag1-0
      • xxx2_create_config.info.ne-ag1-0
    • If the NE pod name is ne-ag1-1, include the file in the configuration source using the following naming pattern:
      • xxx1_create_config.info.ne-ag1-1
      • xxx2_create_config.info.ne-ag1-1

    For more information, see the discussion about configuration sources in MATRIXX Configuration.
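    As an illustration only, if your configuration source is delivered as a Kubernetes ConfigMap, the per-pod files for a two-pod NE could be packaged together as follows. The ConfigMap name is hypothetical; see MATRIXX Configuration for the supported configuration source mechanisms.
    # Package the per-pod configuration files into one configuration source (hypothetical ConfigMap name).
    kubectl create configmap ne-config-source -n matrixx \
      --from-file=xxx1_create_config.info.ne-ag1-0 \
      --from-file=xxx2_create_config.info.ne-ag1-0 \
      --from-file=xxx1_create_config.info.ne-ag1-1 \
      --from-file=xxx2_create_config.info.ne-ag1-1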

  3. Add general configuration for CCF. At a minimum, configure the external M3UA SIGTRAN links to external STPs and the NE point codes for all potential NEs, for example:
    # Network Enablers can each be given an individual point-code, but at least one must be configured for Network Enabler 1. Undefined Network Enablers will take the same point-code as the last one defined here:
    Network Enabler 1:What is the M3UA point-code?402
    
    # NEs will create cross-links with other NEs in the same access group if turned on here:
    Do you want to configure cross-links between Network Enablers (y/n)?y
     
    # External Connectivity for STPs:
    Network Enabler defaults:What is the local port?2905
    Network Enabler defaults:What is the transport protocol?SCTP
    Network Enabler defaults:Do you want the M3UA link state to be actively managed (y/n)?n
    Network Enabler defaults:What is the link initiation?listen
    Network Enabler:How many links do you want?1
    Network Enabler Link 1:What is the remote address?gateway
    Network Enabler Link 1:What is the destination point-code?401
    When specifying NE links, if the remote STP is inside the Kubernetes cluster, you can specify the remote DNS address of the pod running the STP:
    Network Enabler Link 1:What is the remote address?sigtran-test.matrixx
    NEs evaluate the DNS address at configuration time (during start-up), which means the pod host address can change if the remote pod is restarted. If the address mapping of a remote pod changes after NE start-up, the NE must be reconfigured to accept connections from the new address. Run the following command to make all NEs reload their configuration:
    kubectl get pods -l app=mtx-network-enabler -o name | xargs -I{} kubectl exec {} -- pkill -f network_enabler -USR1
    Note: You can specify general CCF configuration in the resource_create_config.info file, or you can supply a separate configuration file for CCF and NEs. For example, you could specify CCF configuration in a file named ccf_create_config.info. For information about CCF configuration parameters, see MATRIXX Call Control Framework Integration.
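    Because NEs resolve DNS names only at start-up, it can help to confirm that an in-cluster address such as sigtran-test.matrixx resolves before you configure the link. This sketch uses a temporary busybox pod and assumes the namespace used elsewhere in this example:
    # Resolve the in-cluster STP address from a throwaway pod (deleted automatically when it exits).
    kubectl run dns-check --rm -it --restart=Never --image=busybox -n matrixx -- nslookup sigtran-test.matrixx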

What to do next

NEs derive all topology information from the Kubernetes deployment through service discovery using the Kubernetes API. However, engine pods require additional topology information, which is supplied in a separate create_config.info file.
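To see the topology information that service discovery exposes to the NEs, you can list the NE pods with their addresses. The label is the one used earlier in this section; adjust the namespace to match your deployment.
# List NE pods with their IP addresses and the nodes they run on.
kubectl get pods -l app=mtx-network-enabler -n matrixx -o wide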

When you use Helm for installation, the following default configuration is generated by the Helm ConfigMap for the NE, based on the parameter values in the Helm values file provided during installation:
# NEs can be given individual point-codes but by default they will all share NE1's value defined here:
# CCF configuration
Optional Feature:Do you want to enable Call Control Functionality (CCF) (y/n)?y
Optional Feature:Are you licensed to enable Call Control Functionality (CCF) (y/n)?y
 
# Define as many NEs as you might need (not all need to be deployed)
# each should have a different port configured in the form *:<port>
How many Network Enablers do you have?2
Network Enabler 1:What are the internal M3UA IP addresses?ne-ag1-1
Network Enabler 1:What is the M3UA point-code?402
Network Enabler 1:What are the M3UA IP addresses?ne-ag1-1.ne-ag1.matrixx.svc.cluster.local 
Network Enabler 2: What are the internal M3UA IP addresses?ne-ag1-2
Network Enabler 2: What is the M3UA point-code?402
Network Enabler 2: What are the M3UA IP addresses?ne-ag1-2.ne-ag1.matrixx.svc.cluster.local 
# The following configuration is auto-generated based on the values YAML file. If required, it can be overridden using the sideloader image.
# The totalPodCount, portRangeStart, and internalLinkCount fields in the values.yaml file are used to generate this configuration.
# Number of internal links; depends on the number of NE pods in the deployment
Network Enabler Internal: How many links do you want?2
Network Enabler Internal defaults: Do you want to prompt for advanced link configuration (y/n)?y
Network Enabler Internal defaults: What is the transport protocol?TCP
Network Enabler Internal defaults: What is the local address?
Network Enabler Internal defaults: What is the local port?.
Network Enabler Internal defaults: What is the remote port?.
Network Enabler Internal defaults: What is the link initiation?connect
Network Enabler Internal defaults: What is the receive buffer size in bytes?
Network Enabler Internal defaults: What is the send buffer size in bytes?
Network Enabler Internal defaults: What is the link heartbeat interval in milliseconds?1000
Network Enabler Internal defaults: What is the SIGTRAN network role?SGP
Network Enabler Internal defaults: What is the SIGTRAN traffic-mode to use?loadshare
Network Enabler Internal defaults: Do you wish to validate incoming M3UA DPCs (y/n)?n
Network Enabler Internal defaults: Do you want the M3UA link state to be actively managed (y/n)?n
Network Enabler Internal defaults: Do you want to prompt for advanced route configuration (y/n)?.
Network Enabler Internal defaults: What is the priority of this route?0
Network Enabler Internal defaults: What is the routing-context?-1
Network Enabler defaults:What is the M3UA network-indicator?2
Network Enabler Internal defaults: What is the maximum SCCP segment size?3952 
# Internal Link name
Network Enabler Internal Link 1:What is the link name?blade_1_1_1
Network Enabler Internal Link 1:What is the local port?29052
# Name of the Kubernetes service for the proc pod
Network Enabler Internal Link 1:What is the remote address?proc-m3ua-s1e1-0.matrixx.svc.cluster.local
Network Enabler Internal Link 1:What is the remote port?2902
Network Enabler Internal Link 1:What is the ASP identifier?1
Network Enabler Internal Link 1:What is the priority floor for relayed messages?
Network Enabler Internal Link 1: Do you want to prompt for advanced route configuration (y/n)?.
Network Enabler Internal Link 1:Do you want SCCP SCMG messages sent on this route (y/n)?n
# Internal Link name
Network Enabler Internal Link 2: What is the link name?blade_1_1_2
Network Enabler Internal Link 2: What is the local port?29052
# Name of the Kubernetes service for the proc pod
Network Enabler Internal Link 2: What is the remote address?proc-m3ua-s1e1-1.matrixx.svc.cluster.local
Network Enabler Internal Link 2: What is the remote port?2902
Network Enabler Internal Link 2: What is the ASP identifier?1
Network Enabler Internal Link 2: What is the priority floor for relayed messages?
Network Enabler Internal Link 2: Do you want to prompt for advanced route configuration (y/n)?.
Network Enabler Internal Link 2: Do you want SCCP SCMG messages sent on this route (y/n)?n