Naming and Grouping

MATRIXX Engine components, for example, engine 1 in sub-domain 1 (s1e1), can only be deployed once per namespace.

Multiple engines and sub-domains must have unique identifiers. Kubernetes objects for engine components have a suffix that identifies the sub-domain and engine, for example, s1e1 for engine 1 in sub-domain 1 or s2e2 for engine 2 in sub-domain 2.
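
For example, a topology with two engines in sub-domain 1 might be declared as shown in the following sketch, which uses the global.topology structure from the examples later in this section; the resulting Kubernetes objects carry the suffixes s1e1 and s1e2. Pointing both engines at ActiveMQ group 1 is an assumption made only to keep the sketch minimal.

global:
  topology:
    domains:
      - subdomains:
          - id: 1
            engines:
              # objects for this engine are suffixed s1e1
              - id: 1
                activemqGroupId: 1
              # objects for this engine are suffixed s1e2
              - id: 2
                activemqGroupId: 1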

Non-engine components are grouped into access groups, where each Kubernetes object name ends with the suffix of the group. The convention is ag1 for access group 1, ag2 for access group 2, and so on, as needed. By default, all non-engine components are configured to communicate with other components in the same access group. More than one access group can be deployed to a namespace.

Each access group requires a unique Helm chart release and name. An access group can contain some or all of the non-engine components. For example, if two SBA Gateways (CHFs) are needed, configured in different ways, a second access group ag2 could include a Helm chart release with just the second, differently configured 5GC SBA Gateway (CHF) component enabled. The chart can be configured to reuse the Traffic Routing Agent (TRA) in ag1 by setting the sba-5gc-chf.connections.tra.host property to tra-ag1.
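
A values file for that second release might look like the following sketch. Only the global.accessGroup properties and sba-5gc-chf.connections.tra.host are taken from this section; whatever mechanism the chart provides for enabling only the SBA Gateway (CHF) component is omitted here, and sharing ActiveMQ group 1 with ag1 is an assumption.

global:
  accessGroup:
    # second access group, deployed as its own Helm release
    id: ag2
    activemqGroupId: 1

# reuse the Traffic Routing Agent deployed in access group ag1
sba-5gc-chf:
  connections:
    tra:
      host: tra-ag1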

An access group is associated with one Apache ActiveMQ group, made up of a number of pods specified with the activemq.replicaCount property, which it uses to receive messages from the engine. An access group can serve multiple sub-domains, with messages for each sub-domain segregated using queue name suffixes. Components within the access group that use ActiveMQ respond to messages received on the sub-domain x request queue by using the sub-domain x response queue.
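
For example, an ActiveMQ group of two broker pods could be requested as in the following sketch; the replica count of 2 is an arbitrary illustrative value.

activemq:
  # number of ActiveMQ pods in the group that serves this access group
  replicaCount: 2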

An engine is also associated with a single ActiveMQ group. Therefore, you must associate an access group with each ActiveMQ group that is associated with an engine. It is possible to deploy these to separate Kubernetes clusters using external configuration properties.
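
For example, if the engine in sub-domain 1 uses ActiveMQ group 1 and the engine in sub-domain 2 uses ActiveMQ group 2, an access group must be associated with each of the two groups. The following sketch shows only the engine-side values and reuses keys documented in this section; the ag1 release would then set global.accessGroup.activemqGroupId to 1, and a second ag2 release would set it to 2.

global:
  activemq:
    groups:
      - id: 1
      - id: 2

  topology:
    domains:
      - subdomains:
          - id: 1
            engines:
              # engine 1 in sub-domain 1 (s1e1) uses ActiveMQ group 1
              - id: 1
                activemqGroupId: 1
          - id: 2
            engines:
              # engine 1 in sub-domain 2 (s2e1) uses ActiveMQ group 2
              - id: 1
                activemqGroupId: 2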

Access Group Configuration Properties describes properties used to identify access groups.

Table 1. Access Group Configuration Properties

global.accessGroup.id
    The name of the access group to use for non-engine components. Valid values start with ag, followed by a number. The default value is ag1.

global.accessGroup.activemqGroupId
    The ActiveMQ group in use for this release. The default value is 1.

ActiveMQ Group Configuration Properties describes properties for creating, identifying, and accessing ActiveMQ groups.

Table 2. ActiveMQ Group Configuration Properties

global.activemq.groups[n].id
    The ID of the ActiveMQ group at index n. This is expected to be a numeric value starting at 1. The default value is n + 1.

global.activemq.groups[n].create
    Indicates whether to create Kubernetes objects for the ActiveMQ group. If the group is external, only an ExternalName service is created. The default value is true.

global.activemq.groups[n].external
    Indicates whether this ActiveMQ group exists, or should exist, outside the current cluster. The default value is false.

global.activemq.groups[n].externalAddress
    If the ActiveMQ group is external, this value is the fully qualified domain name (FQDN) at which the ActiveMQ LoadBalancer service in the other cluster can be accessed. Set global.networking.externalAddresses.useIPAddresses to true to allow IP addresses to be used. If this is an internal ActiveMQ group, this property is the IP address used to expose the LoadBalancer service to other clusters.

    To use an Artemis Cloud broker, specify the IP address or host name of the broker with this property.

global.activemq.groups[n].serviceAnnotations
    A dictionary of Kubernetes annotations added to the ActiveMQ services. The value for each annotation can include resolvable Helm expressions. For more information, see the discussion about annotations in the Kubernetes documentation.

global.activemq.groups[n].activemqBrokerURL
    A full ActiveMQ broker URL that overrides the default. This can point to an ActiveMQ installation outside of the Kubernetes cluster, in which case, set global.activemq.groups[n].create to false. It can also be used to refine the default ActiveMQ broker URL, for example, to add extra transport parameters, as shown in the example after this table.

global.activemq.groups[n].useIPAddress
    When set to true, global.networking.externalAddresses.useIPAddresses is overridden for the ActiveMQ address. If this property is not set, the value of global.networking.externalAddresses.useIPAddresses is used. If neither property is set, FQDNs are assumed to be in use.

global.activemq.groups[n].port
    The TCP port to use when connecting to ActiveMQ.

global.activemq.groups[n].portAMQP
    The AMQP port to use when connecting to ActiveMQ.
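
For example, the following sketch adds an annotation to the ActiveMQ services and refines the broker URL with standard ActiveMQ failover transport options. The annotation key, the host name activemq-1, and the specific transport option are illustrative assumptions, not values defined by the chart.

global:
  activemq:
    groups:
      - id: 1
        # annotation values can include resolvable Helm expressions
        serviceAnnotations:
          example.com/owner: "{{ .Release.Name }}"
        # failover transport with an explicit reconnect limit (illustrative)
        activemqBrokerURL: failover:(tcp://activemq-1:61616)?maxReconnectAttempts=10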

Typical Deployment Example

A typical deployment places the engine, ActiveMQ, and the access group in a single Kubernetes cluster. Much of the configuration assumes this layout, so most of these values do not need to be set explicitly. The following is a verbose example of the configuration used for this setup.

global:
 
  activemq:
    groups:
      - id: 1
        create: true
        external: false
 
  accessGroup:
    id: ag1
    activemqGroupId: 1
 
  topology:
    domains:
      - subdomains:
          - id: 1
            engines:
              - id: 1
                activemqGroupId: 1

Figure 1 shows this simple deployment.

Figure 1. Typical Deployment Example

Spanning Kubernetes Clusters Example

ActiveMQ can be installed in a Kubernetes cluster separate from the engine, the access group, or both, by exposing the ActiveMQ LoadBalancer service at a known address. The following are example engine cluster configuration values for this deployment:

global:
 
  activemq:
    groups:
      - id: 1
        external: false
        externalAddress: 10.11.12.13
 
  topology:
    domains:
      - subdomains:
          - id: 1
            engines:
              - id: 1
                external: false
                activemqGroupId: 1

The following are example access group cluster configuration values for this deployment:

global:
 
  activemq:
    groups:
      - id: 1
        external: true
        externalAddress: activemq.example.com
 
  accessGroup:
    id: ag1
    activemqGroupId: 1
 
  topology:
    domains:
      - subdomains:
          - id: 1
            engines:
              - id: 1
                external: true

Figure 2 shows this example deployment.

Figure 2. Example Spanning Kubernetes Clusters

External ActiveMQ Example

ActiveMQ can be installed outside of the Kubernetes deployment if required. Engine and access group components can be set to point to the external ActiveMQ instance (typically a network of brokers) by setting the global.activemq.groups[n].activemqBrokerURL configuration property, as shown in the following example:

global:
 
  activemq:
    groups:
      - id: 1
        activemqBrokerURL: tcp://outside.k8s:61616
        create: false
 
  accessGroup:
    id: ag1
    activemqGroupId: 1
 
  topology:
    domains:
      - subdomains:
          - id: 1
            engines:
              - id: 1
                activemqGroupId: 1

Figure 3 shows this example at a high level.

Figure 3. External ActiveMQ Example

Artemis Cloud Broker Example

An Artemis Cloud broker in one Kubernetes cluster can serve MATRIXX components in more than one cluster. To use an Artemis Cloud broker, update the values as shown in the following example. Replace the value artemis-broker-hdls-svc.artemis-cloud with the IP address or host name of your broker, and replace the values of port and portAMQP with the TCP and AMQP ports of the broker.

global:
  activemq:
    groups:
        # update the definition of the ActiveMQ Group to point to the Artemis Cloud Broker
      - id: 1
        create: false
        external: true
        externalAddress: artemis-broker-hdls-svc.artemis-cloud
        port: 61616
        portAMQP: 5672

Figure 4 shows this example at a high level.

Figure 4. Artemis Cloud Broker Example