Using Topology Operator with Call Control Framework
This section includes examples of how to configure the installation of Call Control Framework (CCF) in Topology Operator-based MATRIXX deployments.
Network Enabler External Links
Configure the Network Enabler links to the STP with the network-enabler.stp.links property, for example:
network-enabler:
  stp:
    links:
      - connect:
          remoteAddress: STP 1 address
      - listen:
          remoteAddress: STP 2 address
This example configures one connect link and one listen link. For more information about additional configuration properties, including the local and remote ports, see the discussion about Network Enabler link configuration in MATRIXX Call Control Framework Integration.
With listen links, load balancer services can sometimes distort the IP address or port of the STP. This means that the network-enabler pods see the traffic coming from a different IP address or port than expected and can reject it. In this scenario, you can configure the network-enabler pods to accept traffic from any remote address (for example, stp.links[x].listen.remoteAddress="" or "any") or any remote port (for example, stp.links[x].listen.remotePort=0).
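As a minimal sketch of that relaxed listen configuration, using only the properties named above and the nesting from the earlier stp.links example (the address value is a placeholder):
network-enabler:
  stp:
    links:
      - listen:
          # accept traffic from any remote address ("any" also works, per the text above)
          remoteAddress: ""
          # accept traffic from any remote port
          remotePort: 0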
When using listen links, you must also expose each network-enabler pod outside the cluster using a load balancer service so that the STP can be configured to send traffic to them. Do this with the network-enabler.externalAddresses property, for example:
network-enabler:
  externalAddresses:
    - network-enabler-0 external address
    - network-enabler-1 external address
The number of addresses in the network-enabler.externalAddresses property must equal the value of the network-enabler.replicaCount property, which defaults to 2.
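For example, a minimal sketch that keeps the two properties aligned (replicaCount is shown explicitly only to illustrate the pairing; it defaults to 2, and the address values are placeholders):
network-enabler:
  replicaCount: 2
  externalAddresses:
    # one external address per network-enabler pod
    - network-enabler-0 external address
    - network-enabler-1 external address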
Add this configuration to the network-enabler.yaml file in each example in the following sections.
Multiple Namespaces in a Single Cluster
This example deploys the following:
- The masters and agents in the matrixx-operators namespace.
- Engine s1e1 in the matrixx-s1e1 namespace.
- Engine s1e2 in the matrixx-s1e2 namespace.
- The network-enabler pod in the matrixx-ne namespace.
Create the namespaces and install the Helm releases:
kubectl create ns matrixx-operators
kubectl create ns matrixx-s1e1
kubectl create ns matrixx-s1e2
kubectl create ns matrixx-ne
helm install mtx-s1e1 matrixx/matrixx -n matrixx-s1e1 -f base.yaml -f topology.yaml --version version
helm install mtx-s1e2 matrixx/matrixx -n matrixx-s1e2 -f base.yaml -f topology.yaml --version version
helm install mtx-operators matrixx/matrixx -n matrixx-operators -f base.yaml -f topology.yaml --version version
helm install mtx-ne matrixx/matrixx -n matrixx-ne -f base.yaml -f topology.yaml -f network-enabler.yaml --version version
Here is an example base.yaml file configuration:
global:
  features:
    ccf: true
Here is an example topology.yaml file configuration:
engine:
  enabled: true
global:
  configurationSources:
    pricing-config:
      docker:
        image: example-pricing-sideloader:version
  topology:
    operators:
      master:
        namespace: matrixx-operators
      agents:
        - namespace: matrixx-operators
    domains:
      - subdomains:
          - pricing:
              configurationSource:
                refName: pricing-config
                fileName: mtx_pricing_matrixxOne.xml
            engines:
              - namespace: matrixx-s1e1
              - namespace: matrixx-s1e2
pricing-controller:
  enabled: true
Here is an example network-enabler.yaml file configuration:
network-enabler:
  enabled: true
Multiple Clusters
This example deploys the following:
- Masters and agents in the matrixx-operators namespace in cluster 1.
- Agents in the matrixx-operators namespace in cluster 2.
- Engine s1e1 in the matrixx-engine-s1 namespace in cluster 1.
- Engine s1e2 in the matrixx-engine-s1 namespace in cluster 2.
- The network-enabler pod in the matrixx-ne namespace in cluster 1.
Create the namespaces and install the Helm releases:
kubectl --context context1 create ns matrixx-operators
kubectl --context context2 create ns matrixx-operators
kubectl --context context1 create ns matrixx-engine-s1
kubectl --context context2 create ns matrixx-engine-s1
kubectl --context context1 create ns matrixx-ne
helm --kube-context context1 install mtx-engine-s1 matrixx/matrixx -n matrixx-engine-s1 -f base.yaml -f topology.yaml -f cluster1.yaml --version version
helm --kube-context context2 install mtx-engine-s1 matrixx/matrixx -n matrixx-engine-s1 -f base.yaml -f topology.yaml -f cluster2.yaml --version version
helm --kube-context context2 install mtx-operators matrixx/matrixx -n matrixx-operators -f base.yaml -f topology.yaml -f cluster2.yaml --version version
helm --kube-context context1 install mtx-operators matrixx/matrixx -n matrixx-operators -f base.yaml -f topology.yaml -f cluster1.yaml --version version
helm --kube-context context1 install mtx-ne matrixx/matrixx -n matrixx-ne -f base.yaml -f topology.yaml -f cluster1.yaml -f network-enabler.yaml --version version
Here is an example topology.yaml file configuration:
engine:
  enabled: true
global:
  configurationSources:
    pricing-config:
      docker:
        image: example-pricing-sideloader:version
  topology:
    operators:
      master:
        context: context1
        namespace: matrixx-operators
      agents:
        - context: context1
          namespace: matrixx-operators
          externalAddress: topology-agent-1 external address
        - context: context2
          namespace: matrixx-operators
          externalAddress: topology-agent-2 external address
    domains:
      - subdomains:
          - pricing:
              configurationSource:
                refName: pricing-config
                fileName: mtx_pricing_matrixxOne.xml
            engines:
              - context: context1
                namespace: matrixx-engine-s1
                processing:
                  externalAddress:
                    tcp: proc-cls-s1e1-tcp external address
                    udp: proc-cls-s1e1-udp external address
                    m3ua:
                      - proc-m3ua-s1e1-0 external address
                      - proc-m3ua-s1e1-1 external address
                publishing:
                  tcp: publ-cls-s1e1-tcp external address
                  udp: publ-cls-s1e1-udp external address
              - context: context2
                namespace: matrixx-engine-s1
                processing:
                  externalAddress:
                    tcp: proc-cls-s1e2-tcp external address
                    udp: proc-cls-s1e2-udp external address
                    m3ua:
                      - proc-m3ua-s1e2-0 external address
                      - proc-m3ua-s1e2-1 external address
                publishing:
                  tcp: publ-cls-s1e2-tcp external address
                  udp: publ-cls-s1e2-udp external address
pricing-controller:
  enabled: true
The number of addresses in the global.topology.domains[x].subdomains[y].engines[z].processing.externalAddress.m3ua property must equal the value of the global.topology.domains[x].subdomains[y].engines[z].processing.replicaCount property, which defaults to 2.
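For example, a minimal sketch of a single engine entry that keeps the two properties aligned (replicaCount is shown explicitly only to illustrate the pairing; the address values are placeholders from the example above):
engines:
  - context: context1
    namespace: matrixx-engine-s1
    processing:
      replicaCount: 2        # must match the number of m3ua addresses below
      externalAddress:
        m3ua:
          - proc-m3ua-s1e1-0 external address
          - proc-m3ua-s1e1-1 external address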
Edit the base.yaml and network-enabler.yaml files to match those in the example of multiple namespaces in a single cluster. Edit the cluster1.yaml and cluster2.yaml files to match those in other multi-cluster examples. For more information, see the discussion about multi-cluster installation examples.
Multiple Network Enablers
This example deploys the following:
- Masters and agents in the matrixx-operators namespace in cluster 1.
- Agents in the matrixx-operators namespace in cluster 2.
- Engine s1e1 in the matrixx-engine-s1 namespace in cluster 1.
- Engine s1e2 in the matrixx-engine-s1 namespace in cluster 2.
- A network-enabler pod in the matrixx-ne namespace in cluster 1.
- A network-enabler pod in the matrixx-ne namespace in cluster 2.
Create the namespaces and install the Helm releases:
kubectl --context context1 create ns matrixx-operators
kubectl --context context2 create ns matrixx-operators
kubectl --context context1 create ns matrixx-engine-s1
kubectl --context context2 create ns matrixx-engine-s1
kubectl --context context1 create ns matrixx-ne
kubectl --context context2 create ns matrixx-ne
helm --kube-context context1 install mtx-engine-s1 matrixx/matrixx -n matrixx-engine-s1 -f base.yaml -f topology.yaml -f cluster1.yaml --version version
helm --kube-context context2 install mtx-engine-s1 matrixx/matrixx -n matrixx-engine-s1 -f base.yaml -f topology.yaml -f cluster2.yaml --version version
helm --kube-context context1 install mtx-engine-s2 matrixx/matrixx -n matrixx-engine-s2 -f base.yaml -f topology.yaml -f cluster1.yaml --version version
helm --kube-context context2 install mtx-engine-s2 matrixx/matrixx -n matrixx-engine-s2 -f base.yaml -f topology.yaml -f cluster2.yaml --version version
helm --kube-context context2 install mtx-operators matrixx/matrixx -n matrixx-operators -f base.yaml -f topology.yaml -f cluster2.yaml --version version
helm --kube-context context1 install mtx-operators matrixx/matrixx -n matrixx-operators -f base.yaml -f topology.yaml -f cluster1.yaml --version version
helm --kube-context context1 install mtx-ne matrixx/matrixx -n matrixx-ne -f base.yaml -f topology.yaml -f cluster1.yaml -f network-enabler.yaml --version version
helm --kube-context context2 install mtx-ne matrixx/matrixx -n matrixx-ne -f base.yaml -f topology.yaml -f cluster2.yaml -f network-enabler.yaml --version version
Configure the total number of network-enabler pods across both clusters with the global.topology.networkEnabler.totalReplicaCount property in the topology.yaml file:
global:
  topology:
    networkEnabler:
      totalReplicaCount: 4
Here is an example network-enabler.yaml file configuration for cluster 1:
network-enabler:
  enabled: true
  portOffset: 0
Here is an example network-enabler.yaml file configuration for cluster 2:
network-enabler:
  enabled: true
  portOffset: 2
Each network-enabler pod sends traffic to the engine pods using different ports. By default, the first network-enabler pod sends traffic from port 29101 (the global.topology.networkEnabler.localPortStart property defaults to port 29101) to port 2901 (the global.topology.networkEnabler.remotePortStart property defaults to 2901). The second network-enabler pod sends traffic from port 29102 to port 2902.
In this example, each cluster runs two network-enabler pods (the network-enabler.replicaCount property defaults to 2). By default, the first network-enabler pod in each cluster sends traffic from port 29101 to port 2901. To prevent this port overlap, configure the network-enabler.portOffset property (which defaults to 0) so that:
- The first network-enabler pod in cluster 1 sends requests from port 29101 to port 2901.
- The second network-enabler pod in cluster 1 sends requests from port 29102 to port 2902.
- The first network-enabler pod in cluster 2 sends requests from port 29103 to port 2903.
- The second network-enabler pod in cluster 2 sends requests from port 29104 to port 2904.
You must also configure four network-enabler pods in total, that is, global.topology.networkEnabler.totalReplicaCount=4 (the property defaults to 2), so that the engine pods are configured to accept traffic on ports 2901 through 2904 (2901 + 4 - 1).
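As a sketch of how these port properties fit together, using only the properties named above (the localPortStart and remotePortStart lines simply restate their documented defaults, and the per-pod arithmetic in the comments is inferred from the port numbers listed above):
global:
  topology:
    networkEnabler:
      # each network-enabler pod sends from localPortStart + portOffset + its pod index
      localPortStart: 29101    # default value
      # engine pods accept traffic on remotePortStart through remotePortStart + totalReplicaCount - 1 (2901-2904 here)
      remotePortStart: 2901    # default value
      totalReplicaCount: 4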
Separate camel-gateway Pods
By default, the camel-gateway process runs in the engine processing pods. By setting the global.features.camelGateway property to true (this property defaults to false), the camel-gateway process runs in separate camel-gateway pods.
This applies if you have multiple namespaces in a single cluster or if you have multiple clusters, as in the preceding examples:
- To define separate camel-gateway pods in a single cluster with multiple namespaces, set global.features.camelGateway to true in the base.yaml file (see the base.yaml sketch after this list).
- To define separate camel-gateway pods in multiple clusters, do the following:
  - Set the global.features.camelGateway property to true.
  - Configure global.topology.domains[x].subdomains[y].engines[z].camelGateway.externalAddresses instead of global.topology.domains[x].subdomains[y].engines[z].processing.externalAddress.m3ua.
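For the single-cluster case, a minimal base.yaml sketch might look like the following, combining the global.features.ccf setting from the earlier base.yaml example with global.features.camelGateway (this is a sketch of the relevant properties only, not a complete base.yaml file):
global:
  features:
    ccf: true
    # run the camel-gateway process in separate camel-gateway pods
    camelGateway: true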
For the multi-cluster case, here is an example topology.yaml file configuration:
engine:
  enabled: true
global:
  configurationSources:
    pricing-config:
      docker:
        image: example-pricing-sideloader:version
  topology:
    operators:
      master:
        context: context1
        namespace: matrixx-operators
      agents:
        - context: context1
          namespace: matrixx-operators
          externalAddress: topology-agent-1 external address
        - context: context2
          namespace: matrixx-operators
          externalAddress: topology-agent-2 external address
    domains:
      - subdomains:
          - pricing:
              configurationSource:
                refName: pricing-config
                fileName: mtx_pricing_matrixxOne.xml
            engines:
              - context: context1
                namespace: matrixx-engine-s1
                processing:
                  externalAddress:
                    tcp: proc-cls-s1e1-tcp external address
                    udp: proc-cls-s1e1-udp external address
                publishing:
                  tcp: publ-cls-s1e1-tcp external address
                  udp: publ-cls-s1e1-udp external address
                camelGateway:
                  externalAddresses:
                    - cgw-m3ua-s1e1-0 external address
                    - cgw-m3ua-s1e1-1 external address
              - context: context2
                namespace: matrixx-engine-s1
                processing:
                  externalAddress:
                    tcp: proc-cls-s1e2-tcp external address
                    udp: proc-cls-s1e2-udp external address
                publishing:
                  tcp: publ-cls-s1e2-tcp external address
                  udp: publ-cls-s1e2-udp external address
                camelGateway:
                  externalAddresses:
                    - cgw-m3ua-s1e2-0 external address
                    - cgw-m3ua-s1e2-1 external address
pricing-controller:
  enabled: true
The number of addresses in the global.topology.domains[x].subdomains[y].engines[z].camelGateway.externalAddresses property must equal the value of the global.topology.domains[x].subdomains[y].engines[z].camelGateway.replicaCount property, which defaults to 2.
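For example, a minimal sketch of a single engine entry that keeps the two camelGateway properties aligned (replicaCount is shown explicitly only to illustrate the pairing; the address values are placeholders from the example above):
engines:
  - context: context1
    namespace: matrixx-engine-s1
    camelGateway:
      replicaCount: 2        # must match the number of external addresses below
      externalAddresses:
        - cgw-m3ua-s1e1-0 external address
        - cgw-m3ua-s1e1-1 external address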