Using Topology Operator with Multus CNI
Service meshes or Kubernetes container network interface (CNI) plug-ins, such as Multus, allow MATRIXX to connect to separate networks for transaction processing and operations and maintenance (O&M).
This example assumes that you have installed Multus CNI in every Kubernetes cluster where MATRIXX Engines are installed. For a Multus quick start guide, see the discussion about multus-cni at the GitHub website. Enable Multus CNI in your Helm values file by setting global.features.multusCNI to true, then add CNI configuration for each engine. You must also enable the multus-cni sub-chart in the namespaces where the engines are created.
This example has components distributed in the following namespaces, described by multiple Helm values files:
- The masters and agents in namespace matrixx-operators.
- Engine s1e1 in namespace matrixx-s1e1.
Create the namespaces and install components using the following commands:
kubectl create ns matrixx-operators
kubectl create ns matrixx-s1e1
helm install mtx-s1e1 matrixx/matrixx -n matrixx-s1e1 -f base.yaml -f topology.yaml -f multus_cni.yaml --version matrixx_version
helm install mtx-operators matrixx/matrixx -n matrixx-operators -f base.yaml -f topology.yaml --version matrixx_version
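After the installation completes, you can confirm that the secondary network attachment was created in the engine namespace. This is a minimal check, assuming the Multus NetworkAttachmentDefinition CRD is installed in the cluster; the resource name it returns is generated by the operator:
kubectl get network-attachment-definitions -n matrixx-s1e1
kubectl describe network-attachment-definitions -n matrixx-s1e1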
In this example, the host-local IPAM plug-in is used. This is appropriate only for a single-node environment; in a multi-node environment, another plug-in, such as whereabouts, might be more appropriate (see the alternative ipam sketch after the topology.yaml excerpt below). Configuration in the following base.yaml values file excerpt is used to make Multus CNI available as a feature in both namespaces:
global:
  features:
    multusCNI: true
The following topology.yaml file is also applied in both namespaces:
engine:
  enabled: true
global:
  configurationSources:
    pricing-config:
      docker:
        image: example-pricing-sideloader:matrixx_version
  topology:
    operators:
      master:
        namespace: matrixx-operators
      agents:
        - namespace: matrixx-operators
    domains:
      - subdomains:
          - pricing:
              configurationSource:
                refName: pricing-config
                fileName: mtx_pricing_matrixxOne.xml
            engines:
              - namespace: matrixx-s1e1
                cni:
                  cniVersion: 0.3.1
                  type: macvlan
                  master: eth0
                  ipam:
                    type: host-local
                    subnet: 192.168.1.0/24
                    gateway: 192.168.1.1
                    rangeStart: 192.168.1.200
                    rangeEnd: 192.168.1.216
pricing-controller:
  enabled: true
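For a multi-node environment, the ipam block in the cni configuration above could use whereabouts instead of host-local. The following is a minimal sketch, assuming the whereabouts plug-in is installed on every node; the address range simply mirrors the host-local range shown above:
cni:
  cniVersion: 0.3.1
  type: macvlan
  master: eth0
  ipam:
    type: whereabouts
    range: 192.168.1.0/24
    range_start: 192.168.1.200
    range_end: 192.168.1.216
    gateway: 192.168.1.1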
The following multus_cni.yaml file is used to enable Multus CNI:
multus-cni:
  enabled: true
The global.features.multusCNI configuration creates an additional (secondary) network. Processing, publishing, and checkpointing pods are allocated IP addresses on the secondary network between rangeStart and rangeEnd. The create_config.info files are configured during engine start to use the additional networks for transaction processing.
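To see which secondary address a pod received, you can inspect the network status annotation that Multus adds to each pod. This is a hedged example: the pod name is a placeholder, and on older Multus versions the annotation key is k8s.v1.cni.cncf.io/networks-status rather than network-status:
kubectl get pod <processing-pod-name> -n matrixx-s1e1 \
  -o jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}'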
The configuration provided in global.topology.domains.subdomains.engines.cni is used to populate the spec.config property of the NetworkAttachmentDefinition. For information about the different configuration options, see the discussion about configuring an additional network at the OpenShift website.
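As an illustration, a NetworkAttachmentDefinition generated from the cni settings above might look similar to the following sketch; the metadata.name is hypothetical, because the actual name is assigned by the operator:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: mtx-s1e1-network   # hypothetical name; the operator assigns the real one
  namespace: matrixx-s1e1
spec:
  config: '{"cniVersion": "0.3.1", "type": "macvlan", "master": "eth0", "ipam": {"type": "host-local", "subnet": "192.168.1.0/24", "gateway": "192.168.1.1", "rangeStart": "192.168.1.200", "rangeEnd": "192.168.1.216"}}'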
CNI configuration can be provided as a YAML object, as shown in the preceding Helm values file excerpt, or as a JSON string similar to the following:
cni: '{"cniVersion": "0.3.1", "type": "macvlan", "master": "eth0", "ipam": { "type": "host-local", "subnet": "192.168.1.0/24", "gateway": "192.168.1.1", "rangeStart": "192.168.1.200", "rangeEnd": "192.168.1.216" }}'