Multiple Clusters
These examples show installation and upgrade of Topology Operator masters, agents, and MATRIXX Engines s1e1 and s1e2 in two separate clusters.
This example assumes you have two Kubernetes clusters, cluster 1 and cluster 2, and a kubeconfig file configured so that you can access both clusters from one location. You can specify which cluster the kubectl and helm commands target with the --context and --kube-context options, respectively. The kubeconfig file is configured so that you can access cluster 1 and cluster 2 using context1 and context2, respectively.
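Such a kubeconfig might look like the following sketch. The server addresses and user names are placeholders; only the context names context1 and context2 are taken from this example.

```yaml
# Illustrative kubeconfig with one context per cluster.
# Server URLs and user entries are placeholders, not values from this example.
apiVersion: v1
kind: Config
clusters:
- name: cluster1
  cluster:
    server: https://cluster1.example.com:6443
- name: cluster2
  cluster:
    server: https://cluster2.example.com:6443
users:
- name: admin1
- name: admin2
contexts:
- name: context1
  context:
    cluster: cluster1
    user: admin1
- name: context2
  context:
    cluster: cluster2
    user: admin2
```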
In the example, MetalLB is configured in each cluster for your load balancers. The IP address range is 10.10.10.100-10.10.10.102 in cluster 1, and 10.10.10.200-10.10.10.202 in cluster 2.
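Assuming MetalLB is already installed in each cluster, the cluster 1 address range might be configured with a sketch like the following; the pool and advertisement names and the metallb-system namespace are illustrative, and cluster 2 is identical apart from the 10.10.10.200-10.10.10.202 range.

```yaml
# Illustrative MetalLB layer 2 configuration for cluster 1.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: matrixx-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.10.10.100-10.10.10.102
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: matrixx-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - matrixx-pool
```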
Installation Example
The following is the distribution of Topology Operator components across clusters and namespaces in this example:
- The masters and agents are in cluster 1 namespace matrixx-operators.
- Agents in cluster 2 are in namespace matrixx-operators.
- Engine s1e1 components are in cluster 1 namespace matrixx-engine-s1.
- Engine s1e2 components are in cluster 2 namespace matrixx-engine-s1.
As with the other examples, install or upgrade engines first, then agents, then masters. In this example, cluster 1 namespace matrixx-operators, which contains masters and agents, should be installed after cluster 2 namespace matrixx-operators, which contains only agents.
Create the namespaces and perform installation with the following commands:
kubectl --context context1 create ns matrixx-operators
kubectl --context context2 create ns matrixx-operators
kubectl --context context1 create ns matrixx-engine-s1
kubectl --context context2 create ns matrixx-engine-s1
helm --kube-context context1 install mtx-engine-s1 matrixx/matrixx -n matrixx-engine-s1 -f base.yaml -f topology-install.yaml -f cluster1.yaml --version matrixx_version
helm --kube-context context2 install mtx-engine-s1 matrixx/matrixx -n matrixx-engine-s1 -f base.yaml -f topology-install.yaml -f cluster2.yaml --version matrixx_version
helm --kube-context context2 install mtx-operators matrixx/matrixx -n matrixx-operators -f base.yaml -f topology-install.yaml -f cluster2.yaml --version matrixx_version
helm --kube-context context1 install mtx-operators matrixx/matrixx -n matrixx-operators -f base.yaml -f topology-install.yaml -f cluster1.yaml --version matrixx_version
Where matrixx_version is the version of MATRIXX, such as 5280.
The base.yaml Helm values file is similar to the base configuration from the other examples, with the addition of global networking configuration:
global:
  loadBalancerType: metallb
  networking:
    externalAddresses:
      useIPAddresses: true
The topology-install.yaml Helm values file has the following contents:
engine:
  enabled: true
global:
  configurationSources:
    pricing-config:
      docker:
        image: example-pricing-sideloader:matrixx_version
  topology:
    operators:
      master:
        context: context1
        namespace: matrixx-operators
      agents:
      - context: context1
        namespace: matrixx-operators
        externalAddress: 10.10.10.100
        auth:
          basic:
            username: username1
            password: password1
      - context: context2
        namespace: matrixx-operators
        externalAddress: 10.10.10.200
        auth:
          basic:
            username: username2
            password: password2
    domains:
    - subdomains:
      - pricing:
          configurationSource:
            refName: pricing-config
            fileName: mtx_pricing_matrixxOne.xml
        engines:
        - context: context1
          namespace: matrixx-engine-s1
          processing:
            externalAddress: 10.10.10.101
          publishing:
            externalAddress: 10.10.10.102
        - context: context2
          namespace: matrixx-engine-s1
          processing:
            externalAddress: 10.10.10.201
          publishing:
            externalAddress: 10.10.10.202
pricing-controller:
  enabled: true
The cluster1.yaml file has the following contents:
global:
  topology:
    operators:
      currentContext: context1
The cluster2.yaml file has the following contents:
global:
  topology:
    operators:
      currentContext: context2
The context properties in the Helm values files are used to distinguish between Kubernetes clusters. For example, the operators use context properties to match an engine to the Kubernetes cluster managed by an agent. The sub-domain-operator-s1 instance creates the MtxEngine CR for engine s1e1 in the Kubernetes cluster managed by the first topology-agent.
You can use almost the same values file for each helm install command; only the value of the global.topology.operators.currentContext property changes. Although Helm knows the namespace where the installation is occurring, the context, in this case the cluster, must be set explicitly.
Values for context properties used in your Helm values files do not have to be the same as the values passed to Helm and Kubernetes commands, although matching them is recommended for clarity where possible.
In certain circumstances, you may be forced to use simplified context values in your Helm values file(s). The context values are used by Helm to generate names and labels for various resources. These values must:
- Contain at most 63 characters.
- Contain only lowercase alphanumeric characters or hyphens.
- Start with an alphabetical character.
- End with an alphanumeric character.
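The constraints above can be checked with a short shell function; this is an illustrative sketch, not part of the product, and the `valid_context` name is invented for the example.

```shell
# Validate a context value against the naming constraints:
# at most 63 characters, lowercase alphanumerics or hyphens only,
# starting with a letter and ending with an alphanumeric character.
valid_context() {
  printf '%s' "$1" | grep -Eq '^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$'
}

valid_context context1    && echo "context1: ok"
valid_context Context1    || echo "Context1: invalid (uppercase)"
valid_context my-cluster- || echo "my-cluster-: invalid (trailing hyphen)"
```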
Authentication configuration is required for agents in a multi-cluster deployment. (Only basic authentication is supported in this release.) There are no requirements on the username and password used, and no additional setup is required beyond configuration in your Helm values file(s).
Upgrade Example
Upgrade the processing, publishing, and checkpointing pods of both engines to use the custom configuration with the following commands:
helm --kube-context context1 upgrade mtx-engine-s1 matrixx/matrixx --namespace matrixx-engine-s1 -f base.yaml -f topology-upgrade.yaml -f cluster1.yaml --version matrixx_version
helm --kube-context context2 upgrade mtx-engine-s1 matrixx/matrixx --namespace matrixx-engine-s1 -f base.yaml -f topology-upgrade.yaml -f cluster2.yaml --version matrixx_version
helm --kube-context context2 upgrade mtx-operators matrixx/matrixx --namespace matrixx-operators -f base.yaml -f topology-upgrade.yaml -f cluster2.yaml --version matrixx_version
helm --kube-context context1 upgrade mtx-operators matrixx/matrixx --namespace matrixx-operators -f base.yaml -f topology-upgrade.yaml -f cluster1.yaml --version matrixx_version
The processing, publishing, and checkpointing pods of both engines are reconfigured with a configuration source using a topology-upgrade.yaml Helm values file with the following contents:
engine:
  enabled: true
global:
  configurationSources:
    pricing-config:
      docker:
        image: example-pricing-sideloader:matrixx_version
    engine-config:
      docker:
        image: example-engine-config-sideloader:matrixx_version
  topology:
    operators:
      master:
        context: context1
        namespace: matrixx-operators
      agents:
      - context: context1
        namespace: matrixx-operators
        externalAddress: 10.10.10.100
        auth:
          basic:
            username: username1
            password: password1
      - context: context2
        namespace: matrixx-operators
        externalAddress: 10.10.10.200
        auth:
          basic:
            username: username2
            password: password2
    domains:
    - subdomains:
      - pricing:
          configurationSource:
            refName: pricing-config
            fileName: mtx_pricing_matrixxOne.xml
        configuration:
          engine:
            sources:
            - refName: engine-config
        engines:
        - context: context1
          namespace: matrixx-engine-s1
          processing:
            externalAddress: 10.10.10.101
          publishing:
            externalAddress: 10.10.10.102
        - context: context2
          namespace: matrixx-engine-s1
          processing:
            externalAddress: 10.10.10.201
          publishing:
            externalAddress: 10.10.10.202
pricing-controller:
  enabled: true