The Admin Service
Operations teams can use the Admin Service to administer a MATRIXX deployment with domain-specific commands based on the components that are installed. It allows administration of the installation without providing unrestricted direct access to the Kubernetes cluster. You can secure commands based on user roles and audit command usage.
You can deploy the Admin Service to a single Kubernetes cluster or across multiple clusters. A distributed cache shares data across clusters, and each command is executed in the most suitable location without you having to specify one explicitly.
The namespace where the Admin Service is deployed is not necessarily where commands are discovered and executed. Configure the namespaces that an Admin Service deployment is responsible for with the executionNamespaces property. For best results, always configure a namespace scope; if no namespace is specified, the Admin Service attempts to discover commands from all namespaces in the cluster.
Multiple Admin Service deployments in the same cluster (or even the same namespace) can target different namespaces, for example in multi-tenancy scenarios or where application environments share the same cluster.
The following example specifies the dev-app1 and dev-app2 namespaces:

executionNamespaces:
  dev-app1: {}
  dev-app2: {}
For more information about configuring required permissions for each namespace, see the discussion about execution namespace permissions.
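As an illustration only, granting the Admin Service access in one execution namespace might use a Role and RoleBinding similar to the following sketch. The resources, verbs, role name, and service account shown here are assumptions for illustration, not the documented requirements; see the execution namespace permissions discussion for the actual rules.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: admin-service-execution   # hypothetical name
  namespace: dev-app1             # one of the executionNamespaces
rules:
  # Assumed resources and verbs; the real permissions depend on the
  # commands installed in the namespace.
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: admin-service-execution
  namespace: dev-app1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: admin-service-execution
subjects:
  - kind: ServiceAccount
    name: admin-service           # hypothetical service account name
    namespace: admin              # hypothetical Admin Service namespace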
Deploy the Admin Service to a single cluster or multiple clusters using Helm properties. MATRIXX Support recommends deploying at least two instances of the Admin Service, as specified with the replicaCount Helm property. When deployed to the same namespace, the instances discover each other and share data.
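For example, a single-cluster values file combining the replica count with a namespace scope might look like the following. This is a minimal sketch: replicaCount and executionNamespaces are the Helm properties described above, and the namespace names are illustrative.

replicaCount: 2        # at least two instances, per the recommendation above
executionNamespaces:
  dev-app1: {}
  dev-app2: {}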
In a multi-cluster deployment, install a single instance per cluster. Use the multiCluster Helm property to specify the following for each instance:
- The logical ID of the cluster.
- The logical IDs of the other clusters, including their addresses.
- Details about how this instance is exposed to the other clusters.
For example, an Admin Service instance that is part of a three-cluster deployment and is accessible over a LoadBalancer service has configuration similar to the following:
multiCluster:
  enabled: true
  currentClusterId: cluster2
  interClusterService:
    type: LoadBalancer
  remoteClusters:
    cluster1:
      host: cluster1.domain.com
    cluster3:
      host: cluster3.domain.com
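For comparison, the corresponding instance on cluster1 sets its own logical ID and lists the other two clusters as remotes. This sketch follows the same structure as the example above; the host names are illustrative.

multiCluster:
  enabled: true
  currentClusterId: cluster1
  interClusterService:
    type: LoadBalancer
  remoteClusters:
    cluster2:
      host: cluster2.domain.com
    cluster3:
      host: cluster3.domain.com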
In multi-cluster deployments:
- The multiCluster.enabled property must be set to true.
- The multiCluster.currentClusterId property must be different from the keys used in multiCluster.remoteClusters.
- The host addresses under multiCluster.remoteClusters must be accessible from this cluster.
- The multiCluster.interClusterService must be accessible to the other clusters, but must not be externally accessible.

Deploy one instance of the Admin Service per cluster. The distributed cache uses a member-to-member protocol, and a remote cluster connects through the inter-cluster service to the Admin Service instance on the local cluster. The service must always resolve to the same instance (one service pointing to one pod) so that a remote member always reaches the same local member; otherwise, synchronization issues can occur.
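As an illustration of keeping the inter-cluster service reachable from the other clusters but off the public internet, the sketch below uses an internal load balancer. This manifest is not produced by the Helm chart in this form; the service name, pod label, port, and the internal-load-balancer annotation (which varies by cloud provider; the GKE form is shown) are all assumptions.

apiVersion: v1
kind: Service
metadata:
  name: admin-service-intercluster    # hypothetical name
  annotations:
    # Provider-specific; this is the GKE annotation for an internal load balancer.
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: admin-service                # assumed pod label; with one instance per
                                      # cluster, the service resolves to one pod
  ports:
    - port: 5701                      # assumed member-to-member port
      targetPort: 5701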
Synchronization is not an issue in single-cluster deployments, where members discover each other directly using the Kubernetes API.
When the Admin Service starts, it attempts to join other instances. If none are running or ready, it starts with its own cache, and other instances discover the existing cache and join it as they start up. If two instances start at the same time and fail to find each other, each creates its own cache, leading to a split-brain scenario. To avoid this, start the Admin Service instances one at a time, pausing for one minute between them.