Application management under hybrid multi-cloud with Karmada

Addo Zhang
7 min read · Mar 15, 2023


Background

Over the past few years, the public cloud has attracted a large number of enterprises with its scalability, flexibility, reliability, and security. As their business grows, enterprises deploy applications across additional public clouds for reasons such as avoiding vendor lock-in, pursuing lower latency, or improving reliability. Some enterprises also keep certain applications in private environments because of data sensitivity and similar concerns, which in effect stretches out their journey to the cloud. Whether the result is multi-cloud or hybrid cloud, differences in the underlying infrastructure are inevitable, and enterprises have to invest significant manpower and resources in adapting to it.

The advent of Kubernetes addressed this problem well. Beyond shielding applications from differences in the infrastructure layer and thereby solving cross-platform problems, it provides automated container orchestration, greater scalability, elasticity, and high availability, backed by a large community. These strengths have won Kubernetes the favor of a large number of enterprises, and over time it has become common for an enterprise to run multiple Kubernetes clusters to manage its applications.

Managing applications across multiple clusters or even hybrid multi-cloud environments is a new challenge. Karmada emerged precisely to solve this problem.

Karmada

Karmada is an open-source project under the CNCF (Cloud Native Computing Foundation) that provides a management platform for Kubernetes clusters, simplifying application deployment and management across multiple Kubernetes clusters while improving availability and scalability.

Let’s borrow the architecture diagram from the official website. As the diagram shows, Karmada provides a centralized control plane that is responsible for managing resources and policies and for scheduling resources; the data plane consists of the member clusters it manages, where the resources actually run.

The components of the control plane are similar to those of Kubernetes and are likewise responsible for scheduling resources. The difference is that the Kubernetes control plane schedules resources onto compute nodes, while the Karmada control plane schedules resources onto clusters.

In a Kubernetes cluster, the control plane selects one or more nodes to run pods based on the resources available on the nodes. In a Karmada multi-cluster setup, after a Deployment resource is created, the Karmada control plane schedules it to target clusters according to a policy: the Deployment is created in each target cluster, and the replica counts across all clusters sum to the desired number of replicas. For example, if a Deployment requesting 2 replicas is scheduled to three clusters with equal weights, two clusters run one replica each and the third runs none.

Let’s use an example to illustrate this.

Environment setup

Prerequisites

  • Docker
  • k3d
  • kubectl
  • karmadactl
  • 1 virtual machine (at least 4 CPU cores and 8 GB of memory)

We use k3d to create four k3s clusters in containers (the control-plane cluster ‘control-plane’ and the members ‘cluster-1’, ‘cluster-2’, and ‘cluster-3’), which communicate through the host IP address and separate ports.

Create multiple clusters

Before creating a cluster, run the following command to obtain the local IP address and save it in a variable:

HOST_IP=$(if [ "$(uname)" = "Darwin" ]; then ipconfig getifaddr en0; else ip -o route get to 8.8.8.8 | sed -n 's/.*src \([0-9.]\+\).*/\1/p'; fi)
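
The variable should now hold a routable address (in this walkthrough it resolves to something like ‘10.0.0.8’; verify it before continuing):

echo ${HOST_IP}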

Use the k3d command to create the clusters; their apiserver ports are ‘6444’, ‘6445’, ‘6446’, and ‘6447’.

API_PORT=6444 # 6444 6445 6446 6447
for CLUSTER_NAME in control-plane cluster-1 cluster-2 cluster-3
do
  k3d cluster create ${CLUSTER_NAME} \
    --image docker.io/rancher/k3s:v1.23.8-k3s2 \
    --api-port "${HOST_IP}:${API_PORT}" \
    --servers-memory 2g \
    --k3s-arg "--disable=traefik@server:0" \
    --network multi-clusters \
    --timeout 120s \
    --wait
  ((API_PORT=API_PORT+1))
done
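
Once the loop completes, a quick check confirms that all four clusters are up:

# should list control-plane, cluster-1, cluster-2, and cluster-3
k3d cluster list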

After the clusters are created, export their kubeconfigs and set the following variables to make cluster access easier.

k3d kubeconfig get control-plane > /tmp/cp.kubeconfig
k3d kubeconfig get cluster-1 > /tmp/c1.kubeconfig
k3d kubeconfig get cluster-2 > /tmp/c2.kubeconfig
k3d kubeconfig get cluster-3 > /tmp/c3.kubeconfig
#usage
k0="kubectl --kubeconfig /tmp/cp.kubeconfig"
k1="kubectl --kubeconfig /tmp/c1.kubeconfig"
k2="kubectl --kubeconfig /tmp/c2.kubeconfig"
k3="kubectl --kubeconfig /tmp/c3.kubeconfig"
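
With these in place, any cluster can be queried without switching contexts, for example:

# list the nodes of member cluster 1
$k1 get nodes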

Install Karmada

Initialize Karmada on the control-plane cluster.

sudo karmadactl --kubeconfig /tmp/cp.kubeconfig init

After initialization, also set the variable ‘kmd’ to simplify access to the Karmada control plane. It points kubectl at the Karmada apiserver’s configuration file, which is generated automatically during initialization (hence the sudo in the command above).

kmd="kubectl --kubeconfig /etc/karmada/karmada-apiserver.config"
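
To confirm the installation, you can list the control-plane components; by default, karmadactl installs them into the ‘karmada-system’ namespace of the host cluster:

# the Karmada apiserver, controller-manager, and scheduler should be Running
$k0 get pods -n karmada-system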

Join the clusters

Next, add the other three clusters to the federation (k3d automatically prefixes the cluster name with ‘k3d-’).

karmadactl --kubeconfig /etc/karmada/karmada-apiserver.config join k3d-cluster-1 --cluster-kubeconfig=/tmp/c1.kubeconfig
karmadactl --kubeconfig /etc/karmada/karmada-apiserver.config join k3d-cluster-2 --cluster-kubeconfig=/tmp/c2.kubeconfig
karmadactl --kubeconfig /etc/karmada/karmada-apiserver.config join k3d-cluster-3 --cluster-kubeconfig=/tmp/c3.kubeconfig

The following information is returned, indicating that the clusters have been joined successfully.

cluster(k3d-cluster-1) is joined successfully
cluster(k3d-cluster-2) is joined successfully
cluster(k3d-cluster-3) is joined successfully

Query Karmada’s control plane to view the list of clusters.

$kmd get cluster -o wide
NAME            VERSION        MODE   READY   AGE   APIENDPOINT
k3d-cluster-1   v1.23.8+k3s2   Push   True    24m   https://10.0.0.8:6445
k3d-cluster-2   v1.23.8+k3s2   Push   True    24m   https://10.0.0.8:6446
k3d-cluster-3   v1.23.8+k3s2   Push   True    23m   https://10.0.0.8:6447

At this point, the multi-cluster environment has been set up.

Test

Two applications are used here: a server-side application that responds to HTTP requests with the name of the current pod, and a client application that issues HTTP requests.

Deploy the server-side application

Through Karmada’s control plane, create a Deployment with 2 replicas and a Service.

$kmd apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: pipy
  name: pipy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: pipy
  strategy: {}
  template:
    metadata:
      labels:
        app: pipy
    spec:
      containers:
      - image: flomesh/pipy
        name: pipy
        command: ["pipy"]
        args: ["-e", "pipy().listen(8080).serveHTTP(() => new Message(os.env['HOSTNAME'] +'\n'))"]
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: pipy
  name: pipy
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: pipy
EOF

Then check the Deployment: no replicas are running.

$kmd get deployment
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
pipy   0/2     0            0           20s

Running ‘describe’ against the resource reveals a warning that no policy matches it: Karmada schedules resources to target clusters according to the multi-cluster scheduling policy ‘PropagationPolicy’.

Propagation strategy

Apply the following policy to schedule the ‘Deployment’ and ‘Service’ resources to the three member clusters.

$kmd apply -f - <<EOF
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: pipy-propagation
spec:
  resourceSelectors:
  - apiVersion: apps/v1
    kind: Deployment
    name: pipy
  - apiVersion: v1
    kind: Service
    name: pipy
  placement:
    clusterAffinity:
      clusterNames:
      - k3d-cluster-1
      - k3d-cluster-2
      - k3d-cluster-3
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
        - targetCluster:
            clusterNames:
            - k3d-cluster-1
          weight: 10
        - targetCluster:
            clusterNames:
            - k3d-cluster-2
          weight: 10
        - targetCluster:
            clusterNames:
            - k3d-cluster-3
          weight: 10
EOF
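
The policy is an ordinary resource in the Karmada control plane and can be inspected like any other object:

# list the propagation policies registered with Karmada
$kmd get propagationpolicy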


In clusters ‘cluster-1’ and ‘cluster-2’ you can see pods running.

$k1 get deployment
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
pipy   1/1     1            1           13s
$k2 get deployment
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
pipy   1/1     1            1           13s

On ‘cluster-3’ the Deployment also exists, but its desired replica count is 0. This is because the Deployment created in the control plane requests only two replicas, which were divided between the other two clusters.

$k3 get deployment
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
pipy   0/0     0            0           16s
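
To see exactly how Karmada divided the replicas, you can inspect the ResourceBinding generated for the Deployment (the binding name below assumes Karmada’s usual ‘<name>-<kind>’ naming; adjust it if yours differs):

# the binding records the per-cluster replica assignment
$kmd get resourcebinding pipy-deployment -o yaml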

Adjust the number of replicas to ‘3’ in the control plane.

$kmd scale deploy pipy --replicas 3
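
A quick loop over the member clusters shows how the three replicas are spread (a convenience sketch using the kubeconfig files written earlier):

# print the pipy Deployment in each member cluster; the replicas now sum to 3
for kc in /tmp/c1.kubeconfig /tmp/c2.kubeconfig /tmp/c3.kubeconfig; do
  kubectl --kubeconfig ${kc} get deployment pipy
done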

Then in ‘cluster-3’ you can see the third replica.

$k3 get po
NAME                   READY   STATUS    RESTARTS   AGE
pipy-8ff5f5987-5nqqw   1/1     Running   0          54s

Deploy the client app

Next, deploy the client app.

$kmd apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: curl
  name: curl
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: curl
  strategy: {}
  template:
    metadata:
      labels:
        app: curl
    spec:
      containers:
      - image: curlimages/curl
        name: curl
        command: ["sleep", "365d"]
EOF

Similarly, configure a multi-cluster scheduling policy to schedule only to cluster ‘cluster-3’.

$kmd apply -f - <<EOF
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: curl-propagation
spec:
  resourceSelectors:
  - apiVersion: apps/v1
    kind: Deployment
    name: curl
  placement:
    clusterAffinity:
      clusterNames:
      - k3d-cluster-3
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
        - targetCluster:
            clusterNames:
            - k3d-cluster-3
          weight: 1
EOF

Make a request from the client pod to the app’s port ‘8080’; a response is received successfully. Repeated requests return the same result.

curl_client=$($k3 get po -l app=curl -o jsonpath='{.items[0].metadata.name}')
$k3 exec $curl_client -- curl -s http://pipy.default:8080
pipy-8ff5f5987-5nqqw
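
Repeating the request shows the same pod name every time, since only one replica runs in ‘cluster-3’ (a quick loop, assuming the shell variables above are still set):

# every response comes from the single pipy pod in cluster-3
for i in 1 2 3; do
  $k3 exec $curl_client -- curl -s http://pipy.default:8080
done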

Summary

Karmada solves the problem of managing applications across multiple clusters without changing the familiar deployment model, enabling applications to be deployed, synchronized, and scheduled across multiple Kubernetes clusters. Not shown in this article is observability: the Karmada control plane can aggregate monitoring data from member clusters and present it centrally.

Food for thought

You may have noticed that all the responses received by the client in the demo come from an application instance in the same cluster.

Now suppose you reduce the Deployment’s replica count back to 2 and issue the request again. You will find that the request fails: it is not routed to another cluster.
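
To reproduce this, scale the Deployment back down from the control plane and repeat the request (a sketch of the same commands used earlier; ‘--max-time’ is added so the failing call returns promptly):

# with 2 replicas and equal weights, cluster-3 falls back to zero pipy pods
$kmd scale deploy pipy --replicas 2
# the Service still exists in cluster-3, but it has no local endpoints, so this fails
$k3 exec $curl_client -- curl -s --max-time 5 http://pipy.default:8080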

So although Karmada helps us manage applications across clusters, traffic still cannot cross cluster boundaries. What does this mean? When interdependent applications are spread across multiple clusters, the resulting impact and cost are easy to imagine.

In the next article, I will take you through an attempt to solve this problem.


Addo Zhang

CNCF Ambassador | LF APAC OpenSource Evangelist | Microsoft MVP | SA and Evangelist at https://flomesh.io | Programmer | Blogger | Mazda Lover | Ex-BBer