
Splitting the Pod and establishing communication through Services

Let's take a look at a ReplicaSet definition for a Pod with only the database:

cat svc/go-demo-2-db-rs.yml  

The output is as follows:

apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: go-demo-2-db
spec:
  selector:
    matchLabels:
      type: db
      service: go-demo-2
  template:
    metadata:
      labels:
        type: db
        service: go-demo-2
        vendor: MongoLabs
    spec:
      containers:
      - name: db
        image: mongo:3.3
        ports:
        - containerPort: 28017  

We'll comment only on the things that changed.

Since this ReplicaSet defines only the database, we reduced the number of replicas to one (the replicas field is gone, so Kubernetes falls back to its default of a single replica). Truth be told, MongoDB should be scaled as well, but that's out of the scope of this chapter (and probably the book as well). For now, we'll pretend that one replica of a database is enough.

Since the selector labels need to distinguish these Pods from the ones we'll soon create for the API, we changed them slightly. The service label is still go-demo-2, but the type was changed to db.

The rest of the definition is the same except that the containers now contain only mongo. We'll define the API in a separate ReplicaSet.

Let's create the ReplicaSet before we move to the Service that will reference its Pod.

kubectl create \
    -f svc/go-demo-2-db-rs.yml  

One object was created, three are left to go.
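
If you'd like to confirm that the ReplicaSet created a Pod with the labels we just discussed, a quick check along the following lines should do. The command is only a sketch; the label values come straight from the definition above.

kubectl get pods \
    -l type=db,service=go-demo-2 \
    --show-labels

You should see a single go-demo-2-db Pod listed with the type, service, and vendor labels from the template.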

The next one is the Service for the Pod we just created through the ReplicaSet.

cat svc/go-demo-2-db-svc.yml  

The output is as follows:

apiVersion: v1
kind: Service
metadata:
  name: go-demo-2-db
spec:
  ports:
  - port: 27017
  selector:
    type: db
    service: go-demo-2  

This Service definition does not contain anything new. There is no type, so it'll default to ClusterIP. Since there is no reason for anyone outside the cluster to communicate with the database, there's no need to expose it through the NodePort type, and therefore no nodePort to specify. The same is true for the protocol: TCP is all we need, and it happens to be the default. Finally, the selector uses the type and service labels we assigned to the Pod.
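
To make those defaults visible, an equivalent definition with everything spelled out would look roughly like the snippet that follows. This is only an illustration of what Kubernetes fills in for us; the file we're using stays as printed above.

apiVersion: v1
kind: Service
metadata:
  name: go-demo-2-db
spec:
  type: ClusterIP
  ports:
  - port: 27017
    targetPort: 27017
    protocol: TCP
  selector:
    type: db
    service: go-demo-2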

Let's create the Service:

kubectl create \
    -f svc/go-demo-2-db-svc.yml  

We are finished with the database. The ReplicaSet will make sure that the Pod is (almost) always up and running, and the Service will allow other Pods to communicate with it through a fixed DNS name.
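
If you want to see that DNS name in action, one way is to resolve it from a throwaway Pod inside the cluster. The command below is just a sketch, and the busybox image with its nslookup utility is an assumption, not part of the go-demo-2 setup.

kubectl run -it --rm dns-test \
    --image=busybox \
    --restart=Never \
    -- nslookup go-demo-2-db

The name should resolve to the ClusterIP assigned to the go-demo-2-db Service.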

Moving to the backend API...

cat svc/go-demo-2-api-rs.yml  

The output is as follows:

apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: go-demo-2-api
spec:
  replicas: 3
  selector:
    matchLabels:
      type: api
      service: go-demo-2
  template:
    metadata:
      labels:
        type: api
        service: go-demo-2
        language: go
    spec:
      containers:
      - name: api
        image: vfarcic/go-demo-2
        env:
        - name: DB
          value: go-demo-2-db
        readinessProbe:
          httpGet:
            path: /demo/hello
            port: 8080
          periodSeconds: 1
        livenessProbe:
          httpGet:
            path: /demo/hello
            port: 8080

Just as with the database, this ReplicaSet should be familiar since it's very similar to the one we used before. We'll comment only on the differences.

The number of replicas is set to 3. That solves one of the main problems we had with the previous ReplicaSets that defined Pods with both containers. Now the number of replicas can differ, and we have one Pod for the database, and three for the backend API.

The type label is set to api so that both the ReplicaSet and the (soon to come) Service can distinguish the Pods from those created for the database.

We have the environment variable DB set to go-demo-2-db. The code behind the vfarcic/go-demo-2 image is written in a way that the connection to the database is established by reading that variable. In this case, it will try to connect to the database reachable through the DNS name go-demo-2-db. If you go back to the database Service definition, you'll notice that its name is go-demo-2-db as well. If everything works correctly, we should expect that the DNS entry was created together with the Service and that it forwards requests to the database.

In earlier Kubernetes versions, kube-proxy used the userspace proxy mode. Its advantage was that the proxy would retry failed requests against another Pod. With the shift to the iptables mode, that feature was lost. However, the iptables mode is much faster and more reliable, so the loss of the retry mechanism is well compensated. That does not mean that requests are sent to Pods "blindly". The lack of a retry mechanism is mitigated by the readinessProbe we added to the ReplicaSet.

The readinessProbe has the same fields as the livenessProbe. We used the same values for both, except for periodSeconds, where instead of relying on the default value of 10, we set it to 1. While the livenessProbe determines whether a Pod is alive or should be replaced by a new one, the readinessProbe determines whether a Pod should receive traffic. A Pod that does not pass the readinessProbe is excluded from the Service's endpoints (and thus from the iptables rules) and will not receive requests. In theory, requests might still be sent to a faulty Pod between two probe iterations. Still, such requests will be few in number, since the iptables rules change as soon as a probe responds with an HTTP code lower than 200, or equal to or greater than 400.

Ideally, an application would have different endpoints for the readinessProbe and the livenessProbe. This one doesn't, so the same endpoint will have to do. You can blame it on me being too lazy to add them.
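
Once the API ReplicaSet and its Service exist (we'll create them in the next two steps), you could check which Pods passed the readinessProbe with something like the command below.

kubectl get ep go-demo-2-api

Only Pods whose probe succeeded are listed as endpoints of the Service and therefore receive traffic.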

Let's create the ReplicaSet.

kubectl create \
    -f svc/go-demo-2-api-rs.yml  
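
Before moving on, we could confirm that the DB environment variable discussed earlier indeed reached the containers. This is entirely optional and just a sketch; it grabs the name of one of the API Pods and prints the variable.

POD_NAME=$(kubectl get pods \
    -l type=api \
    -o jsonpath="{.items[0].metadata.name}")

kubectl exec $POD_NAME -- env | grep DB

The output should include DB=go-demo-2-db.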

Only one object is missing: the Service for the API.

cat svc/go-demo-2-api-svc.yml  

The output is as follows:

apiVersion: v1
kind: Service
metadata:
  name: go-demo-2-api
spec:
  type: NodePort
  ports:
  - port: 8080
  selector:
    type: api
    service: go-demo-2 

There's nothing truly new in this definition. The type is set to NodePort since the API should be accessible from outside the cluster. The selector label type is set to api so that it matches the labels defined for the Pods.
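
As a side note, if you needed a predictable port instead of a randomly assigned one, you could pin it by adding a nodePort field to the port entry, as in the illustrative fragment below. Any value within the default 30000-32767 range works; this is not part of our definition.

  ports:
  - port: 8080
    nodePort: 30001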

That is the last object we'll create (in this section), so let's move on and do it:

kubectl create \
    -f svc/go-demo-2-api-svc.yml  

We'll take a look at what we have in the cluster:

kubectl get all  

The output is as follows:

NAME                   DESIRED CURRENT READY AGE
rs/go-demo-2-api       3       3       3     18m
rs/go-demo-2-db        1       1       1     48m
rs/go-demo-2-api       3       3       3     18m
rs/go-demo-2-db        1       1       1     48m

NAME                   READY STATUS  RESTARTS AGE
po/go-demo-2-api-6brtz 1/1   Running 0        18m
po/go-demo-2-api-fj9mg 1/1   Running 0        18m
po/go-demo-2-api-vrcxh 1/1   Running 0        18m
po/go-demo-2-db-qcftz  1/1   Running 0        48m

NAME              TYPE      CLUSTER-IP EXTERNAL-IP PORT(S)        AGE
svc/go-demo-2-api NodePort  10.0.0.162 <none>      8080:31256/TCP 2m
svc/go-demo-2-db  ClusterIP 10.0.0.19  <none>      27017/TCP      48m
svc/kubernetes    ClusterIP 10.0.0.1   <none>      443/TCP        1h

Both ReplicaSets for db and api are there, followed by the three replicas of the go-demo-2-api Pods and one replica of the go-demo-2-db Pod. Finally, the two Services are running as well, together with the one created by Kubernetes itself.

I'm not sure why the ReplicaSets are duplicated in this view. My best guess is that it is a bug that will be corrected soon. To be honest, I haven't spent time investigating it, since it does not affect how the cluster and the ReplicaSets work. If you execute kubectl get rs, you'll see that there are only two of them, not four.
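
Feel free to confirm it yourself:

kubectl get rs

The output should contain only go-demo-2-api and go-demo-2-db, with the same desired, current, and ready counts as above.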

Before we proceed, it might be worth mentioning that the code behind the vfarcic/go-demo-2 image is designed to fail if it cannot connect to the database. The fact that the three replicas of the go-demo-2-api Pod are running means that the communication is established. The only verification left is to check whether we can access the API from outside the cluster. Let's try that out.
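
If the IP variable holding the Minikube node address is no longer set in your shell, it can be recreated with the command below (assuming you're still running Minikube):

IP=$(minikube ip)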

PORT=$(kubectl get svc go-demo-2-api \
    -o jsonpath="{.spec.ports[0].nodePort}")
    
curl -i "http://$IP:$PORT/demo/hello"  

We retrieved the port of the service (we still have the Minikube node IP from before) and used it to send a request. The output of the last command is as follows:

HTTP/1.1 200 OK
Date: Tue, 12 Dec 2017 21:27:51 GMT
Content-Length: 14
Content-Type: text/plain; charset=utf-8
    
hello, world!  

We got the response 200 and a friendly hello, world! message indicating that the API is indeed accessible from outside the cluster.

At this point, you might be wondering whether it is overkill to have four YAML files for a single application. Can't we simplify the definitions? Not really. Can we define everything in a single file? Read on.

Before we move further, we'll delete the objects we created. By now, you probably noticed that I like destroying things and starting over. Bear with me. There is a good reason for the imminent destruction:

kubectl delete -f svc/go-demo-2-db-rs.yml
  
kubectl delete -f svc/go-demo-2-db-svc.yml
  
kubectl delete -f svc/go-demo-2-api-rs.yml
    
kubectl delete -f svc/go-demo-2-api-svc.yml  
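
For the record, kubectl accepts multiple -f arguments, so the same cleanup could be done in a single command:

kubectl delete \
    -f svc/go-demo-2-db-rs.yml \
    -f svc/go-demo-2-db-svc.yml \
    -f svc/go-demo-2-api-rs.yml \
    -f svc/go-demo-2-api-svc.yml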

Everything we created is gone, and we can start over.