Interacting with Clusters Using the Kubernetes API
Up until now, we've been using the Kubernetes kubectl command-line tool, which makes interacting with our cluster quite convenient. It does that by reading the API server address and authentication information from the kubeconfig file, which is located at ~/.kube/config by default, as we saw in the previous chapter. In this section, we will look at the different ways to directly access the API server with HTTP clients such as curl.
There are two possible ways to directly access the API server via the REST API—by using kubectl in proxy mode or by providing the location and authentication credentials directly to the HTTP client. We will explore both methods to understand the pros and cons of each one.
Accessing the Kubernetes API Server Using kubectl as a Proxy
kubectl has a great feature called kubectl proxy, which is the recommended approach for interacting with the API server. It is easier to use than the alternatives and is more secure: the proxy verifies the identity of the API server by using a self-signed certificate, which prevents man-in-the-middle (MITM) attacks.
kubectl proxy routes requests from our HTTP client to the API server and takes care of authentication itself, using the current configuration in our kubeconfig file.
In order to demonstrate how to use kubectl proxy, let's first create an NGINX Deployment with two replicas in the default namespace and view it using kubectl get pods:
kubectl create deployment mynginx --image=nginx:latest
This should give an output like the following:
deployment.apps/mynginx created
Now, we can scale our Deployment to two replicas with the following command:
kubectl scale deployment mynginx --replicas=2
You should see an output similar to this:
deployment.apps/mynginx scaled
Let's now check whether the pods are up and running:
kubectl get pods
This gives an output similar to the following:
NAME                       READY   STATUS    RESTARTS   AGE
mynginx-565f67b548-gk5n2   1/1     Running   0          2m30s
mynginx-565f67b548-q6slz   1/1     Running   0          2m30s
To start a proxy to the API server, run the kubectl proxy command:
kubectl proxy
This should give output as follows:
Starting to serve on 127.0.0.1:8001
Note from the preceding output that the local proxy connection is running on 127.0.0.1:8001, which is the default. We can also specify a custom port by adding the --port=<YourCustomPort> flag and appending an ampersand (&) to the command so that the proxy runs as a background job, letting us continue working in the same terminal window. So, the command would look like this:
kubectl proxy --port=8080 &
This should give the following response:
[1] 48285
AbuTalebMBP:~ mohammed$ Starting to serve on 127.0.0.1:8080
The proxy runs as a background job; in the preceding output, [1] indicates the job number and 48285 indicates its process ID. To exit a proxy running in the background, you can run fg to bring the job back to the foreground:
fg
This will show the following response:
kubectl proxy --port=8080
^C
After getting the proxy to the foreground, we can simply use Ctrl + C to exit it (if there's no other job running).
Note
If you are not familiar with job control, you can learn about it at https://www.gnu.org/software/bash/manual/html_node/Job-Control-Basics.html.
We can now start exploring the API using curl:
curl http://127.0.0.1:8080/apis
Recall that even though we mostly write our manifests in YAML for convenience, the API server sends and receives data in JSON format. You will see a long response that begins something like this (the exact API groups and their order depend on your cluster version):
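{
  "kind": "APIGroupList",
  "apiVersion": "v1",
  "groups": [
    {
      "name": "apiregistration.k8s.io",
      "versions": [
        {
          "groupVersion": "apiregistration.k8s.io/v1",
          "version": "v1"
        }
      ],
      "preferredVersion": {
        "groupVersion": "apiregistration.k8s.io/v1",
        "version": "v1"
      }
    },
...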
But how do we find the exact path to query the Deployment we created earlier? Also, how do we query the pods created by that Deployment?
You can start by asking yourself a few questions:
- What are the API version and API group used by Deployments?
In Figure 4.27, we saw that Deployments are in the apps/v1 API group, so we can start by adding that to the path:
curl http://127.0.0.1:8080/apis/apps/v1
- Is it a namespace-scoped resource or a cluster-scoped resource? If it is a namespace-scoped resource, what is the name of the namespace?
We also saw, in the section on the scope of API resources, that Deployments are namespace-scoped resources. When we created the Deployment, since we did not specify a different namespace, it went into the default namespace. So, in addition to the API group and version, we need to add namespaces/default/deployments to our path:
curl http://127.0.0.1:8080/apis/apps/v1/namespaces/default/deployments
This will return a large JSON response describing the Deployments at this path. The part of the response that gives us the information we need looks similar to the following excerpt (fields such as uid and resourceVersion will differ in your cluster):
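{
  "kind": "DeploymentList",
  "apiVersion": "apps/v1",
  "metadata": {
    "resourceVersion": "..."
  },
  "items": [
    {
      "metadata": {
        "name": "mynginx",
        "namespace": "default",
        ...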
As you can see in this output, this lists all the Deployments in the default namespace. You can infer that from "kind": "DeploymentList". Also, note that the response is in JSON format and is not neatly presented as a table.
Now, we can specify a specific Deployment by adding it to our path:
curl http://127.0.0.1:8080/apis/apps/v1/namespaces/default/deployments/mynginx
You should see a response similar to this:
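{
  "kind": "Deployment",
  "apiVersion": "apps/v1",
  "metadata": {
    "name": "mynginx",
    "namespace": "default",
    ...
  },
  "spec": {
    "replicas": 2,
    ...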
You can use this method with any other resource as well.
Creating Objects Using curl
When you use any HTTP client, such as curl, to send requests to the API server to create objects, you need to change three things:
- Change the HTTP request method to POST. By default, curl will use the GET method. To create objects, we need to use the POST method, as we learned in The Kubernetes API section. You can change this using the -X flag.
- Change the HTTP request header. We need to set a header that tells the API server the format of the data we are sending in the request body. We can set the header using the -H flag. In this case, we need to set it to 'Content-Type: application/yaml'.
- Include the spec of the object to be created. As you learned in the previous two chapters, each API resource is persisted in etcd as an API object, which is defined by a YAML spec/manifest file. To create an object, you need to use the --data flag to pass the YAML manifest to the API server so that it can persist it in etcd as an object.
So, the curl command, which we will implement in the following exercise, will look something like this:
curl -X POST <URL-path> -H 'Content-Type: application/yaml' --data <spec/manifest>
At times, you will have the manifest files handy, but that will not always be the case. Also, we have not yet seen what a manifest for a namespace looks like.
Let's consider a case where we want to create a namespace. Usually, you would create a namespace as follows:
kubectl create namespace my-namespace
This will give the following response:
namespace/my-namespace created
Here, you can see that we created a namespace called my-namespace. However, to send the request without using kubectl, we need the spec that defines a namespace. We can get it by using the --dry-run=client and -o flags:
kubectl create namespace my-second-namespace --dry-run=client -o yaml
This will give the following response:
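apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: my-second-namespace
spec: {}
status: {}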
When you run a kubectl command with the --dry-run=client flag, kubectl builds the object locally and prints its manifest without persisting anything in etcd. (With --dry-run=server, the request is sent to the API server and is authenticated, authorized, and validated as normal, but the changes are still not persisted.) This is a great way to test whether a certain command works, and also to get the manifest that would be created for that command, as you can see in the preceding output. Let's see how to put this into practice and use curl to create a Deployment.
Exercise 4.04: Creating and Verifying a Deployment Using kubectl proxy and curl
For this exercise, we will create an NGINX Deployment called nginx-example with three replicas in a namespace called example. We will do this by sending our requests to the API server with curl via kubectl proxy:
- First, let's start our proxy:
kubectl proxy &
This should give the following response:
[1] 50034
AbuTalebMBP:~ mohammed$ Starting to serve on 127.0.0.1:8001
The proxy started as a background job and is listening on localhost at port 8001, the default.
- Since the example namespace does not exist, we should create that namespace before creating the Deployment. As we learned in the previous section, we need to get the spec that should be used to create the namespace. Let's use the following command:
kubectl create namespace example --dry-run -o yaml
Note
For Kubernetes versions 1.18+, please use --dry-run=client.
This will give the following output:
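apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: example
spec: {}
status: {}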
Now, we have the spec required for creating the namespace.
- Now, we need to send a request to the API server using curl. Namespaces belong to the core group and hence the path will be /api/v1/namespaces. The final curl command to create the namespace after adding all required parameters should look like the following:
curl -X POST http://127.0.0.1:8001/api/v1/namespaces -H 'Content-Type: application/yaml' --data "
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: example
spec: {}
status: {}
"
Note
You can discover the required path for any resource, as shown in the previous exercise. In this command, the double-quotes (") after --data allow you to enter multi-line input in Bash, which is delimited by another double-quote at the end. So, you can copy the output from the previous step here before the delimiter.
Now, if everything was correct in our command, you should get a response like the following:
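{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "example",
    "uid": "...",
    "resourceVersion": "...",
    "creationTimestamp": "..."
  },
  "spec": {
    "finalizers": [
      "kubernetes"
    ]
  },
  "status": {
    "phase": "Active"
  }
}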
- The same procedure applies to Deployment. So, first, let's use the kubectl create command with --dry-run=client to get an idea of how our YAML data looks:
kubectl create deployment nginx-example -n example --image=nginx:latest --dry-run -o yaml
Note
For Kubernetes versions 1.18+, please use --dry-run=client.
You should get the Deployment's YAML manifest in the response.
Note
Notice that the namespace does not appear in the output when you use the --dry-run=client flag; instead, we will specify it in the API path of our request.
- Now, the command for creating the Deployment will be constructed similarly to the command for creating the namespace. Note that the namespace is specified in the API path:
curl -X POST http://127.0.0.1:8001/apis/apps/v1/namespaces/example/deployments -H 'Content-Type: application/yaml' --data "
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: nginx-example
  name: nginx-example
spec:
  replicas: 3
  selector:
    matchLabels:
      run: nginx-example
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx-example
    spec:
      containers:
      - image: nginx:latest
        name: nginx-example
        resources: {}
status: {}
"
If everything is correct, you should get a response like the following from the API server:
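{
  "kind": "Deployment",
  "apiVersion": "apps/v1",
  "metadata": {
    "name": "nginx-example",
    "namespace": "example",
    "uid": "...",
    "resourceVersion": "...",
    "creationTimestamp": "..."
  },
  "spec": {
    "replicas": 3,
    ...
  },
  "status": {}
}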
Note that the kubectl proxy process is still running in the background. If you are done interacting with the API server using kubectl proxy, you may want to stop the proxy from running in the background. To do that, run the fg command to bring the kubectl proxy process to the foreground and then press Ctrl + C.
So, we have seen how we can interact with the API server using kubectl proxy, and by using curl, we have been able to create an NGINX Deployment in a new namespace.