Serverless Architectures with Kubernetes

Kubernetes and Serverless

Serverless and Kubernetes arrived on the cloud computing scene at about the same time, in 2014. AWS introduced serverless with AWS Lambda, while Kubernetes was open sourced by Google, building on its long and successful history in container management. Organizations started creating AWS Lambda functions for their short-lived, temporary tasks, and many start-ups built their products entirely on serverless infrastructure. Kubernetes, meanwhile, gained dramatic adoption in the industry and became the de facto container management system. It can run both stateless applications, such as web frontends and data analysis tools, and stateful applications, such as databases, inside containers. Containerized applications and microservices architectures have proven effective for both large enterprises and start-ups.

Therefore, running microservices and containerized applications has become a crucial factor in building successful, scalable, and reliable cloud-native applications. In addition, the following two essential elements strengthen the connection between Kubernetes and serverless architectures:

  • Vendor lock-in: Kubernetes abstracts away the cloud provider and creates a uniform environment for running serverless workloads. In other words, it is not straightforward to run your AWS Lambda functions on Google Cloud Functions if you want to move to a new provider next year. However, if you use a Kubernetes-backed serverless platform, you will be able to move quickly between cloud providers or even to on-premises systems.
  • Reuse of services: As the mainstream container management system, Kubernetes most likely already runs a large share of the workloads in your cloud environment. This offers an opportunity to deploy serverless functions side by side with existing services, making it easier to install, connect, operate, and manage both serverless and containerized applications.

Cloud computing and deployment strategies are always evolving toward more developer-friendly environments with lower costs. Kubernetes and containerization have already won over the market and the hearts of developers, so much so that cloud computing without Kubernetes is unlikely to be seen for a very long time. Serverless architectures, which promise similar benefits, are gaining popularity; however, this does not pose a threat to Kubernetes. On the contrary, serverless applications will make containerization more accessible, and consequently, Kubernetes will benefit from it. Therefore, it is essential to learn how to run serverless architectures on Kubernetes in order to create future-proof, cloud-native, scalable applications. In the following exercise, we will combine functions and containers and package our functions as containers.

Possible answers:

  • Serverless – data preparation
  • Serverless – ephemeral API operations
  • Kubernetes – databases
  • Kubernetes – server-related operations

Exercise 2: Packaging an HTTP Function as a Container

In this exercise, we will package the HTTP function from Exercise 1 as a container so that it can become part of a Kubernetes workload. We will also run the container and trigger the function running inside it.

Note

The code files for the exercises in this chapter can be found here: https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/tree/master/Lesson01/Exercise2.

To successfully complete the exercise, we need to ensure the following steps are executed:

  1. Create a file named Dockerfile in the same folder as the files from Exercise 1:

    FROM golang:1.12.5-alpine3.9 AS builder
    ADD . .
    RUN go build *.go

    FROM alpine:3.9
    COPY --from=builder /go/function ./function
    RUN chmod +x ./function
    ENTRYPOINT ["./function"]

    In this multi-stage Dockerfile, the function is built inside the golang:1.12.5-alpine3.9 container, and the binary is then copied into the alpine:3.9 container as the final application package. Because the golang image's working directory is /go and go build names its output after the first source file it receives (function.go, once *.go is expanded alphabetically), the built binary ends up at /go/function. A minimal sketch of the Exercise 1 source files that produce this binary is provided at the end of this exercise.

  2. Build the Docker image with the following command in the terminal: docker build . -t hello-serverless.

    Each line of the Dockerfile is executed sequentially and, with the last step, the Docker image is built and tagged: Successfully tagged hello-serverless:latest:

    Figure 1.14: The build of the Docker container
  3. Start a Docker container from the hello-serverless image with the following command in your terminal: docker run -it --rm -p 8080:8080 hello-serverless.

    With that command, an instance of the Docker image is started, with port 8080 of the host system mapped to the container. Furthermore, the --rm flag removes the container when it exits. The log lines indicate that the function container is running as expected:

    Figure 1.15: The start of the function container
  4. Open http://localhost:8080 in your browser:
    Figure 1.16: The WelcomeServerless output

    This shows that the WelcomeServerless function running inside the container was successfully invoked via an HTTP request and that its response was retrieved.

  5. Press Ctrl + C to exit the container:
Figure 1.17: Exiting the container

In this exercise, we saw how we can package a simple function as a container. In addition, the container was started and the function was triggered with the help of Docker's networking capabilities. In the following exercise, we will implement a parameterized function to show how to pass values to functions and return different responses.
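
For reference, the binary packaged in step 1 is built from the Exercise 1 source files. The following is a minimal sketch of what main.go and function.go might contain; the root route and port 8080 are assumptions based on the exercise output, and the actual files are available in the book's repository:

    // main.go: registers the HTTP handler and starts the server (sketch).
    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        http.HandleFunc("/", WelcomeServerless)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

    // function.go: the HTTP function from Exercise 1 (sketch).
    package main

    import (
        "fmt"
        "net/http"
    )

    func WelcomeServerless(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "Hello Serverless World!")
    }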

Exercise 3: Parameterized HTTP Functions

In this exercise, we will convert the WelcomeServerless function from Exercise 2 into a parameterized HTTP function. We will also run the container and trigger the function running inside it.

Note

The code files for the exercises in this chapter can be found here: https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/tree/master/Lesson01/Exercise3.

To successfully complete the exercise, we need to ensure that the following steps are executed:

  1. Change the contents of function.go from Exercise 2 to the following:

    package main

    import (
        "fmt"
        "net/http"
    )

    // WelcomeServerless greets the caller by name when the "name" query
    // parameter is provided, and falls back to a generic greeting otherwise.
    func WelcomeServerless(w http.ResponseWriter, r *http.Request) {
        names, ok := r.URL.Query()["name"]

        if ok && len(names[0]) > 0 {
            fmt.Fprintf(w, "%s, Hello Serverless World!", names[0])
        } else {
            fmt.Fprintf(w, "Hello Serverless World!")
        }
    }

    In this new version of the WelcomeServerless function, we read the name URL parameter and return a personalized response when it is provided. A small test that exercises both branches is sketched after these steps.

  2. Build the Docker image with the following command in your terminal: docker build . -t hello-serverless.

    Each line of the Dockerfile is executed sequentially, and with the last step, the Docker image is built and tagged: Successfully tagged hello-serverless:latest:

    Figure 1.18: The build of the Docker container
  3. Start a Docker container from the hello-serverless image with the following command in the terminal: docker run -it --rm -p 8080:8080 hello-serverless.

    With that command, the function handlers will start on port 8080 of the host system:

    Figure 1.19: The start of the function container
  4. Open http://localhost:8080 in your browser:
    Figure 1.20: The WelcomeServerless output

    It reveals the same response as in the previous exercise. If we provide URL parameters, we should get personalized Hello Serverless World messages.

  5. Change the address to http://localhost:8080?name=Ece in your browser and reload the page. We are now expecting to see a personalized Hello Serverless World message with the name provided in URL parameters:
    Figure 1.21: Personalized WelcomeServerless output
  6. Press Ctrl + C to exit the container:
Figure 1.22: Exiting the container
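
The two branches of the parameterized function can also be checked locally with Go's httptest package. The following test file is a hypothetical addition for local development only; keep it out of the image build context, since the Dockerfile compiles every .go file it finds:

    // function_test.go: a local sanity check for WelcomeServerless (hypothetical,
    // not part of the exercise files). Run with `go test` before building the image.
    package main

    import (
        "net/http"
        "net/http/httptest"
        "testing"
    )

    func TestWelcomeServerless(t *testing.T) {
        // Without a name parameter, the generic greeting is expected.
        req := httptest.NewRequest(http.MethodGet, "/", nil)
        rec := httptest.NewRecorder()
        WelcomeServerless(rec, req)
        if rec.Body.String() != "Hello Serverless World!" {
            t.Errorf("unexpected response: %q", rec.Body.String())
        }

        // With ?name=Ece, the personalized greeting is expected.
        req = httptest.NewRequest(http.MethodGet, "/?name=Ece", nil)
        rec = httptest.NewRecorder()
        WelcomeServerless(rec, req)
        if rec.Body.String() != "Ece, Hello Serverless World!" {
            t.Errorf("unexpected response: %q", rec.Body.String())
        }
    }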

In this exercise, we showed how a single generic function can serve different parameters: the function we deployed returned personalized messages based on the input values. In the following activity, a more complex function will be created and managed as a container to show how serverless functions are implemented in real life.

Activity 1: Twitter Bot Backend for Bike Points in London

The aim of this activity is to create a real-life function as the backend of a Twitter bot. The Twitter bot will be used to search for available bike points in London and the number of available bikes at the corresponding locations. The bot will answer in natural language; therefore, your function will take a street name or landmark as input and output a complete, human-readable sentence.

Transportation data for London is publicly available and accessible via the Transport for London (TFL) Unified API (https://api.tfl.gov.uk). You are required to use the TFL API and run your functions inside containers.

Once completed, you will have a container running for the function:

Figure 1.23: The running function inside the container

When queried via its HTTP REST API, the function should return sentences similar to the following when bike points with available bikes are found:

Figure 1.24: Function response when bikes are available

When there are no bike points found or no bikes are available at those locations, the function will return a response similar to the following:

Figure 1.25: Function response when a bike point is located but no bike is found

The function may also provide the following response:

Figure 1.26: Function response when no bike point or bike is found

Execute the following steps to complete this activity:

  1. Create a main.go file to register function handlers, as in Exercise 1.
  2. Create a function.go file for the FindBikes function (a possible starting sketch is provided after these steps).
  3. Create a Dockerfile for building and packaging the function, as in Exercise 2.
  4. Build the container image with Docker commands.
  5. Run the container image as a Docker container and make the ports available from the host system.
  6. Test the function's HTTP endpoint with different queries.
  7. Exit the container.

    Note

    The files main.go, function.go and Dockerfile can be found here: https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/tree/master/Lesson01/Activity1.

    The solution for the activity can be found on page 372.
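
Before turning to the reference solution, the following sketch shows one possible starting point for the FindBikes function in step 2. This is not the book's solution: the /BikePoint/Search endpoint and the commonName field are taken from the public TFL Unified API but should be verified against its documentation, and the lookup of available bikes per station is only outlined in a comment:

    // function.go: a possible starting sketch for FindBikes (not the reference
    // solution; see page 372 and the repository for the complete implementation).
    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "net/url"
    )

    // bikePoint captures only the fields this sketch needs from the TFL response.
    type bikePoint struct {
        ID         string `json:"id"`
        CommonName string `json:"commonName"`
    }

    func FindBikes(w http.ResponseWriter, r *http.Request) {
        query := r.URL.Query().Get("query")
        if query == "" {
            fmt.Fprint(w, "Please provide a street name or landmark.")
            return
        }

        // Search for bike points matching the street name or landmark.
        resp, err := http.Get("https://api.tfl.gov.uk/BikePoint/Search?query=" + url.QueryEscape(query))
        if err != nil {
            http.Error(w, "could not reach the TFL API", http.StatusBadGateway)
            return
        }
        defer resp.Body.Close()

        var points []bikePoint
        if err := json.NewDecoder(resp.Body).Decode(&points); err != nil || len(points) == 0 {
            fmt.Fprint(w, "No bike point found for your query.")
            return
        }

        // A complete solution would also fetch each bike point's occupancy to
        // report the number of available bikes before composing the sentence.
        fmt.Fprintf(w, "The closest bike point is %s.", points[0].CommonName)
    }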

In this activity, we built the backend of a Twitter bot. We started by defining the main and FindBikes functions. Then we built and packaged this serverless backend as a Docker container. Finally, we tested it with various inputs to find the closest bike station. This real-life example illustrated the background operations of a serverless platform and how serverless functions are written.