Sunday, May 10, 2020

GCP Study Notes 10: Docker, Containers, Kubernetes, and Kubernetes Engine


An application and its dependencies are called an image. A container is simply a running instance of an image. You need software to build container images and to run them. Docker is one tool that does both. Docker is an open source technology that allows you to create and run applications in containers. But it doesn't offer a way to orchestrate those applications at scale like Kubernetes does.

Containers are not an intrinsic primitive feature of Linux. Instead, their power to isolate workloads is derived from the composition of several technologies:

One foundation is the Linux process. Each Linux process has its own virtual memory address space, separate from all others, and Linux processes can be rapidly created and destroyed.

Containers use Linux namespaces to control what an application can see: process ID numbers, directory trees, IP addresses, and more. By the way, Linux namespaces are not the same thing as Kubernetes namespaces, which you'll learn more about later on in this course.

Containers use Linux cgroups to control what an application can use: its maximum consumption of CPU time, memory, I/O bandwidth, and other resources.

Finally, containers use union file systems to efficiently encapsulate applications and their dependencies into a set of clean, minimal layers.
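As a small illustration of cgroups in practice, here is a hedged sketch using the Docker CLI's resource flags (the image and the limit values are just examples):

#run an nginx container limited to half a CPU and 256 MB of memory (cgroup limits; values are illustrative)
docker run -d --cpus 0.5 --memory 256m nginx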

A container image is structured in layers. The tool you use to build the image reads instructions from a file called the container manifest. In the case of a Docker-formatted container image, that's called a Dockerfile. Each instruction in the Dockerfile specifies a layer inside the container image. Each layer is read-only. When a container runs from this image, it will also have a writable ephemeral topmost layer.

We've already discussed Compute Engine, which is GCP's Infrastructure as a Service offering; it lets you run Virtual Machines in the cloud and gives you persistent storage and networking for them. We've also discussed App Engine, which is one of GCP's Platform as a Service offerings. Now I'm going to introduce you to a service called Kubernetes Engine. It's like an Infrastructure as a Service offering in that it saves you infrastructure chores. It's also like a Platform as a Service offering, in that it was built with the needs of developers in mind.

First, I'll tell you about a way to package software called Containers. I'll describe why Containers are useful, and how to manage them in Kubernetes Engine. Let's begin by remembering that Infrastructure as a Service offerings let you share compute resources with others by virtualizing the hardware. Each Virtual Machine has its own instance of an operating system of your choice, and you can build and run applications on it with access to memory, file systems, networking interfaces, and the other attributes that physical computers also have. But that flexibility comes with a cost. In an environment like this, the smallest unit of compute is a Virtual Machine together with its application.

The guest OS, that is the operating system, may be large, even gigabytes in size, and it can take minutes to boot up. Often it's worth it. Virtual Machines are highly configurable, and you can install and run your tools of choice. You can configure the underlying system resources such as disks and networking, and you can install your own web server, database, or middleware. But suppose your application is a big success. As demand for it increases, you have to scale out in units of an entire Virtual Machine, with a guest operating system for each. That can mean your resource consumption grows faster than you would like.

Now, let's make a contrast with a Platform as a Service environment like App Engine. From the perspective of someone deploying on App Engine, it feels very different. Instead of getting a blank Virtual Machine, you get access to a family of services that applications need. So all you do is write your code in self-contained workloads that use these services and include any dependent libraries. As demand for your application increases, the platform scales your applications seamlessly and independently by workload and infrastructure.

This scales rapidly, but you give up control of the underlying server architecture. That's where Containers come in. The idea of a Container is to give you the independent scalability of workloads you get in a PaaS environment, and an abstraction layer of the operating system and hardware like you get in an Infrastructure as a Service environment. What you get is an invisible box around your code and its dependencies, with limited access to its own partition of the file system and hardware.

Remember that in Windows, Linux, and other operating systems, a process is an instance of a running program. A Container starts as quickly as a new process. Compare that to how long it takes to boot up an entirely new instance of an operating system. All you need on each host is an operating system that supports Containers and a Container runtime. In essence, you're virtualizing the operating system rather than the hardware.

The environment scales like PaaS but gives you nearly the same flexibility as Infrastructure as a Service. The container abstraction makes your code very portable. You can treat the operating system and hardware as a black box, so you can move your code from development, to staging, to production, or from your laptop to the Cloud, without changing or rebuilding anything. If you want to scale, for example, a web server, you can do so in seconds and deploy dozens or hundreds of them on a single host, depending on the size of your workload.

Well, that's a simple example. Let's consider a more complicated case. You'll likely want to build your applications using lots of Containers, each performing its own function, say using the microservices pattern. The units of code running in these Containers can communicate with each other over a network fabric. If you build this way, you can make applications modular. They deploy easily and scale independently across a group of hosts.
The hosts can scale up and down, and start and stop Containers as demand for your application changes, or even as hosts fail and are replaced. A tool that helps you do this well is Kubernetes. Kubernetes makes it easy to orchestrate many Containers on many hosts, scale them, roll out new versions of them, and even roll back to the old version if things go wrong.

First, I'll show you how you build and run containers. The most common format for Container images is the one defined by the open source tool Docker. In my example, I'll use Docker to bundle an application and its dependencies into a Container. You could use a different tool. For example, Google Cloud offers Cloud Build, a managed service for building Containers. It's up to you.

Here is an example of some code you may have written. It's a Python web application, and it uses the very popular Flask framework. Whenever a web browser talks to it by asking for its top-most document, it replies "hello world". Or if the browser instead appends /version to the request, the application replies with its version. Great. So how do you deploy this application?
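The application code itself isn't reproduced in these notes; a minimal sketch of such a Flask app (the file name app.py, the port, and the version string are illustrative) could look like this:

#app.py - minimal Flask application matching the behavior described above (illustrative)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    #reply to a request for the top-most document
    return "hello world"

@app.route("/version")
def version():
    #reply with the application's version (illustrative value)
    return "1.0.0"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)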

It needs a specific version of Python and a specific version of Flask, which we control using Python's requirements.txt file, together with its other dependencies. So you use a Dockerfile to specify how your code gets packaged into a Container. For example, Ubuntu is a popular distribution of Linux, so let's start there. You can install Python the same way you would in your development environment. Of course, now that it's in a file, it's repeatable.
What's inside a Dockerfile?
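These notes don't reproduce the Dockerfile itself, so here is a minimal sketch that matches the instructions described below (the base image version, paths, and final command are illustrative):

#start from a specific version of the Ubuntu base image (base layer)
FROM ubuntu:18.04
#copy files from the build tool's current directory into the image (new layer)
COPY . /app
#build the application with make and store the result (third layer)
RUN make /app
#command to run inside the container when it is launched
CMD python /app/app.py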

The Dockerfile above contains a few kinds of instructions.
The FROM statement starts out by creating a base layer pulled from a public repository. This one happens to be the Ubuntu Linux runtime environment of a specific version. The COPY command adds a new layer, containing some files copied in from your build tool's current directory. The RUN command builds your application using the make command and puts the result of the build into a third layer. And finally, the last layer specifies what command to run within the container when it's launched. Each layer is only a set of differences from the layer before it. When you write a Dockerfile, you should organize the layers from those least likely to change to those most likely to change.

All changes made to the running container, such as writing new files, modifying existing files, and deleting files, are written to this thin writable container layer. The writable layer is ephemeral: when the container is deleted, its contents are lost forever. The underlying container image itself remains unchanged. This fact about containers has an implication for your application design: whenever you want to store data permanently, you must do so somewhere other than a running container.

Because each container has its own writable container layer, and all changes are stored in this layer, multiple containers can share access to the same underlying image and yet have their own data state. The diagram here shows multiple containers sharing the same Ubuntu 15.04 image. Because each layer is only a set of differences from the layer before it, you get smaller images. For example, your base application image may be 200 megabytes, but the difference to the next point release might only be 200 kilobytes. When you build a container, instead of copying the whole image, you create a layer with just the differences. When you run a container, the container runtime pulls down the layers it needs. When you update, you only need to copy the differences. This is much faster than running a new virtual machine.

It's very common to use publicly available open source container images as a base for your own images or for unmodified use. For example, you've already seen the Ubuntu container image, which provides an Ubuntu Linux environment inside of a container. Alpine is a popular Linux environment in a container, noted for being very, very small. The NGINX web server is frequently used in its container packaging.

Google maintains a container registry, gcr.io. This registry contains many public open source images. And Google Cloud customers also use it to store their own private images in a way that integrates well with Cloud IAM.

To generate a build with Cloud Build, you define a series of steps. For example, you can configure build steps to fetch dependencies, compile source code, run integration tests, or use tools such as Docker, Gradle, and Maven. Each build step in Cloud Build runs in a Docker container.
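As a hedged sketch of such a configuration (cloudbuild.yaml is the conventional file name; the image path py-server is a placeholder), a single Docker build step might look like this:

#cloudbuild.yaml - one build step that runs the Docker builder and records the resulting image
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/py-server:v1', '.']
images:
- 'gcr.io/$PROJECT_ID/py-server:v1'

#submit the source in the current directory to Cloud Build using that configuration
gcloud builds submit --config cloudbuild.yaml .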

Then Cloud Build can deliver your newly built images to various execution environments: not only GKE, but also App Engine and Cloud Functions.

Let's copy in the requirements.txt file we created earlier, and use it to install our application's dependencies. We'll also copy in the files that make up our application and tell the environment that launches this Container how to run it. Then I use the docker build command to build the Container. This builds the Container and stores it on the local system as a runnable image. Then I can use the docker run command to run the image. In a real-world situation, you'd probably upload the image to a Container Registry service, such as the Google Container Registry, and share or download it from there. Great, we packaged an application, but building a reliable, scalable, distributed system takes a lot more. How about application configuration, service discovery, managing updates, and monitoring?
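A hedged sketch of that workflow (the image name py-server, the tag, the port, and the project ID are placeholders):

#build a runnable image from the Dockerfile in the current directory
docker build -t py-server .

#run the image locally, mapping container port 8080 to host port 8080
docker run -d -p 8080:8080 py-server

#tag the image with a registry path and push it to Google Container Registry
#(you may first need: gcloud auth configure-docker)
docker tag py-server gcr.io/my-project/py-server:v1
docker push gcr.io/my-project/py-server:v1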

Kubernetes is an open-source orchestrator for containers, so you can better manage and scale your applications. Kubernetes offers an API that lets authorized people, not just anybody, control its operation through several utilities.

In Kubernetes, a node represents a computing instance. In Google Cloud, nodes are virtual machines running in Compute Engine.

#code to create a Kubernetes cluster
gcloud container clusters create k1

Whenever Kubernetes deploys a container or a set of related containers, it does so inside an abstraction called a pod. A pod is the smallest deployable unit in Kubernetes. Think of a pod as if it were a running process on your cluster. It could be one component of your application or even an entire application.

It's common to have only one container per pod. But if you have multiple containers with a hard dependency, you can package them into a single pod. They'll automatically share networking and they can have disk storage volumes in common. Each pod in Kubernetes gets a unique IP address and set of ports for your containers. Because containers inside a pod can communicate with each other using the localhost network interface, they don't know or care which nodes they're deployed on.
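To make that concrete, here is a hedged sketch of a pod manifest with two containers that share a volume and the pod's network namespace (all names, images, and paths are illustrative):

#pod.yaml - one pod packaging two tightly coupled containers (illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: web-with-helper
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: web
    image: nginx:1.17.10
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: helper
    image: busybox
    command: ["sh", "-c", "echo hello from the helper > /pod-data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data

#create the pod from the manifest
kubectl apply -f pod.yaml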

Running the kubectl run command starts a deployment with a container running inside a pod. In this example, the container running inside the pod is an image of the popular nginx open source web server. The kubectl command is smart enough to fetch an image of nginx of the version we request from a container registry. So what is a deployment? A deployment represents a group of replicas of the same pod. It keeps your pods running even if a node on which some of them are running fails. You can use a deployment to contain a component of your application or even the entire application.

To see the running nginx pods, run the command kubectl get pods. By default, pods in a deployment are only accessible inside your cluster, but what if you want people on the Internet to be able to access the content in your nginx web server? To make the pods in your deployment publicly available, you can connect a load balancer to it by running the kubectl expose command. Kubernetes then creates a service with a fixed IP address for your pods. A service is the fundamental way Kubernetes represents load balancing.

To be specific, you requested Kubernetes to attach an external load balancer with a public IP address to your service so that others outside the cluster can access it. In GKE, this kind of load balancer is created as a network load balancer. This is one of the managed load balancing services that Compute Engine makes available to virtual machines. GKE makes it easy to use it with containers. Any client that hits that IP address will be routed to a pod behind the service. In this case, there is only one pod, your simple nginx pod.

So what exactly is a service? A service groups a set of pods together and provides a stable endpoint for them. In our case, a public IP address managed by a network load balancer, although there are other choices. But why do you need a service? Why not just use pods' IP addresses directly? Suppose instead your application consisted of a front end and a back end. Couldn't the front end just access the back end using those pods' internal IP addresses without the need for a service? Yes, but it would be a management problem. As deployments create and destroy pods, pods get their own IP addresses, but those addresses don't remain stable over time. Services provide that stable endpoint you need.

What if you need more power? To scale a deployment, run the kubectl scale command. Now our deployment has 3 nginx web servers, but they're all behind the service and they're all available through one fixed IP address. You could also use autoscaling with all kinds of useful parameters, as in the sketch below.
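A hedged sketch of both approaches (the replica count and autoscaling thresholds are illustrative):

#manually scale the nginx deployment to 3 replicas
kubectl scale deployment nginx --replicas 3

#or let Kubernetes autoscale it between 2 and 10 replicas based on CPU utilization
kubectl autoscale deployment nginx --min 2 --max 10 --cpu-percent 80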

Instead of issuing commands, you provide a configuration file that tells Kubernetes what you want your desired state to look like, and Kubernetes figures out how to do it. These configuration files then become your management tools. To make a change, edit the file and then present the changed version to Kubernetes. The commands below are one way to get a starting point for one of these files based on the work we've already done, and then apply it.
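For example (a hedged sketch; the file name is arbitrary):

#export the live deployment as YAML to use as a starting point for a declarative config file
kubectl get deployment nginx -o yaml > nginx-deployment.yaml

#edit nginx-deployment.yaml (for example, change the replica count), then declare the desired state
kubectl apply -f nginx-deployment.yaml

#verify that the cluster now matches the declared state
kubectl get deployment nginx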

Lab: building and running containerized applications, orchestrating and scaling them on a cluster, and finally deploying them using rollouts.
1. need to enable Kubernetes Engine API & Container Registry API.
#================================================
#setup the environment variable called MY_ZONE:  
export MY_ZONE=us-central1-a

#Start a Kubernetes cluster managed by Kubernetes Engine, 
#Name the cluster webfrontend and configure it to run 2 nodes:
gcloud container clusters create webfrontend --zone $MY_ZONE --num-nodes 2
#those 2 nodes are VMs; you can see them in the Compute Engine > VM instances page.

#check the version of kubernetes: 
kubectl version

#launch a single instance of nginx container. (Nginx is a popular web server.)
kubectl create deploy nginx --image=nginx:1.17.10
#create a deployment consisting of a single pod containing the nginx container. 

#View the pod running the nginx container:
kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6cc5778b4d-qdch4   1/1     Running   0          75s

#Expose the nginx container to the Internet:
kubectl expose deployment nginx --port 80 --type LoadBalancer
#Kubernetes created a service and an external load balancer with a public IP address attached to it. 
#The IP address remains the same for the life of the service.

#View the new service:
kubectl get services
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
kubernetes   ClusterIP      10.51.240.1    <none>         443/TCP        18m
nginx        LoadBalancer   10.51.249.77   35.226.19.36   80:31318/TCP   51s

#Scale up the number of pods to 3 running on your service:
kubectl scale deployment nginx --replicas 3

kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6cc5778b4d-bp9fr   1/1     Running   0          68s
nginx-6cc5778b4d-qdch4   1/1     Running   0          7m16s
nginx-6cc5778b4d-st48z   1/1     Running   0          68s

#double-check that the external IP has not changed:
kubectl get services
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
kubernetes   ClusterIP      10.51.240.1    <none>         443/TCP        23m
nginx        LoadBalancer   10.51.249.77   35.226.19.36   80:31318/TCP   4m54s
#===========================================================
We've discussed two GCP products that provide the compute infrastructure for applications: Compute Engine and Kubernetes Engine.
The App Engine platform manages the hardware and networking infrastructure required to run your code. To deploy an application on App Engine, you just hand App Engine your code and the App Engine service takes care of the rest. App Engine provides you with built-in services that many web applications need: NoSQL databases, in-memory caching, load balancing, health checks, logging, and a way to authenticate users. App Engine will scale your application automatically in response to the amount of traffic it receives, so you only pay for those resources you use. There are no servers for you to provision or maintain. That's why App Engine is especially suited for applications where the workload is highly variable or unpredictable, like web applications and mobile backends. App Engine offers two environments: standard and flexible.

Google App Engine Standard Environment: Of the two App Engine environments, Standard is the simpler. It offers a simpler deployment experience than the Flexible Environment and fine-grained auto-scaling. It also offers a free daily usage quota for the use of some services. What's distinctive about the Standard Environment, though, is that low-utilization applications might be able to run at no charge.

Google provides App Engine software development kits in several languages, so that you can test your application locally before you upload it to the real App Engine service. The SDKs also provide simple commands for deployment. Now, you may be wondering what does my code actually run on? I mean what exactly is the executable binary? App Engine's term for this kind of binary is the runtime.
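As a hedged sketch of what deployment looks like (the runtime name and the configuration are illustrative and vary by language and version):

#app.yaml - minimal App Engine standard environment configuration (illustrative)
runtime: python37

#deploy the application in the current directory to App Engine
gcloud app deploy app.yaml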

In App Engine Standard Environment, you use a runtime provided by Google. We'll see your choices shortly. App Engine Standard Environment provides runtimes for specific versions of Java, Python, PHP and Go. The runtimes also include libraries that support App Engine APIs. And for many applications, the Standard Environment runtimes and libraries may be all you need. If you want to code in another language, Standard Environment is not right for you.

You'll want to consider the Flexible Environment. The Standard Environment also enforces restrictions on your code by making it run in a so-called "Sandbox." That's a software construct that's independent of the hardware, operating system, or physical location of the server it runs on. The Sandbox is one of the reasons why App Engine Standard Environment can scale and manage your application in a very fine-grained way.

Like all Sandboxes, it imposes some constraints. For example, your application can't write to the local file system. It will have to write to a database service instead if it needs to make data persistent. Also, all requests your application receives have a 60-second timeout, and you can't install arbitrary third-party software. If these constraints don't work for you, that would be a reason to choose the Flexible Environment.

Instead of the sandbox, the App Engine Flexible Environment lets you specify the container your application runs in. Yes, containers. Your application runs inside Docker containers on Google Compute Engine Virtual Machines (VMs). App Engine manages these Compute Engine machines for you: they're health-checked, healed as necessary, you get to choose which geographical region they run in, and critical backward-compatible updates to their operating systems are automatically applied. All this so that you can just focus on your code. App Engine Flexible Environment apps use standard runtimes and can access App Engine services such as Datastore, Memcache, task queues, and so on.
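A hedged sketch of the corresponding Flexible Environment configuration (values are illustrative):

#app.yaml - App Engine flexible environment, running the app in a container on managed VMs
runtime: custom   #build the container from the Dockerfile in the project directory
env: flex

#deploy; App Engine builds the container image and runs it on Compute Engine VMs it manages
gcloud app deploy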

Notice that the Standard Environment starts up instances of your application faster, but you get less access to the infrastructure in which your application runs. For example, the Flexible Environment lets you SSH into the virtual machines on which your application runs, lets you use local disk for scratch space, lets you install third-party software, and lets your application make calls to the network without going through App Engine. On the other hand, the Standard Environment's billing can drop to zero for a completely idle application.

App Engine Standard Environment is for people who want the service to take maximum control of their application's deployment and scaling. Kubernetes Engine gives the application owner the full flexibility of Kubernetes. App Engine Flexible Environment is somewhere in between.

App Engine environment treats containers as a means to an end, but for Kubernetes Engine, containers are a fundamental organizing principle.
Suppose you want to expose your application's functionality through an API so other developers can call it. You'd like the API to have a single, coherent way to know which end user is making each call. That's when you use Cloud Endpoints. It implements these capabilities and more using an easy-to-deploy proxy in front of your software service, and it provides an API console to wrap up those capabilities in an easy-to-manage interface. Cloud Endpoints supports applications running in GCP's compute platforms, in your choice of languages and your choice of client technologies.

Apigee Edge is also a platform for developing and managing API proxies. It has a focus on business problems like rate limiting, quotas, and analytics. Many users of Apigee Edge are providing a software service to other companies, and those features come in handy. Because the backend services for Apigee Edge need not be in GCP, engineers often use it when they are "taking apart" a legacy application. Instead of replacing a monolithic application in one risky move, they can use Apigee Edge to peel off its services one by one, standing up microservices to implement each in turn, until the legacy application can finally be retired.

