Kubernetes in Google Cloud

Introduction to Docker


  • Theory
    • Docker is an open platform for developing, shipping, and running applications
    • Docker does this by combining kernel containerization features with workflows and tooling that helps you manage and deploy your applications
  • Hello World
    • docker run hello-world -=> Run a hello world container
    • docker images -=> Take a look at the container image it pulled from Docker Hub
    • docker run hello-world -=> The second time you run this, the docker daemon finds the image in your local registry and runs the container from that image
    • docker ps -=> Look at the running containers
    • docker ps -a -=> To see all containers, including ones that have finished executing
  • Build
    • mkdir test && cd test -=> To create and switch into a folder named test
    • Create a Dockerfile (a minimal sketch appears at the end of this Build list)
    • Create the node application, app.js (also sketched below)
    • docker build -t node-app:0.1 . -=> Build the image and tag it node-app:0.1 (the trailing dot sets the build context to the current directory)
    • docker run -p 4000:80 --name my-app node-app:0.1 -=> Run the container, mapping host port 4000 to the app's port 80
    • curl http://localhost:4000 -=> Confirm the app responds
    • docker stop my-app && docker rm my-app -=> To stop and remove the container
    • docker run -p 4000:80 --name my-app -d node-app:0.1 -=> Start the container in the background (detached)
    • docker ps -=> Confirm it is running and note the container ID
    • docker logs [container_id] -=> View the container's output
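    • For reference, a minimal Dockerfile and app.js for the steps above might look like this (an illustrative sketch, not the lab's exact files; the base image tag and port are assumptions):
      # Dockerfile -- package app.js into an image that serves on port 80
      FROM node:lts
      WORKDIR /app
      ADD . /app
      EXPOSE 80
      CMD ["node", "app.js"]
      // app.js -- tiny HTTP server the image runs
      const http = require('http');
      const server = http.createServer((req, res) => {
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end('Hello World\n');
      });
      server.listen(80);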
  • Debug
    • docker logs -f [container_id] -=> If you want to follow the log's output as the container is running
    • docker exec -it [container_id] bash
    • docker inspect [container_id] -=> Examine a container's metadata in Docker
  • Publish
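    • Publishing means tagging the image for a registry and pushing it; with Google Container Registry the commands look roughly like this ([project-id] is your own project):
      docker tag node-app:0.1 gcr.io/[project-id]/node-app:0.1   # re-tag the image with the registry path
      docker push gcr.io/[project-id]/node-app:0.1               # upload it so other hosts can pull it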

Orchestrating the Cloud with Kubernetes


  • Theory
    • Hierarchy: a Cluster contains Nodes (each node is a Compute Engine instance), Nodes run Pods, and each Pod holds one or more Containers
  • Kubernetes Create
    • Deployments keep the pods up and running even when the nodes they run on fail
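    • A sketch of launching and exposing an app with a Deployment (the nginx image and version are just an example):
      kubectl create deployment nginx --image=nginx:1.10.0            # create a Deployment running nginx
      kubectl get pods                                                 # see the Pod it started
      kubectl expose deployment nginx --port 80 --type LoadBalancer   # front it with a Service and external load balancer
      kubectl get services                                             # look up the Service's external IP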
  • Pods
    • Pods represent and hold a collection of one or more containers
    • Pods also have Volumes. Volumes are data disks that live as long as the pods live, and can be used by the containers in that pod
    • Pods also share a network namespace. This means that there is one IP Address per pod
  • Creating Pods
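    • Pods are usually described in a YAML manifest and fed to kubectl; a minimal sketch (the monolith name and image follow the lab's sample app, so treat them as illustrative):
      apiVersion: v1
      kind: Pod
      metadata:
        name: monolith
        labels:
          app: monolith
      spec:
        containers:
          - name: monolith
            image: kelseyhightower/monolith:1.0.0
            ports:
              - containerPort: 80
    • Applying and inspecting it:
      kubectl create -f pods/monolith.yaml   # create the Pod from the manifest (path is illustrative)
      kubectl get pods                       # confirm it is running
      kubectl describe pods monolith         # inspect its details, events, and IP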
  • Interacting with Pods
    • By default, pods are allocated a private IP address and cannot be reached outside of the cluster
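    • The usual ways to reach and debug a Pod during development, sketched against the monolith Pod above (port numbers are just examples):
      kubectl port-forward monolith 10080:80   # map a local port to a port inside the Pod
      kubectl logs -f monolith                 # stream the Pod's logs
      kubectl exec -it monolith -- /bin/sh     # open an interactive shell inside the Pod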
  • Services
    • Services use labels to determine what Pods they operate on
    • The level of access a service provides to a set of pods depends on the Service's type
      • ClusterIP (internal) -=> The default type means that this Service is only visible inside of the cluster
      • NodePort exposes the Service on a fixed port on every node, so it can be reached from outside the cluster at any node's IP on that port
      • LoadBalancer adds a load balancer from the cloud provider which forwards traffic from the service to Nodes within it
  • Creating a Service
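    • A Service is also defined in a YAML manifest; a sketch (names, labels, ports, and the NodePort type are illustrative; the secure label ties into the labelling step below):
      apiVersion: v1
      kind: Service
      metadata:
        name: monolith
      spec:
        type: NodePort
        selector:
          app: monolith
          secure: enabled
        ports:
          - protocol: TCP
            port: 443
            targetPort: 443
            nodePort: 31000
    • kubectl create -f services/monolith.yaml -=> Create the Service (path is illustrative)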
  • Adding Labels to Pods
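    • For example (the pod name is illustrative):
      kubectl label pods secure-monolith 'secure=enabled'   # add the label so the Service's selector starts matching this Pod
      kubectl get pods -l "app=monolith,secure=enabled"     # list only the Pods carrying both labels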
  • Deploying Applications with Kubernetes
    • Deployments are a declarative way to ensure that the number of Pods running is equal to the desired number of Pods, specified by the user
    • Behind the scenes, Deployments use ReplicaSets to manage starting and stopping the Pods
      • A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time
    • Pods are tied to the lifetime of the Node they are created on
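    • A minimal Deployment manifest, as a sketch (name, image, and replica count are illustrative):
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: hello
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: hello
        template:
          metadata:
            labels:
              app: hello
          spec:
            containers:
              - name: hello
                image: kelseyhightower/hello:1.0.0
                ports:
                  - containerPort: 80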

Managing Deployments Using Kubernetes Engine


  • Theory
    • DevOps practices regularly make use of multiple deployments to manage application deployment scenarios such as "Continuous Deployment", "Blue-Green Deployments", "Canary Deployments" and more
    • Heterogeneous deployments typically involve connecting two or more distinct infrastructure environments or regions to address a specific technical or operational need
    • Heterogeneous deployments are called "hybrid", "multi-cloud", or "public-private", depending upon the specifics of the deployment
    • Various business and technical challenges can arise in deployments that are limited to a single environment or region -=> Maxed out resources, Limited geographic reach, Limited availability, Vendor lock-in, Inflexible resources
    • Three common scenarios for heterogeneous deployment are multi-cloud deployments, fronting on-premises data, and continuous integration/continuous delivery (CI/CD) processes
  • Learn about the deployment object
    • kubectl explain deployment -=> The explain command in kubectl can tell us about the Deployment object
    • kubectl explain deployment --recursive -=> See all of the fields
    • kubectl explain deployment.metadata.name
  • Create a deployment
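    • A sketch of the typical flow (the manifest path is illustrative):
      kubectl create -f deployments/hello.yaml   # create the Deployment from a manifest (kubectl apply -f also works)
      kubectl get deployments                    # check the Deployment
      kubectl get replicasets                    # see the ReplicaSet it created behind the scenes
      kubectl get pods                           # see the Pods the ReplicaSet started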
  • Scale a Deployment
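    • Scaling only changes the desired replica count; sketched with a deployment assumed to be named hello:
      kubectl scale deployment hello --replicas=5   # scale up to 5 replicas
      kubectl get pods | grep hello- | wc -l        # count the running hello Pods to verify
      kubectl scale deployment hello --replicas=3   # scale back down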
  • Rolling update
    • When a Deployment is updated with a new version, it creates a new ReplicaSet and slowly increases the number of replicas in the new ReplicaSet as it decreases the replicas in the old ReplicaSet
    • Trigger a rolling update
    • Pause a rolling update
    • Resume a rolling update
    • Rollback an update
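    • The corresponding kubectl rollout commands, as a sketch (the deployment is assumed to be named hello):
      kubectl edit deployment hello              # change e.g. the image tag to trigger a rolling update
      kubectl rollout status deployment/hello    # watch the rollout progress
      kubectl rollout pause deployment/hello     # pause it mid-way
      kubectl rollout resume deployment/hello    # resume it
      kubectl rollout history deployment/hello   # list previous revisions
      kubectl rollout undo deployment/hello      # roll back to the previous revision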
  • Canary deployments
    • Canary deployments allow you to release a change to a small subset of your users to mitigate risk associated with new releases
    • Create a canary deployment
      • A canary deployment consists of a separate deployment with your new version and a service that targets both your normal, stable deployment as well as your canary deployment
    • Verify the canary deployment
    • Canary deployments in production - session affinity
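    • For the session-affinity case, the Service can pin each client to one Pod by source IP, so a given user stays on either the stable or the canary version; a sketch of the relevant manifest (names and ports are illustrative):
      apiVersion: v1
      kind: Service
      metadata:
        name: hello
      spec:
        sessionAffinity: ClientIP
        selector:
          app: hello          # matches both the stable and the canary Deployments
        ports:
          - protocol: TCP
            port: 80
            targetPort: 8080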
  • Blue-green deployments
    • With blue-green deployments, the new ("green") version is deployed in full alongside the existing ("blue") version, and the load balancer/Service is switched over to it only once it is fully up
    • Blue-Green Rollback
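    • One way to do the switch and the rollback is to flip the Service's selector between version labels; a sketch assuming the two Deployments are labelled version: blue and version: green:
      kubectl apply -f deployments/hello-green.yaml                                 # deploy the green version in full (path is illustrative)
      kubectl patch service hello -p '{"spec":{"selector":{"version":"green"}}}'    # point the Service at green
      kubectl patch service hello -p '{"spec":{"selector":{"version":"blue"}}}'     # rollback: point it back at blue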

Continuous Delivery with Jenkins in Kubernetes Engine


  • Theory
    • Jenkins is the go-to automation server used by developers who frequently integrate their code in a shared repository
    • Kubernetes Engine is Google Cloud's hosted version of Kubernetes - a powerful cluster manager and orchestration system for containers
    • Jenkins is an open-source automation server that lets you flexibly orchestrate your build, test, and deployment pipelines
    • When you need to set up a continuous delivery (CD) pipeline, deploying Jenkins on Kubernetes Engine provides important benefits over a standard VM-based deployment
  • Provisioning Jenkins
    • Creating a Kubernetes cluster
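      • A sketch of the cluster creation (name, size, and scopes are assumptions about what a Jenkins CD cluster typically needs):
        gcloud container clusters create jenkins-cd \
          --num-nodes 2 \
          --machine-type e2-standard-2 \
          --scopes "https://www.googleapis.com/auth/source.read_write,cloud-platform"
        gcloud container clusters get-credentials jenkins-cd   # fetch kubectl credentials for the new cluster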
  • Setup Helm
    • Helm is a package manager that makes it easy to configure and deploy Kubernetes applications
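    • With Helm available in Cloud Shell, registering the Jenkins chart repository looks like this (the repo alias is up to you):
      helm repo add jenkins https://charts.jenkins.io   # add the public Jenkins Helm chart repo
      helm repo update                                  # refresh the local chart index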
  • Configure and Install Jenkins
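    • A sketch of the install (the release name "cd" and the values file path follow the lab's layout, so treat them as illustrative):
      helm install cd jenkins/jenkins -f jenkins/values.yaml --wait   # install the chart with a custom values file
      kubectl get pods                                                # wait for the Jenkins pod to be Running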
  • Connect to Jenkins
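    • Connecting typically means reading the generated admin password from its Kubernetes Secret and port-forwarding the Jenkins service; a sketch (the cd-jenkins names follow the "cd" release above and may differ):
      printf $(kubectl get secret cd-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode); echo
      kubectl port-forward svc/cd-jenkins 8080:8080 >> /dev/null &    # then open the Web Preview on port 8080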
  • Understanding the Application
    • In backend mode: gceme listens on port 8080 and returns Compute Engine instance metadata in JSON format
    • In frontend mode: gceme queries the backend gceme service and renders the resulting JSON in the user interface
  • Deploying the Application
    • Production: The live site that your users access
    • Canary: A smaller-capacity site that receives only a percentage of your user traffic
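    • A sketch of the initial deployment (the namespace and manifest directories follow the lab's repo layout, so the paths are illustrative):
      kubectl create ns production                    # keep the app in its own namespace
      kubectl apply -f k8s/production -n production   # the production deployments
      kubectl apply -f k8s/canary -n production       # the canary deployment
      kubectl apply -f k8s/services -n production     # the services in front of both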
  • Creating the Jenkins Pipeline
    • Creating a repository to host the sample app source code
    • Adding your service account credentials
    • Creating the Jenkins job
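    • For the repository step, the commands look roughly like this (the repo name "default" and the $PROJECT_ID variable are assumptions; Cloud Source Repositories is the host):
      gcloud source repos create default                         # create the repo
      git init && git add . && git commit -m "Initial commit"    # put the sample app under version control
      git config credential.helper gcloud.sh                     # let git authenticate with your gcloud credentials
      git remote add origin https://source.developers.google.com/p/$PROJECT_ID/r/default
      git push origin master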
  • Creating the Development Environment
    • Creating a development branch
    • Modifying the pipeline definition
      • The Jenkinsfile that defines the pipeline is written using the Jenkins Pipeline Groovy syntax (a minimal sketch appears at the end of this section)
    • Modify the site
    • Kick off Deployment
    • Deploying a Canary Release
    • Deploying to production
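    • As referenced above, a stripped-down Jenkinsfile in the declarative Groovy syntax might look like this (stage names, branch names, and shell steps are illustrative, not the lab's exact pipeline):
      pipeline {
        agent any
        stages {
          stage('Build and test') {
            steps {
              sh 'go test ./...'
            }
          }
          stage('Deploy canary') {
            when { branch 'canary' }
            steps {
              sh 'kubectl apply -f k8s/canary -n production'
            }
          }
          stage('Deploy production') {
            when { branch 'master' }
            steps {
              sh 'kubectl apply -f k8s/production -n production'
            }
          }
        }
      }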