
Google Kubernetes Engine


Summary

image
Google Kubernetes Engine logo. Source: CNCF 2018b.

Google Kubernetes Engine (GKE) is a managed environment for deploying, managing, and scaling containerized applications on Google Cloud Platform infrastructure. The environment GKE provides consists of multiple machines, specifically Google Compute Engine instances, grouped together to form a cluster. GKE draws on the same reliable infrastructure and design principles that run popular Google services, and provides benefits such as automatic management, monitoring and liveness probes for application containers, automatic scaling, and rolling updates.

Milestones

Jun
2014

The first public commit of Kubernetes code is made in GitHub. A few days later, it's released to the world at DockerCon.

Jul
2015

Version 1.0 of Kubernetes is released at the Open Source Convention (OSCON). With Kubernetes as its first project, the Cloud Native Computing Foundation (CNCF) is founded.

May
2018

Google offers its own Google Kubernetes Engine (GKE) as a service for cluster management. Similar services from Microsoft and Amazon follow.

Jul
2018

As an alpha release, Google releases GKE On-Prem so that enterprises can deploy GKE in their own data centers. This allows Google to offer a hybrid cloud platform.

Oct
2018
image
Load balancing based on pods rather than only VMs. Source: Pattan and Xia 2018.

Container-native load balancing becomes available for GKE-based applications. Previously, load balancers worked at the granularity of VMs. These VMs then used iptables rules to route workloads to the right pods. This was suboptimal and resulted in extra traffic hops between nodes. With Network Endpoint Groups (NEGs), load balancing can now be done based on the availability and health of pods rather than nodes.
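As a sketch of how this surfaces to users, GKE lets you request NEG-backed load balancing by annotating a Kubernetes Service. The Service name `web` below is a hypothetical example:

```shell
# Annotate an existing Service (here named "web", a placeholder) so that
# GKE creates Network Endpoint Groups for its pods. An Ingress backed by
# this Service then load-balances directly to pod endpoints instead of
# hopping through node VMs.
kubectl annotate service web 'cloud.google.com/neg={"ingress": true}'
```

The same annotation can equally be set in the Service manifest itself under `metadata.annotations`.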

Discussion

  • Why use Kubernetes?

    Before managing applications with Kubernetes and GKE, you should know how containers work and what advantages they provide. If you were to build an online retail application with user sign-in, inventory management, billing, and shipping, you could break it up into smaller modules called microservices. These are isolated and elastic, giving high availability and scalability.

    Containers provide the environment to deploy microservices. You can run them on the same machine or on different machines, and start and stop them quickly. But containers alone don't let you specify how many machines or containers to keep running. What happens if a container fails? How does one container connect to others, and how is persistent storage enabled? For these aspects, you need a container orchestration system like Kubernetes.

    Kubernetes is an open source orchestrator for a container environment. It provides the ability to define how many machines to use, how many containers to deploy, how to scale them, where the persistent disks reside, and how to deploy a group of containers as a single unit.
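    The capabilities above can be sketched with kubectl, the Kubernetes command-line client. The deployment name `web` and the image are placeholder examples, and the commands assume a cluster you are already authenticated against:

```shell
# Create a Deployment: a group of identical containers managed as one unit.
kubectl create deployment web --image=nginx

# Declare how many container replicas Kubernetes should keep running;
# Kubernetes restarts or reschedules pods to maintain this count.
kubectl scale deployment web --replicas=3

# Scale automatically between 2 and 10 replicas based on CPU usage.
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80
```

    The key design choice is that these commands are declarative: you state the desired number of replicas, and Kubernetes continuously reconciles the cluster toward that state.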

  • If Kubernetes is open-source, then why use GKE?
    image
    GKE is a balance between the control of GCE and the fully managed service of Google App Engine. Source: Google Cloud 2018a.

    When you use GKE to set up a cluster, you also gain the benefit of advanced cluster management features that Google Cloud Platform provides. These include:

    • CI/CD tools in GCP to help you build and serve application containers
    • Google Cloud Build to build container images from various source code repositories
    • Google Container Registry to store and serve your container images
    • Load balancing for Google Compute Engine instances
    • Node pools to designate subsets of nodes within a cluster for additional flexibility
    • Automatic scaling of your cluster's node instance count
    • Automatic upgrades for your cluster's node software
    • Node auto-repair to maintain node health and availability
    • Logging and monitoring with Stackdriver for visibility into your cluster
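    Several of these features map to flags on the `gcloud` CLI. A minimal sketch of creating a cluster, where the cluster name and zone are placeholder values:

```shell
# Create a three-node GKE cluster with node autoscaling, automatic node
# upgrades, and node auto-repair enabled.
gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --num-nodes 3 \
    --enable-autoscaling --min-nodes 1 --max-nodes 5 \
    --enable-autoupgrade \
    --enable-autorepair

# Fetch credentials so that kubectl can talk to the new cluster.
gcloud container clusters get-credentials my-cluster --zone us-central1-a
```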
  • How does GKE work?
    image
    Illustrating a GKE cluster. Source: Moudgil 2017.

    A cluster is the foundation of GKE: the Kubernetes objects that represent containerized applications all run within a cluster. A cluster consists of at least one cluster master and multiple worker machines called nodes, which run containerized applications and other workloads. The nodes are Google Compute Engine (GCE) instances that GKE creates on your behalf when you create a cluster.

    The master and node machines run the Kubernetes cluster orchestration system. The cluster master runs the Kubernetes control plane processes, including the Kubernetes API server, scheduler, and core resource controllers. The API server process is the hub for all communication in the cluster. All internal cluster processes (such as the cluster nodes, system components, and application controllers) act as clients of the API server; the API server is the single "source of truth" for the entire cluster.
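    Because every client goes through the API server, ordinary kubectl commands are simply API calls. A small sketch, assuming an authenticated cluster:

```shell
# Show the API server endpoint that kubectl is talking to.
kubectl cluster-info

# Every kubectl command is an HTTP request to the API server; --raw makes
# the request explicit, here querying the server's health endpoint.
kubectl get --raw /healthz
```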

  • What are some criticisms of GKE?

    GKE is still considered new. Documentation and support could be improved. It's also not well integrated into the rest of Google Cloud Platform.

    On a more technical note, some complain that the configurability of GKE is limited, including limited control over authentication, authorization, and admission control. It's expected that future versions of GKE will give users more control, such as via ConfigMaps.

References

  1. Beda, Joe. 2018. "4 Years of K8s." Kubernetes Blog, June 06. Accessed 2018-09-29.
  2. Burns, Brendan. 2018. "The History of Kubernetes & the Community Behind It." Kubernetes Blog, July 20. Accessed 2018-09-29.
  3. CNCF. 2018a. "Homepage." Cloud Native Computing Foundation. Accessed 2018-09-30.
  4. CNCF. 2018b. "Software Conformance." Cloud Native Computing Foundation. Accessed 2018-11-24.
  5. Google Cloud. 2018a. "Choosing an option to run containers." Accessed 2018-11-24.
  6. Google Cloud. 2018b. "Kubernetes Engine." Accessed 2018-12-12.
  7. Moudgil, Rishabh. 2017. "How to monitor Google Kubernetes Engine with Datadog." Blog, Datadog, December 19. Accessed 2018-11-24.
  8. Papp, Andrea. 2018. "The History of Kubernetes on a Timeline." RisingStack Blog, June 20. Updated 2018-07-20. Accessed 2018-09-29.
  9. Pattan, Neha and Minhan Xia. 2018. "Introducing container-native load balancing on Google Kubernetes Engine." Blog, Google Cloud. October 12. Accessed 2018-11-24.
  10. Sverdlik, Yevgeniy. 2018. "Google is Building a Version of Kubernetes Engine for On-Prem Data Centers." Data Center Knowledge, July 24. Accessed 2018-11-24.
  11. Thorpe, Stefan. 2018. "The Heavyweight Championship: A Kubernetes Managed Service Comparison — EKS vs. GKE vs. AKS." DZone, July 27. Accessed 2018-12-12.
  12. yuvipanda. 2017. "Is there cons to using GKE vs running your own masters?" r/kubernetes, Reddit. Accessed 2018-12-12.


Further Reading

  1. Official Website
  2. Official Documentation
  3. Sanche, Daniel. 2018. "Kubernetes 101: Pods, Nodes, Containers, and Clusters." Medium, January 02. Accessed 2018-09-29.
  4. Burns, Brendan, Brian Grant, David Oppenheimer, Eric Brewer, and John Wilkes. 2016. "Borg, Omega, and Kubernetes." ACM Queue, vol. 14, no. 1, March 02. Accessed 2018-09-30.
  5. Morgan, Timothy Prickett. 2016. "A Decade Of Container Control At Google." The Next Platform, March 22. Accessed 2018-10-01.
  6. Yegulalp, Serdar. 2018. "What is Kubernetes? Container orchestration explained." InfoWorld, April 04. Accessed 2018-10-01.

Top Contributors

Last update: 2018-12-12 14:23:30 by arvindpdmn
Creation: 2018-11-01 04:10:48 by sivaraj


Cite As

Devopedia. 2018. "Google Kubernetes Engine." Version 6, December 12. Accessed 2018-12-16. https://devopedia.org/google-kubernetes-engine