Kubernetes

Kubernetes on Windows

Microsoft Ignite - Published on Sep 29, 2017

Over the past few months, Kubernetes has skyrocketed in popularity with Microsoft Azure customers. Join Gabe Monroy and Patrick Lang for a deep dive into Kubernetes. This session also covers the state of Kubernetes on Windows, both on-premises and in the cloud.


Kubernetes (commonly referred to as “k8s”) is an open-source container cluster manager originally designed by Google and donated to the Cloud Native Computing Foundation. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”. It commonly works with the Docker container runtime and coordinates containers across a cluster of hosts running Docker.

History

Kubernetes (from κυβερνήτης: Greek for “helmsman” or “pilot”) was founded by Joe Beda, Brendan Burns, and Craig McLuckie, and first announced by Google in 2014. Its development and design are heavily influenced by Google's Borg system, and many of the top contributors to the project previously worked on Borg. The original name for Kubernetes within Google was Project Seven of Nine, a reference to a Star Trek character who is often considered a “friendlier” Borg. After Google's lawyers rebuked McLuckie, Burns, and Beda for the Project Seven name, McLuckie came up with the name Kubernetes. The seven spokes on the wheel of the Kubernetes logo are a small acknowledgment of Kubernetes' original name.

Kubernetes v1.0 was released on July 21, 2015. Along with the Kubernetes v1.0 release, Google partnered with the Linux Foundation to form the Cloud Native Computing Foundation (CNCF) and offered Kubernetes as a seed technology.

Design

Kubernetes defines a set of building blocks (“primitives”) which collectively provide mechanisms for deploying, maintaining, and scaling applications. The components that make up Kubernetes are designed to be loosely coupled and extensible, so that the system can support a wide variety of workloads. The extensibility is provided in large part by the Kubernetes API, which is used by internal components as well as by extensions and containers running on Kubernetes.<ref name="do-intro" />
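
As a sketch of what working against that API looks like, the snippet below uses the official Kubernetes Python client (the “kubernetes” package, one of several generated API clients). It assumes a reachable cluster and a kubeconfig file such as the one kubectl uses; it is a minimal illustration, not the only way to talk to the API.

<code python>
# Minimal sketch using the official "kubernetes" Python client package.
# Assumes a reachable cluster and a kubeconfig file (as used by kubectl).
from kubernetes import client, config

config.load_kube_config()   # read credentials from ~/.kube/config
v1 = client.CoreV1Api()     # client for the core/v1 API group

# List every pod in the cluster, printing namespace, name, and pod IP.
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.pod_ip)
</code>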

Pods

The basic scheduling unit in Kubernetes is called a “pod”. It adds a higher level of abstraction to containerized components. A pod consists of one or more containers that are guaranteed to be co-located on the host machine and can share resources.<ref name="do-intro" /> Each pod in Kubernetes is assigned a unique (within the cluster) IP address, which allows applications to use ports without the risk of conflict.<ref name="kubernetes-101-networking" /> A pod can define a volume, such as a local disk directory or a network disk, and expose it to the containers in the pod.<ref name="kubernetes-for-developers" /> Pods can be manually managed through the Kubernetes API, or their management can be delegated to a controller.<ref name="do-intro" />
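
For illustration, the following sketch (again using the Python client; the pod, container, and volume names are made up) creates a pod with two co-located containers sharing an emptyDir volume of the kind described above:

<code python>
# Sketch: a two-container pod sharing an emptyDir volume.
# Names ("shared-demo", "writer", "reader") are illustrative only.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

shared = client.V1VolumeMount(name="shared-data", mount_path="/data")
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="shared-demo"),
    spec=client.V1PodSpec(
        volumes=[client.V1Volume(
            name="shared-data",
            empty_dir=client.V1EmptyDirVolumeSource())],
        containers=[
            # Both containers are co-located and see the same /data volume.
            client.V1Container(name="writer", image="busybox",
                               command=["sh", "-c", "date > /data/now; sleep 3600"],
                               volume_mounts=[shared]),
            client.V1Container(name="reader", image="busybox",
                               command=["sh", "-c", "sleep 3600"],
                               volume_mounts=[shared]),
        ]))

v1.create_namespaced_pod(namespace="default", body=pod)
</code>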

Labels and Selectors

Kubernetes enables clients (users or internal components) to attach key-value pairs called “labels” to any API object in the system, such as pods and nodes. Correspondingly, “label selectors” are queries against labels that resolve to matching objects.<ref name="do-intro" />

Labels and selectors are the primary grouping mechanism in Kubernetes, and are used to determine the components to which to apply an operation.<ref name="containerizing-docker-on-Kubernetes" />

For example, if the Pods of an application have labels for “tier” (front-end, back-end, etc.) and “release_track” (canary, production, etc.), then an operation on all of the back-end canary pods could use a selector such as the following:<ref name="redhat-docker-and-kubernetes-training-labels-examples" /> <blockquote>

tier=back-end AND release_track=canary

</blockquote>
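
In the actual selector syntax, the logical AND above is written as a comma. A sketch of the same query through the Python client:

<code python>
# Sketch: select all back-end canary pods via a label selector.
# In API syntax, logical AND between label requirements is a comma.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod(
    namespace="default",
    label_selector="tier=back-end,release_track=canary")
for pod in pods.items:
    print(pod.metadata.name, pod.metadata.labels)
</code>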

Controllers

A controller is a reconciliation loop that drives the actual cluster state toward the desired cluster state.<ref name="coreos-replication-controller" /> It does this by managing a set of pods. One kind of controller is the Replication Controller, which handles replication and scaling by running a specified number of copies of a pod across the cluster, and which creates replacement pods when the node a pod is running on fails.<ref name="coreos-replication-controller" /> Other controllers that are part of the core Kubernetes system include the “DaemonSet Controller”, for running exactly one pod on every machine (or some subset of machines), and the “Job Controller”, for running pods that run to completion, e.g. as part of a batch job.<ref name="exciting-experimental-features" /> The set of pods that a controller manages is determined by label selectors that are part of the controller's definition.<ref name="redhat-docker-and-kubernetes-training-labels-examples" />
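
The reconciliation idea can be shown without any Kubernetes machinery at all. The toy loop below (plain Python, no real API) compares a desired replica count with the observed pods and issues creates or deletes until they converge, which is the essence of what a Replication Controller does:

<code python>
# Toy reconciliation loop (no real Kubernetes API): converge the set of
# observed pods toward a desired replica count, as a controller would.
import time

desired_replicas = 3
observed_pods = ["pod-a"]   # pretend this was read from the cluster

def reconcile(desired, pods):
    if len(pods) < desired:
        # Too few replicas: create replacements (e.g. after a node failure).
        for i in range(desired - len(pods)):
            pods.append(f"pod-new-{i}")
            print("created", pods[-1])
    elif len(pods) > desired:
        # Too many replicas: scale down by deleting the excess.
        for pod in pods[desired:]:
            print("deleted", pod)
        del pods[desired:]

while len(observed_pods) != desired_replicas:
    reconcile(desired_replicas, observed_pods)
    time.sleep(1)           # real controllers watch the API server instead
</code>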

Services

A Kubernetes service is a set of pods that work together, such as one tier of a multi-tier application. The set of pods that constitute a service is defined by a label selector.<ref name="do-intro" /> Kubernetes provides service discovery and request routing by assigning a stable IP address and DNS name to the service, and load-balances traffic in a round-robin manner to network connections of that IP address among the pods matching the selector (even as failures cause the pods to move from machine to machine).<ref name="kubernetes-101-networking" /> By default a service is exposed inside a cluster (e.g. back-end pods might be grouped into a service, with requests from the front-end pods load-balanced among them), but a service can also be exposed outside a cluster (e.g. for clients to reach front-end pods).<ref name="kubernetes-101-external-access" />
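
A sketch of defining such a service with the Python client, grouping the back-end pods from the label example above behind one stable address (the name and ports are illustrative):

<code python>
# Sketch: a service selecting the back-end pods and exposing port 80,
# forwarded to port 8080 on whichever pods match the selector.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="back-end"),
    spec=client.V1ServiceSpec(
        selector={"tier": "back-end"},   # pods the service balances over
        ports=[client.V1ServicePort(port=80, target_port=8080)]))

v1.create_namespaced_service(namespace="default", body=service)
</code>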

Architecture

Kubernetes follows the master-slave architecture. The components of Kubernetes can be divided into those that manage an individual node and those that are part of the control plane.<ref name="do-intro" /><ref name=":0" />

Kubernetes control plane

The Kubernetes Master is the main controlling unit of the cluster: it manages the cluster's workload and directs communication across the system. The Kubernetes control plane consists of various components, each of which is its own process, that can run either on a single master node or across multiple masters supporting high-availability clusters.<ref name=":0" /> The various components of the Kubernetes control plane are as follows:

etcd

etcd is a persistent, lightweight, distributed key-value data store developed by CoreOS that reliably stores the configuration data of the cluster, representing the overall state of the cluster at any given point in time. Other components watch for changes to this store to bring themselves into the desired state.<ref name=":0" />
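
The watch pattern can be sketched with the third-party etcd3 Python package (an assumption here; Kubernetes components go through the API server rather than talking to etcd directly). A component blocks on a key prefix and reacts whenever the stored state changes:

<code python>
# Sketch of the watch pattern, using the third-party "etcd3" package.
# Kubernetes components do this via the API server, not etcd directly.
import etcd3

etcd = etcd3.client(host="127.0.0.1", port=2379)

# Start watching a prefix, then write under it to trigger an event.
events, cancel = etcd.watch_prefix("/registry/pods/")
etcd.put("/registry/pods/default/demo", "desired-state-v1")

for event in events:
    print("state changed:", event.key, event.value)
    break                    # stop after the first event for this sketch
cancel()
</code>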

API Server

The API server is a key component: it serves the Kubernetes API using JSON over HTTP, providing both the internal and external interface to Kubernetes.<ref name="do-intro" /><ref name=":1" /> The API server processes and validates REST requests and updates the state of the API objects in etcd, thereby allowing clients to configure workloads and containers across worker nodes.
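
Because the interface is plain JSON over HTTP, it can be exercised with any HTTP client. A minimal sketch, assuming "kubectl proxy" is running on its default local port 8001:

<code python>
# Sketch: talk to the API server as plain JSON over HTTP.
# Assumes "kubectl proxy" is running on its default port, 8001.
import requests

resp = requests.get("http://127.0.0.1:8001/api/v1/namespaces/default/pods")
resp.raise_for_status()

for item in resp.json()["items"]:   # standard list-response shape
    print(item["metadata"]["name"], item["status"].get("phase"))
</code>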

Scheduler

The scheduler is the pluggable component that selects which node an unscheduled pod should run on, based on resource availability. The scheduler tracks resource utilization on each node to ensure that workloads are not scheduled in excess of the available resources. For this purpose, the scheduler must know the resource availability on each server and the workloads already assigned to it.
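
The core decision can be sketched as a filter-then-pick function over tracked node capacities. This is a toy model, not the real scheduler's scoring pipeline:

<code python>
# Toy scheduling sketch: filter nodes with enough free resources, then
# pick the one with the most free CPU. The real scheduler also scores
# on many other criteria (affinity, taints, spreading, and so on).
nodes = {
    "node-1": {"cpu_free": 2.0, "mem_free_mb": 4096},
    "node-2": {"cpu_free": 0.5, "mem_free_mb": 1024},
}

def schedule(pod_cpu, pod_mem_mb):
    fits = {name: n for name, n in nodes.items()
            if n["cpu_free"] >= pod_cpu and n["mem_free_mb"] >= pod_mem_mb}
    if not fits:
        return None          # no node fits: the pod stays pending
    # Bind to the node with the most free CPU and account for the usage,
    # so later pods are not scheduled in excess of available resources.
    best = max(fits, key=lambda name: fits[name]["cpu_free"])
    nodes[best]["cpu_free"] -= pod_cpu
    nodes[best]["mem_free_mb"] -= pod_mem_mb
    return best

print(schedule(1.0, 2048))   # -> "node-1"
</code>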

Controller Manager

The controller manager is the process in which the core Kubernetes controllers, such as the DaemonSet Controller and the Replication Controller, run. The controllers communicate with the API server to create, update, and delete the resources they manage (pods, service endpoints, etc.).<ref name=":1" />

Kubernetes node

The node, also known as a worker or minion, is a single machine (physical or virtual) on which containers (workloads) are deployed. Every node in the cluster must run the container runtime (such as Docker), as well as the components mentioned below, in order to communicate with the master and to configure the networking of these containers.

kubelet

The kubelet is responsible for the running state of each node (that is, ensuring that all containers on the node are healthy). It takes care of starting, stopping, and maintaining application containers (organized into pods) as directed by the control plane.<ref name="do-intro" />

The kubelet monitors the state of each pod, and if a pod is not in its desired state, it is redeployed to the same node. The node's status is relayed to the master every few seconds via heartbeat messages. Once the master detects a node failure, the Replication Controller observes this state change and launches pods on other healthy nodes.
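
That health-and-heartbeat behaviour can be sketched as a loop. This is a toy model; the real kubelet talks to the API server and a container runtime:

<code python>
# Toy kubelet sketch: keep local containers in their desired state and
# heartbeat node status to the master every few seconds.
import time

desired = {"web": "running", "worker": "running"}   # from the control plane
actual = {"web": "running", "worker": "exited"}     # from the runtime

def sync_once():
    for name, want in desired.items():
        if actual.get(name) != want:
            print(f"restarting {name} on this same node")
            actual[name] = want      # e.g. ask the runtime to start it
    print("heartbeat: node healthy,", actual)

for _ in range(2):                   # the real loop runs indefinitely
    sync_once()
    time.sleep(5)
</code>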

kube-proxy

The kube-proxy is an implementation of a network proxy and a load balancer, and it supports the service abstraction along with other networking operations.<ref name="do-intro" /> It is responsible for routing traffic to the appropriate container based on the IP address and port number of the incoming request.
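
A toy sketch of that routing: map a service's virtual IP and port to its backend endpoints and rotate through them round-robin (all addresses here are illustrative):

<code python>
# Toy kube-proxy sketch: route a (service IP, port) pair to backend pod
# endpoints in round-robin order. IPs here are illustrative.
import itertools

endpoints = {
    ("10.0.0.10", 80): itertools.cycle(
        ["172.17.0.4:8080", "172.17.0.5:8080", "172.17.0.6:8080"]),
}

def route(service_ip, port):
    return next(endpoints[(service_ip, port)])

for _ in range(4):                   # successive connections rotate
    print(route("10.0.0.10", 80))
</code>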

cAdvisor

cAdvisor is an agent that monitors and gathers resource usage and performance metrics such as CPU, memory, file and network usage of containers on each node.
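
In the same spirit, a stand-in (not cAdvisor's own API) for sampling node-level usage can be written with the third-party psutil package; cAdvisor gathers the analogous per-container figures on each node:

<code python>
# Stand-in metrics sample using the third-party "psutil" package.
import psutil

print("cpu %:", psutil.cpu_percent(interval=1))
print("mem %:", psutil.virtual_memory().percent)
print("disk %:", psutil.disk_usage("/").percent)
print("net bytes sent:", psutil.net_io_counters().bytes_sent)
</code>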

References
