Kubernetes Basics

Kubernetes is a container cluster manager developed by Google. You might have already read some of the official documentation and introductory articles on the internet and heard about concepts like Pods, Replication Controllers, and Services.

Let’s reinforce those concepts by playing with the free cluster provided by TryK8S.

Kubectl CLI

kubectl is the command line interface you use to interact with a Kubernetes cluster. Let's download it from the internet.

# OS X
$ curl -O https://storage.googleapis.com/kubernetes-release/release/v1.2.2/bin/darwin/amd64/kubectl
# Linux
$ curl -O https://storage.googleapis.com/kubernetes-release/release/v1.2.2/bin/linux/amd64/kubectl
# Move kubectl to /usr/local/bin
$ chmod +x kubectl
$ mv kubectl /usr/local/bin/kubectl

In your cluster detail page, you can find the master IP address and authentication certificates. kubectl needs these settings to talk to your cluster.

$ kubectl config set-cluster default-cluster --server=https://${MASTER_HOST} --certificate-authority=${CA_CERT}
$ kubectl config set-credentials default-admin --certificate-authority=${CA_CERT} --client-key=${ADMIN_KEY} --client-certificate=${ADMIN_CERT}
$ kubectl config set-context default-system --cluster=default-cluster --user=default-admin
$ kubectl config use-context default-system
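
These four commands don't talk to the cluster; they just write entries into a local kubeconfig file (~/.kube/config by default). Assuming the ${MASTER_HOST}, ${CA_CERT}, ${ADMIN_CERT}, and ${ADMIN_KEY} values above, the resulting file looks roughly like this sketch:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: default-cluster
  cluster:
    server: https://${MASTER_HOST}
    certificate-authority: ${CA_CERT}
users:
- name: default-admin
  user:
    client-certificate: ${ADMIN_CERT}
    client-key: ${ADMIN_KEY}
contexts:
- name: default-system
  context:
    cluster: default-cluster
    user: default-admin
current-context: default-system
```

Editing this file by hand and running the kubectl config commands are equivalent.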

Now you can get cluster status and make changes to your cluster using kubectl. It has a bunch of commands; we'll go through them one by one in future tutorials. If you add --help after a command, you'll get the help information for that command.

# Get worker nodes
$ kubectl get nodes
NAME      LABELS                    STATUS    AGE
          kubernetes.io/hostname=   Ready     1d
# Get help
$ kubectl get nodes --help

To make things easier, we have set up kubectl for you on the master. You can also ssh to the master and control your cluster from there.


Pod

A pod is the basic unit of work running in a Kubernetes cluster. A pod consists of one or more containers running on the same node, sharing an IP address and data volumes. Let's create a pod using the kubectl create command. First we create a pod definition file called pod.yml.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80

Then we send this file to the master and check the status of the newly created pod.

$ kubectl create -f pod.yml
pod "nginx" created
$ kubectl get pod nginx
NAME      READY     STATUS    RESTARTS   AGE
nginx     0/1       Pending   0          20s
$ kubectl describe pod nginx
Name:				nginx
Namespace:			default
Image(s):			nginx
Start Time:			Mon, 07 Mar 2016 09:57:04 +0000
Labels:				<none>
Status:				Running
Replication Controllers:	<none>
Containers:
  nginx:
    Container ID:	docker://9eb1f5c1c78fc1329a63eeaed3d7555162a29910dd261463f3a55a8eab81a349
    Image:		nginx
    Image ID:		docker://7c2e12c53e4af75208f300bc9fae3c2090e7ae86d25ea8b962f5a65177663fb6
    State:		Running
      Started:		Mon, 07 Mar 2016 09:57:26 +0000
    Ready:		True
    Restart Count:	0
    Environment Variables:
Conditions:
  Type		Status
  Ready 	True
Volumes:
  default-token-3h40e:
    Type:	Secret (a secret that should populate this volume)
    SecretName:	default-token-3h40e
Events:
  FirstSeen	LastSeen	Count	From			SubobjectPath				Reason		Message
  ─────────	────────	─────	────			─────────────				──────		───────
  28s		28s		1	{kubelet}	implicitly required container POD	Pulled		Container image "gcr.io/google_containers/pause:0.8.0" already present on machine
  28s		28s		1	{scheduler }							Scheduled	Successfully assigned nginx to
  28s		28s		1	{kubelet}	implicitly required container POD	Created		Created with docker id 7bcc30ddbbd3
  27s		27s		1	{kubelet}	spec.containers{nginx}			Pulling		Pulling image "nginx"
  27s		27s		1	{kubelet}	implicitly required container POD	Started		Started with docker id 7bcc30ddbbd3
  6s		6s		1	{kubelet}	spec.containers{nginx}			Pulled		Successfully pulled image "nginx"
  6s		6s		1	{kubelet}	spec.containers{nginx}			Created		Created with docker id 9eb1f5c1c78f
  6s		6s		1	{kubelet}	spec.containers{nginx}			Started		Started with docker id 9eb1f5c1c78f

Now we have an nginx server running in the cluster, although we don't know how to access it yet.

We’ll cover how to expose pods to the internet and how to create pods with multiple containers in future tutorials. For now, all you need to know is that you can create resources in the cluster with the kubectl create command, and use the kubectl get and kubectl describe commands to check resource status. We’ll use these commands in the following tutorials.

Replication Controller

Another way to schedule work in the cluster is using a replication controller. Let’s create a new replication controller running the same nginx server. The first step is again to create a definition file, rc.yml.

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-controller
spec:
  replicas: 2
  # selector identifies the set of Pods that this
  # replication controller is responsible for managing
  selector:
    app: nginx
  # podTemplate defines the 'cookie cutter' used for creating
  # new pods when necessary
  template:
    metadata:
      labels:
        # Important: these labels need to match the selector above
        # The api server enforces this constraint.
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

Then send it to the master, and we can see the pods created by the replication controller.

$ kubectl create -f rc.yml
replicationcontroller "nginx-controller" created
$ kubectl get replicationcontroller
CONTROLLER         CONTAINER(S)   IMAGE(S)   SELECTOR    REPLICAS   AGE
nginx-controller   nginx          nginx      app=nginx   2          2m
$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
nginx-controller-e1av1   1/1       Running   0          2m
nginx-controller-u92qh   1/1       Running   0          2m

Understanding the relationship between pods and replication controllers is very important.

  • A pod represents a workload running in the cluster, whereas a replication controller manages what kind of pods, and how many of them, are running in the cluster.
  • A replication controller acts like a virtual operator, constantly checking the status of the cluster and creating or deleting pods when the current state doesn’t match the desired state.
  • When a node goes offline, the replication controller will schedule replacement pods on other nodes. But if we create pods directly, those pods won’t be rescheduled.
  • Most of the time, we should use replication controllers and avoid creating pods directly, because we should describe the state we want and let Kubernetes manage it for us.
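
The control loop described in the bullets above can be sketched in a few lines of Python. This is a toy model, not Kubernetes code; the function and pod names are made up for illustration:

```python
import itertools

_ids = itertools.count(1)  # generates suffixes for new pod names

def reconcile(desired_replicas, running_pods):
    """One reconciliation pass: compare desired vs. actual state,
    then create or delete pods until they match."""
    pods = set(running_pods)
    # Too few pods: create replacements (what the controller does
    # when a node goes offline and takes pods with it).
    while len(pods) < desired_replicas:
        pods.add("nginx-controller-%d" % next(_ids))
    # Too many pods: delete the surplus (what happens when you
    # scale a controller down).
    while len(pods) > desired_replicas:
        pods.remove(sorted(pods)[-1])
    return pods

state = reconcile(2, set())   # bootstrap: two pods get created
state.pop()                   # simulate losing a pod with its node
state = reconcile(2, state)   # the controller closes the gap
print(len(state))             # 2
```

A pod created directly has no such loop watching it, which is exactly why it never comes back after its node dies.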


Service

A service exposes ports from a group of pods to other pods in the cluster. It’s where the magic happens.

In a distributed system, running a set of workloads is easy; wiring them up is hard. In a Kubernetes cluster, pods get created, migrated, updated, and deleted frequently. If one group of pods is consumed by another group of pods, it would be a nightmare to update the consumer pods every time the provider pods change. This problem is solved by inserting a service as a middle layer between provider and consumer. The IP address of a service is static, so no matter how the providers change, the consumers remain unchanged.
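
The stable-middle-layer idea can be sketched in Python. This is a toy model of the concept, not how kube-proxy is actually implemented, and all IP addresses below are made up:

```python
import itertools

class Service:
    """Toy stand-in for a Kubernetes service: a fixed virtual IP
    that distributes requests across whatever backends currently exist."""

    def __init__(self, cluster_ip):
        self.cluster_ip = cluster_ip   # static; consumers only know this
        self._cycle = itertools.cycle([])

    def set_backends(self, pod_ips):
        # Backends change freely as pods come and go.
        self._cycle = itertools.cycle(list(pod_ips))

    def route(self):
        # Pick the next backend, round-robin style.
        return next(self._cycle)

svc = Service("10.0.0.42")                    # hypothetical cluster IP
svc.set_backends(["10.1.0.5", "10.1.0.6"])    # two provider pods

# All provider pods get replaced; consumers keep using svc.cluster_ip.
svc.set_backends(["10.1.0.7", "10.1.0.8", "10.1.0.9"])
print(svc.cluster_ip)                         # still 10.0.0.42
```

Consumers only ever hold the service address, so churn in the provider pods never forces a consumer update.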

Let’s create a service definition service.yml for the pods created by the nginx-controller replication controller in the previous step.

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
  - port: 8000 # the port that this service should serve on
    # the container on each pod to connect to, can be a name
    # (e.g. 'www') or a number (e.g. 80)
    targetPort: 80
    protocol: TCP
  # just like the selector in the replication controller,
  # but this time it identifies the set of pods to load balance
  # traffic to.
  selector:
    app: nginx

Create the service and get its status.

$ kubectl create -f service.yml
service "nginx-service" created
$ kubectl describe service nginx-service
Name:			nginx-service
Namespace:		default
Labels:			<none>
Selector:		app=nginx
Type:			ClusterIP
Port:			<unnamed>	8000/TCP
IP:
Endpoints:
Session Affinity:	None
No events.

We can see the IP assigned to the service and the two pod endpoints it will load balance across.

Let’s scale the replication controller and check the service status again.

$ kubectl scale --replicas=3 rc nginx-controller
replicationcontroller "nginx-controller" scaled
$ kubectl describe service nginx-service
Name:			nginx-service
Namespace:		default
Labels:			<none>
Selector:		app=nginx
Type:			ClusterIP
Port:			<unnamed>	8000/TCP
IP:
Endpoints:
Session Affinity:	None
No events.

Now there are three backend pods for the service.

Wrap up

In this tutorial, we learned how to:

  • Set up kubectl locally to manage a remote cluster
  • Create pods, replication controllers, and services by writing definition files and sending them to the master using kubectl create
  • Manage the cluster using kubectl get, kubectl describe, and kubectl delete