This is my understanding of the various terms/names used in the context of Cloud Native Infrastructure or deployment. I'm trying to put my thoughts into words, so it might help others, or maybe others will disagree with me via comments and help me correct myself. Well... at least, that is the thought.
This write-up is heavily skewed towards Linux, since that's what I normally work with.
What are Containers?
In simple terms, a container is a collection of processes on the host, isolated to run with its own set of resources. Linux namespaces and control groups let a user carve out/create pseudo resources (with the help of other kernel drivers like tty, devpts ...) and enforce limits on shared resource consumption.
Namespaces are a kernel feature: PID, UTS, NET, IPC, User & Mount namespaces can be applied at process granularity to selectively provide the required isolation. Normally for containers, all namespaces are used, since the processes running within the container should be isolated completely from the host.
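Just to make this concrete, here is a minimal Go sketch (Go being the language most of this ecosystem is written in) that re-executes a shell inside new UTS, PID and mount namespaces. It assumes a Linux host and needs root (or CAP_SYS_ADMIN); the flags map directly to the clone(2) flags behind these namespaces.

```go
// namespaces.go - minimal sketch: run a shell inside new UTS, PID and mount
// namespaces. Linux only; needs root (or CAP_SYS_ADMIN).
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	// Ask the kernel for fresh namespaces for the child process.
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | // its own hostname
			syscall.CLONE_NEWPID | // its own PID space (the shell becomes PID 1)
			syscall.CLONE_NEWNS, // its own mount table
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

Inside that shell, changing the hostname or running `echo $$` (which prints 1) only affects or reflects the new namespaces, not the host.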
Control groups, usually referred to as cgroups, are a kernel feature that limits the resource consumption of the namespaced processes (defined above) on shared resources, like CPU, memory, disk I/O and network I/O.
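Similarly, a rough cgroup sketch: create a group under /sys/fs/cgroup, cap its memory, and move the current process into it. This assumes a cgroup v2 hierarchy mounted at /sys/fs/cgroup with the memory controller enabled for child groups, plus root privileges; the group name "demo" is just a placeholder.

```go
// cgroup_limit.go - minimal cgroup v2 sketch: cap a process tree at 128 MiB.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	cg := "/sys/fs/cgroup/demo" // hypothetical group name
	if err := os.MkdirAll(cg, 0o755); err != nil {
		panic(err)
	}
	// Limit memory usage of every process placed in this group (128 MiB).
	if err := os.WriteFile(filepath.Join(cg, "memory.max"), []byte("134217728"), 0o644); err != nil {
		panic(err)
	}
	// Move the current process into the group; its children inherit the limit.
	pid := []byte(fmt.Sprintf("%d", os.Getpid()))
	if err := os.WriteFile(filepath.Join(cg, "cgroup.procs"), pid, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("this process is now limited to 128 MiB of memory")
}
```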
In short, containers are a bunch of processes, isolated by namespaces, with their resource usage limited by cgroups.
Docker
It's the name of the company, as well as the software (container engine) that it provides, to help build & deploy containers in an efficient and easy manner. Docker relies on the underlying Linux namespaces and cgroups to provide the container (isolation & resource limitation) functionality. dockerd, along with its cohorts that follow the OCI specifications (containerd, runc and libcontainer), provides lifecycle management of the container, as well as managing image push/pull between a remote Docker registry and the local image store.
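To illustrate that lifecycle management, here is a rough sketch using Docker's Go SDK (github.com/docker/docker/client). The option struct names have moved around between SDK releases, so treat this as indicative of the flow (pull image, create container, start it) rather than a copy-paste program.

```go
// run_container.go - sketch of the pull/create/start flow via the Docker Engine API.
package main

import (
	"context"
	"io"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}

	// Pull the image from the remote registry into the local image store.
	rd, err := cli.ImagePull(ctx, "docker.io/library/alpine:latest", types.ImagePullOptions{})
	if err != nil {
		panic(err)
	}
	io.Copy(os.Stdout, rd)
	rd.Close()

	// Create and start a container; dockerd hands the actual execution
	// down to containerd/runc.
	resp, err := cli.ContainerCreate(ctx, &container.Config{
		Image: "alpine:latest",
		Cmd:   []string{"echo", "hello from a container"},
	}, nil, nil, nil, "")
	if err != nil {
		panic(err)
	}
	if err := cli.ContainerStart(ctx, resp.ID, types.ContainerStartOptions{}); err != nil {
		panic(err)
	}
}
```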
Before Docker, Libvirt's LXC driver was used to deploy containers. But building the container images was not structured, and each team had to develop their own tools. This was a big pain point, especially in a development environment, where the bits used in the container changed on a daily basis. Docker came along and made it easier to build container images, as well as to distribute them using their Registry format. This made a big difference for developers; everybody hooked on to this mechanism of deploying containers and made containers what they are today: first class citizens in every public cloud offering.
Docker CE (Community Edition) is the free version and Docker EE (Enterprise Edition) is the paid version with support from Docker.
Kubernetes
Kubernetes or K8s is a Container Orchestration Engine (COE). The project to develop this software originated from Google (heavily influenced by Google's internal project Borg) and was later open sourced. Kubernetes cannot deploy containers on its own. It relies on Docker/CRI-O or any other container engine/runtime that implements the OCI specifications (Runtime & Image) to deploy the containers.
To differentiate between Docker & K8s: Docker can be used to deploy containers on a single node. If you have multiple nodes that can be used to deploy containers, then you use K8s as a scheduler that talks to these individual Docker instances and deploys the container on a suitable node, based on the resource requirements of that container.
In short, it's a layer on top of Docker that helps deploy containers on a multi-node setup called a K8s cluster. This is especially useful when one of the nodes becomes unhealthy due to resource shortage or hardware issues: K8s would be able to reschedule the existing containers onto another node in the cluster. Also, since the K8s scheduler has an eagle's eye view of all the available/occupied resources, it can distribute the deployment of containers evenly across the various nodes.
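To make that eagle's eye view concrete, here is a small client-go sketch (assuming a kubeconfig at the default ~/.kube/config location) that prints what the scheduler sees as allocatable on each node:

```go
// nodes.go - sketch: what the scheduler "sees" per node, via client-go.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Load the cluster credentials from the default kubeconfig location.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List every node and the resources the scheduler can hand out on it.
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		alloc := n.Status.Allocatable
		fmt.Printf("%s: cpu=%s memory=%s\n", n.Name, alloc.Cpu(), alloc.Memory())
	}
}
```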
Furthermore, the smallest deployable unit in K8s is called a Pod. It's a collection of containers that share the network (and optionally IPC) namespace and can share storage volumes. This is designed based on the real world need to run a collection of containers as a single unit of execution (sharing resources for communication, like shared memory...), while still keeping the containers independent from a packaging perspective. Each container could very well be written in a different language and it would not make a difference.
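Here is a hedged sketch of that idea using the Kubernetes Go API types (the image names and the sidecar command are placeholders): a single Pod with an nginx container and a sidecar that reaches it over localhost, which only works because both containers share the Pod's network namespace. Marshalling it to YAML gives a manifest you could feed to kubectl apply.

```go
// pod.go - sketch: a Pod whose two containers share one network namespace.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "web-with-sidecar"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "web", Image: "nginx:1.25"},
				{
					Name:  "probe",
					Image: "busybox:1.36",
					// localhost here is the Pod's shared network namespace,
					// so this reaches the nginx container running next to it.
					Command: []string{"sh", "-c",
						"while true; do wget -qO- http://localhost; sleep 10; done"},
				},
			},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out)) // a manifest you could `kubectl apply -f -`
}
```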
You can compare K8s with Docker Swarm, but K8s has pretty much won the public vote as the cloud native container orchestration engine. K8s provides additional functionality including, but not limited to, network connectivity using CNI (Container Network Interface) plugins, persistent storage using CSI (Container Storage Interface) plugins and vendor specific device plugins (to support GPUs and other vendor specific devices). These features, along with its robust community support, make it a first choice for both on-premise and public cloud container orchestration.
CNCF
The Cloud Native Computing Foundation is a vendor neutral body responsible for governing the various software projects that help with deploying/maintaining/monitoring software in cloud native infrastructure. Microservices is one of the main software architectures publicized and supported by the CNCF community.
OCI
The Open Container Initiative is governed by the Linux Foundation (a sibling of CNCF) and it defines the Runtime specification and the Image specification for container deployment. The goal is to ensure that multiple disparate components dealing with cloud native infrastructure are able to work together, as long as they stick to the specifications for interface compatibility. runc (built on libcontainer) is the reference implementation of the Runtime specification, and containerd builds on top of it for image and container lifecycle management.
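As a rough illustration of what the Runtime specification looks like on disk, the runtime-spec project publishes Go types for the config.json that runtimes like runc consume. This sketch emits a deliberately minimal (not production-ready) config; a real bundle would also need a root filesystem at ./rootfs.

```go
// oci_config.go - sketch: a minimal OCI runtime-spec config.json, built from
// the official Go types from github.com/opencontainers/runtime-spec.
package main

import (
	"encoding/json"
	"fmt"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	spec := specs.Spec{
		Version:  specs.Version, // spec version vendored with the package
		Hostname: "oci-demo",
		Root:     &specs.Root{Path: "rootfs", Readonly: true},
		Process: &specs.Process{
			Args: []string{"/bin/sh"},
			Cwd:  "/",
		},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out)) // roughly what `runc spec` generates, minus defaults
}
```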
Watch out for other topics like Service Mesh, Microservices and K8s plugins in later posts.
References:
[1] containerd
[2] Libvirt LXC
[3] OCI