When I install Kubernetes on a host, all the core components (etcd, kube-controller-manager, scheduler) are installed on that single host by default.
Is there a use case or need to deploy the Kubernetes components separately (e.g. on different servers), for example for scalability or security reasons?
It's unclear how you installed Kubernetes, but single-node installs are definitely not common in production. Look at kops or kubespray; both projects automate the deployment of Kubernetes clusters (a hedged kops invocation is sketched at the end of this answer).

A comment asked: "I'm not a DevOps engineer, but I'm really keen to know how people scale up the Kubernetes control plane. Is it usually scaled up? How do multiple instances of, say, etcd or the kube-controller-manager stay in sync?"

Most of the time you would have three control-plane (master) nodes, each hosting etcd, kube-apiserver, kube-controller-manager, and kube-scheduler. The etcd members replicate state among themselves using the Raft consensus protocol, while kube-controller-manager and kube-scheduler use leader election, so only one instance of each is active at a time. Regular (worker) nodes can then be added to grow capacity; a kubeadm-based sketch of such a control plane follows below.

Answering the "how do you isolate ..." question: Kubernetes clusters would usually be deployed into a dedicated subnet / VLAN, which can be isolated from the rest of your LAN / clients. You would then only need to expose the Kubernetes API service, and probably your Ingress Controller ports 80/443 (see the firewall sketch at the end).
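As a rough sketch of the automated route mentioned above, the kops invocation below would create a cluster with three control-plane nodes and three workers. The cluster name, S3 state-store bucket, and zones are made-up placeholders, and flag names vary slightly across kops versions (newer releases rename --master-count to --control-plane-count):

    # Hypothetical kops example: a 3-master, 3-worker cluster on AWS.
    # The bucket, domain, and zones below are placeholders.
    export KOPS_STATE_STORE=s3://example-kops-state-bucket
    kops create cluster \
      --name=k8s.example.com \
      --zones=us-east-1a,us-east-1b,us-east-1c \
      --master-count=3 \
      --node-count=3 \
      --yes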
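The three-node control plane described above can be bootstrapped with kubeadm, for example. This is only a sketch: lb.example.com is an assumed load balancer sitting in front of the API servers, and the token, hash, and certificate key are placeholder values that kubeadm init itself prints out:

    # On the first control-plane node (stacked etcd topology):
    kubeadm init \
      --control-plane-endpoint "lb.example.com:6443" \
      --upload-certs

    # On each additional control-plane node, run the join command
    # that kubeadm init printed (values below are placeholders):
    kubeadm join lb.example.com:6443 \
      --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane \
      --certificate-key <key>

    # Worker nodes join the same endpoint, without --control-plane:
    kubeadm join lb.example.com:6443 \
      --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash>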
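Finally, a minimal sketch of the isolation described in the last paragraph, assuming the cluster lives in a dedicated 10.10.0.0/24 subnet behind a Linux gateway: only the API server port (6443 by default) and the Ingress Controller's 80/443 are forwarded in, and everything else bound for the subnet is dropped.

    # Allow the Kubernetes API and Ingress ports into the cluster subnet,
    # drop all other inbound traffic to it. 10.10.0.0/24 is an assumption.
    iptables -A FORWARD -d 10.10.0.0/24 -p tcp --dport 6443 -j ACCEPT
    iptables -A FORWARD -d 10.10.0.0/24 -p tcp --dport 80   -j ACCEPT
    iptables -A FORWARD -d 10.10.0.0/24 -p tcp --dport 443  -j ACCEPT
    iptables -A FORWARD -d 10.10.0.0/24 -j DROP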