You could have Docker and Kubernetes installed on the same host and allow a user to access both binaries. A malicious user could then potentially bypass Kubernetes security controls (e.g. PodSecurityPolicies) by avoiding `kubectl` entirely: instead of deploying containers through the API server, the user can interact directly with the `docker` or `runc` binaries to deploy containers.
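For example (illustrative only; `busybox` stands in for any image), a user who can talk to the Docker socket can get root on the node with a single command, and no PodSecurityPolicy is ever consulted:

```
$ docker run --rm --privileged -v /:/host busybox chroot /host id
```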
This poses a security risk, since the user can run vulnerable or risky containers. What are the best practices in real life to ensure this does not happen?
One option would be to employ some level of MAC restriction so that the user can only interact with `kubectl`, but I was wondering what more effective approaches are used in practice, especially in real life.
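For instance, the simplest form of the restriction I have in mind is plain DAC rather than MAC: make the runtime binaries and the Docker socket accessible only to root and a dedicated admin group. A sketch, demonstrated on stand-in files so it runs unprivileged (on a real node the targets would be `/usr/bin/docker`, `/usr/sbin/runc` and `/var/run/docker.sock`; exact paths vary by distro):

```shell
set -eu
demo=$(mktemp -d)

# Stand-ins for the runtime binaries and the Docker socket.
touch "$demo/docker" "$demo/runc" "$demo/docker.sock"

# Only root and the admin group may execute the binaries...
chmod 0750 "$demo/docker" "$demo/runc"
# ...and only they may reach the socket (socket access is root-equivalent).
chmod 0660 "$demo/docker.sock"

stat -c '%a' "$demo/docker"       # prints 750
stat -c '%a' "$demo/docker.sock"  # prints 660
```

On a real host you would pair this with `chown root:containeradm` (the group name is made up) on the same paths.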
There are also several levels of abstraction in the container stack, and each lower level can bypass the security controls of the layers above it.
How narrowly can you define the tasks you would like users to be able to perform? If you can narrow it down to a clearly defined list, then consider creating each of those tasks as jobs in a CI/CD tool (allowing parameters if needed) and have users trigger those, instead of letting users access the host or run their own commands or scripts on it directly.

"have Docker and Kubernetes installed ... and allow the user to access both binaries" sounds wrong. First: I would no longer use the Docker runtime with Kubernetes at all. Second: as a Kubernetes cluster admin, the only time I need SSH access to a node is when I can't start a debug pod on that node, which usually limits SSH to major outages affecting master nodes (see: https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/#node-shell-session). The `kubectl` binary should be on your workstation, a bastion node, etc.; you shouldn't use Kubernetes nodes for daily operations.
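To enforce "kubectl only" at the cluster level rather than on the host, give ordinary users API credentials bound to a narrowly scoped RBAC role and no SSH access to nodes at all. A minimal sketch (the namespace `team-apps`, the group `app-devs`, and the role names are made-up examples; adjust the resource list to what your users actually need):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-apps
  name: app-deployer
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-apps
  name: app-deployer-binding
subjects:
- kind: Group
  name: app-devs
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-deployer
  apiGroup: rbac.authorization.k8s.io
```

With this in place, the only supported path into the cluster is the API server, where admission controls actually apply.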