In-Cluster vs Out-of-Cluster Auth Configuration
Alright, we understand the Kubernetes API structure. Now, how does our Go application prove its identity to the API server to get permission to make requests? This is where authentication comes in. client-go makes this relatively straightforward by handling the two most common scenarios:
Out-of-Cluster: Your Go application is running outside the Kubernetes cluster (e.g., on your local development machine, a CI/CD server).
In-Cluster: Your Go application is running inside a Kubernetes cluster as a Pod.
Let's look at how client-go helps us get the necessary configuration object (rest.Config) in both cases. This config object bundles together the API server endpoint and the credentials needed to talk to it.
Out-of-Cluster: Using kubeconfig
This is the most common scenario during development or when running tools from outside the cluster. You likely already have a kubeconfig file on your machine, typically located at ~/.kube/config. This file stores information about:
Clusters: API server addresses and certificate authorities.
Users: Client certificates, tokens, or other credentials.
Contexts: A combination of a cluster, a user, and an optional default namespace, defining how you connect.
kubectl uses this file to know which cluster to talk to and how to authenticate. client-go provides convenient helpers to load configuration directly from this file.
The primary tool for this is the clientcmd package within client-go. The clientcmd.BuildConfigFromFlags function is often used. It tries to load configuration in a specific order:
From the flags passed to it (an explicit master URL and kubeconfig path).
From the KUBECONFIG environment variable, if it is set.
From the default kubeconfig location (~/.kube/config).
Here's a conceptual Go snippet showing how you might get the configuration object using flags:
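A minimal sketch (the -kubeconfig flag name and the fallback to ~/.kube/config follow the common client-go examples; error handling is reduced to a panic for brevity):

```go
package main

import (
	"flag"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Allow the kubeconfig path to be overridden; default to ~/.kube/config.
	defaultPath := ""
	if home, err := os.UserHomeDir(); err == nil {
		defaultPath = filepath.Join(home, ".kube", "config")
	}
	kubeconfig := flag.String("kubeconfig", defaultPath, "(optional) absolute path to the kubeconfig file")
	flag.Parse()

	// Load the API server address and credentials into a *rest.Config.
	// The first argument (master URL) is left empty so the kubeconfig wins.
	config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		panic(err.Error())
	}

	_ = config // ready to hand to a clientset constructor
}
```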
This snippet sets up a flag to optionally specify the kubeconfig file path. BuildConfigFromFlags then uses this path (or the defaults) to load the cluster connection details and credentials into a *rest.Config struct.
In-Cluster: Using the Service Account Token
When your Go application runs as a Pod inside the Kubernetes cluster, things get even easier. Kubernetes automatically provides each Pod with a Service Account, and it mounts the necessary credentials (a token and the CA certificate) into the Pod's filesystem, typically at /var/run/secrets/kubernetes.io/serviceaccount/.
client-go provides a dedicated function, rest.InClusterConfig(), that knows exactly where to look for these automatically mounted credentials. It reads the token, the CA certificate, and the API server's address (provided via environment variables like KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT).
Using it is incredibly simple:
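A minimal sketch (again with panic-style error handling for brevity):

```go
package main

import (
	"k8s.io/client-go/rest"
)

func main() {
	// Reads the mounted service account token and CA certificate, plus the
	// KUBERNETES_SERVICE_HOST / KUBERNETES_SERVICE_PORT environment variables,
	// and returns a ready-to-use *rest.Config.
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err.Error())
	}

	_ = config // ready to hand to a clientset constructor
}
```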
If your code runs inside a Pod within Kubernetes, rest.InClusterConfig() is the way to go. It requires zero explicit configuration files or flags related to authentication.
Choosing the Right Method
Use Out-of-Cluster configuration (clientcmd.BuildConfigFromFlags) when your application runs outside the K8s cluster (developer machine, external tools, CI/CD).
Use In-Cluster configuration (rest.InClusterConfig) when your application is deployed as a Pod inside the K8s cluster (operators, controllers, custom API servers, internal tools).
Often, you might write code that needs to work in both scenarios. A common pattern is to first try loading the out-of-cluster config (e.g., from flags or environment) and, if that fails or isn't specified, fall back to trying the in-cluster config.
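One way such a fallback might look (the buildConfig helper name and the exact fallback order here are illustrative, not a fixed client-go API):

```go
package main

import (
	"os"
	"path/filepath"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// buildConfig prefers an explicitly supplied kubeconfig path, falls back to
// the in-cluster service account credentials, and finally tries the default
// ~/.kube/config location.
func buildConfig(kubeconfigPath string) (*rest.Config, error) {
	if kubeconfigPath != "" {
		return clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	}
	if cfg, err := rest.InClusterConfig(); err == nil {
		return cfg, nil
	}
	home, err := os.UserHomeDir()
	if err != nil {
		return nil, err
	}
	return clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
}

func main() {
	// Using the KUBECONFIG environment variable as the "explicit" path is
	// just one possible convention.
	cfg, err := buildConfig(os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err.Error())
	}
	_ = cfg
}
```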
Now that we know how to get the *rest.Config object containing the connection details and credentials, the next step is to use this config to create an actual clientset – the object we'll use to interact with specific Kubernetes resources like Pods and Services.