In-Cluster vs Out-of-Cluster Auth Configuration

Alright, we understand the Kubernetes API structure. Now, how does our Go application prove its identity to the API server so that its requests are accepted? This is where authentication comes in. client-go makes this relatively straightforward by handling the two most common scenarios:

  1. Out-of-Cluster: Your Go application is running outside the Kubernetes cluster (e.g., on your local development machine, a CI/CD server).

  2. In-Cluster: Your Go application is running inside a Kubernetes cluster as a Pod.

Let's look at how client-go helps us get the necessary configuration (rest.Config) object in both cases. This config object bundles together the API server endpoint and the credentials needed to talk to it.

Out-of-Cluster: Using kubeconfig

This is the most common scenario during development or when running tools from outside the cluster. You likely already have a kubeconfig file on your machine, typically located at ~/.kube/config. This file stores information about:

  • Clusters: API server addresses and certificate authorities.

  • Users: Client certificates, tokens, or other credentials.

  • Contexts: A combination of a cluster, a user, and an optional default namespace, defining how you connect.

kubectl uses this file to know which cluster to talk to and how to authenticate. client-go provides convenient helpers to load configuration directly from this file.
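For orientation, here's a minimal, illustrative kubeconfig showing those three sections side by side. The names, server address, and credential placeholders are all made up:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: dev-cluster                  # cluster entry: where the API server is
  cluster:
    server: https://203.0.113.10:6443
    certificate-authority-data: <base64-encoded CA certificate>
users:
- name: dev-user                     # user entry: how to authenticate
  user:
    client-certificate-data: <base64-encoded client certificate>
    client-key-data: <base64-encoded client key>
contexts:
- name: dev                          # context: cluster + user (+ default namespace)
  context:
    cluster: dev-cluster
    user: dev-user
    namespace: default
current-context: dev
```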

The primary tool for this is the clientcmd package within client-go. The clientcmd.BuildConfigFromFlags function is often used. It resolves configuration in roughly this order:

  1. From the flags passed to it (an explicit master URL and kubeconfig path).

  2. If neither flag is set, from the in-cluster configuration (when running inside a Pod).

  3. Failing that, from the KUBECONFIG environment variable if it's set, otherwise from the default kubeconfig location (~/.kube/config).

Here's a conceptual Go snippet showing how you might get the configuration object using flags:

package main

import (
	"flag"
	"fmt"
	"path/filepath"

	// Import the necessary client-go packages
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	var kubeconfig *string
	// Find the home directory to suggest a default kubeconfig path
	if home := homedir.HomeDir(); home != "" {
		// Default the kubeconfig path to ~/.kube/config
		kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
	} else {
		// If the home directory can't be found, require the path explicitly
		kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
	}
	// Define an optional master URL flag (less common nowadays with kubeconfig)
	masterURL := flag.String("master", "", "The address of the Kubernetes API server. Overrides any value in kubeconfig. Only required if out-of-cluster.")

	flag.Parse() // Parse the command-line flags

	// Use the current context in kubeconfig;
	// the masterURL flag overrides the server URL in kubeconfig
	config, err := clientcmd.BuildConfigFromFlags(*masterURL, *kubeconfig)
	if err != nil {
		// Handle error - couldn't load configuration
		panic(err.Error())
	}

	// 'config' now holds the *rest.Config object needed to create a clientset;
	// we'll use it in the next section
	fmt.Printf("Successfully loaded configuration!\nAPI Host: %s\n", config.Host)
}

This snippet sets up flags to optionally specify the kubeconfig file path. BuildConfigFromFlags then uses this path (or defaults) to load the cluster connection details and credentials into a *rest.Config struct.

In-Cluster: Using the Service Account Token

When your Go application runs as a Pod inside the Kubernetes cluster, things get even easier. Kubernetes automatically provides each Pod with a Service Account, and it mounts the necessary credentials (a token and the CA certificate) into the Pod's filesystem, typically at /var/run/secrets/kubernetes.io/serviceaccount/.

client-go provides a dedicated function, rest.InClusterConfig(), that knows exactly where to look for these automatically mounted credentials. It reads the token, the CA certificate, and the API server's address (provided via environment variables like KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT).
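If you exec into a running Pod, you can see these mounted credential files directly. A typical listing looks like this:

```
$ ls /var/run/secrets/kubernetes.io/serviceaccount/
ca.crt  namespace  token
```

rest.InClusterConfig() reads the token and ca.crt files from this directory.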

Using it is incredibly simple:

package main

import (
	"fmt"

	"k8s.io/client-go/rest"
)

func main() {
	// Create the in-cluster config from the mounted service account credentials
	config, err := rest.InClusterConfig()
	if err != nil {
		// Handle error - likely not running inside a cluster
		panic(err.Error())
	}

	// 'config' now holds the necessary configuration to talk to the API server
	// from within the cluster
	fmt.Printf("Successfully loaded in-cluster configuration!\nAPI Host: %s\n", config.Host)
}

If your code runs inside a Pod within Kubernetes, rest.InClusterConfig() is the way to go. It requires zero explicit configuration files or flags related to authentication.

Choosing the Right Method

  • Use Out-of-Cluster configuration (clientcmd.BuildConfigFromFlags) when your application runs outside the K8s cluster (developer machine, external tools, CI/CD).

  • Use In-Cluster configuration (rest.InClusterConfig) when your application is deployed as a Pod inside the K8s cluster (operators, controllers, custom API servers, internal tools).

Often, you might write code that needs to work in both scenarios. A common pattern is to first try loading the out-of-cluster config (e.g., from flags or environment) and, if that fails or isn't specified, fall back to trying the in-cluster config.

Now that we know how to get the *rest.Config object containing the connection details and credentials, the next step is to use this config to create an actual clientset – the object we'll use to interact with specific Kubernetes resources like Pods and Services.
