Installation — Deploy KubeStellar Console

This guide covers all deployment options for KubeStellar Console, the multi-cluster Kubernetes dashboard with AI-powered operations.

Try it first! See a live preview at kubestellarconsole.netlify.app


Fastest Path

Prerequisites: You must install the kubestellar-mcp plugins before running this command — they are not installed by start.sh. See Step 1: Install Claude Code Plugins first.

One command downloads pre-built binaries, starts the backend + agent, and opens your browser:

curl -sSL https://raw.githubusercontent.com/kubestellar/console/main/start.sh | bash

This downloads and starts the console binary (which spawns the MCP bridge and kc-agent as child processes); it does not install the kubestellar-mcp plugins. The whole step typically takes under 45 seconds. No OAuth or GitHub credentials are required: you get a local dev-user session automatically.


System Components

KubeStellar Console has 7 components that work together. For the full architectural deep-dive, data flow diagrams, and component interactions, see the Architecture page.

Component Summary

  1. GitHub OAuth App: lets users sign in with GitHub. Optional; without it, a local dev-user session is created.
  2. Frontend: the React web app you see in the browser. Required; included in the console executable.
  3. Backend: Go server that handles API calls. Required; included in the console executable.
  4. MCP Bridge: hosts the kubestellar-ops and kubestellar-deploy MCP servers, which the Backend queries for cluster data. Required; spawned as a child process by the console executable.
  5. AI Coding Agent + Plugins: any MCP-compatible AI coding agent (Claude Code, Copilot, Cursor, Gemini CLI) with the kubestellar-ops/deploy plugins. Required; install via the Claude Marketplace or Homebrew.
  6. kc-agent: local MCP + WebSocket server on port 8585 for kubectl execution. Required; spawned by the console executable.
  7. Kubeconfig: your cluster credentials. Required; your existing ~/.kube/config.

Installation Steps

Step 1: Install Claude Code Plugins

The console uses kubestellar-mcp plugins to talk to your clusters. See the full kubestellar-mcp documentation for details.

Option A: Install from Claude Code Marketplace (recommended)

# In Claude Code, run:
/plugin marketplace add kubestellar/claude-plugins

Then:

  1. Go to /plugin → Marketplaces tab → click Update
  2. Go to /plugin → Discover tab
  3. Install kubestellar-ops and kubestellar-deploy

Verify with /mcp; you should see:

plugin:kubestellar-ops:kubestellar-ops · ✓ connected
plugin:kubestellar-deploy:kubestellar-deploy · ✓ connected

Option B: Install via Homebrew (source: homebrew-tap)

brew tap kubestellar/tap
brew install kubestellar-ops kubestellar-deploy
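Whichever option you choose, you can sanity-check that the plugin binaries landed on your PATH (binary names as used elsewhere in this guide):

```shell
# Print the install path of each plugin binary, or a hint if the
# shell can't find it
for bin in kubestellar-ops kubestellar-deploy; do
  command -v "$bin" || echo "$bin not found on PATH (is Homebrew's bin directory on PATH?)"
done
```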

Step 2: Set Up Kubeconfig

The console reads clusters from your kubeconfig. Make sure you have access:

# List your clusters
kubectl config get-contexts
 
# Test access to a cluster
kubectl --context=your-cluster get nodes

To add more clusters, merge kubeconfigs:

KUBECONFIG=~/.kube/config:~/.kube/cluster2.yaml kubectl config view --flatten > ~/.kube/merged
mv ~/.kube/merged ~/.kube/config

Step 3: Deploy the Console

Choose your deployment method:


Curl Quickstart

Downloads pre-built binaries and starts the console:

curl -sSL https://raw.githubusercontent.com/kubestellar/console/main/start.sh | bash

This starts the backend (port 8080) and opens the frontend in your browser. No OAuth credentials needed — a local dev-user session is created automatically.


Run from Source

For contributors, or anyone who wants to run the console from source. No GitHub OAuth required.

Prerequisites

  • Go 1.24+
  • Node.js 20+
  • kubestellar-ops and kubestellar-deploy installed (see Step 1)
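A quick way to check the toolchain prerequisites above from a shell (this only reports what is installed; comparing versions against the minimums is left to you):

```shell
# Report each prerequisite tool and its version, or flag it as missing
for tool in go node npm; do
  if command -v "$tool" >/dev/null; then
    printf '%s: ' "$tool"
    # `go` uses the `version` subcommand; fall back to `--version` for the rest
    "$tool" version 2>/dev/null || "$tool" --version
  else
    echo "$tool: not installed"
  fi
done
```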

Setup

git clone https://github.com/kubestellar/console.git
cd console
./start-dev.sh

This compiles the Go backend, installs npm dependencies, starts a Vite dev server on port 5174, and creates a local dev-user session (no GitHub login needed).

Open http://localhost:5174


Run from Source with OAuth

To enable GitHub login (for multi-user deployments or to test the full auth flow):

1. Create a GitHub OAuth App

  1. Go to GitHub Developer Settings → OAuth Apps → New OAuth App

  2. Fill in:

    • Application name: KubeStellar Console
    • Homepage URL: http://localhost:8080
    • Authorization callback URL: http://localhost:8080/auth/github/callback
  3. Click Register application

  4. Copy the Client ID and generate a Client Secret

2. Clone the Repository

git clone https://github.com/kubestellar/console.git
cd console

3. Configure Environment

Create a .env file inside the cloned console/ directory (the repo root) with your OAuth credentials:

GITHUB_CLIENT_ID=your_client_id
GITHUB_CLIENT_SECRET=your_client_secret
FEEDBACK_GITHUB_TOKEN=ghp_your_personal_access_token

Recommended: FEEDBACK_GITHUB_TOKEN is a GitHub Personal Access Token (PAT) with the public_repo scope. It lets users submit bug reports, feature requests, and feedback directly from the console; without it, the in-app feedback and issue-submission features are disabled. You can create one at GitHub Settings → Tokens.

Important: The .env file must be in the same directory as startup-oauth.sh. The script loads it from its own directory, so creating it elsewhere will not work.
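The "loads it from its own directory" behavior can be illustrated with a small self-contained sketch (the throwaway directory and variable values here are examples, not the real startup-oauth.sh):

```shell
# Create a throwaway directory with a .env file, then load it the way
# a startup script typically does: sourced with auto-export enabled
dir=$(mktemp -d)
printf 'GITHUB_CLIENT_ID=abc123\nGITHUB_CLIENT_SECRET=shh\n' > "$dir/.env"
set -a            # auto-export every variable assigned while sourcing
. "$dir/.env"
set +a
echo "client id loaded: $GITHUB_CLIENT_ID"
# prints: client id loaded: abc123
```

Because the script resolves the path relative to itself, a .env created in your home directory or elsewhere is never seen.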

4. Start the Console

./startup-oauth.sh

Open http://localhost:8080 and sign in with GitHub.

Tip: Once running, click your profile avatar → the Developer panel shows your OAuth status, console version, and quick links.

Callback URLs by environment:

  • Local dev: http://localhost:8080/auth/github/callback
  • Kubernetes: https://console.your-domain.com/auth/github/callback
  • OpenShift: https://ksc.apps.your-cluster.com/auth/github/callback

Helm Installation

1. Add Secrets

Create a secret with your OAuth credentials:

kubectl create namespace ksc
 
kubectl create secret generic ksc-secrets \
  --namespace ksc \
  --from-literal=github-client-id=YOUR_CLIENT_ID \
  --from-literal=github-client-secret=YOUR_CLIENT_SECRET

Recommended: add a feedback-github-token (a GitHub Personal Access Token with the public_repo scope) to enable in-app feedback and issue submission; without it, those features are disabled. You can create one at GitHub Settings → Tokens.

To also include a Claude API key (for AI features) and the feedback token, create the secret with all four values instead. If ksc-secrets already exists from the step above, delete it first (kubectl delete secret ksc-secrets -n ksc):

kubectl create secret generic ksc-secrets \
  --namespace ksc \
  --from-literal=github-client-id=YOUR_CLIENT_ID \
  --from-literal=github-client-secret=YOUR_CLIENT_SECRET \
  --from-literal=claude-api-key=YOUR_CLAUDE_API_KEY \
  --from-literal=feedback-github-token=YOUR_FEEDBACK_GITHUB_TOKEN
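As an alternative to deleting and recreating, a client-side dry run renders the Secret manifest without contacting the cluster; piping that YAML to kubectl apply -f - creates or updates the secret idempotently. A sketch (the YOUR_* values are placeholders):

```shell
# Render the Secret manifest with a client-side dry run; pipe the
# output to `kubectl apply -f -` to create or update it in place
# instead of failing when the secret already exists
if command -v kubectl >/dev/null; then
  kubectl create secret generic ksc-secrets \
    --namespace ksc \
    --from-literal=github-client-id=YOUR_CLIENT_ID \
    --from-literal=github-client-secret=YOUR_CLIENT_SECRET \
    --from-literal=claude-api-key=YOUR_CLAUDE_API_KEY \
    --from-literal=feedback-github-token=YOUR_FEEDBACK_GITHUB_TOKEN \
    --dry-run=client -o yaml
else
  echo "kubectl not installed"
fi
```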

2. Install Chart

From GitHub Container Registry:

helm install ksc oci://ghcr.io/kubestellar/charts/kubestellar-console \
  --namespace ksc \
  --set github.existingSecret=ksc-secrets

From source:

git clone https://github.com/kubestellar/console.git
cd console
 
helm install ksc ./deploy/helm/kubestellar-console \
  --namespace ksc \
  --set github.existingSecret=ksc-secrets

3. Access the Console

Port forward (development):

kubectl port-forward -n ksc svc/ksc-kubestellar-console 8080:8080

Open http://localhost:8080

Ingress (production):

helm upgrade ksc ./deploy/helm/kubestellar-console \
  --namespace ksc \
  --set github.existingSecret=ksc-secrets \
  --set ingress.enabled=true \
  --set ingress.hosts[0].host=ksc.your-domain.com

4. Run kc-agent Locally

The Helm chart deploys the console backend inside your cluster, but kc-agent is not included in the Helm deployment. kc-agent is a lightweight local process that bridges your browser to your local kubeconfig via WebSocket and MCP. You must run it separately on your workstation.

Install kc-agent:

# Via Homebrew
brew tap kubestellar/tap
brew install kc-agent

Start kc-agent:

kc-agent

This starts the agent on port 8585. It reads your local ~/.kube/config and exposes kubectl execution over WebSocket (for the browser console) and MCP (for AI coding agents).
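To confirm the agent is up, a simple port probe is enough (8585 is the default port per this guide; the check assumes nc is installed and falls back to a message otherwise):

```shell
# Probe the kc-agent port on localhost; prints a status either way
if command -v nc >/dev/null && nc -z 127.0.0.1 8585 2>/dev/null; then
  echo "kc-agent is reachable on port 8585"
else
  echo "nothing listening on port 8585 (is kc-agent running?)"
fi
```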

Why local? kc-agent runs on your machine because it needs direct access to your kubeconfig and kubectl. The in-cluster console connects to kc-agent over WebSocket to execute commands against clusters that are only reachable from your workstation.

Without kc-agent: The console will still load, but cluster interactions that require kubectl (terminal commands, AI missions that modify resources) will not work. If the console was deployed without OAuth, it will fall back to demo mode. See Architecture for details.

OpenShift Installation

OpenShift uses Routes instead of Ingress:

helm install ksc ./deploy/helm/kubestellar-console \
  --namespace ksc \
  --set github.existingSecret=ksc-secrets \
  --set route.enabled=true \
  --set route.host=ksc.apps.your-cluster.com

The console will be available at https://ksc.apps.your-cluster.com

Docker Installation

For single-node or development deployments:

docker run -d \
  --name ksc \
  -p 8080:8080 \
  -e GITHUB_CLIENT_ID=your_client_id \
  -e GITHUB_CLIENT_SECRET=your_client_secret \
  -e FEEDBACK_GITHUB_TOKEN=ghp_your_personal_access_token \
  -v ~/.kube:/root/.kube:ro \
  -v ksc-data:/app/data \
  ghcr.io/kubestellar/console:latest

Kubernetes Deployment via Script

One command that handles the Helm install, secret creation, and ingress setup:

curl -sSL https://raw.githubusercontent.com/kubestellar/console/main/deploy.sh | bash

Supports --context, --openshift, --ingress <host>, and --github-oauth flags.
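When using the curl | bash form, flags are forwarded to the script with bash -s --. A tiny self-contained illustration of the mechanism (the inline echo script stands in for deploy.sh):

```shell
# `bash -s` reads the script from stdin; everything after `--` becomes
# the script's positional arguments, exactly as deploy.sh would see
# its flags
printf 'echo "script got: $@"\n' \
  | bash -s -- --context my-cluster --ingress ksc.example.com
# prints: script got: --context my-cluster --ingress ksc.example.com
```

So the full invocation looks like curl -sSL <url>/deploy.sh | bash -s -- --context my-cluster.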

Multi-Cluster Access

The console reads clusters from your kubeconfig. To access multiple clusters:

  1. Merge kubeconfigs:

    KUBECONFIG=~/.kube/config:~/.kube/cluster2.yaml kubectl config view --flatten > ~/.kube/merged
    mv ~/.kube/merged ~/.kube/config
  2. Mount merged config in container/pod

  3. Verify access:

    kubectl config get-contexts

Upgrading

helm upgrade ksc oci://ghcr.io/kubestellar/charts/kubestellar-console \
  --namespace ksc \
  --reuse-values

Uninstalling

helm uninstall ksc --namespace ksc
kubectl delete namespace ksc

Troubleshooting

“MCP bridge failed to start”

Cause: kubestellar-ops or kubestellar-deploy plugins are not installed.

Solution: Follow Step 1: Install Claude Code Plugins or see the full kubestellar-mcp documentation.

# Via Homebrew
brew tap kubestellar/tap
brew install kubestellar-ops kubestellar-deploy

GitHub OAuth 404 or Blank Page

Cause: OAuth credentials not configured correctly.

Solutions:

  1. Verify the secret contains correct credentials
  2. Check callback URL matches exactly (see Run from Source with OAuth)
  3. View pod logs: kubectl logs -n ksc deployment/ksc-kubestellar-console

“GITHUB_CLIENT_SECRET is not set”

Cause: You’re running startup-oauth.sh without a .env file.

Solutions:

  1. Create a .env file with GITHUB_CLIENT_ID and GITHUB_CLIENT_SECRET (see Run from Source with OAuth)
  2. Or use ./start-dev.sh instead — it doesn’t require OAuth credentials

“exchange_failed” After GitHub Login

Cause: The Client Secret is wrong or has been regenerated.

Solutions:

  1. Go to GitHub Developer Settings → your OAuth App
  2. Generate a new Client Secret
  3. Update GITHUB_CLIENT_SECRET in your .env file
  4. Restart the console

“csrf_validation_failed”

Cause: The callback URL in GitHub doesn’t match the console’s URL.

Solutions:

  1. Verify the Authorization callback URL in your GitHub OAuth App settings matches exactly: http://localhost:8080/auth/github/callback
  2. Clear your browser cookies for localhost
  3. Restart the console

Clusters Not Showing

Cause: kubeconfig not mounted or MCP bridge not running.

Solutions:

  1. Verify kubeconfig is mounted in the pod
  2. Check MCP bridge status in logs
  3. Verify kubestellar-mcp tools are installed: which kubestellar-ops kubestellar-deploy

Plugin Shows Disconnected

Cause: Binary not in PATH or not working.

Solutions:

  1. Verify binary is installed: which kubestellar-ops
  2. Verify binary works: kubestellar-ops version
  3. Restart Claude Code

See kubestellar-mcp troubleshooting for more details.