Use Kustomize when you want to manage Newt with rendered manifests, environment-specific overlays, and explicit patches in Git. For Newt, the supported Kustomize workflow is:
  1. Render the Newt Helm chart to manifests.
  2. Use the rendered output as the Kustomize base.
  3. Create overlays per site, cluster, or environment.
  4. Apply the overlay with kubectl apply -k or reconcile it with Argo CD or Flux.

When to use Kustomize for Newt

Use Kustomize if you:
  • want site-specific or environment-specific overlays
  • need explicit patches committed to Git
  • prefer reviewing rendered Kubernetes manifests before applying them
  • use Argo CD or Flux with Kustomize sources
  • want to customize Helm-rendered output without forking the chart
For a simpler single-site setup, use Newt Helm.

Supported approach

The Newt chart does not provide native Kustomize bases. Render the Helm chart first, then use Kustomize on the rendered manifests.
Do not manage the same Newt resources with both a live Helm release and Kustomize. Pick one ownership model per environment.
Recommended ownership model:
  • Use Helm only to render the Newt chart.
  • Use Kustomize, Argo CD, or Flux to apply and reconcile the rendered manifests.
  • Re-render the base when upgrading the chart or changing Helm values.

Example directory structure

newt-deployment/
├── base/
│   ├── kustomization.yaml
│   └── newt.yaml
├── overlays/
│   ├── site-a/
│   │   ├── kustomization.yaml
│   │   └── patches/
│   │       └── deployment-resources.patch.yaml
│   └── site-b/
│       ├── kustomization.yaml
│       └── patches/
│           └── deployment-resources.patch.yaml
└── values/
    ├── values-base.yaml
    ├── values-site-a.yaml
    └── values-site-b.yaml

Step 1: Create the namespace

Create the namespace before applying rendered manifests:
kubectl create namespace pangolin
If your cluster uses Pod Security Admission or other namespace-level policy labels, apply them before creating workloads. Example:
kubectl label namespace pangolin \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/warn=restricted

Step 2: Create Newt credentials

Create a Kubernetes Secret for each Newt site or instance.
kubectl create secret generic newt-auth-site-a \
  --namespace pangolin \
  --from-literal=PANGOLIN_ENDPOINT=https://pangolin.example.com \
  --from-literal=NEWT_ID=<site-a-newt-id> \
  --from-literal=NEWT_SECRET=<site-a-newt-secret>
For a second site:
kubectl create secret generic newt-auth-site-b \
  --namespace pangolin \
  --from-literal=PANGOLIN_ENDPOINT=https://pangolin.example.com \
  --from-literal=NEWT_ID=<site-b-newt-id> \
  --from-literal=NEWT_SECRET=<site-b-newt-secret>
Use existing Kubernetes Secrets for production. Do not commit Newt credentials into Helm values, rendered manifests, or Kustomize patches.
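As an alternative to the imperative kubectl create secret commands, a Kustomize secretGenerator can build the Secret from a local env file that never enters Git (the newt-auth.env filename here is an assumption; keep it in .gitignore). disableNameSuffixHash preserves the fixed name that the chart values reference:

```yaml
# overlays/site-a/kustomization.yaml (fragment)
secretGenerator:
  - name: newt-auth-site-a
    namespace: pangolin
    envs:
      - newt-auth.env  # local file, excluded from Git
    options:
      # Keep the fixed name referenced by auth.existingSecretName.
      disableNameSuffixHash: true
```

Without disableNameSuffixHash, Kustomize appends a content hash to the Secret name, which would break the existingSecretName reference baked into the rendered manifests.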

Step 3: Create base values

Create values/values-base.yaml:
newtInstances:
  - name: main-tunnel
    enabled: true
    replicas: 1
    auth:
      existingSecretName: newt-auth-site-a
This values file uses an existing Secret. The default Secret keys are:
  • PANGOLIN_ENDPOINT
  • NEWT_ID
  • NEWT_SECRET
Use auth.keys.* only when your Secret uses different key names.
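As a sketch, if your Secret stores the credentials under custom key names, the mapping might look like the following. The endpoint/id/secret field names here are hypothetical; verify the actual auth.keys.* names against the chart's values schema before using them:

```yaml
newtInstances:
  - name: main-tunnel
    enabled: true
    replicas: 1
    auth:
      existingSecretName: newt-auth-site-a
      keys:
        # Hypothetical field names; check the chart's values schema.
        endpoint: pangolinEndpoint
        id: newtId
        secret: newtSecret
```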

Step 4: Render Newt to the base

Add and update the Helm repository:
helm repo add fossorial https://charts.fossorial.io
helm repo update fossorial
Render the Newt chart:
mkdir -p base overlays/site-a/patches overlays/site-b/patches values

helm template newt fossorial/newt \
  --namespace pangolin \
  --values values/values-base.yaml \
  > base/newt.yaml
You can also render from the GHCR OCI chart:
helm template newt oci://ghcr.io/fosrl/helm-charts/newt \
  --version 1.4.0 \
  --namespace pangolin \
  --values values/values-base.yaml \
  > base/newt.yaml

Step 5: Create the base kustomization

# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - newt.yaml
The namespace is already rendered by Helm through --namespace pangolin. You can also set namespace: pangolin in Kustomize, but avoid changing namespaces in overlays unless you have verified all rendered resources and references.
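If you do want Kustomize to set the namespace explicitly in the base, a minimal variant is:

```yaml
# base/kustomization.yaml (variant with an explicit namespace)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: pangolin

resources:
  - newt.yaml
```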

Step 6: Inspect the rendered resource names

Before writing patches, check the generated names:
kustomize build base | grep -E "^(kind:|  name:)"
Or list the deployments:
kustomize build base | yq '. | select(.kind == "Deployment") | .metadata.name'
Use the actual rendered Deployment name in your patch targets.
Do not assume the rendered Deployment name without checking the generated manifests. Helm naming can change with release name, chart name, nameOverride, or fullnameOverride.

Step 7: Create site-specific overlays

Example overlay for Site A:
# overlays/site-a/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

labels:
  - pairs:
      app.kubernetes.io/site: site-a
      app.kubernetes.io/environment: production

patches:
  - path: patches/deployment-resources.patch.yaml
    target:
      group: apps
      version: v1
      kind: Deployment
      name: newt-main-tunnel
Example resource patch:
# overlays/site-a/patches/deployment-resources.patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: newt-main-tunnel
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: newt
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              memory: 256Mi
Replace newt-main-tunnel with the actual Deployment name from your rendered manifests.
A Site B overlay that needs a different Secret is usually better handled by rendering a second base from a different values file. Create values/values-site-b.yaml:
newtInstances:
  - name: main-tunnel
    enabled: true
    replicas: 1
    auth:
      existingSecretName: newt-auth-site-b
Then render a separate base for Site B:
mkdir -p site-b/base

helm template newt-site-b fossorial/newt \
  --namespace pangolin \
  --values values/values-site-b.yaml \
  > site-b/base/newt.yaml
For different credentials, endpoints, provisioning keys, or instance names, prefer separate Helm-rendered bases. Use Kustomize patches for environment-level changes such as labels, annotations, resources, scheduling, or NetworkPolicy adjustments.
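A Site B overlay can then point at the separately rendered base. This sketch assumes the site-b/base directory created above sits next to overlays/ in the repository root; adjust the relative path to match your layout:

```yaml
# overlays/site-b/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../site-b/base

labels:
  - pairs:
      app.kubernetes.io/site: site-b
      app.kubernetes.io/environment: production
```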

Common Kustomize patches for Newt

Patch resource requests and limits

# overlays/site-a/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

patches:
  - path: patches/resources.patch.yaml
    target:
      group: apps
      version: v1
      kind: Deployment
      name: newt-main-tunnel
# overlays/site-a/patches/resources.patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: newt-main-tunnel
spec:
  template:
    spec:
      containers:
        - name: newt
          resources:
            requests:
              cpu: 200m
              memory: 256Mi
            limits:
              memory: 512Mi

Patch log level

Prefer configuring log level through Helm values before rendering. If you still need a manifest patch, patch the generated environment variable carefully after inspecting the rendered Deployment. Example JSON6902-style patch:
# overlays/site-a/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

patches:
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: newt-main-tunnel
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/env/-
        value:
          name: LOG_LEVEL
          value: DEBUG
Only use index-based JSON patches after checking the rendered manifest. Container order and environment variable layout can change between chart versions.
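If you prefer to avoid index-based paths entirely, a strategic merge patch matches containers and env entries by name rather than by position. This sketch assumes the container is named newt, as in the resource patches above; verify against your rendered manifests:

```yaml
# overlays/site-a/patches/log-level.patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: newt-main-tunnel
spec:
  template:
    spec:
      containers:
        - name: newt  # matched by name, not by index
          env:
            - name: LOG_LEVEL
              value: DEBUG
```

Reference it from the overlay's patches list with path: patches/log-level.patch.yaml and the same Deployment target.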

Add node affinity

# overlays/site-a/patches/node-affinity.patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: newt-main-tunnel
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: site
                    operator: In
                    values:
                      - site-a
Reference the patch:
# overlays/site-a/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

patches:
  - path: patches/node-affinity.patch.yaml
    target:
      group: apps
      version: v1
      kind: Deployment
      name: newt-main-tunnel

Add annotations

# overlays/site-a/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base

patches:
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: newt-main-tunnel
    patch: |-
      - op: add
        path: /metadata/annotations
        value:
          example.com/owner: platform
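Note that op: add at /metadata/annotations replaces the whole annotations map if one already exists on the rendered Deployment. To add a single key without clobbering others, target the key itself, escaping "/" in the key as "~1" per JSON Pointer (this form assumes the annotations map already exists):

```yaml
patches:
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: newt-main-tunnel
    patch: |-
      # "/" inside the annotation key is escaped as "~1" (JSON Pointer).
      - op: add
        path: /metadata/annotations/example.com~1owner
        value: platform
```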

Do not rename rendered Helm resources by default

Avoid Kustomize options such as namePrefix and nameSuffix for Helm-rendered bases unless you have validated every generated reference. Renaming rendered resources can break:
  • Service selectors
  • Secret references
  • ConfigMap references
  • ServiceAccount references
  • NetworkPolicy selectors
  • Prometheus monitor selectors
If you need different resource names, prefer changing the Helm release name or chart naming values before rendering.

Apply the overlay

Preview the rendered output:
kustomize build overlays/site-a
Compare with the live cluster:
kustomize build overlays/site-a | kubectl diff -f -
Apply the overlay:
kubectl apply -k overlays/site-a
Verify the deployment:
kubectl get pods --namespace pangolin \
  -l app.kubernetes.io/name=newt

kubectl logs --namespace pangolin \
  -l app.kubernetes.io/name=newt \
  --tail=50

Updating the rendered base

When upgrading the Newt chart, re-render the base and review the changes.
helm repo update fossorial
Render the updated chart output:
helm template newt fossorial/newt \
  --namespace pangolin \
  --values values/values-base.yaml \
  > base/newt.yaml
Or with OCI:
helm template newt oci://ghcr.io/fosrl/helm-charts/newt \
  --version 1.4.0 \
  --namespace pangolin \
  --values values/values-base.yaml \
  > base/newt.yaml
Validate the overlay:
kustomize build overlays/site-a
Review the diff:
git diff
kustomize build overlays/site-a | kubectl diff -f -
Commit the updated base and overlays:
git add base/ overlays/ values/
git commit -m "Update Newt rendered manifests"
Apply after review:
kubectl apply -k overlays/site-a

Ownership model

Do not run helm upgrade against a release that is managed by Kustomize. Avoid this pattern:
helm upgrade newt fossorial/newt --namespace pangolin
kubectl apply -k overlays/site-a
Use one of these models instead:
  • Helm-managed: Helm installs and upgrades the live release. Kustomize is not used for the same resources.
  • Kustomize-managed: Helm renders manifests only. Kustomize applies and owns the live resources.
  • GitOps-managed: Argo CD or Flux applies the Kustomize overlay and owns reconciliation.
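For the GitOps-managed model, a minimal Argo CD Application pointing at the Site A overlay might look like the following sketch. The repoURL is a placeholder for your Git repository; verify field behavior against your Argo CD version:

```yaml
# Hypothetical Argo CD Application for the Kustomize-managed overlay.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: newt-site-a
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/newt-deployment.git  # placeholder
    targetRevision: main
    path: overlays/site-a
  destination:
    server: https://kubernetes.default.svc
    namespace: pangolin
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```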

Validation

Validate Kustomize output:
kustomize build overlays/site-a
Run a server-side dry run:
kustomize build overlays/site-a | kubectl apply -f - --dry-run=server
Preview live changes:
kustomize build overlays/site-a | kubectl diff -f -
Check live resources:
kubectl get all --namespace pangolin
kubectl get events --namespace pangolin --sort-by=.lastTimestamp

Troubleshooting

The patch does not apply

Check the rendered resource name and kind:
kustomize build base | grep -E "^(kind:|  name:)"
Then verify the patch target in your overlay.

The pod does not start

Check pod status and events:
kubectl get pods --namespace pangolin
kubectl describe pod <pod-name> --namespace pangolin
kubectl get events --namespace pangolin --sort-by=.lastTimestamp

Newt does not connect

Check logs:
kubectl logs --namespace pangolin \
  -l app.kubernetes.io/name=newt \
  --tail=100
Verify:
  • the Secret exists in the same namespace
  • PANGOLIN_ENDPOINT is reachable from the pod
  • NEWT_ID and NEWT_SECRET are correct
  • outbound DNS and HTTPS are allowed
  • TLS certificates for the Pangolin endpoint are valid

Next steps

  • Helm Install: Install Newt with Helm.
  • Configuration: Review Newt chart options.
  • Troubleshooting: Debug Newt deployment and connection issues.
  • GitOps: Deploy Kustomize overlays with Argo CD or Flux.