This page covers the main Newt Kubernetes configuration options for Helm and Kustomize workflows. For exhaustive option coverage, refer to the chart resources:

README

values.yaml

values.schema.json

Version context

This page is aligned with the Newt Helm chart 1.4.0.
Item                 Value
Chart version        1.4.0
App version          1.12.3
Kubernetes version   >=1.30.14-0
Default image        docker.io/fosrl/newt:1.12.3
Chart 1.4.0 also publishes Newt image metadata for both Docker Hub and GHCR and includes Artifact Hub signing metadata.

Configuration sections

Image and global defaults

Use global.image to control the Newt container image used by all instances.
global:
  image:
    registry: docker.io
    repository: fosrl/newt
    tag: ""
    digest: ""
    imagePullPolicy: IfNotPresent
    imagePullSecrets: []

  logLevel: INFO
Recommendations:
  • Leave tag empty to use the chart appVersion.
  • Use digest when you need immutable image pinning, as sketched below.
  • Use imagePullSecrets when pulling from a private registry.
  • Use per-instance overrides only when allowGlobalOverride is enabled for that instance.
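A sketch combining digest pinning with a private-registry pull secret; the digest value and Secret name are placeholders, and the imagePullSecrets entry format shown is the common Kubernetes list-of-names shape (verify the exact shape against values.schema.json):
global:
  image:
    registry: docker.io
    repository: fosrl/newt
    digest: "sha256:<image-digest>"
    imagePullSecrets:
      - name: <registry-credentials-secret>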
The chart can render Namespace resources, including Pod Security Admission labels.
namespace:
  create: false
  name: ""
  labels: {}
  podSecurity:
    enforce: ""
    warn: ""
    audit: ""
Recommended production pattern:
  1. Create the namespace manually.
  2. Apply required Pod Security Admission labels or policy labels.
  3. Install the chart into that namespace.
kubectl create namespace pangolin
Example namespace labels:
kubectl label namespace pangolin \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/warn=restricted
Per-instance namespace overrides are available when allowGlobalOverride: true is set:
newtInstances:
  - name: main-tunnel
    allowGlobalOverride: true
    namespace:
      name: pangolin
      create: false
      labels: {}
      podSecurity:
        enforce: ""
        warn: ""
        audit: ""
Creating the namespace manually is recommended when your cluster uses Pod Security Admission, policy labels, admission webhooks, or namespace annotations.
For production, use an existing Kubernetes Secret.
newtInstances:
  - name: main-tunnel
    enabled: true
    auth:
      existingSecretName: newt-auth
Create the Secret before installing the chart:
kubectl create secret generic newt-auth \
  --namespace pangolin \
  --from-literal=PANGOLIN_ENDPOINT=https://pangolin.example.com \
  --from-literal=NEWT_ID=<newt-id> \
  --from-literal=NEWT_SECRET=<newt-secret>
The default Secret keys are:
PANGOLIN_ENDPOINT
NEWT_ID
NEWT_SECRET
Use auth.keys.* only when your Secret uses different key names:
newtInstances:
  - name: main-tunnel
    enabled: true
    auth:
      existingSecretName: newt-auth
      keys:
        endpointKey: PANGOLIN_ENDPOINT
        idKey: NEWT_ID
        secretKey: NEWT_SECRET
auth.keys.* are Secret key names, not credential values.
Inline credentials are supported, but should only be used for local testing:
newtInstances:
  - name: main-tunnel
    enabled: true
    auth:
      pangolinEndpoint: "https://pangolin.example.com"
      id: "<newt-id>"
      secret: "<newt-secret>"
Inline credentials can appear in rendered manifests and Helm release history. Use auth.existingSecretName for production.
Do not commit plaintext credentials to Git. For GitOps workflows, use encrypted or external secret backends such as SOPS, Sealed Secrets, External Secrets Operator, Vault, or Infisical.
Chart 1.4.0 also includes auth.createSecret and auth.envVarsDirect modes for generated Secret and direct environment-variable workflows. Use these only when they match your operational model.
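The exact shape of these options is defined in values.schema.json and the chart README. As a purely illustrative sketch, assuming createSecret is a boolean toggle that renders a Secret from inline credentials:
newtInstances:
  - name: main-tunnel
    enabled: true
    auth:
      createSecret: true
      pangolinEndpoint: https://pangolin.example.com
      id: "<newt-id>"
      secret: "<newt-secret>"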
Use provisioning when Newt should bootstrap its credentials from a provisioning key instead of a static NEWT_ID and NEWT_SECRET.
newtInstances:
  - name: main-tunnel
    enabled: true
    auth:
      pangolinEndpoint: https://pangolin.example.com
      provisioningKey: "<provisioning-key>"
      newtName: "my-site"
    configPersistence:
      enabled: true
      type: emptyDir
      mountPath: /var/lib/newt
      fileName: config.json
Provisioning requires writable config persistence so Newt can store the generated configuration.
For durable storage, use an existing PVC:
newtInstances:
  - name: main-tunnel
    enabled: true
    auth:
      pangolinEndpoint: https://pangolin.example.com
      provisioningKey: "<provisioning-key>"
      newtName: "my-site"
    configPersistence:
      enabled: true
      type: persistentVolumeClaim
      existingClaim: my-newt-config
      mountPath: /var/lib/newt
      fileName: config.json
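The referenced claim must exist before the chart is installed. A minimal manifest for it, assuming the pangolin namespace used elsewhere on this page, a 1Gi request, and the cluster's default StorageClass:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-newt-config
  namespace: pangolin
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi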
You can also provide a provisioning blueprint:
newtInstances:
  - name: main-tunnel
    enabled: true
    auth:
      pangolinEndpoint: https://pangolin.example.com
      provisioningKey: "<provisioning-key>"
      newtName: "my-site"
    configPersistence:
      enabled: true
      type: emptyDir
    provisioningBlueprintFile: /etc/newt/provisioning-blueprint.yaml
    provisioningBlueprintData: |
      version: 1
      routes: []
Each Newt instance is configured under newtInstances[].
newtInstances:
  - name: main-tunnel
    enabled: true
    replicas: 1
    logLevel: INFO
    mtu: 1280
    dns: ""
    pingInterval: ""
    pingTimeout: ""
    acceptClients: false
    useNativeInterface: false
    interface: newt
    keepInterface: false
    noCloud: false
    disableClients: false
Key settings:
Setting                      Purpose
replicas                     Number of replicas for this Newt instance
mtu                          WireGuard interface MTU
dns                          Optional DNS server address pushed to the client
pingInterval / pingTimeout   Optional Newt ping timing overrides
acceptClients                Allows client connections at runtime
useNativeInterface           Uses native WireGuard interface when native mode is enabled
noCloud                      Disables cloud connectivity
disableClients               Disables client connections
Newt 1.11 changed upstream ping defaults. Set pingInterval and pingTimeout explicitly if you need older timing behavior.
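For example (the Go-style duration format shown is an assumption; check values.schema.json for the expected format):
newtInstances:
  - name: main-tunnel
    pingInterval: "3s"
    pingTimeout: "5s"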
Service exposure is controlled separately from acceptClients.
newtInstances:
  - name: main-tunnel
    enabled: true
    service:
      enabled: false
      type: ClusterIP
      port: 51820
      testerPort: ""
      externalTrafficPolicy: ""
      loadBalancerSourceRanges: []
Important behavior:
  • acceptClients does not create a Service.
  • newtInstances[].service.enabled controls whether a Service is created.
  • Tester port exposure is disabled by default unless enabled through test settings or explicit legacy tester-port configuration.
Common Service types:
Type           Use case
ClusterIP      Internal cluster access
LoadBalancer   External exposure through cloud load balancer
NodePort       Node-level port exposure
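For example, to expose an instance through a cloud load balancer restricted to a known address range (the CIDR below is a placeholder):
newtInstances:
  - name: main-tunnel
    enabled: true
    service:
      enabled: true
      type: LoadBalancer
      port: 51820
      loadBalancerSourceRanges:
        - 203.0.113.0/24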
Use configPersistence when Newt needs writable configuration storage.
newtInstances:
  - name: main-tunnel
    configPersistence:
      enabled: false
      type: emptyDir
      mountPath: /var/lib/newt
      fileName: config.json
      existingClaim: ""
Storage types:
Type                    Behavior
emptyDir                Ephemeral storage, recreated with the pod
persistentVolumeClaim   Durable storage using an existing PVC
Provisioning-based installs should enable config persistence. For production provisioning, prefer a PVC over emptyDir.
emptyDir is recreated when a pod is replaced. Newt can require a reconnect and handshake after restart, which may briefly interrupt active traffic.
For production, prefer an existing PersistentVolumeClaim to keep writable Newt configuration across restarts and rescheduling.
The chart supports blueprints, provisioning blueprints, mTLS certificate mounts, Docker socket mounts, and up/down scripts.
Blueprint example:
newtInstances:
  - name: main-tunnel
    blueprintFile: /etc/newt/blueprint.yaml
    blueprintData: |
      version: 1
      routes: []
Provisioning blueprint example:
newtInstances:
  - name: main-tunnel
    provisioningBlueprintFile: /etc/newt/provisioning-blueprint.yaml
    provisioningBlueprintData: |
      version: 1
      routes: []
mTLS using an existing PEM Secret:
newtInstances:
  - name: main-tunnel
    mtls:
      enabled: true
      mode: pem
      pem:
        secretName: newt-mtls
        clientCertPath: /certs/client.crt
        clientKeyPath: /certs/client.key
        caPath: /certs/ca.crt
Up/down scripts:
global:
  updownScripts:
    route.sh: |
      #!/bin/sh
      echo "Newt interface changed"

newtInstances:
  - name: main-tunnel
    updown:
      enabled: true
      mountPath: /opt/newt/updown
Use Secrets for certificates and sensitive script inputs. Avoid inline private keys or credentials in values files.
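For the mTLS example above, a Secret sketch; the key names client.crt, client.key, and ca.crt are assumed to match the mounted file names, and the PEM bodies are placeholders. Apply it directly or manage it through an encrypted secret backend rather than committing it to Git:
apiVersion: v1
kind: Secret
metadata:
  name: newt-mtls
  namespace: pangolin
type: Opaque
stringData:
  client.crt: |
    -----BEGIN CERTIFICATE-----
    <client certificate>
    -----END CERTIFICATE-----
  client.key: |
    -----BEGIN PRIVATE KEY-----
    <client key>
    -----END PRIVATE KEY-----
  ca.crt: |
    -----BEGIN CERTIFICATE-----
    <CA certificate>
    -----END CERTIFICATE-----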
ServiceAccount creation is enabled by default.
serviceAccount:
  create: true
  name: ""
  automountServiceAccountToken: false
RBAC is disabled by default in chart 1.4.0:
rbac:
  create: false
  clusterRole: false
Enable RBAC only when your selected configuration needs Kubernetes API permissions:
rbac:
  create: true
  clusterRole: false
Per-instance ServiceAccount overrides are available when allowGlobalOverride: true is set:
newtInstances:
  - name: main-tunnel
    allowGlobalOverride: true
    serviceAccount:
      create: true
      name: newt-main-tunnel
      automountServiceAccountToken: false
Chart 1.4.0 changed the RBAC default to rbac.create=false. Existing installations that relied on auto-created RBAC must opt in explicitly during upgrade.
Global resource requests and limits apply to Newt workloads.
global:
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
      ephemeral-storage: 128Mi
    limits:
      cpu: 200m
      memory: 256Mi
      ephemeral-storage: 256Mi
Scheduling defaults:
global:
  priorityClassName: ""
  nodeSelector: {}
  tolerations: []
  affinity:
    nodeAffinity: {}
    podAffinity: {}
    podAntiAffinity: {}
  topologySpreadConstraints: []
Pod Disruption Budget:
global:
  podDisruptionBudget:
    enabled: false
    minAvailable: 1
    maxUnavailable: ""
Recommendations:
  • Start with the chart defaults.
  • Increase requests and limits based on traffic volume.
  • Use node selectors, tolerations, affinity, or topology spread constraints when you need placement control.
  • Enable a PodDisruptionBudget only when your replica count and maintenance policy support it, as sketched below.
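A sketch of that pairing (two replicas is an assumption; size to your workload):
newtInstances:
  - name: main-tunnel
    replicas: 2

global:
  podDisruptionBudget:
    enabled: true
    minAvailable: 1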
Avoid CPU limits unless you explicitly need hard caps. CPU limits can trigger throttling even when spare node CPU exists. For most deployments, use CPU requests and memory limits as the starting point.
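Following that guidance, a starting point adapted from the defaults above, with a CPU request but no CPU limit:
global:
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      memory: 256Mi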
Health probes are disabled by default.
global:
  health:
    enabled: false
    path: /tmp/healthy
    readinessFailureThreshold: 3
Per-instance health options:
newtInstances:
  - name: main-tunnel
    healthFile: /tmp/healthy
    enforceHcCert: false
Helm test jobs are disabled by default:
global:
  tests:
    enabled: false
    image:
      repository: registry.k8s.io/kubectl
      tag: "1.30.14"
      pullPolicy: IfNotPresent
Enable tests only when you want chart test jobs and tester-port related resources.
Metrics are disabled by default.
global:
  metrics:
    enabled: false
    port: 9090
    path: /metrics
    adminAddr: ":2112"
    asyncBytes: false
    region: ""
    otlpEnabled: false
    pprofEnabled: false
The default adminAddr is :2112, which listens on all interfaces and allows in-cluster scraping. Use 127.0.0.1:2112 only when scraping from other pods is not required.
Metrics Service:
global:
  metrics:
    service:
      enabled: false
      type: ClusterIP
      port: 2112
      portName: metrics
Prometheus Operator resources:
global:
  metrics:
    podMonitor:
      enabled: false
    serviceMonitor:
      enabled: false
    prometheusRule:
      enabled: false
Example with ServiceMonitor:
global:
  metrics:
    enabled: true
    service:
      enabled: true
    serviceMonitor:
      enabled: true
Optional pprof endpoint:
global:
  metrics:
    pprofEnabled: true
NetworkPolicy rendering is disabled by default.
global:
  networkPolicy:
    enabled: false
    defaultMode: merge
    components:
      defaultApp:
        enabled: true
      dns:
        enabled: false
      kubeApi:
        enabled: false
      custom:
        enabled: false
    ruleSets: {}
Per-instance NetworkPolicy overrides:
newtInstances:
  - name: main-tunnel
    networkPolicy:
      enabled: null
      mode: merge
      useGlobalComponents:
        defaultApp: true
        dns: false
        kubeApi: false
        custom: true
      components:
        dns:
          enabled: false
        custom:
          enabled: false
      includeRuleSets: []
Modes:
Mode      Behavior
inherit   Use global components and rule sets only
merge     Combine global and instance-level policy settings
replace   Use only the instance-level policy settings
Enable DNS egress rules if your default network policy blocks DNS.
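For example, to enable the built-in DNS egress component globally:
global:
  networkPolicy:
    enabled: true
    components:
      dns:
        enabled: true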

Configuration by install method

Helm

Use a values file:
helm upgrade --install newt fossorial/newt \
  --namespace pangolin \
  --values values-newt.yaml
Use inline values only for small tests:
helm upgrade --install newt fossorial/newt \
  --namespace pangolin \
  --set 'newtInstances[0].name=main-tunnel' \
  --set 'newtInstances[0].auth.existingSecretName=newt-auth'
See Site (newt) Helm for the installation flow.

Kustomize

Render the chart with Helm, then use Kustomize overlays:
helm template newt fossorial/newt \
  --namespace pangolin \
  --values values-newt.yaml \
  > base/newt.yaml
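A minimal base and overlay layout for the rendered manifest; the overlay shown only sets the target namespace, as a placeholder for site-specific changes:
# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - newt.yaml

# overlays/site-a/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: pangolin
resources:
  - ../../base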
Then apply an overlay:
kubectl apply -k overlays/site-a
See Newt Kustomize for the Kustomize workflow.

GitOps

Store Helm values or Kustomize overlays in Git. Argo CD or Flux reconciles the desired state. Argo CD Helm example:
spec:
  source:
    helm:
      values: |
        newtInstances:
          - name: main-tunnel
            enabled: true
            auth:
              existingSecretName: newt-auth
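For context, that fragment sits inside a full Application resource; the repoURL, project, and sync policy below are placeholders for your environment:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: newt
  namespace: argocd
spec:
  project: default
  source:
    repoURL: <helm-repo-url>
    chart: newt
    targetRevision: 1.4.0
    helm:
      values: |
        newtInstances:
          - name: main-tunnel
            enabled: true
            auth:
              existingSecretName: newt-auth
  destination:
    server: https://kubernetes.default.svc
    namespace: pangolin
  syncPolicy:
    automated:
      prune: true
      selfHeal: true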
Flux HelmRelease example:
spec:
  values:
    newtInstances:
      - name: main-tunnel
        enabled: true
        auth:
          existingSecretName: newt-auth
See GitOps for the complete workflow guidance.

Next steps

Helm Install

Install Newt with Helm.

Kustomize Install

Install Newt with rendered manifests and Kustomize overlays.

Troubleshooting

Debug Newt deployment and connection issues.

GitOps

Deploy Newt with Argo CD or Flux.