How to Automate OpenZiti SDK Integration Using a 4-Phase AI Skill

Detect, Plan, Transform, Provision — 82 Go files, 5 minutes, zero broken files

10 min read

Embedding the OpenZiti SDK into an existing service is a good idea in theory and a grind in practice. You need to understand the SDK patterns for the language, find every call site where a raw TCP listener or HTTP client lives, replace each one correctly, wire up the Ziti context, update go.mod, and not break anything. For one service, that's an afternoon. For a fleet of microservices, it simply doesn't happen.

What is OpenZiti? OpenZiti is an open-source, zero-trust networking platform that embeds security directly into application connectivity. It replaces traditional VPNs and network perimeters with identity-aware, policy-driven service access: no open ports, no implicit trust. It's developed by NetFoundry.

The SKILL-driven auto-Zitification workflow changes that math. Running on OpenCode with Claude Sonnet 4.6 and ziti-mcp-server v0.9.0, it took 82 Go files, found every transport candidate, generated reviewable diffs, transformed the code, and provisioned live network identities and policies, all in under 5 minutes with zero broken files.

TL;DR

  • The auto-Zitification SKILL runs 4 phases: Detect, Plan, Transform, Provision, with a mandatory human approval gate before any file is written

  • Phase 1 uses zitifier-detect-go, a true AST analyzer (go/analysis framework, full type resolution), not grep

  • The review gate in Phase 2 caught two real issues in the demo target before touching a single file

  • Phase 4 provisions identities, a Ziti service, and dial/bind policies live via ziti-mcp-server v0.9.0

  • Claude Sonnet 4.6 reached the approve gate in 2:50; gpt-4o-mini never finished

  • Demo target: stefanprodan/podinfo, 82 Go files, gRPC + HTTP + TCP, used by CNCF Flux and Flagger

What Is the Auto-Zitification SKILL?

The SKILL is a 4-phase orchestration defined in a SKILL.md file and executed by OpenCode, an open-source, provider-agnostic AI coding agent. Each phase runs as a named subagent with declared tool permissions. No phase can exceed its scope: the detection agent can't write files, the provisioning agent can't touch source code.

%%{init: {'theme': 'base', 'themeVariables': {'primaryColor': '#1e293b', 'primaryTextColor': '#e2e8f0', 'primaryBorderColor': '#3b82f6', 'lineColor': '#64748b', 'background': '#0f172a', 'mainBkg': '#1e293b', 'edgeLabelBackground': '#1e293b'}}}%%
flowchart LR
  classDef phase fill:#1e3a5f,stroke:#3b82f6,color:#93c5fd,stroke-width:2px
  classDef gate fill:#3d1515,stroke:#ef4444,color:#fca5a5,stroke-width:2px,stroke-dasharray:4
  classDef prov fill:#14291f,stroke:#10b981,color:#6ee7b7,stroke-width:2px

  D["🔍 DETECT\nAST analysis\nzitifier-detect-go"]:::phase
  P["📋 PLAN\nPer-file diffs\nreview gate"]:::phase
  G["✋ HUMAN\nAPPROVAL\nREQUIRED"]:::gate
  T["✏️ TRANSFORM\nWrite SDK replacements\n.pre-ziti.bak backups"]:::phase
  V["⚙️ PROVISION\nIdentities · Service\nPolicies via MCP"]:::prov

  D --> P --> G --> T --> V

4 phases, one mandatory gate. No file is written until the human approves the plan.

Why Doesn't Manual Zitification Scale?

Embedding the OpenZiti SDK by hand requires knowing the idiomatic patterns for each language, tracing net.Conn values across chained assignments, and catching every call site, including the ones inside helper functions three layers deep. Miss one and you get a subtle fallback bug that only shows under load. Go, Node.js, and Python each have different SDK patterns, so there's no cross-language muscle memory to build on.

At microservice scale the math breaks. Hours per service multiplied by hundreds of services means this work simply doesn't get scheduled. The SKILL makes it schedulable.

What Is True AST Detection and Why Does It Matter?

zitifier-detect-go uses the go/analysis framework, the same infrastructure behind go vet, staticcheck, and gopls. It performs full type resolution via NeedTypesInfo, which means it knows that net.Listen is net.Listen even when it's been aliased, assigned to an interface, or shadowed by a local variable. Grep-based detection cannot do this.

The practical difference shows in the false-positive rate. Grep flags http.Get("github.com/some/import-path") as a HIGH-confidence external call. The AST detector reads the URL argument, classifies external domains as LOW confidence, and skips them. It also validates the network argument on net.Dial: Unix sockets and UDP are skipped automatically, and only TCP variants become candidates.

Detection output is structured JSON:

{
  "file": "pkg/api/grpc/server.go",
  "line": 88,
  "type": "SERVER",
  "pattern": "net.Listen",
  "confidence": "HIGH"
}

For podinfo's 82 Go files, the detector found 4 candidates across 3 transport patterns:

| File | Type | Pattern | Confidence |
|---|---|---|---|
| pkg/api/http/server.go | SERVER | http.ListenAndServe | HIGH |
| pkg/api/grpc/server.go | SERVER | net.Listen("tcp", ...) | HIGH |
| pkg/api/http/echo.go | CLIENT | http.Client{Transport: ...} | HIGH |
| cmd/podcli/check.go | CLIENT | net.Dial / http.Get | HIGH |
| pkg/api/http/server.go | AMBIGUOUS | gorilla/websocket Upgrader | MEDIUM |

No false positives. No missed candidates.

What Did the Review Gate Actually Catch?

Before a single file is written, Phase 2 generates per-file diffs and surfaces warnings. In the podinfo run, it caught two real issues.

Warning 1: Per-request Ziti context. The initial plan placed initZiti() inside echoHandler(), a per-request HTTP handler. Ziti contexts are expensive to initialize and must be created once at startup, stored as a struct field, and reused. Creating one per request would crater performance under any real load. The gate flagged this before writing.

Warning 2: Dropped transport wrapper. The echo.go client originally used otelhttp.NewTransport(http.DefaultTransport) for OpenTelemetry distributed tracing. The naive replacement would have discarded that wrapper entirely, silently breaking distributed traces. The correct fix composes the transports: wrap the Ziti transport with OTel, don't replace it.

Both issues were caught at the approve gate. Neither required finding a bug after the fact. That is the point of the gate: it's architectural, not advisory.

What Does the Transformation Look Like?

The gRPC server transformation is the cleanest example. net.Listen returns a net.Listener interface, and zitiCtx.Listen returns the same interface. It's a true drop-in, no gRPC-specific code changes needed anywhere else in the stack.

Before:

import (
  "fmt"
  "net"
)

func (s *Server) ListenAndServe() {
  listener, err := net.Listen(
    "tcp",
    fmt.Sprintf(":%v", s.config.Port),
  )
  // ...
}

After:

import (
  "os"
  "github.com/openziti/sdk-golang/ziti"
)

func (s *Server) ListenAndServe() {
  zitiCtx, err := initZiti()
  if err != nil {
    panic(err) // fail fast: no Ziti context means no listener
  }
  listener, err := zitiCtx.Listen(
    os.Getenv("ZITI_SERVICE_NAME") + "-grpc")
  // ...
}

Every touched file gets a .pre-ziti.bak backup before writing. The go.mod update is handled automatically. The shared initZiti() helper is created once and referenced across all transformed files.

How Does the Provisioning Phase Work?

Phase 4 runs entirely through ziti-mcp-server v0.9.0, which exposes the full Ziti Management API to the agent as 201 tools. The provisioner queries the controller version to confirm network type (the demo ran against a self-hosted OpenZiti quickstart), then:

  1. Creates podinfo-client and podinfo-server identities, writes enrollment JWTs to disk

  2. Creates a Ziti service named per the ZITI_SERVICE_NAME env var, covering both HTTP and gRPC traffic

  3. Wires a Dial policy for the client identity and a Bind policy for the server identity

  4. Creates a Service Edge Router Policy for all routers

  5. Emits ziti edge enroll commands for each JWT

The whole provisioning step runs in the same 5-minute window as the code transformation. The no-listening-ports model applies immediately after enrollment; the service binds to the Ziti fabric, not to a TCP port.

Both self-hosted OpenZiti and NetFoundry-managed networks are supported.

Which Model Should You Use?

Model choice has a material impact on agentic workflows. Here's what the podinfo run showed:

| Model | Time to approve gate | Code quality | Rate limits | Verdict |
|---|---|---|---|---|
| gpt-4o-mini | Never finished | Compile-error loops | Annoying | Wrong tool |
| gpt-4o | ~5 min | Good | Tier 1 limited | Usable |
| Claude Sonnet 4.6 | 2:50 | Excellent | No issues | Recommended |

Claude Sonnet 4.6 reached the approve gate in 2 minutes 50 seconds and completed the full run in under 5 minutes. The code quality was high enough that both review gate warnings were substantive, not noise, and the transformations required no manual corrections.

One practical note: running this required a personal Claude Pro account to get an API key. NetFoundry's Team plan doesn't distribute API keys directly to engineers. If your org is in the same situation, a personal Pro account is the current workaround until that changes.

Why OpenCode for This Workflow?

The initial evaluation ran on OpenCode rather than Claude Code for one specific reason: provider-agnostic validation. One of the goals was confirming the SKILL works across models: Claude, GPT, Gemini, and local models. OpenCode handles that with a single config-line change. Claude Code is provider-locked by design.

The other factor is the explicit subagent architecture. Each phase in the SKILL runs as a named agent with declared tool permissions:

%%{init: {'theme': 'base', 'themeVariables': {'primaryColor': '#1e293b', 'primaryTextColor': '#e2e8f0', 'primaryBorderColor': '#3b82f6', 'lineColor': '#64748b', 'background': '#0f172a', 'mainBkg': '#1e293b', 'edgeLabelBackground': '#1e293b'}}}%%
flowchart TB
  classDef agent fill:#1e3a5f,stroke:#3b82f6,color:#93c5fd,stroke-width:2px
  classDef scope fill:#1e293b,stroke:#64748b,color:#94a3b8,stroke-width:1px
  classDef mcp fill:#14291f,stroke:#10b981,color:#6ee7b7,stroke-width:2px

  SKILL["📋 SKILL.md\nZitifier Orchestrator"]

  subgraph D ["zitifier-detect-go · READ ONLY"]
    DA["read files\nrun binary\nJSON output"]:::scope
  end

  subgraph P ["zitifier-transform (plan) · READ + DIFF"]
    PA["read files\ngenerate diffs\nawait approval"]:::scope
  end

  subgraph T ["zitifier-transform (apply) · READ + WRITE"]
    TA["read files\nwrite files\nbackup (.bak)"]:::scope
  end

  subgraph V ["zitifier-provision · MCP ONLY"]
    VA["ziti-mcp-server\nidentities\nservices/policies"]:::mcp
  end

  SKILL --> D --> P --> T --> V

Each agent is a .md file in .opencode/agents/, versioned and diffable. Tool scopes are declared per-agent: detect can't write, and provision can't touch source files.

That audit trail matters for security-conscious teams. Every agent's capabilities are in a file you can read, diff, and review in a PR.

The next evaluation is Claude Code: the SKILL.md format is portable and the same 4-phase logic applies. That's the subject of a follow-up post.

Frequently Asked Questions About Auto-Zitification

Does this work with languages other than Go?

Go is the first supported language via zitifier-detect-go. A Node.js/TypeScript equivalent using the TypeScript Compiler API is in active development (zitifier-detect-node) and uses the same JSON output schema and confidence levels. Python and Java equivalents are on the roadmap but not yet scoped.

What happens if the detector flags something incorrectly?

The review gate in Phase 2 exists precisely for this. Every candidate appears in the diff with full context before any file is written. You can reject individual candidates at the approval step. The .pre-ziti.bak backups also mean you can revert any transformation after the fact without touching version control.
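A revert can be as simple as stripping the suffix and moving the backups back into place. This is a hypothetical helper, not something shipped with the SKILL; the demo setup lines exist only to make the snippet self-contained:

```shell
# Demo setup: one "transformed" file plus its .pre-ziti.bak backup.
mkdir -p demo
printf 'transformed' > demo/server.go
printf 'original'    > demo/server.go.pre-ziti.bak

# The actual revert: for each backup, strip the suffix and move it into place.
find demo -name '*.pre-ziti.bak' | while read -r bak; do
  mv "$bak" "${bak%.pre-ziti.bak}"
done

cat demo/server.go   # → original
```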

Does it work with NetFoundry-managed networks?

Yes. The provisioning phase uses ziti-mcp-server, which supports both self-hosted OpenZiti and NetFoundry-managed networks (V8 only). The --profile flag in ziti-mcp-server lets you pre-configure separate credentials for each environment.

Is the SKILL open source?

The component tools are open source: zitifier-detect-go is Apache-2.0 at github.com/openziti/zitifier-detect-go, and ziti-mcp-server is Apache-2.0 at github.com/openziti/ziti-mcp-server. The SKILL.md orchestration file and its sub-agents will be published alongside the component tools.

What's next on the roadmap?

Four items are scoped: zitifier-detect-node for TypeScript (testing now), a shared Ziti context helper to replace per-file initZiti() calls, transport composition to preserve OTel/Prometheus middleware, and scale validation against a large production Go codebase; the Mattermost server, at roughly 500,000 lines, is the current candidate.

Thanks

If you like OpenZiti, please take a moment to drop us a star; we see and appreciate every one.

File issues, contribute tooling improvements, or drop into the OpenZiti community Discourse if you run into anything unexpected.


The full demo runs at youtu.be/b-G_zvmzaks: all 4 phases, including live provisioning against a Ziti network.

zitifier-detect-go: github.com/openziti/zitifier-detect-go

ziti-mcp-server: github.com/openziti/ziti-mcp-server

OpenCode: opencode.ai
