01 Issue
Bug · Open · Swamp Club
Assignees: None

Swamp skill should guide AI agents to create extension models instead of inline shell scripts

Opened by swampadmin · 2/3/2026

Problem

When AI agents (like Claude Code) need to process or transform the output of a swamp model method run, they default to piping stdout through complex inline shell scripts (python -c, deno eval, etc.) rather than creating a proper extension model or adding a method to an existing one.

This leads to:

  • Fragile shell-escaped code that breaks on special characters
  • Logic that is untestable and unreusable
  • Violations of the "extend, don't be clever" principle from CLAUDE.md
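
The escaping failure mode in the first bullet can be sketched concretely. This is an illustrative example only: the JSON payload is a made-up stand-in for real model output, not actual swamp data.

```shell
# Hypothetical model output containing a single quote -- common in
# free-text fields like repo descriptions.
json='{"name": "it'\''s-a-repo", "private": false}'

# Anti-pattern: an inline python3 -c script wrapped in single quotes.
# The moment the embedded Python itself needs a single quote, the shell
# quoting collapses:
#
#   echo "$json" | python3 -c 'import json,sys; d=json.load(sys.stdin); print(d['name'])'
#                                                                              ^^^^^^
#   The inner quotes around name terminate the outer single-quoted
#   string, producing a syntax error before Python even runs.

# The same logic only survives while the quoting stays simple:
echo "$json" | python3 -c 'import json,sys; print(json.load(sys.stdin)["name"])'
# → it's-a-repo
```

Each new quoting layer (agent prompt → shell → embedded interpreter) multiplies these failure points, which is why the logic belongs in a tested model method instead.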

Example

After running listUserRepos, the AI attempted to pipe the JSON output through an inline python3 -c script with complex string processing to build a security report. This failed repeatedly due to shell-escaping issues. The correct approach would have been to create a dedicated @bixu/github-security extension model with its own helpers and tests.

Proposed Solution

The swamp-extension-model skill and/or swamp-model skill should include stronger guidance for AI agents:

  1. When an agent needs to transform or aggregate data from a model method, it should create a new extension model (or add a method to an existing one) rather than processing stdout inline
  2. The skill should explicitly call out anti-patterns: inline python3 -c, deno eval, jq pipelines for anything beyond trivial formatting
  3. The decision tree in the skill should include: "Need to process model output? -> Create an extension model method for it"
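
The "trivial formatting" threshold in point 2 could be illustrated in the skill docs along these lines (the repo list here is invented for illustration):

```shell
# Trivial formatting -- reasonable to keep inline:
echo '[{"name":"repo-a","private":true},{"name":"repo-b","private":false}]' \
  | jq -r '.[].name'
# → repo-a
# → repo-b

# Aggregation / report-building -- the kind of logic that should move
# into an extension model method with its own tests, rather than grow
# as an ever-longer inline pipeline:
echo '[{"name":"repo-a","private":true},{"name":"repo-b","private":false}]' \
  | jq -r '[.[] | select(.private | not)] | "public repos: \(length)"'
# → public repos: 1
```

The first command merely projects a field; the second encodes a policy decision (what counts as reportable) that deserves a named, reusable, testable home.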

Scope

Changes would be needed in the swamp skill documentation (the swamp-extension-model and swamp-model skills) to add guidance on when to create models versus when inline processing is acceptable.

02 Bog Flow
OPEN → TRIAGED → IN PROGRESS → SHIPPED

Open

2/3/2026, 5:48:32 PM

No activity in this phase yet.

03 Sludge Pulse
