This guide covers everything you need to build components for ShipSec Studio.

Getting Started

File Location

Components live in worker/src/components/<category>/:
worker/src/components/
├── security/        # Security tools (dnsx, subfinder, nuclei)
├── core/            # Core utilities (http-request, file-loader)
├── ai/              # AI components (llm, agents)
├── notification/    # Notifications (slack, email)
└── manual-action/   # Human-in-the-loop (approvals, forms)

Category Source of Truth

Component categories are defined once in packages/shared/src/component-categories.ts.
  • Backend categorization and API metadata read from this shared registry.
  • Frontend schema validation and category styling also read from the same registry.
When adding or renaming a category, update this shared file so backend and frontend stay in sync.
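A minimal sketch of what that shared file might contain (the `label` field and exact shape are assumptions for illustration, not the repo's actual contents):

```typescript
// Sketch of packages/shared/src/component-categories.ts; the `label`
// field is an assumption for illustration.
export const COMPONENT_CATEGORIES = {
  security: { label: 'Security' },
  core: { label: 'Core' },
  ai: { label: 'AI' },
  notification: { label: 'Notification' },
  'manual-action': { label: 'Manual Action' },
} as const;

export type ComponentCategory = keyof typeof COMPONENT_CATEGORIES;

// Backend and frontend both derive their category lists from this object,
// so adding a category here is the single change needed.
export const CATEGORY_IDS = Object.keys(COMPONENT_CATEGORIES) as ComponentCategory[];
```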

ID Naming Convention

<namespace>.<tool>.<action>

Examples:
  shipsec.dnsx.run       # Security tool
  core.http.request      # Core utility  
  ai.llm.generate        # AI component
  notification.slack.send # Notification

Runner Types

  • inline: Pure TypeScript (HTTP calls, transforms, logic). Examples: FileLoader, WebhookPost, HTTP Request
  • docker: CLI tools in containers. Examples: Subfinder, DNSX, Nuclei
  • remote: External executors (future). Examples: K8s jobs, ECS tasks

Inline Component Example

import { z } from 'zod';
import { 
  defineComponent,
  inputs,
  outputs,
  port,
} from '@shipsec/component-sdk';

const definition = defineComponent({
  id: 'core.http.request',
  label: 'HTTP Request',
  category: 'transform',
  runner: { kind: 'inline' },
  
  inputs: inputs({
    url: port(z.string().url(), {
      label: 'URL',
      description: 'The URL to fetch',
    }),
    method: port(z.enum(['GET', 'POST']).default('GET'), {
      label: 'Method',
      description: 'HTTP verb to use',
    }),
  }),
  
  outputs: outputs({
    status: port(z.number(), { label: 'Status Code' }),
    body: port(z.string(), { label: 'Response Body' }),
  }),

  async execute({ inputs }, context) {
    context.logger.info(`Fetching ${inputs.url}`);
    const response = await fetch(inputs.url, { method: inputs.method });
    return { status: response.status, body: await response.text() };
  }
});

export default definition;

Docker Component Example

import { z } from 'zod';
import { 
  defineComponent,
  inputs,
  outputs,
  port,
  runComponentWithRunner,
  DockerRunnerConfig
} from '@shipsec/component-sdk';

const definition = defineComponent({
  id: 'shipsec.tool.scan',
  label: 'Tool Scanner',
  category: 'security',
  runner: {
    kind: 'docker',
    image: 'tool:latest',
    entrypoint: 'sh',
    command: ['-c', 'tool "$@"', '--'],
    network: 'bridge'
  },
  
  inputs: inputs({
    target: port(z.string(), { label: 'Target Host' }),
  }),
  
  outputs: outputs({
    results: port(z.any(), { label: 'Scan Results' }),
  }),

  async execute({ inputs }, context) {
    const args = ['-json', '-target', inputs.target];
    
    const runnerConfig: DockerRunnerConfig = {
      ...this.runner,
      command: [...(this.runner.command ?? []), ...args],
    };

    const parsed = await runComponentWithRunner(
      runnerConfig,
      async (stdout) => {
        // Parse the tool's stdout into structured results
        return JSON.parse(stdout);
      },
      inputs,
      context
    );

    return { results: parsed };
  }
});

export default definition;

ExecutionContext

The context passed to execute() provides services and utilities:
async execute({ inputs, params }, context) {
  // Logging (shows in UI timeline)
  context.logger.info('Starting scan...');
  context.logger.warn('Rate limit approaching');
  context.logger.error('Failed to connect');
  
  // Progress events (shows in UI)
  context.emitProgress('Processing 50 targets...');
  
  // Secrets (encrypted, from secret manager)
  const apiKey = await context.secrets?.get('API_KEY');
  
  // File downloads (from MinIO)
  const file = await context.storage?.downloadFile(inputs.fileId);
  
  // Artifact uploads (saved to MinIO, shown in UI)
  await context.artifacts?.upload({
    name: 'report.json',
    content: Buffer.from(JSON.stringify(results)),
    mimeType: 'application/json',
  });
  
  // Run metadata
  const { runId, componentRef } = context;
}

Component Definition

A component is defined using defineComponent and must specify its inputs, outputs, and optional parameters.

Inputs vs. Parameters

Understanding the difference between Inputs and Parameters is critical for building good components.
  • When set: Inputs (inputs()) are resolved at runtime (during execution); Parameters (parameters()) are set at design time (while building the workflow).
  • Source: Inputs come from upstream node outputs or manual overrides; Parameters come from sidebar form fields in the UI.
  • Visibility: Inputs appear as connection handles on the node; Parameters appear in the config panel in the sidebar.
  • Use case: Inputs carry dynamic data (e.g., target IP, file ID); Parameters hold static config (e.g., model name, timeout).

Defining Inputs (Ports)

Inputs represent the data that flows into your component from other parts of the workflow. They appear as connection handles on the left side of the node.
inputs: inputs({
  ipAddress: port(z.string().ip(), { 
    label: 'IP Address',
    description: 'The IP address to scan',
    valuePriority: 'connection-first', // default: connection-first
  }),
})
Supported valuePriority values:
  • connection-first: Use the value from the port connection if it exists, otherwise use the manual override.
  • manual-first: Always use the manual override if a value is provided, even if a port is connected.
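The two behaviors can be illustrated with a self-contained sketch; resolvePortValue is a hypothetical helper written for this guide, not an SDK export:

```typescript
type ValuePriority = 'connection-first' | 'manual-first';

// Hypothetical helper illustrating the resolution semantics described above;
// the real logic lives inside the workflow engine.
function resolvePortValue<T>(
  priority: ValuePriority,
  connectionValue: T | undefined,
  manualOverride: T | undefined,
): T | undefined {
  if (priority === 'manual-first') {
    // Manual override wins whenever one is provided
    return manualOverride !== undefined ? manualOverride : connectionValue;
  }
  // connection-first (the default): connected value wins if present
  return connectionValue !== undefined ? connectionValue : manualOverride;
}
```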

Defining Parameters

Parameters are configuration settings for the component that are set when the user is designing the workflow. They do not accept connections from other nodes; they are always static values (or manual strings).
parameters: parameters({
  threads: param(z.number().default(10), {
    label: 'Threads',
    editor: 'number',
    min: 1,
    max: 100,
    description: 'Number of concurrent threads to use.'
  }),
  mode: param(z.enum(['fast', 'thorough']).default('fast'), {
    label: 'Scan Mode',
    editor: 'select',
    options: [
      { label: 'Fast Scan', value: 'fast' },
      { label: 'Thorough Scan', value: 'thorough' },
    ],
  }),
})

Parameter Editors

The editor field in param() determines how the field is rendered in the UI sidebar:
  • text: Standard text input.
  • textarea: Multi-line text area.
  • number: Numeric input with optional min/max.
  • boolean: Checkbox/switch.
  • select: Dropdown menu (requires options).
  • multi-select: Multi-selection dropdown.
  • json: Code editor for JSON objects.
  • secret: Masked password-style input.
  • variable-list: Specialized editor for logic-script variables.
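For example, a masked secret, a boolean switch, and a JSON editor might be declared like this (a sketch reusing the parameters()/param() helpers shown above):

```typescript
parameters: parameters({
  apiKey: param(z.string(), {
    label: 'API Key',
    editor: 'secret',        // rendered as a masked input
  }),
  verbose: param(z.boolean().default(false), {
    label: 'Verbose Output',
    editor: 'boolean',       // rendered as a checkbox/switch
  }),
  headers: param(z.record(z.string(), z.string()).optional(), {
    label: 'Extra Headers',
    editor: 'json',          // rendered as a JSON code editor
  }),
})
```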

Visibility Rules

You can use visibleWhen to show or hide parameters based on the values of other parameters:
parameters: parameters({
  useProxy: param(z.boolean().default(false), { editor: 'boolean', label: 'Use Proxy' }),
  proxyUrl: param(z.string().optional(), { 
    editor: 'text', 
    label: 'Proxy URL',
    visibleWhen: { useProxy: true } 
  }),
})

Connection Types

When defining a port, you can specify its connectionType for compatibility checks in the canvas.
port(z.string(), {
  label: 'API Key',
  connectionType: { kind: 'primitive', name: 'secret' }
})
Supported primitives: text, number, boolean, secret, json, file, any. Lists: { kind: 'list', element: ConnectionType }. Objects with contracts: { kind: 'primitive', name: 'json', contract: 'aws-credentials' }.
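Building on the forms above, a list port and a contract-bound JSON port might look like this (a sketch; aws-credentials is just the contract name used as an example in the text):

```typescript
// A list of text values, e.g. domains flowing between nodes
port(z.array(z.string()), {
  label: 'Domains',
  connectionType: { kind: 'list', element: { kind: 'primitive', name: 'text' } },
})

// A JSON object bound to a named contract for stricter matching
port(z.record(z.string(), z.unknown()), {
  label: 'AWS Credentials',
  connectionType: { kind: 'primitive', name: 'json', contract: 'aws-credentials' },
})
```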

Entry Point Runtime Input Types

The Entry Point component supports dynamic runtime inputs that users provide when triggering workflows:
  • text: Text input; rendered as a multi-line textarea
  • number: Numeric input; rendered as a number field
  • file: File upload; rendered as a file picker
  • json: JSON data; rendered as a JSON textarea
  • array: List of values; rendered as comma-separated input or a JSON array
  • secret: Sensitive data; rendered as a masked password field
Example: Secret runtime input
// Entry point configuration
const runtimeInputs = [
  { id: 'apiKey', label: 'API Key', type: 'secret', required: true },
  { id: 'target', label: 'Target URL', type: 'text', required: true },
];
When a workflow with secret inputs is triggered:
  1. The UI shows a password field for the secret
  2. The value flows through as a port.secret() output
  3. Downstream components receive the secret string value

Dynamic Ports (resolvePorts)

Components can dynamically generate input/output ports based on parameter values:
import { inputs, port } from '@shipsec/component-sdk';

export default defineComponent({
  // ...
  parameters: parameters({
    variables: param(z.array(VariableSchema), { ... }),
  }),

  resolvePorts(params) {
    const dynamicInputShape: Record<string, any> = {};
    
    for (const v of params.variables) {
      dynamicInputShape[v.name] = port(z.string(), { 
        label: v.label || v.name 
      });
    }
    
    return {
      inputs: inputs(dynamicInputShape),
    };
  }
});
Use cases: Workflow calls, Slack templates, manual actions with dynamic options.

Retry Policy

Components can specify custom retry behavior (maps to Temporal activity retry):
const definition: ComponentDefinition<Input, Output> = {
  id: 'shipsec.api.call',
  // ... other fields ...
  
  retryPolicy: {
    maxAttempts: 5,              // Max total attempts (0 = unlimited, 1 = no retry)
    initialIntervalSeconds: 2,   // Initial delay
    maximumIntervalSeconds: 120, // Max delay
    backoffCoefficient: 2.0,     // Exponential backoff
    nonRetryableErrorTypes: [    // Errors that should NOT retry
      'AuthenticationError',
      'ValidationError',
    ],
  },
};
Default policy: 3 attempts, 1s initial, 60s max, 2x backoff.

Error Handling

Use SDK error types for proper retry behavior:
import { 
  NetworkError,        // Retryable - network issues
  RateLimitError,      // Retryable - with delay
  ServiceError,        // Retryable - 5xx errors
  AuthenticationError, // Non-retryable - bad credentials
  ValidationError,     // Non-retryable - bad input
  NotFoundError,       // Non-retryable - resource missing
  fromHttpResponse,    // Convert HTTP response to error
  wrapError,           // Wrap unknown errors
} from '@shipsec/component-sdk';

async execute({ inputs }, context) {
  try {
    const response = await fetch(inputs.url);
    
    if (!response.ok) {
      throw fromHttpResponse(response, await response.text());
    }
    
    return await response.json();
  } catch (error) {
    throw wrapError(error, 'Failed to call API');
  }
}

Analytics Output Port (Results)

Security components should include a results output port for analytics integration. This port outputs structured findings that can be indexed into OpenSearch via the Analytics Sink.

Schema Requirements

The results port must output list<json> (array of records):
outputs: outputs({
  // ... other outputs ...

  results: port(z.array(z.record(z.string(), z.unknown())), {
    label: 'Results',
    description:
      'Analytics-ready findings array. Each item includes scanner name and asset key. Connect to Analytics Sink.',
    connectionType: { kind: 'list', element: { kind: 'primitive', name: 'json' } },
  }),
}),

Required Fields

Each finding in the results array must include:
  • scanner (string): Scanner identifier (e.g., 'nuclei', 'trufflehog', 'supabase-scanner')
  • asset_key (string): Primary asset identifier (host, domain, target, etc.)
  • finding_hash (string): Stable hash for deduplication (16-char hex from SHA-256)
Additional fields from the scanner output should be spread into the finding object.

Finding Hash

The finding_hash is a stable identifier that enables deduplication across workflow runs. It should be generated from the key identifying fields of each finding. Purpose:
  • Track if a finding is new or recurring across scans
  • Deduplicate findings in dashboards
  • Calculate first-seen and last-seen timestamps
  • Identify which findings have been resolved (no longer appearing)
How to generate: Import from the component SDK:
import { generateFindingHash } from '@shipsec/component-sdk';

// Usage
const hash = generateFindingHash(finding.templateId, finding.host, finding.matchedAt);
Key fields per scanner:
  • Nuclei: templateId + host + matchedAt
  • TruffleHog: DetectorType + Redacted + filePath
  • Supabase Scanner: check_id + projectRef + resource
Choose fields that uniquely identify a finding but remain stable across runs (avoid timestamps, random IDs, etc.).

Example Implementation

import { generateFindingHash } from '@shipsec/component-sdk';

async execute({ inputs, params }, context) {
  // ... run scanner and get findings ...

  // Build analytics-ready results with scanner metadata
  const results: Record<string, unknown>[] = findings.map((finding) => ({
    ...finding,                           // Spread all finding fields
    scanner: 'my-scanner',               // Scanner identifier
    asset_key: finding.host ?? inputs.target,  // Primary asset
    finding_hash: generateFindingHash(   // Stable deduplication hash
      finding.ruleId,
      finding.host,
      finding.matchedAt
    ),
  }));

  return {
    findings,      // Original findings array
    results,       // Analytics-ready array for Analytics Sink
    rawOutput,     // Raw output for debugging
  };
}

How It Works

  1. Component outputs results: Each scanner outputs its findings with scanner and asset_key fields
  2. Connect to Analytics Sink: In the workflow canvas, connect the results port to Analytics Sink’s data input
  3. Indexed to OpenSearch: Each item in the array becomes a separate document with:
    • Finding data at root level (nested objects serialized to JSON strings)
    • Workflow context under shipsec.* namespace
    • Consistent @timestamp for all findings in the batch

Document Structure in OpenSearch

{
  "check_id": "DB_RLS_DISABLED",
  "severity": "CRITICAL",
  "title": "RLS Disabled",
  "metadata": "{\"table\":\"users\"}",
  "scanner": "supabase-scanner",
  "asset_key": "abc123xyz",
  "finding_hash": "a1b2c3d4e5f67890",
  "shipsec": {
    "organization_id": "org_123",
    "run_id": "run_abc123",
    "workflow_id": "wf_xyz789",
    "workflow_name": "Supabase Security Audit",
    "component_id": "core.analytics.sink",
    "node_ref": "analytics-sink-1"
  },
  "@timestamp": "2024-01-21T10:30:00Z"
}

shipsec Context Fields

The Analytics Sink automatically adds workflow context under the shipsec namespace:
  • organization_id: Organization that owns the workflow
  • run_id: Unique identifier for this workflow execution
  • workflow_id: ID of the workflow definition
  • workflow_name: Human-readable workflow name
  • component_id: Component type (e.g., core.analytics.sink)
  • node_ref: Node reference in the workflow graph
  • asset_key: Auto-detected or specified asset identifier

Example Queries

# Find all findings for an asset
asset_key: "api.example.com"

# Find new findings (first seen today)
finding_hash: X AND @timestamp: [now-1d TO now] AND NOT (finding_hash: X AND @timestamp: [* TO now-1d])

# All findings from a specific workflow run
shipsec.run_id: "run_abc123"

# Aggregate findings by scanner
scanner: * | stats count() by scanner

# Track recurring findings
finding_hash: "a1b2c3d4" | sort @timestamp
Nested objects in findings are automatically serialized to JSON strings to prevent OpenSearch field explosion (1000 field limit).
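That serialization step can be approximated with a small self-contained helper (a sketch of the described behavior, not the Analytics Sink's actual implementation):

```typescript
// Serialize nested object/array values to JSON strings so each finding
// contributes only top-level fields to the OpenSearch mapping.
function flattenForIndexing(
  finding: Record<string, unknown>,
): Record<string, unknown> {
  const doc: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(finding)) {
    doc[key] =
      value !== null && typeof value === 'object'
        ? JSON.stringify(value) // nested object or array becomes a string
        : value;                // primitives pass through unchanged
  }
  return doc;
}
```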

Docker Component Requirements

All Docker-based components run with PTY (pseudo-terminal) enabled by default in workflows. Your component MUST be designed for PTY mode.

Shell Wrapper Pattern (Required)

All Docker-based components MUST use a shell wrapper for PTY compatibility:
// ✅ CORRECT - Shell wrapper pattern
runner: {
  kind: 'docker',
  image: 'tool:latest',
  entrypoint: 'sh',                      // Shell wrapper
  command: ['-c', 'tool "$@"', '--'],    // Wraps CLI execution
  network: 'bridge',
}
// ❌ WRONG - Direct binary execution
runner: {
  kind: 'docker',
  image: 'tool:latest',
  entrypoint: 'tool',                    // No shell wrapper - will hang
  command: ['-read-stdin', '-output'],
}

Why Shell Wrappers?

  • TTY signal handling: the shell properly handles SIGTERM and SIGHUP
  • Clean exit: the shell ensures process cleanup
  • Buffering control: the shell manages stdout/stderr correctly
  • No stdin issues: the shell doesn’t wait for stdin input

Pattern Decision Tree

Does your Docker image have a shell (/bin/sh)?
├─ YES → Use Shell Wrapper Pattern
│         entrypoint: 'sh', command: ['-c', 'tool "$@"', '--']

└─ NO (Distroless) → Use Default Entrypoint Pattern
          Omit entrypoint, pass args via command: []
          The image's built-in ENTRYPOINT handles execution.

Distroless Images (Default Entrypoint Pattern)

Many ProjectDiscovery images (subfinder, dnsx, naabu, amass, notify) are distroless and do not contain /bin/sh. For these images, omit the entrypoint field entirely and let Docker use the image’s default entrypoint:
// ✅ CORRECT - Distroless image (no shell available)
runner: {
  kind: 'docker',
  image: 'ghcr.io/shipsecai/subfinder:latest',
  // No entrypoint — uses image default (/usr/local/bin/subfinder)
  command: [],
  network: 'bridge',
}
// ❌ WRONG - Distroless image with shell wrapper (exit code 127)
runner: {
  kind: 'docker',
  image: 'ghcr.io/shipsecai/subfinder:latest',
  entrypoint: 'sh',  // sh does not exist in distroless images!
  command: ['-c', 'subfinder "$@"', '--'],
}
In the execute() function, append tool arguments directly to command:
const runnerConfig: DockerRunnerConfig = {
  ...baseRunner,
  command: [...(baseRunner.command ?? []), ...toolArgs],
};
Distroless Go binaries (like ProjectDiscovery tools) handle PTY signals correctly. Verified with docker run --rm -t image args... — output streams and exits cleanly.

File System Access

All components that require file-based input/output MUST use the IsolatedContainerVolume utility for multi-tenant security.
For detailed patterns and security guarantees, see Isolated Volumes.

Quick Example

import { IsolatedContainerVolume } from '../../utils/isolated-volume';

async execute({ inputs }, context) {
  const tenantId = (context as any).tenantId ?? 'default-tenant';
  const volume = new IsolatedContainerVolume(tenantId, context.runId);

  try {
    await volume.initialize({
      'targets.txt': inputs.targets.join('\n')
    });

    const runnerConfig: DockerRunnerConfig = {
      ...this.runner,
      command: [...(this.runner.command ?? []), '-l', '/inputs/targets.txt'],
      volumes: [volume.getVolumeConfig('/inputs', true)]  // read-only
    };

    return await runComponentWithRunner(runnerConfig, parseOutput, inputs, context);
  } finally {
    await volume.cleanup();  // ALWAYS cleanup
  }
}

UI-Only Components

Components that are purely for UI purposes (documentation, notes):
const definition: ComponentDefinition<Input, void> = {
  id: 'core.ui.text',
  label: 'Text Block',
  category: 'input',
  runner: { kind: 'inline' },
  inputSchema,
  outputSchema: z.void(),
  metadata: {
    uiOnly: true,  // Excluded from workflow execution
  },
  async execute() {
    // No-op for UI-only components
  }
};

Testing

Unit Tests

Located alongside component: worker/src/components/<category>/__tests__/<component>.test.ts
import { describe, it, expect, spyOn } from 'bun:test';
import * as sdk from '@shipsec/component-sdk';
import { componentRegistry } from '../../index';

describe('my-component', () => {
  it('should process input correctly', async () => {
    const component = componentRegistry.get('my.component.id');
    
    const context = sdk.createExecutionContext({
      runId: 'test-run',
      componentRef: 'test-node',
    });
    
    // Mock the runner for Docker components
    spyOn(sdk, 'runComponentWithRunner').mockResolvedValue('mock output');
    
    const result = await component!.execute({
      inputs: { target: 'example.com' },
      params: {}
    }, context);
    
    expect(result).toBeDefined();
  });
});
Run: bun --cwd worker test

Integration Tests (Docker)

Same folder with -integration.test.ts. Uses real Docker containers.
const enableDocker = process.env.ENABLE_DOCKER_TESTS === 'true';
const dockerDescribe = enableDocker ? describe : describe.skip;

dockerDescribe('Component Integration', () => {
  // Tests that run real Docker containers
});
Run: ENABLE_DOCKER_TESTS=true bun --cwd worker test

Testing Checklist

  • Used entrypoint: 'sh' with command: ['-c', 'tool "$@"', '--']
  • Tested with docker run --rm -t (PTY mode)
  • Container exits cleanly without hanging
  • No stdin-dependent operations
  • Tool arguments appended after '--' in command array
  • Workflow run completes successfully

PTY Testing

# Test with PTY mode (what workflows use)
docker run --rm -t your-image:latest sh -c 'tool "$@"' -- -flag value

# Verify it doesn't wait for stdin
timeout 5 docker run --rm -t your-image:latest sh -c 'tool "$@"' -- --help

E2E Tests (Full Stack)

E2E tests validate your component works with the entire platform: Backend API, Worker, Temporal, and infrastructure. Located in e2e-tests/. These tests create real workflows via the API and execute them. Prerequisites:
# Start full local environment
just dev

# Verify services are running (via nginx)
curl http://localhost/api/v1/health -H "x-internal-token: local-internal-token"
Run E2E tests:
RUN_E2E=true bun --cwd e2e-tests test
Example E2E test pattern:
import { describe, test, expect } from 'bun:test';

const API_BASE = 'http://localhost/api/v1';
const HEADERS = {
  'Content-Type': 'application/json',
  'x-internal-token': 'local-internal-token',
};

// Only run when RUN_E2E=true and services are available
const runE2E = process.env.RUN_E2E === 'true';
const e2eDescribe = runE2E ? describe : describe.skip;

// Helper to poll workflow status until completion
async function pollRunStatus(runId: string, timeoutMs = 180000) {
  const startTime = Date.now();
  while (Date.now() - startTime < timeoutMs) {
    const res = await fetch(`${API_BASE}/workflows/runs/${runId}/status`, { headers: HEADERS });
    const status = await res.json();
    if (['COMPLETED', 'FAILED', 'CANCELLED'].includes(status.status)) {
      return status;
    }
    await new Promise(r => setTimeout(r, 1000));
  }
  throw new Error(`Timeout waiting for workflow ${runId}`);
}

e2eDescribe('My Component E2E', () => {
  test('should execute in a real workflow', async () => {
    // 1. Create workflow with your component
    const workflow = {
      name: 'Test: My Component',
      nodes: [
        {
          id: 'start',
          type: 'core.workflow.entrypoint',
          position: { x: 0, y: 0 },
          data: { label: 'Start', config: { runtimeInputs: [] } },
        },
        {
          id: 'my-node',
          type: 'my.component.id',  // Your component ID
          position: { x: 200, y: 0 },
          data: {
            label: 'My Component',
            config: { target: 'example.com' },
          },
        },
      ],
      edges: [{ id: 'e1', source: 'start', target: 'my-node' }],
    };

    // 2. Create workflow via API
    const createRes = await fetch(`${API_BASE}/workflows`, {
      method: 'POST',
      headers: HEADERS,
      body: JSON.stringify(workflow),
    });
    const { id: workflowId } = await createRes.json();

    // 3. Execute workflow
    const runRes = await fetch(`${API_BASE}/workflows/${workflowId}/run`, {
      method: 'POST',
      headers: HEADERS,
      body: JSON.stringify({ inputs: {} }),
    });
    const { runId } = await runRes.json();

    // 4. Poll until completion
    const result = await pollRunStatus(runId);
    
    // 5. Assert results
    expect(result.status).toBe('COMPLETED');
  }, 180000);  // 3 minute timeout for workflow execution
});
E2E tests are not run in CI yet. They require the full local environment (just dev) and are intended for manual validation during development.

Complete Example

import { z } from 'zod';
import { 
  componentRegistry, 
  defineComponent,
  DockerRunnerConfig,
  runComponentWithRunner,
  inputs,
  outputs,
  parameters,
  param,
  port,
} from '@shipsec/component-sdk';
import { IsolatedContainerVolume } from '../../utils/isolated-volume';

const definition = defineComponent({
  id: 'shipsec.dnsx.scan',
  label: 'DNSX Scanner',
  category: 'security',
  runner: {
    kind: 'docker',
    image: 'ghcr.io/shipsecai/dnsx:latest',
    entrypoint: 'sh',
    command: ['-c', 'dnsx "$@"', '--'],
    network: 'bridge',
  },
  
  inputs: inputs({
    domains: port(z.array(z.string()).min(1), { 
      label: 'Domains',
      connectionType: { kind: 'list', element: { kind: 'primitive', name: 'text' } }
    }),
  }),

  parameters: parameters({
    threads: param(z.number().default(10), {
      label: 'Threads',
      editor: 'number',
      min: 1,
      max: 100,
    }),
  }),

  outputs: outputs({
    results: port(z.array(z.object({
      domain: z.string(),
      records: z.array(z.string()),
    })), { label: 'Scan Results' }),
    rawOutput: port(z.string(), { label: 'Raw Stdout' }),
  }),
  
  async execute({ inputs, params }, context) {
    const tenantId = (context as any).tenantId ?? 'default-tenant';
    const volume = new IsolatedContainerVolume(tenantId, context.runId);

    try {
      context.emitProgress('Preparing input files...');
      await volume.initialize({
        'domains.txt': inputs.domains.join('\n')
      });

      const args = [
        '-l', '/inputs/domains.txt',
        '-json',
        '-t', String(params.threads),
        '-stream',
      ];

      const runnerConfig: DockerRunnerConfig = {
        ...this.runner,
        command: [...(this.runner.command ?? []), ...args],
        volumes: [volume.getVolumeConfig('/inputs', true)],
      };

      context.emitProgress('Running DNSX...');
      const output = await runComponentWithRunner(
        runnerConfig,
        async (stdout) => {
          const lines = stdout.split('\n').filter(Boolean);
          const results = lines.map(line => {
            const parsed = JSON.parse(line);
            return {
              domain: parsed.host,
              records: parsed.a || [],
            };
          });
          return { results, rawOutput: stdout };
        },
        inputs,
        context
      );

      context.logger.info(`Found ${output.results.length} results`);
      return output;

    } finally {
      await volume.cleanup();
    }
  }
});

componentRegistry.register(definition);
export default definition;

Questions?

  • File access patterns: See Isolated Volumes
  • SDK source: packages/component-sdk/src/
  • Example components: worker/src/components/security/
  • Bug reports: GitHub Issues