AI components in ShipSec Studio follow a Provider-Consumer architecture.
  1. Providers: Handle credentials, model selection, and API configuration (OpenAI, Gemini, OpenRouter).
  2. Consumers: Execute specific tasks (Text Generation or Autonomous Agents) using the configuration emitted by a Provider.

Providers

Provider nodes normalize credentials and model settings into a reusable LLM Provider Config.

OpenAI Provider

Configures access to OpenAI or OpenAI-compatible endpoints.
| Input | Type | Description |
| --- | --- | --- |
| apiKey | Secret | OpenAI API key (typically from a Secret Loader) |

| Parameter | Type | Description |
| --- | --- | --- |
| model | Select | gpt-5.2, gpt-5.1, gpt-5, gpt-5-mini |
| apiBaseUrl | Text | Optional override for the API base URL |

Gemini Provider

Configures access to Google’s Gemini models.
| Input | Type | Description |
| --- | --- | --- |
| apiKey | Secret | Google AI API key |

| Parameter | Type | Description |
| --- | --- | --- |
| model | Select | gemini-3-pro-preview, gemini-3-flash-preview, gemini-2.5-pro |
| apiBaseUrl | Text | Optional override for the API base URL |
| projectId | Text | Optional Google Cloud project identifier |

OpenRouter Provider

Configures access to multiple LLM providers through OpenRouter’s unified API.
| Input | Type | Description |
| --- | --- | --- |
| apiKey | Secret | OpenRouter API key |

| Parameter | Type | Description |
| --- | --- | --- |
| model | Text | Model slug (e.g., openrouter/auto, anthropic/claude-3.5-sonnet) |
| apiBaseUrl | Text | Optional override for the API base URL |
| httpReferer | Text | Application URL for OpenRouter ranking |
| appTitle | Text | Application title for OpenRouter ranking |

Consumers

Consumer nodes perform the actual AI work. They require a Provider Config output from one of the providers above.

AI Generate Text

Performs a one-shot chat completion.
| Input | Type | Description |
| --- | --- | --- |
| userPrompt | Text | The primary request or data to process |
| chatModel | Credential | Required. Connect a Provider output here |
| modelApiKey | Secret | Optional. Supersedes the API key in the Provider Config |

| Parameter | Type | Description |
| --- | --- | --- |
| systemPrompt | Textarea | Instructions that guide the model’s behavior |
| temperature | Number | Creativity vs. determinism (0.0 to 2.0) |
| maxTokens | Number | Maximum tokens to generate |

| Output | Type | Description |
| --- | --- | --- |
| responseText | Text | The assistant’s response |
| usage | JSON | Token consumption metadata |
| rawResponse | JSON | Full API response for debugging |

AI SDK Agent

An autonomous agent that uses reasoning steps and tool-calling to solve complex tasks.
| Input | Type | Description |
| --- | --- | --- |
| userInput | Text | The task or question for the agent |
| chatModel | Credential | Required. Connect a Provider output here |
| conversationState | JSON | Optional. Connect from a previous turn for memory |
| mcpTools | List | Optional. Connect tools from MCP Providers |

| Parameter | Type | Description |
| --- | --- | --- |
| systemPrompt | Textarea | Core identity and constraints for the agent |
| temperature | Number | Reasoning creativity (default 0.7) |
| stepLimit | Number | Max “Think -> Act -> Observe” loops (1-12) |
| memorySize | Number | Number of previous turns to retain in context |

| Output | Type | Description |
| --- | --- | --- |
| responseText | Text | Final answer after reasoning is complete |
| conversationState | JSON | Updated state to pass to the next agent node |
| reasoningTrace | JSON | Detailed step-by-step logs of the agent’s thoughts |
| agentRunId | Text | Unique session ID for tracking and streaming |
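The “Think -> Act -> Observe” loop bounded by stepLimit can be sketched as below. The model and tools are toy stand-ins (the real node calls an LLM), and all names here are illustrative, not the node’s internals.

```python
# Sketch of a stepLimit-bounded agent loop. `think` stands in for the LLM;
# `tools` maps action names to callables.
def run_agent(task: str, tools: dict, think, step_limit: int = 12):
    trace = []
    observation = task
    for _ in range(step_limit):
        thought = think(observation)                          # Think
        if thought["action"] == "finish":
            return thought["answer"], trace
        result = tools[thought["action"]](thought["input"])   # Act
        trace.append({"thought": thought, "observation": result})
        observation = result                                  # Observe
    return "step limit reached", trace

# Toy "model": look the IP up once, then finish with a verdict.
def toy_think(observation):
    if observation.startswith("Investigate"):
        return {"action": "lookup", "input": "203.0.113.5"}
    return {"action": "finish", "answer": f"verdict: {observation}"}

answer, trace = run_agent(
    "Investigate 203.0.113.5",
    {"lookup": lambda ip: "benign"},
    toy_think,
    step_limit=3,
)
```

The accumulated `trace` corresponds to the reasoningTrace output; returning early on a finish action is what keeps the loop under stepLimit.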

MCP Tools (Model Context Protocol)

ShipSec Studio supports the Model Context Protocol (MCP), allowing AI agents to interact with external tools over HTTP.

MCP HTTP Tools

Exposes a set of tools from a remote HTTP server that implements the MCP contract.
| Input | Type | Description |
| --- | --- | --- |
| endpoint | Text | The HTTP URL where the MCP server is hosted |
| headersJson | Text | Optional JSON of headers (e.g., auth tokens) |
| tools | JSON | List of tool definitions available on that endpoint |

| Parameter | Type | Description |
| --- | --- | --- |
| endpoint | Text | Destination URL for tool execution |
| tools | JSON | Array of tools with id, title, and arguments |

| Output | Type | Description |
| --- | --- | --- |
| tools | List | Normalized MCP tool definitions for the AI Agent |
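A sketch of what the normalization step might look like: tool definitions carry id, title, and arguments per the table above, while the exact output shape and the endpoint URL are assumptions for illustration.

```python
import json

# Sketch: attach the endpoint and parsed headers to each tool definition so
# the agent gets a flat, self-describing list. Output shape is illustrative.
def normalize_mcp_tools(endpoint: str, headers_json: str, tools: list) -> list:
    headers = json.loads(headers_json) if headers_json else {}
    return [
        {
            "id": tool["id"],
            "title": tool.get("title", tool["id"]),
            "arguments": tool.get("arguments", []),
            "endpoint": endpoint,
            "headers": headers,
        }
        for tool in tools
    ]

normalized = normalize_mcp_tools(
    "https://mcp.example.com/tools",          # hypothetical endpoint
    '{"Authorization": "Bearer TOKEN"}',
    [{"id": "whois", "title": "WHOIS Lookup"}],
)
```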

MCP Tool Merge

Combines multiple MCP tool lists into a single consolidated list.
| Input | Type | Description |
| --- | --- | --- |
| toolsA, toolsB | List | Multiple upstream MCP tool outputs |

| Parameter | Type | Description |
| --- | --- | --- |
| slots | JSON | Configure additional input ports for merging |

| Output | Type | Description |
| --- | --- | --- |
| tools | List | De-duplicated list of tools ready for an agent |
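The merge behavior amounts to concatenation plus de-duplication. A minimal sketch, assuming tools are de-duplicated by their id with the first occurrence winning:

```python
# Sketch of MCP Tool Merge: concatenate upstream lists, drop duplicate ids,
# keep the first occurrence of each tool.
def merge_tools(*tool_lists: list) -> list:
    seen, merged = set(), []
    for tools in tool_lists:
        for tool in tools:
            if tool["id"] not in seen:
                seen.add(tool["id"])
                merged.append(tool)
    return merged

tools_a = [{"id": "whois"}, {"id": "geoip"}]
tools_b = [{"id": "geoip"}, {"id": "splunk_search"}]
merged = merge_tools(tools_a, tools_b)
```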

Use Cases

Automated Alert Triage

Flow: Provider → AI Generate Text
Analyze incoming security alerts to filter out false positives.
Prompt: “Given this alert payload: , determine if it’s a real threat or noise.”

Investigative Agent

Flow: Provider + MCP Tools → AI Agent
An agent that searches through logs and performs lookups to investigate a specific IP address.
Task: “Investigate the IP using the available Splunk and VirusTotal tools.”

Best Practices

The Provider Concept: Always place a Provider node (OpenAI/Gemini/OpenRouter) at the start of your AI chain. This allows you to swap models or providers for the entire workflow by changing just one node.

Prompt Engineering

  1. Format Outputs: If you need JSON for a downstream node, ask for it explicitly in the prompt: “Return only valid JSON with fields ‘risk’ and ‘reason’.”
  2. Use System Prompts: Set high-level rules (e.g., “You are a senior security researcher”) in the System Prompt parameter instead of the User Input.
  3. Variable Injection: Use {{variableName}} syntax to inject data from upstream nodes into your prompts.
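Variable injection can be sketched as simple template substitution. The real substitution is done by the workflow engine; the variable name `alertJson` and the leave-unknowns-untouched behavior are assumptions for illustration.

```python
import re

# Sketch of {{variableName}} injection: replace each placeholder with the
# matching upstream value; unknown placeholders are left as-is.
def render_prompt(template: str, variables: dict) -> str:
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

prompt = render_prompt(
    "Given this alert payload: {{alertJson}}, return only valid JSON "
    "with fields 'risk' and 'reason'.",
    {"alertJson": '{"src_ip": "203.0.113.5"}'},
)
```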

Memory & State

For multi-turn conversations, always loop the conversationState output of the AI Agent back into the conversationState input of the next agent invocation (or store it in a persistent variable).
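The loop described above can be sketched as follows, with a placeholder reply standing in for the model call; the message-list state shape and function name are assumptions, not the node’s real internals.

```python
# Sketch: thread conversationState from one agent turn into the next, and
# trim to the last memory_size turns (2 messages per turn).
def agent_turn(user_input: str, conversation_state=None, memory_size: int = 4):
    state = list(conversation_state or [])
    state.append({"role": "user", "content": user_input})
    reply = f"ack: {user_input}"  # placeholder for the real LLM response
    state.append({"role": "assistant", "content": reply})
    return reply, state[-2 * memory_size:]

# Turn 1 starts fresh; turn 2 receives the state emitted by turn 1.
reply1, state = agent_turn("Who owns 203.0.113.5?")
reply2, state = agent_turn("Is it on any blocklist?", conversation_state=state)
```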