AI components in ShipSec Studio follow a Provider-Consumer architecture.
- Providers: Handle credentials, model selection, and API configuration (OpenAI, Gemini, OpenRouter).
- Consumers: Execute specific tasks (Text Generation or Autonomous Agents) using the configuration emitted by a Provider.
## Providers
Provider nodes normalize credentials and model settings into a reusable LLM Provider Config.
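The exact config schema is internal to ShipSec Studio, but a minimal sketch (in TypeScript, with all field names assumed) of what a normalized config might carry:

```typescript
// Hypothetical sketch only: field names are assumptions, not ShipSec
// Studio's actual internal schema.
interface LLMProviderConfig {
  provider: "openai" | "gemini" | "openrouter"; // which Provider node emitted it
  model: string;                                // selected model, e.g. "gpt-5-mini"
  apiKey: string;                               // resolved from a Secret input
  apiBaseUrl?: string;                          // optional endpoint override
  extras?: Record<string, string>;              // provider-specific settings (projectId, httpReferer, ...)
}
```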
### OpenAI Provider
Configures access to OpenAI or OpenAI-compatible endpoints.
| Input | Type | Description |
|---|---|---|
| apiKey | Secret | OpenAI API key (typically from a Secret Loader) |

| Parameter | Type | Description |
|---|---|---|
| model | Select | gpt-5.2, gpt-5.1, gpt-5, gpt-5-mini |
| apiBaseUrl | Text | Optional override for the API base URL |
### Gemini Provider
Configures access to Google’s Gemini models.
| Input | Type | Description |
|---|---|---|
| apiKey | Secret | Google AI API key |

| Parameter | Type | Description |
|---|---|---|
| model | Select | gemini-3-pro-preview, gemini-3-flash-preview, gemini-2.5-pro |
| apiBaseUrl | Text | Optional override for the API base URL |
| projectId | Text | Optional Google Cloud project identifier |
### OpenRouter Provider
Configures access to multiple LLM providers through OpenRouter’s unified API.
| Input | Type | Description |
|---|---|---|
| apiKey | Secret | OpenRouter API key |

| Parameter | Type | Description |
|---|---|---|
| model | Text | Model slug (e.g., openrouter/auto, anthropic/claude-3.5-sonnet) |
| apiBaseUrl | Text | Optional override for the API base URL |
| httpReferer | Text | Application URL for OpenRouter ranking |
| appTitle | Text | Application title for OpenRouter ranking |
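The httpReferer and appTitle parameters map onto OpenRouter's documented HTTP-Referer and X-Title request headers. A sketch of the request the node plausibly issues (the endpoint and headers follow OpenRouter's public API; the node's internal wiring is an assumption):

```typescript
// Sketch: how httpReferer / appTitle plausibly become OpenRouter's ranking
// headers. The node's internal request logic is assumed, not documented.
async function callOpenRouter(apiKey: string, httpReferer: string, appTitle: string) {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "HTTP-Referer": httpReferer, // application URL used for ranking
      "X-Title": appTitle,         // application title used for ranking
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "openrouter/auto",
      messages: [{ role: "user", content: "ping" }],
    }),
  });
  return res.json();
}
```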
## Consumers
Consumer nodes perform the actual AI work. They require a Provider Config output from one of the providers above.
### AI Generate Text
Performs a one-shot chat completion.
| Input | Type | Description |
|---|---|---|
| userPrompt | Text | The primary request or data to process |
| chatModel | Credential | Required. Connect a Provider output here |
| modelApiKey | Secret | Optional. Supersedes the API key in the Provider Config |
| Parameter | Type | Description |
|---|---|---|
| systemPrompt | Textarea | Instructions that guide the model’s behavior |
| temperature | Number | Creativity vs. determinism (0.0 to 2.0) |
| maxTokens | Number | Maximum tokens to generate |

| Output | Type | Description |
|---|---|---|
| responseText | Text | The assistant’s response |
| usage | JSON | Token consumption metadata |
| rawResponse | JSON | Full API response for debugging |
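Under the hood this amounts to a single chat-completion call. A sketch, assuming an OpenAI-compatible /chat/completions endpoint (the mapping of node inputs and outputs onto the request is an assumption, not the node's actual implementation):

```typescript
// Sketch of the one-shot completion, assuming an OpenAI-compatible endpoint.
async function generateText(
  cfg: { apiKey: string; apiBaseUrl?: string; model: string },
  systemPrompt: string,
  userPrompt: string,
) {
  const base = cfg.apiBaseUrl ?? "https://api.openai.com/v1";
  const res = await fetch(`${base}/chat/completions`, {
    method: "POST",
    headers: { Authorization: `Bearer ${cfg.apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model: cfg.model,
      messages: [
        { role: "system", content: systemPrompt }, // systemPrompt parameter
        { role: "user", content: userPrompt },     // userPrompt input
      ],
      temperature: 0.2, // temperature parameter
      max_tokens: 1024, // maxTokens parameter
    }),
  });
  const data = await res.json();
  return {
    responseText: data.choices[0].message.content, // responseText output
    usage: data.usage,                             // usage output
    rawResponse: data,                             // rawResponse output
  };
}
```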
### AI SDK Agent
An autonomous agent that uses reasoning steps and tool-calling to solve complex tasks.
| Input | Type | Description |
|---|---|---|
| userInput | Text | The task or question for the agent |
| chatModel | Credential | Required. Connect a Provider output here |
| conversationState | JSON | Optional. Connect from a previous turn for memory |
| mcpTools | List | Optional. Connect tools from MCP Providers |
| Parameter | Type | Description |
|---|---|---|
| systemPrompt | Textarea | Core identity and constraints for the agent |
| temperature | Number | Reasoning creativity (default 0.7) |
| stepLimit | Number | Max “Think → Act → Observe” loops (1-12) |
| memorySize | Number | Number of previous turns to retain in context |

| Output | Type | Description |
|---|---|---|
| responseText | Text | Final answer after reasoning is complete |
| conversationState | JSON | Updated state to pass to the next agent node |
| reasoningTrace | JSON | Detailed step-by-step logs of the agent’s thoughts |
| agentRunId | Text | Unique session ID for tracking and streaming |
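Conceptually, the agent runs a bounded loop: on each step the model either calls a tool or emits a final answer. A sketch of that loop (callModel and runTool are hypothetical stand-ins for internals that are not public):

```typescript
// Hypothetical sketch of the bounded Think → Act → Observe loop.
type Turn = { content: string; toolCall?: { id: string; args: unknown } };
declare function callModel(history: string[]): Promise<Turn>;
declare function runTool(call: { id: string; args: unknown }): Promise<string>;

async function runAgent(userInput: string, stepLimit = 12): Promise<string> {
  const history = [userInput];
  for (let step = 0; step < stepLimit; step++) {
    const turn = await callModel(history);              // Think
    if (turn.toolCall) {
      const observation = await runTool(turn.toolCall); // Act
      history.push(observation);                        // Observe
      continue;
    }
    return turn.content; // final responseText
  }
  return "Step limit reached before a final answer was produced.";
}
```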
## MCP Tools (Model Context Protocol)
ShipSec Studio supports the Model Context Protocol (MCP), allowing AI agents to interact with external tools over HTTP.
### MCP Provider
Exposes a set of tools from a remote HTTP server that implements the MCP contract.
| Input | Type | Description |
|---|---|---|
| endpoint | Text | The HTTP URL where the MCP server is hosted |
| headersJson | Text | Optional JSON of headers (e.g., Auth tokens) |
| tools | JSON | List of tool definitions available on that endpoint |

| Parameter | Type | Description |
|---|---|---|
| endpoint | Text | Destination URL for tool execution |
| tools | JSON | Array of tools with id, title, and arguments |

| Output | Type | Description |
|---|---|---|
| tools | List | Normalized MCP tool definitions for the AI Agent |
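As a rough illustration, the emitted tool list might look like the following (the shape follows the id/title/arguments fields from the parameters table; the concrete entries are hypothetical):

```typescript
// Illustrative only: shape follows the id/title/arguments fields above;
// the tool entries and header values are hypothetical.
interface McpToolDef {
  id: string;                         // unique tool identifier
  title: string;                      // human-readable name shown to the agent
  arguments: Record<string, unknown>; // argument schema for the tool
}

const headersJson = JSON.stringify({ Authorization: "Bearer <token>" }); // headersJson input
const tools: McpToolDef[] = [
  { id: "splunk.search", title: "Search Splunk", arguments: { query: { type: "string" } } },
];
```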
A companion merge node combines multiple MCP tool lists into a single consolidated list.
| Input | Type | Description |
|---|---|---|
| toolsA, toolsB | List | Multiple upstream MCP tool outputs |

| Parameter | Type | Description |
|---|---|---|
| slots | JSON | Configure additional input ports for merging |

| Output | Type | Description |
|---|---|---|
| tools | List | De-duplicated list of tools ready for an agent |
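De-duplication presumably keys on the tool id; a minimal sketch, reusing the McpToolDef shape sketched above:

```typescript
// Sketch: merge any number of tool lists, de-duplicating by id (assumed key).
function mergeTools(...lists: McpToolDef[][]): McpToolDef[] {
  const byId = new Map<string, McpToolDef>();
  for (const list of lists) {
    for (const tool of list) byId.set(tool.id, tool); // later slots win on collision
  }
  return [...byId.values()];
}
```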
## Use Cases
### Automated Alert Triage
Flow: Provider → AI Generate Text
Analyze incoming security alerts to filter out false positives.
Prompt: “Given this alert payload: {{alert}}, determine if it’s a real threat or noise.”
### Investigative Agent
Flow: Provider + MCP Provider → AI SDK Agent
An agent that searches through logs and performs lookups to investigate a specific IP address.
Task: “Investigate the IP using the available Splunk and VirusTotal tools.”
## Best Practices
### The Provider Concept
Always place a Provider node (OpenAI/Gemini/OpenRouter) at the start of your AI chain. This allows you to swap models or providers for the entire workflow by changing just one node.
### Prompt Engineering
- Format Outputs: If you need JSON for a downstream node, ask for it explicitly in the prompt: “Return only valid JSON with fields ‘risk’ and ‘reason’.”
- Use System Prompts: Set high-level rules (e.g., “You are a senior security researcher”) in the System Prompt parameter instead of the User Input.
- Variable Injection: Use {{variableName}} syntax to inject data from upstream nodes into your prompts (see the sketch below).
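For illustration, the substitution amounts to a simple template render (a sketch; Studio's actual templating engine is internal):

```typescript
// Sketch of {{variableName}} substitution; the real templating engine is
// internal to ShipSec Studio.
function renderPrompt(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_, name: string) => vars[name] ?? "");
}

renderPrompt("Given this alert payload: {{alert}}, determine if it’s a real threat or noise.", {
  alert: JSON.stringify({ src: "10.0.0.5", rule: "port-scan" }),
});
```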
### Memory & State
For multi-turn conversations, always loop the conversationState output of the AI Agent back into the conversationState input of the next agent invocation (or store it in a persistent variable).
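A sketch of the looping pattern (runAgentNode is a hypothetical stand-in for invoking the AI SDK Agent node):

```typescript
// Sketch: thread conversationState between turns. runAgentNode is a
// hypothetical stand-in for the AI SDK Agent node.
declare function runAgentNode(input: {
  userInput: string;
  conversationState?: unknown;
}): Promise<{ responseText: string; conversationState: unknown }>;

let state: unknown;
for (const question of ["Who owns 10.0.0.5?", "Has it been flagged before?"]) {
  const out = await runAgentNode({ userInput: question, conversationState: state });
  state = out.conversationState; // loop the updated state into the next turn
  console.log(out.responseText);
}
```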