OpenAI Responses API Generation

Overview

You can use this Snap to generate LLM output using the specified model and model parameters.


Limitations

  • The Previous response ID cannot be used in an organization that has Zero Data Retention enabled, because the response is not stored. Learn more.
  • Feature availability may vary by model. For instance, some models, such as reasoning models, do not support web search. Learn more.

Snap views

View Description Examples of upstream and downstream Snaps
Input This Snap has zero to one document input view. Prompt Generator Snap
Output This Snap has exactly one document output view. Mapper
Error

Error handling is a generic way to handle errors without losing data or failing the Snap execution. You can handle the errors that the Snap might encounter when running the pipeline by choosing one of the following options from the When errors occur list under the Views tab. The available options are:

  • Stop Pipeline Execution: Stops the current pipeline execution when an error occurs.
  • Discard Error Data and Continue: Ignores the error, discards that record, and continues with the remaining records.
  • Route Error Data to Error View: Routes the error data to an error view without stopping the Snap execution.

Learn more about Error handling in Pipelines.

Snap settings

Legend:
  • Expression icon: Allows using JavaScript syntax to access SnapLogic Expressions to set field values dynamically (if enabled). If disabled, you can provide a static value. Learn more.
  • SnapGPT: Generates SnapLogic Expressions based on natural language using SnapGPT. Learn more.
  • Suggestion icon: Populates a list of values dynamically based on your Snap configuration. You can select only one attribute at a time using the icon. Type into the field if it supports a comma-separated list of values.
  • Upload icon: Uploads files. Learn more.
Learn more about the icons in the Snap settings dialog.
Field / Field set Type Description
Label String

Required. Specify a unique name for the Snap. Modify this to be more specific, especially if there is more than one of the same Snap in the pipeline.

Default value: OpenAI Responses API Generation

Example: Generate Response

Model name String/Expression/ Suggestion

Required. Specify the model name to use for the Responses API.

Default value:

Example: gpt-4

Message String/Expression

Required. Specify the message string or list of input items to send as input to the responses endpoint.

Default value: N/A

Example: $messages
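
The Message value can be a plain string or, when passed as an expression, a list of Responses API input items. A minimal sketch of a list-of-items value (the roles and content are illustrative):

  [
    {"role": "system", "content": "You are a helpful support assistant."},
    {"role": "user", "content": "Summarize the customer ticket in three bullet points."}
  ]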
Previous response ID String/Expression

The unique ID of the previous response to the model. Use this to create multi-turn conversations.

Default value: N/A

Example:

resp_685bd91eed58819898936ac8a0ba237e05433b6b77d284a7

(expression) $prev_resp_id
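
The previous response ID is the id field of a response generated earlier in the conversation. As an illustration, assuming the earlier response document is available upstream, the relevant portion looks like the following, and an expression such as $prev_resp_id can carry it into this field:

  {
    "id": "resp_685bd91eed58819898936ac8a0ba237e05433b6b77d284a7",
    "object": "response"
  }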

Output handling

Settings for response output management.

Store Checkbox/Expression

Select this checkbox to store the generated model response.

Default status: Selected

Model parameters

Parameters used to tune the model runtime.

Temperature Decimal/Expression

The sampling temperature to use, a decimal value between 0 and 2. If left blank, the endpoint uses its default value.

Default value: N/A

Example: 0.2
Reasoning effort Dropdown list/Expression

Required. Reasoning effort level for the selected model. Currently supported only for OpenAI o-series models.

  • medium
  • low
  • high
Default value:

Example: medium

Reasoning summary Dropdown list/Expression

A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process.

  • none
  • auto
  • detailed
Default value: none

Example: none

Maximum output tokens Integer/Expression

Maximum number of tokens that can be generated for a response, including visible output tokens and reasoning tokens. If left blank, the endpoint uses its default value.

Default value: N/A

Example: 50
Top P Decimal/Expression

Nucleus sampling value, a decimal value between 0 and 1. If left blank, the endpoint uses its default value.

Default value: N/A

Example: 0.2
Advanced prompt configurations

Modify the prompt settings to guide the model responses and optimize output processing.

Instructions String/Expression

Specify the persona for the model to adopt in its responses. When used along with Previous response ID, the instructions from the previous response are not carried over to the next response.

Default value: N/A

Example: Provide concise, bullet‑point answers
JSON mode Checkbox/Expression
Select this checkbox to enable the model to generate strings that can be parsed into valid JSON objects. The output includes the parsed JSON object in a field named json_output that contains the data.
Note:
  • This field does not support input values from the upstream Snap.
  • When you select this checkbox, ensure the word JSON is included in the prompt, either in the Message field or the Instructions field. Otherwise, the Snap results in an error.

  • When you select JSON mode, the Continuation requests checkbox is hidden, as this feature is not supported in JSON mode.

Default status: Deselected
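
For example, a message that satisfies the JSON requirement could be worded as follows (an illustrative prompt, not required wording):

  Summarize the following review and respond in JSON with the keys "sentiment" and "summary".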

Advanced response configurations

Modify the response settings to customize the responses and optimize output processing.

Structured outputs String/Expression

Ensures the model always returns outputs that match your defined JSON Schema.

Default value: N/A

Example: $response_format.json_schema
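The value is a JSON Schema definition for the expected output. A minimal sketch following OpenAI's json_schema format for structured outputs (the schema name and properties are illustrative; an expression such as $response_format.json_schema can supply it from upstream):

  {
    "name": "ticket_summary",
    "strict": true,
    "schema": {
      "type": "object",
      "properties": {
        "title": {"type": "string"},
        "priority": {"type": "string", "enum": ["low", "medium", "high"]}
      },
      "required": ["title", "priority"],
      "additionalProperties": false
    }
  }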
Truncation Dropdown list/Expression

The truncation strategy to use for the model response.

  • disabled
  • auto
Default value:

Example: disabled
Include reasoning encrypted content Checkbox/Expression

Select this checkbox to include an encrypted version of reasoning tokens in the output. This allows reasoning items to be used when store is false.

Default status: Deselected

Simplify response Checkbox/Expression

Select this checkbox to receive a simplified response format that retains only the most commonly used fields and standardizes the output for compatibility with other models. The content field contains the aggregated text output from all output_text items in the output array.

Default status: Deselected

Continuation requests Checkbox/Expression

Enable this option to automatically request additional responses when the incomplete_details.reason is max_output_tokens. This feature requires Store to be enabled. It is not supported when using JSON mode, reasoning models, or built-in tools.

Default status: Deselected

Continuation requests limit Integer/Expression

Required. The maximum number of continuation requests to be made.

Default value:

Example: 1
Debug mode Checkbox/Expression

Select this checkbox to enable debug mode. This mode provides the raw response in the _sl_response field and is recommended for debugging purposes only.

Default status: Deselected

Built-in tools

Configure the built-in tools to use.

Score threshold Decimal/Expression

The score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results.

Default value: N/A

Example: 0.7
Web search Checkbox/Expression

Select this checkbox to allow the model to search the web for the latest information.

Default status: Deselected

Web search type String/Expression/ Suggestion

Required. Select the type of web search tool.

Default value:

Example: web_search_preview
Search context size Dropdown list/Expression

High-level guidance for the amount of context window space to use for the search.

  • medium
  • low
  • high
Default value:

Example: medium
File search Checkbox/Expression

Select this checkbox to allow the model to search files.

Default status: Deselected

Vector store
User location Object

An approximate user location to refine search results based on geography.

Default value: N/A

Example: San Francisco
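When supplied as an expression, the value is an object. A minimal sketch following OpenAI's approximate user location format for web search (all fields are optional and illustrative):

  {
    "type": "approximate",
    "country": "US",
    "region": "California",
    "city": "San Francisco",
    "timezone": "America/Los_Angeles"
  }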
Include file search results Checkbox/Expression

Select this checkbox to include the file search results in the response.

Default status: Deselected

Maximum number of file search results Integer/Expression

The maximum number of file search results to return. This number must be between 1 and 50.

Default value: N/A

Example: 3
Ranker String/Expression

The ranker to use for the file search.

Default value: auto

Example: auto
Filters String/Expression

The filters to apply to the file search.

Default value: N/A

Example: (expression) {"type": "eq", "key": "Region", "value": "US"}
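Filters follow OpenAI's file search filter format: a comparison filter tests a metadata key against a value, and comparison filters can be combined into a compound filter. A sketch of a compound filter (the attribute keys are illustrative):

  {
    "type": "and",
    "filters": [
      {"type": "eq", "key": "Region", "value": "US"},
      {"type": "gte", "key": "year", "value": 2023}
    ]
  }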
Vector store IDs array

Required. The IDs of the vector stores to be used.

Default value: N/A

Example: vs_hovZYr4f8Y1dxvk7L9rohHEi
Important: The Vector store IDs field also supports a list of IDs. To use this, enable the expression and provide a list, such as ["vs_682c22b077ac8191b5d671adda994d11", "vs_hovZYr4f8Y1dxvk7L9rohHEi"].
Advanced tool configuration

Modify the tool call settings to guide the model responses and optimize output processing.

Tool choice Dropdown list/Expression

Controls which (if any) tool is called by the model.

  • REQUIRED
  • SPECIFY A BUILT-IN TOOL
  • AUTO
  • NONE
Default value: AUTO

Example: AUTO
Built-in tool Dropdown list/Expression

Required. Select the built-in tool to use.

  • WEB SEARCH
  • FILE SEARCH

Default value: N/A

Example: WEB SEARCH
Snap execution Dropdown list
Choose one of the three modes in which the Snap executes. Available options are:
  • Validate & Execute: Performs limited execution of the Snap and generates a data preview during pipeline validation. Subsequently, performs full execution of the Snap (unlimited records) during pipeline runtime.
  • Execute only: Performs full execution of the Snap during pipeline execution without generating preview data.
  • Disabled: Disables the Snap and all Snaps that are downstream from it.

Default value: Validate & Execute

Example: Execute only