API Version: v1
Server Variables:
| Variable | Default | Description |
|---|---|---|
| endpoint | https://{your-resource-name}.openai.azure.com | Supported Azure OpenAI endpoints (protocol and hostname), for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name. |
Authentication
API key
Pass an API key with the api-key header.
Auth tokens
Pass an auth token with the authorization header.
OAuth2 auth (oauth2)
Flow: implicit
Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
Scopes:
https://ai.azure.com/.default
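As a sketch of the two header styles above, these helpers build the `api-key` and `authorization` headers; the key and token values are placeholders, and which style applies depends on how your resource is configured:

```python
def api_key_headers(key: str) -> dict:
    """Headers for API-key authentication (api-key header)."""
    return {"api-key": key}


def bearer_headers(token: str) -> dict:
    """Headers for token-based authentication (authorization header)."""
    return {"authorization": f"Bearer {token}"}


print(api_key_headers("<your-api-key>"))
print(bearer_headers("<your-auth-token>"))
```

Pass the resulting dict as the headers of each request shown in the operations below.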
Batch
Create batch
POST {endpoint}/openai/v1/batches
Creates and executes a batch from an uploaded file of requests.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname), for example: https://{your-resource-name}.openai.azure.com. Replace {your-resource-name} with your Azure OpenAI resource name. |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
Request Header
| Name | Required | Type | Description |
|---|---|---|---|
| accept | True | string | Possible values: application/json |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| completion_window | enum | The time frame within which the batch should be processed. Currently only 24h is supported. Possible values: 24h | Yes | |
| endpoint | enum | The endpoint to be used for all requests in the batch. Currently /v1/chat/completions is supported. Possible values: /v1/chat/completions, /v1/embeddings | Yes | |
| input_file_id | string | The ID of an uploaded file that contains requests for the new batch. Your input file must be formatted as a JSONL file, and must be uploaded with the purpose batch. | No | |
Responses
Status Code: 201
Description: The request has succeeded and a new resource has been created as a result.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
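A minimal sketch of assembling a Create batch request from the fields above. The resource name and file ID are placeholders, and actually sending the request requires an HTTP client (for example, requests), which is omitted here:

```python
import json


def build_create_batch_request(endpoint: str, input_file_id: str) -> tuple:
    """Return the (url, body) pair for POST {endpoint}/openai/v1/batches."""
    url = f"{endpoint}/openai/v1/batches?api-version=v1"
    body = json.dumps({
        "input_file_id": input_file_id,       # ID of a file uploaded with purpose "batch"
        "endpoint": "/v1/chat/completions",   # currently supported batch endpoint
        "completion_window": "24h",           # only supported value
    })
    return url, body


url, body = build_create_batch_request(
    "https://{your-resource-name}.openai.azure.com", "<your-input-file-id>"
)
```

On success the service responds with 201 and the new batch object.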
List batches
GET {endpoint}/openai/v1/batches
List your organization's batches.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname), for example: https://{your-resource-name}.openai.azure.com. Replace {your-resource-name} with your Azure OpenAI resource name. |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| after | query | No | string | A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list. |
| limit | query | No | integer | A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20. |
Request Header
| Name | Required | Type | Description |
|---|---|---|---|
| accept | True | string | Possible values: application/json |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ListBatchesResponse |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
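The after cursor described above can be used to walk the full batch list page by page. A sketch, where fetch_page stands in for the actual HTTP GET against {endpoint}/openai/v1/batches (the page shape with data and has_more follows the usual OpenAI list response; treat it as an assumption here):

```python
def list_all_batches(fetch_page, limit: int = 20) -> list:
    """Collect every batch by following the `after` cursor until the last page."""
    items, after = [], None
    while True:
        page = fetch_page(limit=limit, after=after)  # GET .../batches?limit=...&after=...
        items.extend(page["data"])
        if not page.get("has_more"):
            break
        after = page["data"][-1]["id"]               # cursor is the last object's ID
    return items


# Demonstration with canned pages instead of real HTTP responses:
pages = {
    None: {"data": [{"id": "b1"}, {"id": "b2"}], "has_more": True},
    "b2": {"data": [{"id": "b3"}], "has_more": False},
}


def fake_fetch(limit, after):
    return pages[after]


ids = [b["id"] for b in list_all_batches(fake_fetch)]
```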
Retrieve batch
GET {endpoint}/openai/v1/batches/{batch_id}
Retrieves a batch.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname), for example: https://{your-resource-name}.openai.azure.com. Replace {your-resource-name} with your Azure OpenAI resource name. |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| batch_id | path | Yes | string | The ID of the batch to retrieve. |
Request Header
| Name | Required | Type | Description |
|---|---|---|---|
| accept | True | string | Possible values: application/json |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Cancel batch
POST {endpoint}/openai/v1/batches/{batch_id}/cancel
Cancels an in-progress batch.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname), for example: https://{your-resource-name}.openai.azure.com. Replace {your-resource-name} with your Azure OpenAI resource name. |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| batch_id | path | Yes | string | The ID of the batch to cancel. |
Request Header
| Name | Required | Type | Description |
|---|---|---|---|
| accept | True | string | Possible values: application/json |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
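Both the retrieve and cancel operations above address a batch by its ID. A small sketch of the URL construction (the resource endpoint is a placeholder):

```python
def batch_url(endpoint: str, batch_id: str, cancel: bool = False) -> str:
    """Build the URL for GET .../batches/{batch_id} or POST .../batches/{batch_id}/cancel."""
    url = f"{endpoint}/openai/v1/batches/{batch_id}"
    return url + "/cancel" if cancel else url


print(batch_url("https://{your-resource-name}.openai.azure.com", "<batch-id>"))
print(batch_url("https://{your-resource-name}.openai.azure.com", "<batch-id>", cancel=True))
```

Retrieval is a GET; cancellation is a POST with no body.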
Chat
Create chat completion
POST {endpoint}/openai/v1/chat/completions
Creates a chat completion.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname), for example: https://{your-resource-name}.openai.azure.com. Replace {your-resource-name} with your Azure OpenAI resource name. |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| audio | OpenAI.CreateChatCompletionRequestAudio or null | Parameters for audio output. Required when audio output is requested with modalities: ["audio"]. | No | |
| frequency_penalty | number or null | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | No | |
| function_call | string or OpenAI.ChatCompletionFunctionCallOption | Deprecated in favor of tool_choice. Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via {"name": "my_function"} forces the model to call that function. none is the default when no functions are present; auto is the default if functions are present. | No | |
| functions | array of OpenAI.ChatCompletionFunctions | Deprecated in favor of tools. A list of functions the model may generate JSON inputs for. | No | |
| logit_bias | object or null | Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. | No | |
| logprobs | boolean or null | Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message. | No | |
| max_completion_tokens | integer or null | An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens. | No | |
| max_tokens | integer or null | The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API. This value is now deprecated in favor of max_completion_tokens, and is not compatible with o1 series models. | No | |
| messages | array of OpenAI.ChatCompletionRequestMessage | A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, images, and audio. | Yes | |
| metadata | OpenAI.Metadata or null | | No | |
| modalities | OpenAI.ResponseModalities | Output types that you would like the model to generate. Most models are capable of generating text, which is the default: ["text"]. The gpt-4o-audio-preview model can also be used to generate audio. To request that this model generate both text and audio responses, you can use: ["text", "audio"]. | No | |
| model | string | Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models. | Yes | |
| n | integer or null | How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs. | No | |
| parallel_tool_calls | OpenAI.ParallelToolCalls | Whether to enable parallel function calling during tool use. | No | |
| prediction | OpenAI.PredictionContent | Static predicted output content, such as the content of a text file that is being regenerated. | No | |
| └─ content | string or array of OpenAI.ChatCompletionRequestMessageContentPartText | The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly. | Yes | |
| └─ type | enum | The type of the predicted content you want to provide. This type is currently always content. Possible values: content | Yes | |
| presence_penalty | number or null | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | No | |
| prompt_cache_key | string | Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. | No | |
| prompt_cache_retention | string or null | | No | |
| reasoning_effort | OpenAI.ReasoningEffort | Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. gpt-5.1 defaults to none, which does not perform reasoning; its supported values are none, low, medium, and high, and tool calls are supported for all of them. All models before gpt-5.1 default to medium reasoning effort and do not support none. The gpt-5-pro model defaults to (and only supports) high reasoning effort. xhigh is supported for all models after gpt-5.1-codex-max. | No | |
| response_format | OpenAI.CreateChatCompletionRequestResponseFormat | An object specifying the format that the model must output. Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs, which ensure the model will match your supplied JSON schema. Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it. | No | |
| └─ type | OpenAI.CreateChatCompletionRequestResponseFormatType | | Yes | |
| safety_identifier | string | A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. | No | |
| seed | integer or null | This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend. | No | |
| stop | OpenAI.StopConfiguration | Not supported with the latest reasoning models o3 and o4-mini. Up to four sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence. | No | |
| store | boolean or null | Whether or not to store the output of this chat completion request for use in model distillation or evals products. | No | |
| stream | boolean or null | If set to true, the model response data will be streamed to the client as it is generated using server-sent events. | No | |
| stream_options | OpenAI.ChatCompletionStreamOptions or null | | No | |
| temperature | number or null | | No | |
| tool_choice | OpenAI.ChatCompletionToolChoiceOption | Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool. none is the default when no tools are present; auto is the default if tools are present. | No | |
| tools | array of OpenAI.ChatCompletionTool or OpenAI.CustomToolChatCompletions | A list of tools the model may call. You can provide either custom tools or function tools. | No | |
| top_logprobs | integer or null | | No | |
| top_p | number or null | | No | |
| user | string (deprecated) | A unique identifier representing your end-user, which can help to monitor and detect abuse. | No | |
| user_security_context | AzureUserSecurityContext | User security context contains several parameters that describe the application itself and the end user that interacts with the application. These fields assist your security operations teams to investigate and mitigate security incidents by providing a comprehensive approach to protecting your AI applications. Learn more about protecting AI applications using Microsoft Defender for Cloud. | No | |
| verbosity | OpenAI.Verbosity | Constrains the verbosity of the model's response. Lower values will result in more concise responses, while higher values will result in more verbose responses. Currently supported values are low, medium, and high. | No | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object or object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Examples
Example
POST {endpoint}/openai/v1/chat/completions
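The example above is left empty in this reference; a minimal request body sketch follows. The model ID and message content are placeholders, and only a few of the optional fields from the request-body table are shown:

```python
import json

payload = {
    "model": "gpt-4o",  # placeholder model ID; use a model deployed to your resource
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "max_completion_tokens": 256,  # preferred over the deprecated max_tokens
    "temperature": 0.7,
}
body = json.dumps(payload)  # POST this to {endpoint}/openai/v1/chat/completions
```

Combine this body with the Content-Type: application/json header and one of the authentication headers described earlier.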
Completions
Create completion
POST {endpoint}/openai/v1/completions
Creates a completion.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname), for example: https://{your-resource-name}.openai.azure.com. Replace {your-resource-name} with your Azure OpenAI resource name. |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| best_of | integer or null | Generates best_of completions server-side and returns the "best" (the one with the highest log probability per token). Results cannot be streamed. When used with n, best_of controls the number of candidate completions and n specifies how many to return; best_of must be greater than n. Note: because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. | No | |
| echo | boolean or null | Echo back the prompt in addition to the completion. | No | |
| frequency_penalty | number or null | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | No | |
| logit_bias | object or null | Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass {"50256": -100} to prevent the <|endoftext|> token from being generated. | No | |
| logprobs | integer or null | Include the log probabilities on the logprobs most likely output tokens, as well as the chosen tokens. For example, if logprobs is 5, the API will return a list of the five most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response. The maximum value for logprobs is 5. | No | |
| max_tokens | integer or null | The maximum number of tokens that can be generated in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. | No | |
| model | string | ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them. | Yes | |
| n | integer or null | How many completions to generate for each prompt. Note: because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. | No | |
| presence_penalty | number or null | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | No | |
| prompt | string or array of string or null | | No | |
| seed | integer or null | If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend. | No | |
| stop | OpenAI.StopConfiguration | Not supported with the latest reasoning models o3 and o4-mini. Up to four sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence. | No | |
| stream | boolean or null | Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message. | No | |
| stream_options | OpenAI.ChatCompletionStreamOptions or null | | No | |
| suffix | string or null | The suffix that comes after a completion of inserted text. This parameter is only supported for gpt-3.5-turbo-instruct. | No | |
| temperature | number or null | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both. | No | |
| top_p | number or null | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. | No | |
| user | string | A unique identifier representing your end-user, which can help to monitor and detect abuse. | No | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Examples
Example
POST {endpoint}/openai/v1/completions
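This example is likewise left empty; the helper below sketches a body builder that enforces the best_of > n constraint stated in the request-body table. The function name is illustrative, and the model/prompt values are placeholders:

```python
from typing import Optional


def build_completion_payload(
    model: str, prompt: str, n: int = 1, best_of: Optional[int] = None
) -> dict:
    """Build a /completions body; best_of, when set with n, must exceed n."""
    payload = {"model": model, "prompt": prompt, "n": n, "max_tokens": 64}
    if best_of is not None:
        if best_of <= n:
            raise ValueError("best_of must be greater than n when both are set")
        payload["best_of"] = best_of
    return payload
```

JSON-encode the returned dict and POST it to {endpoint}/openai/v1/completions with Content-Type: application/json.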
Containers
List containers
GET {endpoint}/openai/v1/containers
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname), for example: https://{your-resource-name}.openai.azure.com. Replace {your-resource-name} with your Azure OpenAI resource name. |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| limit | query | No | integer | A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20. |
| order | query | No | string | Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order. Possible values: asc, desc |
| after | query | No | string | A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list. |
| before | query | No | string | A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ContainerListResource |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Create container
POST {endpoint}/openai/v1/containers
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname), for example: https://{your-resource-name}.openai.azure.com. Replace {your-resource-name} with your Azure OpenAI resource name. |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| expires_after | OpenAI.CreateContainerBodyExpiresAfter | | No | |
| └─ anchor | enum | Possible values: last_active_at | Yes | |
| └─ minutes | integer | | Yes | |
| file_ids | array of string | IDs of files to copy to the container. | No | |
| memory_limit | enum | Optional memory limit for the container. Defaults to 1g. Possible values: 1g, 4g, 16g, 64g | No | |
| name | string | Name of the container to create. | Yes | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ContainerResource |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
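A sketch of a Create container body using the fields above; the container name, file ID, and TTL are illustrative values, not defaults from this reference (except memory_limit, which defaults to 1g per the table):

```python
import json

body = json.dumps({
    "name": "analysis-container",       # required
    "file_ids": ["<your-file-id>"],     # optional: files to copy into the container
    "memory_limit": "1g",               # default per the table above
    "expires_after": {
        "anchor": "last_active_at",     # only supported anchor value
        "minutes": 60,                  # illustrative TTL
    },
})  # POST to {endpoint}/openai/v1/containers with Content-Type: application/json
```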
Retrieve container
GET {endpoint}/openai/v1/containers/{container_id}
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname), for example: https://{your-resource-name}.openai.azure.com. Replace {your-resource-name} with your Azure OpenAI resource name. |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| container_id | path | Yes | string | The ID of the container to retrieve. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ContainerResource |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Delete container
DELETE {endpoint}/openai/v1/containers/{container_id}
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname), for example: https://{your-resource-name}.openai.azure.com. Replace {your-resource-name} with your Azure OpenAI resource name. |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| container_id | path | Yes | string | The ID of the container to delete. |
Responses
Status Code: 200
Description: The request has succeeded.
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
List container files
GET {endpoint}/openai/v1/containers/{container_id}/files
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname), for example: https://{your-resource-name}.openai.azure.com. Replace {your-resource-name} with your Azure OpenAI resource name. |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| container_id | path | Yes | string | The ID of the container to list files from. |
| limit | query | No | integer | A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20. |
| order | query | No | string | Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order. Possible values: asc, desc |
| after | query | No | string | A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list. |
| before | query | No | string | A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ContainerFileListResource |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Create container file
POST {endpoint}/openai/v1/containers/{container_id}/files
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname), for example: https://{your-resource-name}.openai.azure.com. Replace {your-resource-name} with your Azure OpenAI resource name. |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| container_id | path | Yes | string | The ID of the container to create a file in. |
Request Body
Content-Type: multipart/form-data
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file | | The File object (not file name) to be uploaded. | No | |
| file_id | string | Name of the file to create. | No | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ContainerFileResource |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
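The request body here is multipart/form-data rather than JSON. The following sketch shows how the two form fields from the table above would be assembled; the container ID, file name, and bytes are hypothetical, and the commented `requests` call is one possible way to send the form.

```python
# Minimal sketch of the multipart upload, assuming API-key auth.
# IDs and file contents are hypothetical placeholders.
endpoint = "https://aoairesource.openai.azure.com"
container_id = "cntr_example"
url = f"{endpoint}/openai/v1/containers/{container_id}/files"

# multipart/form-data fields per the request-body table above:
#   file    -> the file's bytes (not its name)
#   file_id -> optional name for the created file
form_parts = {
    "file": ("results.csv", b"col_a,col_b\n1,2\n"),
    "file_id": (None, "results.csv"),
}
# With the requests library this would be sent as:
#   requests.post(url, headers={"api-key": "<your-api-key>"}, files=form_parts)
print(url)
```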
Retrieve container file
GET {endpoint}/openai/v1/containers/{container_id}/files/{file_id}
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
| container_id | path | Yes | string | The ID of the container. |
| file_id | path | Yes | string | The ID of the file to retrieve. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ContainerFileResource |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Delete container file
DELETE {endpoint}/openai/v1/containers/{container_id}/files/{file_id}
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
| container_id | path | Yes | string | The ID of the container. |
| file_id | path | Yes | string | The ID of the file to delete. |
Responses
Status Code: 200
Description: The request has succeeded.
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Retrieve container file content
GET {endpoint}/openai/v1/containers/{container_id}/files/{file_id}/content
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
| container_id | path | Yes | string | The ID of the container. |
| file_id | path | Yes | string | The ID of the file to retrieve content from. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/octet-stream | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
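Because the 200 response here is application/octet-stream rather than JSON, the body should be handled as raw bytes. A minimal sketch, with placeholder IDs and the actual HTTP call shown in comments:

```python
# Hypothetical IDs for illustration.
endpoint = "https://aoairesource.openai.azure.com"
container_id, file_id = "cntr_example", "file_abc"
url = f"{endpoint}/openai/v1/containers/{container_id}/files/{file_id}/content"
headers = {"api-key": "<your-api-key>"}

# The 200 body is application/octet-stream (raw bytes), so save it in binary
# mode rather than decoding it as text. With the requests library:
#   resp = requests.get(url, headers=headers)
#   with open("downloaded.bin", "wb") as f:
#       f.write(resp.content)
print(url)
```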
Conversations
Create conversation
POST {endpoint}/openai/v1/conversations
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| items | array of OpenAI.InputItem or null | | No | |
| metadata | OpenAI.Metadata or null | | No | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ConversationResource |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
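A sketch of the JSON request body for creating a conversation with an initial item and metadata. The inner message shape (`type`, `role`, `content`) is illustrative of an OpenAI.InputItem, and all values are hypothetical:

```python
import json

# Hypothetical request body; "items" and "metadata" are the two optional
# fields from the request-body table above.
body = {
    "items": [
        {"type": "message", "role": "user", "content": "Hello!"}
    ],
    "metadata": {"project": "demo"},
}
url = "https://aoairesource.openai.azure.com/openai/v1/conversations?api-version=v1"
payload = json.dumps(body)
print(payload)
# POST payload to url with Content-Type: application/json and the api-key header.
```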
Retrieve conversation
GET {endpoint}/openai/v1/conversations/{conversation_id}
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
| conversation_id | path | Yes | string | The ID of the conversation to retrieve. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ConversationResource |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Update conversation
POST {endpoint}/openai/v1/conversations/{conversation_id}
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
| conversation_id | path | Yes | string | The ID of the conversation to update. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| metadata | OpenAI.Metadata or null | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | Yes | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ConversationResource |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Delete conversation
DELETE {endpoint}/openai/v1/conversations/{conversation_id}
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
| conversation_id | path | Yes | string | The ID of the conversation to delete. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.DeletedConversationResource |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
List conversation items
GET {endpoint}/openai/v1/conversations/{conversation_id}/items
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
| conversation_id | path | Yes | string | The ID of the conversation to list items for. |
| limit | query | No | integer | A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20. |
| order | query | No | string Possible values: asc, desc | The order to return the input items in. Default is desc. |
| after | query | No | string | An item ID to list items after, used in pagination. |
| include | query | No | array | Specify additional output data to include in the model response. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ConversationItemList |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
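The `after` cursor above supports walking a long conversation page by page. The sketch below shows the cursor loop against a stubbed fetch function; it assumes the list response follows the common OpenAI list shape (`data`, `last_id`, `has_more`), and the item IDs are hypothetical.

```python
# Cursor-pagination sketch: page through conversation items with the
# `after` query parameter. `fetch_page` stands in for a real HTTP call
# returning {"data": [...], "last_id": ..., "has_more": bool}.
def list_all_items(fetch_page):
    items, after = [], None
    while True:
        page = fetch_page(limit=20, after=after)
        items.extend(page["data"])
        if not page.get("has_more"):
            return items
        after = page["last_id"]  # cursor for the next request

# Stub demonstrating the cursor flow across two pages:
pages = {
    None: {"data": [{"id": "item_1"}], "last_id": "item_1", "has_more": True},
    "item_1": {"data": [{"id": "item_2"}], "last_id": "item_2", "has_more": False},
}
result = list_all_items(lambda limit, after: pages[after])
print([i["id"] for i in result])
```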
Create conversation items
POST {endpoint}/openai/v1/conversations/{conversation_id}/items
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
| conversation_id | path | Yes | string | The ID of the conversation to add the item to. |
| include | query | No | array | Additional fields to include in the response. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| items | array of OpenAI.InputItem | | Yes | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ConversationItemList |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Retrieve conversation item
GET {endpoint}/openai/v1/conversations/{conversation_id}/items/{item_id}
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
| conversation_id | path | Yes | string | The ID of the conversation that contains the item. |
| item_id | path | Yes | string | The ID of the item to retrieve. |
| include | query | No | array | Additional fields to include in the response. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ConversationItem |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Delete conversation item
DELETE {endpoint}/openai/v1/conversations/{conversation_id}/items/{item_id}
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
| conversation_id | path | Yes | string | The ID of the conversation that contains the item. |
| item_id | path | Yes | string | The ID of the item to delete. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ConversationResource |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Evals
List evals
GET {endpoint}/openai/v1/evals
List evaluations for a project.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
| after | query | No | string | Identifier for the last eval from the previous pagination request. |
| limit | query | No | integer | A limit on the number of evals to be returned in a single pagination response. |
| order | query | No | string Possible values: asc, desc | Sort order for evals by timestamp. Use asc for ascending order or desc for descending order. |
| order_by | query | No | string Possible values: created_at, updated_at | Evals can be ordered by creation time or last updated time. Use created_at for creation time or updated_at for last updated time. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.EvalList |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Create eval
POST {endpoint}/openai/v1/evals
Create the structure of an evaluation that can be used to test a model's performance.
An evaluation is a set of testing criteria and a data source. After creating an evaluation, you can run it on different models and model parameters. We support several types of graders and data sources.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data_source_config | OpenAI.CreateEvalCustomDataSourceConfig or OpenAI.CreateEvalLogsDataSourceConfig or OpenAI.CreateEvalStoredCompletionsDataSourceConfig | The configuration for the data source used for the evaluation runs. Dictates the schema of the data used in the evaluation. | Yes | |
| metadata | OpenAI.Metadata or null | No | ||
| name | string | The name of the evaluation. | No | |
| statusCode | enum | Possible values: 201 | Yes | |
| testing_criteria | array of OpenAI.CreateEvalLabelModelGrader or OpenAI.EvalGraderStringCheck or OpenAI.EvalGraderTextSimilarity or OpenAI.EvalGraderPython or OpenAI.EvalGraderScoreModel or EvalGraderEndpoint | A list of graders for all eval runs in this group. Graders can reference variables in the data source using double curly brace notation, like {{item.variable_name}}. To reference the model's output, use the sample namespace (i.e., {{sample.output_text}}). | Yes | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.Eval |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
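To make the `data_source_config` and `testing_criteria` fields concrete, here is a hypothetical eval definition: a custom data source plus one string-check grader. The grader's inner fields (`input`, `reference`, `operation`) are illustrative, not an exhaustive schema.

```python
import json

# Hypothetical eval definition; field names follow the request-body table above.
body = {
    "name": "sentiment-eval",
    "data_source_config": {
        "type": "custom",
        "item_schema": {
            "type": "object",
            "properties": {"ticket": {"type": "string"}, "label": {"type": "string"}},
        },
    },
    "testing_criteria": [
        {
            "type": "string_check",
            "name": "exact-label-match",
            # Double curly braces reference data-source variables, per the table above;
            # the sample namespace refers to the model's output.
            "input": "{{sample.output_text}}",
            "reference": "{{item.label}}",
            "operation": "eq",
        }
    ],
}
print(json.dumps(body, indent=2))
# POST body to {endpoint}/openai/v1/evals with Content-Type: application/json.
```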
Get eval
GET {endpoint}/openai/v1/evals/{eval_id}
Retrieve an evaluation by its ID.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
| eval_id | path | Yes | string |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.Eval |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Update eval
POST {endpoint}/openai/v1/evals/{eval_id}
Update select, mutable properties of a specified evaluation.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
| eval_id | path | Yes | string |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| metadata | OpenAI.Metadata | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| name | string | No |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.Eval |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Delete eval
DELETE {endpoint}/openai/v1/evals/{eval_id}
Delete a specified evaluation.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
| eval_id | path | Yes | string |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Get eval runs
GET {endpoint}/openai/v1/evals/{eval_id}/runs
Retrieve a list of runs for a specified evaluation.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
| eval_id | path | Yes | string | |
| after | query | No | string | |
| limit | query | No | integer | |
| order | query | No | string Possible values: asc, desc | |
| status | query | No | string Possible values: queued, in_progress, completed, canceled, failed | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.EvalRunList |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Create eval run
POST {endpoint}/openai/v1/evals/{eval_id}/runs
Create a new evaluation run, beginning the grading process.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
| eval_id | path | Yes | string |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data_source | OpenAI.CreateEvalJsonlRunDataSource or OpenAI.CreateEvalCompletionsRunDataSource or OpenAI.CreateEvalResponsesRunDataSource | Details about the run's data source. | Yes | |
| metadata | OpenAI.Metadata or null | No | ||
| name | string | The name of the run. | No |
Responses
Status Code: 201
Description: The request has succeeded and a new resource has been created as a result.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.EvalRun |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
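A sketch of a run-creation body using the JSONL variant of `data_source`. The file ID is a hypothetical placeholder for an uploaded data file, and the inner `source` shape is illustrative:

```python
# Hypothetical run-creation body; data_source here is the JSONL variant
# from the request-body table above, and file-abc123 is a placeholder.
body = {
    "name": "run-gpt-test",
    "data_source": {
        "type": "jsonl",
        "source": {"type": "file_id", "id": "file-abc123"},
    },
    "metadata": {"owner": "qa-team"},
}
eval_id = "eval_123"  # placeholder eval ID
url = f"https://aoairesource.openai.azure.com/openai/v1/evals/{eval_id}/runs"
print(url)
# POST body to url; a 201 response returns the new OpenAI.EvalRun.
```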
Get eval run
GET {endpoint}/openai/v1/evals/{eval_id}/runs/{run_id}
Retrieve a specific evaluation run by its ID.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
| eval_id | path | Yes | string | |
| run_id | path | Yes | string |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.EvalRun |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Cancel eval run
POST {endpoint}/openai/v1/evals/{eval_id}/runs/{run_id}
Cancel a specific evaluation run by its ID.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
| eval_id | path | Yes | string | |
| run_id | path | Yes | string |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.EvalRun |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Delete eval run
DELETE {endpoint}/openai/v1/evals/{eval_id}/runs/{run_id}
Delete a specific evaluation run by its ID.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
| eval_id | path | Yes | string | |
| run_id | path | Yes | string |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Get eval run output items
GET {endpoint}/openai/v1/evals/{eval_id}/runs/{run_id}/output_items
Get a list of output items for a specified evaluation run.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| eval_id | path | Yes | string | |
| run_id | path | Yes | string | |
| after | query | No | string | |
| limit | query | No | integer | |
| status | query | No | string Possible values: fail, pass | |
| order | query | No | string Possible values: asc, desc | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.EvalRunOutputItemList |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Get eval run output item
GET {endpoint}/openai/v1/evals/{eval_id}/runs/{run_id}/output_items/{output_item_id}
Retrieve a specific output item from an evaluation run by its ID.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| eval_id | path | Yes | string | |
| run_id | path | Yes | string | |
| output_item_id | path | Yes | string |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.EvalRunOutputItem |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Files
Create file
POST {endpoint}/openai/v1/files
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
Request Body
Content-Type: multipart/form-data
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| expires_after | object | | Yes | |
| └─ anchor | AzureFileExpiryAnchor | | Yes | |
| └─ seconds | integer | | Yes | |
| file | file | The File object (not file name) to be uploaded. | Yes | |
| purpose | enum | The intended purpose of the uploaded file. One of: - assistants: Used in the Assistants API - batch: Used in the Batch API - fine-tune: Used for fine-tuning - evals: Used for eval data sets. Possible values: assistants, batch, fine-tune, evals | Yes | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Examples
Example
POST {endpoint}/openai/v1/files
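A minimal Python sketch of the non-file multipart fields for this request. The bracketed sub-field names used to flatten the expires_after object into form data are an assumption, not confirmed by this reference, and the binary file part itself would be attached separately by your HTTP client.

```python
def file_upload_form(purpose: str, expiry_anchor: str, expiry_seconds: int) -> dict:
    """Return the non-file multipart fields for POST /openai/v1/files.

    The 'expires_after[...]' flattening of the nested object is an
    assumption about the form encoding; adjust to your client library.
    """
    allowed = ("assistants", "batch", "fine-tune", "evals")
    if purpose not in allowed:
        raise ValueError(f"purpose must be one of {allowed}")
    return {
        "purpose": purpose,
        "expires_after[anchor]": expiry_anchor,
        "expires_after[seconds]": str(expiry_seconds),
    }
```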
List files
GET {endpoint}/openai/v1/files
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| purpose | query | No | string | |
| limit | query | No | integer | |
| order | query | No | string Possible values: asc, desc | |
| after | query | No | string | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ListFilesResponse |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Retrieve file
GET {endpoint}/openai/v1/files/{file_id}
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| file_id | path | Yes | string | The ID of the file to use for this request. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Delete file
DELETE {endpoint}/openai/v1/files/{file_id}
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| file_id | path | Yes | string | The ID of the file to use for this request. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.DeleteFileResponse |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Download file
GET {endpoint}/openai/v1/files/{file_id}/content
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| file_id | path | Yes | string | The ID of the file to use for this request. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/octet-stream | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Embeddings
Create embedding
POST {endpoint}/openai/v1/embeddings
Creates an embedding vector representing the input text.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| dimensions | integer | The number of dimensions the resulting output embeddings should have. Only supported in text-embedding-3 and later models. Constraints: min: 1 | No | |
| encoding_format | enum | The format to return the embeddings in. Can be either float or base64. Possible values: float, base64 | No | |
| input | string or array of string or array of integer or array of array | Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. The input must not exceed the max input tokens for the model (8,192 tokens for all embedding models), cannot be an empty string, and any array must be 2,048 dimensions or less. Example Python code for counting tokens. In addition to the per-input token limit, all embedding models enforce a maximum of 300,000 tokens summed across all inputs in a single request. | Yes | |
| model | string | ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them. | Yes | |
| user | string | Learn more. | No |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.CreateEmbeddingResponse |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Examples
Example
POST {endpoint}/openai/v1/embeddings
Fine-tuning
Run grader
POST {endpoint}/openai/v1/fine_tuning/alpha/graders/run
Run a grader.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| grader | OpenAI.GraderStringCheck or OpenAI.GraderTextSimilarity or OpenAI.GraderPython or OpenAI.GraderScoreModel or OpenAI.GraderMulti or GraderEndpoint | The grader used for the fine-tuning job. | Yes | |
| item | OpenAI.RunGraderRequestItem | | No | |
| model_sample | string | The model sample to be evaluated. This value will be used to populate the sample namespace. See the guide for more details. The output_json variable will be populated if the model sample is a valid JSON string. | Yes | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.RunGraderResponse |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
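A sketch of a request body for this endpoint using a minimal string-check grader. The grader's internal field names (`type`, `name`, `input`, `reference`, `operation`) follow the OpenAI string_check grader shape and are illustrative assumptions, not details from the tables above.

```python
def run_grader_body(model_sample: str, reference: str) -> dict:
    """Build a run-grader body with a minimal string_check grader.

    The grader object shown here is an illustrative assumption about
    the OpenAI.GraderStringCheck shape; 'eq' asks for exact equality
    between the model sample's output text and the reference string.
    """
    grader = {
        "type": "string_check",
        "name": "exact_match",
        "input": "{{sample.output_text}}",
        "reference": reference,
        "operation": "eq",
    }
    return {"grader": grader, "model_sample": model_sample}
```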
Validate grader
POST {endpoint}/openai/v1/fine_tuning/alpha/graders/validate
Validate a grader.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| grader | OpenAI.GraderStringCheck or OpenAI.GraderTextSimilarity or OpenAI.GraderPython or OpenAI.GraderScoreModel or OpenAI.GraderMulti or GraderEndpoint | | No | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ValidateGraderResponse |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
List fine tuning checkpoint permissions
GET {endpoint}/openai/v1/fine_tuning/checkpoints/{fine_tuned_model_checkpoint}/permissions
List checkpoint permissions
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| fine_tuned_model_checkpoint | path | Yes | string | The ID of the fine-tuned model checkpoint to get permissions for. |
| project_id | query | No | string | The ID of the project to get permissions for. |
| after | query | No | string | Identifier for the last permission ID from the previous pagination request. |
| limit | query | No | integer | Number of permissions to retrieve. |
| order | query | No | string Possible values: ascending, descending | The order in which to retrieve permissions. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ListFineTuningCheckpointPermissionResponse |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Create fine tuning checkpoint permission
POST {endpoint}/openai/v1/fine_tuning/checkpoints/{fine_tuned_model_checkpoint}/permissions
Create checkpoint permissions
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| fine_tuned_model_checkpoint | path | Yes | string | The ID of the fine-tuned model checkpoint to create a permission for. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| project_ids | array of string | The project identifiers to grant access to. | Yes |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ListFineTuningCheckpointPermissionResponse |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
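A sketch of composing the URL and body for granting checkpoint access. The resource name, checkpoint ID, and project IDs are placeholders; the helper only assembles the request pieces.

```python
def create_permission_request(endpoint: str, checkpoint_id: str,
                              project_ids: list) -> tuple:
    """Return (url, body) for POST .../checkpoints/{id}/permissions."""
    if not project_ids:
        raise ValueError("project_ids must not be empty")
    url = (f"{endpoint}/openai/v1/fine_tuning/checkpoints/"
           f"{checkpoint_id}/permissions?api-version=v1")
    return url, {"project_ids": list(project_ids)}
```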
Delete fine tuning checkpoint permission
DELETE {endpoint}/openai/v1/fine_tuning/checkpoints/{fine_tuned_model_checkpoint}/permissions/{permission_id}
Delete checkpoint permission
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| fine_tuned_model_checkpoint | path | Yes | string | The ID of the fine-tuned model checkpoint to delete a permission for. |
| permission_id | path | Yes | string | The ID of the fine-tuned model checkpoint permission to delete. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.DeleteFineTuningCheckpointPermissionResponse |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Create fine tuning job
POST {endpoint}/openai/v1/fine_tuning/jobs
Creates a fine-tuning job which begins the process of creating a new model from a given dataset.
Response includes details of the enqueued job including job status and the name of the fine-tuned models once complete.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| hyperparameters | OpenAI.CreateFineTuningJobRequestHyperparameters | | No | |
| └─ batch_size | string or integer | | No | auto |
| └─ learning_rate_multiplier | string or number | | No | |
| └─ n_epochs | string or integer | | No | auto |
| integrations | array of OpenAI.CreateFineTuningJobRequestIntegrations or null | A list of integrations to enable for your fine-tuning job. | No | |
| metadata | OpenAI.Metadata or null | | No | |
| method | OpenAI.FineTuneMethod | The method used for fine-tuning. | No | |
| model | string (see valid models below) | The name of the model to fine-tune. You can select one of the supported models. | Yes | |
| seed | integer or null | The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases. If a seed is not specified, one will be generated for you. | No | |
| suffix | string or null | A string of up to 64 characters that will be added to your fine-tuned model name. For example, a suffix of "custom-model-name" would produce a model name like ft:gpt-4o-mini:openai:custom-model-name:7p4lURel. | No | |
| training_file | string | The ID of an uploaded file that contains training data. See upload file for how to upload a file. Your dataset must be formatted as a JSONL file. Additionally, you must upload your file with the purpose fine-tune. The contents of the file should differ depending on if the model uses the chat, completions format, or if the fine-tuning method uses the preference format. See the fine-tuning guide for more details. | Yes | |
| validation_file | string or null | The ID of an uploaded file that contains validation data. If you provide this file, the data is used to generate validation metrics periodically during fine-tuning. These metrics can be viewed in the fine-tuning results file. The same data should not be present in both train and validation files. Your dataset must be formatted as a JSONL file. You must upload your file with the purpose fine-tune. See the fine-tuning guide for more details. | No | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.FineTuningJob |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
List paginated fine tuning jobs
GET {endpoint}/openai/v1/fine_tuning/jobs
List your organization's fine-tuning jobs.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| after | query | No | string | Identifier for the last job from the previous pagination request. |
| limit | query | No | integer | Number of fine-tuning jobs to retrieve. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ListPaginatedFineTuningJobsResponse |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Retrieve fine tuning job
GET {endpoint}/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}
Get info about a fine-tuning job.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| fine_tuning_job_id | path | Yes | string | The ID of the fine-tuning job. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.FineTuningJob |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Cancel fine tuning job
POST {endpoint}/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/cancel
Immediately cancel a fine-tune job.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| fine_tuning_job_id | path | Yes | string | The ID of the fine-tuning job to cancel. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.FineTuningJob |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
List fine tuning job checkpoints
GET {endpoint}/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/checkpoints
List the checkpoints for a fine-tuning job.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| fine_tuning_job_id | path | Yes | string | The ID of the fine-tuning job to get checkpoints for. |
| after | query | No | string | Identifier for the last checkpoint ID from the previous pagination request. |
| limit | query | No | integer | Number of checkpoints to retrieve. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ListFineTuningJobCheckpointsResponse |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Fine tuning - copy checkpoint
POST {endpoint}/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/checkpoints/{fine_tuning_checkpoint_id}/copy
Creates a copy of a fine-tuning checkpoint at the given destination account and region.
NOTE: This Azure OpenAI API is in preview and subject to change.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| fine_tuning_job_id | path | Yes | string | |
| fine_tuning_checkpoint_id | path | Yes | string |
Request Header
| Name | Required | Type | Description |
|---|---|---|---|
| aoai-copy-ft-checkpoints | True | string Possible values: preview | Enables access to checkpoint copy operations for models, an AOAI preview feature. This feature requires the 'aoai-copy-ft-checkpoints' header to be set to 'preview'. |
| accept | True | string Possible values: application/json | |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| destinationResourceId | string | The ID of the destination Resource to copy. | Yes | |
| region | string | The region to copy the model to. | Yes |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | CopyModelResponse |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
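Since this preview operation needs both the opt-in request header and a JSON body, the pieces can be composed as in the sketch below. The job, checkpoint, and destination values are placeholders; nothing here performs the actual call.

```python
def copy_checkpoint_request(endpoint: str, job_id: str, checkpoint_id: str,
                            destination_resource_id: str, region: str) -> tuple:
    """Return (url, headers, body) for the preview checkpoint-copy call."""
    url = (f"{endpoint}/openai/v1/fine_tuning/jobs/{job_id}"
           f"/checkpoints/{checkpoint_id}/copy?api-version=v1")
    headers = {
        # required opt-in header for this preview feature
        "aoai-copy-ft-checkpoints": "preview",
        "accept": "application/json",
    }
    body = {"destinationResourceId": destination_resource_id, "region": region}
    return url, headers, body
```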
Fine tuning - get checkpoint
GET {endpoint}/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/checkpoints/{fine_tuning_checkpoint_id}/copy
Gets the status of a fine-tuning checkpoint copy.
NOTE: This Azure OpenAI API is in preview and subject to change.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| fine_tuning_job_id | path | Yes | string | |
| fine_tuning_checkpoint_id | path | Yes | string |
Request Header
| Name | Required | Type | Description |
|---|---|---|---|
| aoai-copy-ft-checkpoints | True | string Possible values: preview | Enables access to checkpoint copy operations for models, an AOAI preview feature. This feature requires the 'aoai-copy-ft-checkpoints' header to be set to 'preview'. |
| accept | True | string Possible values: application/json | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | CopyModelResponse |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
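As an illustrative sketch (not part of the reference itself), the pieces of this GET request can be assembled as below. The resource name, job ID, checkpoint ID, and key are placeholders; authentication uses the api-key header described under Authentication, and the preview header above is required:

```python
def checkpoint_copy_status_request(resource, job_id, checkpoint_id, api_key):
    """Build the method, URL, and headers for the checkpoint-copy status endpoint."""
    url = (
        f"https://{resource}.openai.azure.com/openai/v1/fine_tuning/jobs/"
        f"{job_id}/checkpoints/{checkpoint_id}/copy?api-version=v1"
    )
    headers = {
        "api-key": api_key,
        "accept": "application/json",
        # Required to enable the AOAI checkpoint-copy preview feature.
        "aoai-copy-ft-checkpoints": "preview",
    }
    return "GET", url, headers

# Placeholder IDs for illustration only.
method, url, headers = checkpoint_copy_status_request(
    "aoairesource", "ftjob-abc123", "ftckpt-xyz789", "<your-api-key>"
)
```

On success the response body is a CopyModelResponse; the apim-request-id response header is useful when filing support requests.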
List fine tuning events
GET {endpoint}/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/events
Get status updates for a fine-tuning job.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request; v1 is used if not otherwise specified. |
| fine_tuning_job_id | path | Yes | string | The ID of the fine-tuning job to get events for. |
| after | query | No | string | Identifier for the last event from the previous pagination request. |
| limit | query | No | integer | Number of events to retrieve. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ListFineTuningJobEventsResponse |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
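The after and limit query parameters paginate the event list. As a sketch (resource and job IDs are placeholders), the request URL can be built like this:

```python
from urllib.parse import urlencode

def list_events_url(resource, job_id, after=None, limit=20):
    """Build the URL for listing fine-tuning job events, with optional pagination."""
    base = f"https://{resource}.openai.azure.com/openai/v1/fine_tuning/jobs/{job_id}/events"
    params = {"api-version": "v1", "limit": limit}
    if after is not None:
        # `after` is the identifier of the last event from the previous page.
        params["after"] = after
    return f"{base}?{urlencode(params)}"
```

To walk the full event history, repeat the call passing the last event ID of each page as after until a page comes back empty.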
Pause fine tuning job
POST {endpoint}/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/pause
Pause a fine-tune job.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request; v1 is used if not otherwise specified. |
| fine_tuning_job_id | path | Yes | string | The ID of the fine-tuning job to pause. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.FineTuningJob |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Resume fine tuning job
POST {endpoint}/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/resume
Resume a paused fine-tune job.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request; v1 is used if not otherwise specified. |
| fine_tuning_job_id | path | Yes | string | The ID of the fine-tuning job to resume. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.FineTuningJob |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
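Pause and resume differ only in the final path segment, and both return an OpenAI.FineTuningJob whose status field reflects the new state. A minimal sketch, with placeholder resource and job IDs:

```python
def job_action_url(resource, job_id, action):
    """Build the POST URL for pausing or resuming a fine-tuning job."""
    if action not in ("pause", "resume"):
        raise ValueError("action must be 'pause' or 'resume'")
    return (f"https://{resource}.openai.azure.com/openai/v1/"
            f"fine_tuning/jobs/{job_id}/{action}?api-version=v1")
```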
Models
List models
GET {endpoint}/openai/v1/models
Lists the currently available models, and provides basic information about each one such as the owner and availability.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request; v1 is used if not otherwise specified. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ListModelsResponse |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Retrieve model
GET {endpoint}/openai/v1/models/{model}
Retrieves a model instance, providing basic information about the model such as the owner and permissioning.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request; v1 is used if not otherwise specified. |
| model | path | Yes | string | The ID of the model to use for this request. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.Model |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Delete model
DELETE {endpoint}/openai/v1/models/{model}
Deletes a model instance.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request; v1 is used if not otherwise specified. |
| model | path | Yes | string | The ID of the model to delete. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.DeleteModelResponse |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
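The three model operations share one base path: list uses GET on the collection, retrieve uses GET on a model ID, and delete uses DELETE on a model ID. As a sketch using only the standard library (resource name and model ID are placeholders):

```python
import urllib.request

def model_request(resource, api_key, model_id=None, delete=False):
    """Build an urllib Request for the list / retrieve / delete model endpoints."""
    if delete and model_id is None:
        raise ValueError("delete requires a model_id")
    url = f"https://{resource}.openai.azure.com/openai/v1/models"
    if model_id is not None:
        url += f"/{model_id}"
    url += "?api-version=v1"
    method = "DELETE" if delete else "GET"
    return urllib.request.Request(url, headers={"api-key": api_key}, method=method)
```

Sending the request with urllib.request.urlopen would return OpenAI.ListModelsResponse, OpenAI.Model, or OpenAI.DeleteModelResponse respectively.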
Realtime
Create real time call
POST {endpoint}/openai/v1/realtime/calls
Create a new Realtime API call over WebRTC and receive the SDP answer needed to complete the peer connection.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request; v1 is used if not otherwise specified. |
Request Body
Content-Type: multipart/form-data
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| sdp | string | WebRTC Session Description Protocol (SDP) offer generated by the caller. | Yes | |
| session | OpenAI.RealtimeSessionCreateRequestGA | Realtime session object configuration. | No | |
| └─ audio | OpenAI.RealtimeSessionCreateRequestGAAudio | Configuration for input and output audio. | No | |
| └─ include | array of string | Additional fields to include in server outputs. item.input_audio_transcription.logprobs: include logprobs for input audio transcription. | No | |
| └─ instructions | string | The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (for example "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (for example "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance on the desired behavior. Note that the server sets default instructions that are used if this field is not set; they are visible in the session.created event at the start of the session. | No | |
| └─ max_output_tokens | integer (see valid models below) | Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf. | No | |
| └─ model | string | The Realtime model used for this session. | No | |
| └─ output_modalities | array of string | The set of modalities the model can respond with. Defaults to ["audio"], indicating that the model responds with audio plus a transcript. ["text"] can be used to make the model respond with text only. It is not possible to request both text and audio at the same time. | No | ['audio'] |
| └─ prompt | OpenAI.Prompt | Reference to a prompt template and its variables. | No | |
| └─ tool_choice | OpenAI.ToolChoiceOptions or OpenAI.ToolChoiceFunction or OpenAI.ToolChoiceMCP | How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool. | No | auto |
| └─ tools | array of OpenAI.RealtimeFunctionTool or OpenAI.MCPTool | Tools available to the model. | No | |
| └─ tracing | string or OpenAI.RealtimeSessionCreateRequestGATracing or null | Configuration for tracing. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified. auto will create a trace for the session with default values for the workflow name, group id, and metadata. | No | auto |
| └─ truncation | OpenAI.RealtimeTruncation | When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs. Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost. Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate. Truncation can be disabled entirely, in which case the server will never truncate but will instead return an error if the conversation exceeds the model's input token limit. | No | |
| └─ type | enum | The type of session to create. Always realtime for the Realtime API. Possible values: realtime | Yes | |
Responses
Status Code: 201
Description: The request has succeeded and a new resource has been created as a result.
| Content-Type | Type | Description |
|---|---|---|
| application/sdp | string |
Response Headers:
| Header | Type | Description |
|---|---|---|
| location | string | Relative URL containing the call ID for subsequent control requests. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
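The multipart body pairs the caller's SDP offer with an optional session form field containing JSON session configuration. A minimal sketch of assembling that field (the model name is a placeholder):

```python
import json

def call_session_config(model, instructions=None):
    """Assemble the `session` form field for POST /openai/v1/realtime/calls.

    The field is a JSON string describing the realtime session; it accompanies
    the caller's SDP offer in the multipart/form-data body.
    """
    session = {
        "type": "realtime",           # always "realtime" for the Realtime API
        "model": model,
        "output_modalities": ["audio"],  # the default: audio plus a transcript
    }
    if instructions is not None:
        session["instructions"] = instructions
    return json.dumps(session)
```

The 201 response body is the SDP answer (application/sdp) needed to complete the peer connection, and the location header carries the call ID used by the control endpoints below.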
Accept real time call
POST {endpoint}/openai/v1/realtime/calls/{call_id}/accept
Accept an incoming SIP call and configure the realtime session that will handle it.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request; v1 is used if not otherwise specified. |
| call_id | path | Yes | string | The identifier for the call provided in the realtime.call.incoming webhook. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| audio | OpenAI.RealtimeSessionCreateRequestGAAudio | | No | |
| └─ input | OpenAI.RealtimeSessionCreateRequestGAAudioInput | | No | |
| └─ output | OpenAI.RealtimeSessionCreateRequestGAAudioOutput | | No | |
| include | array of string | Additional fields to include in server outputs. item.input_audio_transcription.logprobs: include logprobs for input audio transcription. | No | |
| instructions | string | The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (for example "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (for example "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance on the desired behavior. Note that the server sets default instructions that are used if this field is not set; they are visible in the session.created event at the start of the session. | No | |
| max_output_tokens | integer (see valid models below) | Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf. | No | |
| model | string | The Realtime model used for this session. | No | |
| output_modalities | array of string | The set of modalities the model can respond with. Defaults to ["audio"], indicating that the model responds with audio plus a transcript. ["text"] can be used to make the model respond with text only. It is not possible to request both text and audio at the same time. | No | ['audio'] |
| prompt | OpenAI.Prompt | Reference to a prompt template and its variables. | No | |
| tool_choice | OpenAI.ToolChoiceOptions or OpenAI.ToolChoiceFunction or OpenAI.ToolChoiceMCP | How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool. | No | |
| tools | array of OpenAI.RealtimeFunctionTool or OpenAI.MCPTool | Tools available to the model. | No | |
| tracing | string or OpenAI.RealtimeSessionCreateRequestGATracing or null | Configuration for tracing. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified. auto will create a trace for the session with default values for the workflow name, group id, and metadata. | No | |
| truncation | OpenAI.RealtimeTruncation | When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs. Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost. Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate. Truncation can be disabled entirely, in which case the server will never truncate but will instead return an error if the conversation exceeds the model's input token limit. | No | |
| type | enum | The type of session to create. Always realtime for the Realtime API. Possible values: realtime | Yes | |
Responses
Status Code: 200
Description: The request has succeeded.
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Hang up realtime call
POST {endpoint}/openai/v1/realtime/calls/{call_id}/hangup
End an active Realtime API call, whether it was initiated over SIP or WebRTC.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request; v1 is used if not otherwise specified. |
| call_id | path | Yes | string | The identifier for the call. |
Responses
Status Code: 200
Description: The request has succeeded.
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Refer real time call
POST {endpoint}/openai/v1/realtime/calls/{call_id}/refer
Transfer an active SIP call to a new destination using the SIP REFER verb.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request; v1 is used if not otherwise specified. |
| call_id | path | Yes | string | The identifier for the call provided in the realtime.call.incoming webhook. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| target_uri | string | URI that should appear in the SIP Refer-To header. Supports values like tel:+14155550123 or sip:agent@example.com. | Yes | |
Responses
Status Code: 200
Description: The request has succeeded.
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Reject real time call
POST {endpoint}/openai/v1/realtime/calls/{call_id}/reject
Decline an incoming SIP call by returning a SIP status code to the caller.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request; v1 is used if not otherwise specified. |
| call_id | path | Yes | string | The identifier for the call provided in the realtime.call.incoming webhook. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| status_code | integer | SIP response code to send back to the caller. Defaults to 603 (Decline) when omitted. | No | |
Responses
Status Code: 200
Description: The request has succeeded.
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
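The four call-control endpoints (accept, hangup, refer, reject) share the same path shape and differ mainly in whether they take a JSON body. As a sketch, with placeholder call IDs:

```python
import json

def sip_control_request(resource, call_id, action, body=None):
    """Build (url, payload) for the realtime call-control endpoints.

    action is one of: accept, hangup, refer, reject. refer requires a
    target_uri in `body`; reject optionally takes a status_code.
    """
    if action not in ("accept", "hangup", "refer", "reject"):
        raise ValueError(f"unknown action: {action}")
    url = (f"https://{resource}.openai.azure.com/openai/v1/"
           f"realtime/calls/{call_id}/{action}?api-version=v1")
    payload = None if body is None else json.dumps(body)
    return url, payload

# Transfer one active call, then decline another incoming one.
refer = sip_control_request("aoairesource", "call_1", "refer",
                            {"target_uri": "tel:+14155550123"})
reject = sip_control_request("aoairesource", "call_2", "reject",
                             {"status_code": 486})  # 486 = Busy Here
```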
Create real time client secret
POST {endpoint}/openai/v1/realtime/client_secrets
Create a Realtime client secret with an associated session configuration.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request; v1 is used if not otherwise specified. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| expires_after | OpenAI.RealtimeCreateClientSecretRequestExpiresAfter | | No | |
| └─ anchor | enum | Possible values: created_at | No | |
| └─ seconds | integer | Constraints: min: 10, max: 7200 | No | 600 |
| session | OpenAI.RealtimeSessionCreateRequestUnion | | No | |
| └─ type | OpenAI.RealtimeSessionCreateRequestUnionType | | Yes | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.RealtimeCreateClientSecretResponse |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
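A request body sketch combining the expires_after constraints above with a minimal session (the model name is a placeholder):

```python
import json

def client_secret_payload(model, ttl_seconds=600):
    """Build the JSON body for POST /openai/v1/realtime/client_secrets."""
    if not 10 <= ttl_seconds <= 7200:  # documented constraint on `seconds`
        raise ValueError("ttl_seconds must be between 10 and 7200")
    return json.dumps({
        # `anchor` currently only supports "created_at".
        "expires_after": {"anchor": "created_at", "seconds": ttl_seconds},
        "session": {"type": "realtime", "model": model},
    })
```

The OpenAI.RealtimeCreateClientSecretResponse carries the short-lived secret a browser or mobile client can then use directly against the Realtime API.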
Create real time session
POST {endpoint}/openai/v1/realtime/sessions
Create an ephemeral API token for use in client-side applications with the Realtime API.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request; v1 is used if not otherwise specified. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| client_secret | OpenAI.RealtimeSessionCreateRequestClientSecret | | Yes | |
| └─ expires_at | integer | | Yes | |
| └─ value | string | | Yes | |
| input_audio_format | string | The format of input audio. Options are pcm16, g711_ulaw, or g711_alaw. | No | |
| input_audio_transcription | OpenAI.RealtimeSessionCreateRequestInputAudioTranscription | | No | |
| └─ model | string | | No | |
| instructions | string | The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (for example "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (for example "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance on the desired behavior. Note that the server sets default instructions that are used if this field is not set; they are visible in the session.created event at the start of the session. | No | |
| max_response_output_tokens | integer (see valid models below) | Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf. | No | |
| modalities | array of string | The set of modalities the model can respond with. To disable audio, set this to ["text"]. | No | ['text', 'audio'] |
| output_audio_format | string | The format of output audio. Options are pcm16, g711_ulaw, or g711_alaw. | No | |
| prompt | OpenAI.Prompt | Reference to a prompt template and its variables. | No | |
| speed | number | The speed of the model's spoken response. 1.0 is the default speed, 0.25 is the minimum, and 1.5 is the maximum. This value can only be changed between model turns, not while a response is in progress. Constraints: min: 0.25, max: 1.5 | No | 1 |
| temperature | number | Sampling temperature for the model, limited to [0.6, 1.2]. Defaults to 0.8. | No | |
| tool_choice | string | How the model chooses tools. Options are auto, none, required, or specify a function. | No | |
| tools | array of OpenAI.RealtimeSessionCreateRequestTools | Tools (functions) available to the model. | No | |
| tracing | string or object | Configuration options for tracing. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified. auto will create a trace for the session with default values for the workflow name, group id, and metadata. | No | |
| truncation | OpenAI.RealtimeTruncation | When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs. Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost. Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate. Truncation can be disabled entirely, in which case the server will never truncate but will instead return an error if the conversation exceeds the model's input token limit. | No | |
| turn_detection | OpenAI.RealtimeSessionCreateRequestTurnDetection | | No | |
| └─ prefix_padding_ms | integer | | No | |
| └─ silence_duration_ms | integer | | No | |
| └─ threshold | number | | No | |
| └─ type | string | | No | |
| type | enum | Possible values: realtime | Yes | |
| voice | OpenAI.VoiceIdsShared | | No | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.RealtimeSessionCreateResponse |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
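Two of the numeric fields above carry explicit constraints, so a client-side check before sending the request can catch mistakes early. A minimal sketch:

```python
def validate_session_audio_options(speed=1.0, temperature=0.8):
    """Check the documented constraints on `speed` and `temperature`
    before building a realtime session request body."""
    if not 0.25 <= speed <= 1.5:
        raise ValueError("speed must be within [0.25, 1.5]")
    if not 0.6 <= temperature <= 1.2:
        raise ValueError("temperature must be within [0.6, 1.2]")
    return {"speed": speed, "temperature": temperature}
```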
Create real time transcription session
POST {endpoint}/openai/v1/realtime/transcription_sessions
Create an ephemeral API token for use in client-side applications with the Realtime API specifically for realtime transcriptions.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request; v1 is used if not otherwise specified. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| include | array of string | The set of items to include in the transcription. Currently the only available item is item.input_audio_transcription.logprobs. | No | |
| input_audio_format | enum | The format of input audio. Options are pcm16, g711_ulaw, or g711_alaw. For pcm16, input audio must be 16-bit PCM at a 24-kHz sample rate, single channel (mono), and little-endian byte order. Possible values: pcm16, g711_ulaw, g711_alaw | No | |
| input_audio_noise_reduction | OpenAI.RealtimeTranscriptionSessionCreateRequestInputAudioNoiseReduction | | No | |
| └─ type | OpenAI.NoiseReductionType | Type of noise reduction. near_field is for close-talking microphones such as headphones; far_field is for far-field microphones such as laptop or conference room microphones. | No | |
| input_audio_transcription | OpenAI.AudioTranscription | | No | |
| └─ language | string | The language of the input audio. Supplying the input language in ISO-639-1 format (e.g. en) will improve accuracy and latency. | No | |
| └─ model | string | The model to use for transcription. Current options are whisper-1, gpt-4o-mini-transcribe, gpt-4o-mini-transcribe-2025-12-15, gpt-4o-transcribe, and gpt-4o-transcribe-diarize. Use gpt-4o-transcribe-diarize when you need diarization with speaker labels. | No | |
| └─ prompt | string | Optional text to guide the model's style or to continue a previous audio segment. For whisper-1, the prompt is a list of keywords. For gpt-4o-transcribe models (excluding gpt-4o-transcribe-diarize), the prompt is a free text string, for example "expect words related to technology". | No | |
| turn_detection | OpenAI.RealtimeTranscriptionSessionCreateRequestTurnDetection | | No | |
| └─ prefix_padding_ms | integer | | No | |
| └─ silence_duration_ms | integer | | No | |
| └─ threshold | number | | No | |
| └─ type | enum | Possible values: server_vad | No | |
| type | enum | Possible values: transcription | Yes | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.RealtimeTranscriptionSessionCreateResponse |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Responses
Create response
POST {endpoint}/openai/v1/responses
Creates a model response.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| background | boolean or null | No | ||
| conversation | OpenAI.ConversationParam or null | No | ||
| include | array of OpenAI.IncludeEnum or null | No | ||
| input | OpenAI.InputParam | Text, image, or file inputs to the model, used to generate a response. Learn more: Text inputs and outputs, Image inputs, File inputs, Conversation state, Function calling. | No | |
| instructions | string or null | No | ||
| max_output_tokens | integer or null | No | ||
| max_tool_calls | integer or null | No | ||
| metadata | OpenAI.Metadata or null | No | ||
| model | string | Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models. | No | |
| parallel_tool_calls | boolean or null | No | ||
| previous_response_id | string or null | No | ||
| prompt | OpenAI.Prompt | Reference to a prompt template and its variables. Learn more. | No | |
| prompt_cache_key | string | Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more. | No | |
| prompt_cache_retention | string or null | No | ||
| reasoning | OpenAI.Reasoning or null | No | ||
| safety_identifier | string | A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The ID should be a string that uniquely identifies each user. We recommend hashing their username or email address to avoid sending any identifying information. Learn more. | No | |
| store | boolean or null | No | ||
| stream | boolean or null | No | ||
| stream_options | OpenAI.ResponseStreamOptions or null | No | ||
| temperature | number or null | No | ||
| text | OpenAI.ResponseTextParam | Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more: Text inputs and outputs, Structured Outputs. | No | |
| tool_choice | OpenAI.ToolChoiceParam | How the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call. | No | |
| tools | OpenAI.ToolsArray | An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter. We support the following categories of tools: Built-in tools, provided by OpenAI to extend the model's capabilities, like web search or file search; MCP tools, integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint; Function calls (custom tools), functions defined by you, enabling the model to call your own code with strongly typed arguments and outputs. | No | |
| top_logprobs | integer or null | No | ||
| top_p | number or null | No | ||
| truncation | string or null | No | ||
| user | string (deprecated) | This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations. A stable identifier for your end-users, used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more. | No | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object | |
| text/event-stream | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Examples
Example
POST {endpoint}/openai/v1/responses
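A minimal sketch of assembling this call's URL, headers, and body. The resource name, API key placeholder, model, and input text are all assumed example values:

```python
import json
from urllib.parse import urlencode

# Build the Create response request; "my-resource" and "<API_KEY>" are placeholders.
endpoint = "https://my-resource.openai.azure.com"
url = f"{endpoint}/openai/v1/responses?{urlencode({'api-version': 'v1'})}"
headers = {"api-key": "<API_KEY>", "Content-Type": "application/json"}

body = {
    "model": "gpt-4o",                      # model ID; must be deployed on the resource
    "input": "Write a one-sentence summary of quantum entanglement.",
    "temperature": 0.7,
    "store": True,                          # persist the response so it can be retrieved later
}
request_payload = json.dumps(body)
```

Send `request_payload` as the POST body with the headers shown; with `stream` set to true the result would instead arrive as `text/event-stream` server-sent events.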
Get response
GET {endpoint}/openai/v1/responses/{response_id}
Retrieves a model response with the given ID.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| response_id | path | Yes | string | |
| include[] | query | No | array | Additional fields to include in the response. See the include parameter for Response creation above for more information. |
| stream | query | No | boolean | If set to true, the model response data will be streamed to the client as it is generated using server-sent events. |
| starting_after | query | No | integer | The sequence number of the event after which to start streaming. |
| include_obfuscation | query | No | boolean | When true, stream obfuscation will be enabled. Stream obfuscation adds random characters to an obfuscation field on streaming delta events to normalize payload sizes as a mitigation to certain side-channel attacks. These obfuscation fields are included by default, but add a small amount of overhead to the data stream. You can set include_obfuscation to false to optimize for bandwidth if you trust the network links between your application and the OpenAI API. |
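The query parameters above compose into a retrieval URL like the following sketch. The resource name and response ID are placeholders, and the parameter values are illustrative:

```python
from urllib.parse import urlencode

# Retrieve a stored response as a stream, resuming after a given event.
endpoint = "https://my-resource.openai.azure.com"   # placeholder resource
response_id = "resp_abc123"                         # placeholder response ID

params = urlencode({
    "api-version": "v1",
    "stream": "true",              # replay the response as server-sent events
    "starting_after": 5,           # resume after event sequence number 5
    "include_obfuscation": "false" # trade obfuscation overhead for bandwidth
})
url = f"{endpoint}/openai/v1/responses/{response_id}?{params}"
```

Issue a GET against `url` with the usual `api-key` header to fetch the response.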
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Delete response
DELETE {endpoint}/openai/v1/responses/{response_id}
Deletes a response by ID.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| response_id | path | Yes | string |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Cancel response
POST {endpoint}/openai/v1/responses/{response_id}/cancel
Cancels a model response with the given ID. Only responses created with the background parameter set to true can be cancelled.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| response_id | path | Yes | string |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
List input items
GET {endpoint}/openai/v1/responses/{response_id}/input_items
Returns a list of input items for a given response.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| response_id | path | Yes | string | |
| limit | query | No | integer | A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20. |
| order | query | No | string Possible values: asc, desc | Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order. |
| after | query | No | string | A cursor for use in pagination. after is an object ID that defines your place in the list.For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list. |
| before | query | No | string | A cursor for use in pagination. before is an object ID that defines your place in the list.For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list. |
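The after/before cursors drive pagination as in this sketch, where the resource name and IDs are placeholders:

```python
from urllib.parse import urlencode

# Cursor pagination over a response's input items.
endpoint = "https://my-resource.openai.azure.com"   # placeholder resource
response_id = "resp_abc123"                         # placeholder response ID

def input_items_url(after=None, limit=20, order="desc"):
    """Build the list URL; pass the last item's ID as `after` for the next page."""
    params = {"api-version": "v1", "limit": limit, "order": order}
    if after is not None:
        params["after"] = after
    return (f"{endpoint}/openai/v1/responses/{response_id}/input_items"
            f"?{urlencode(params)}")

first_page = input_items_url()                # first request, no cursor
next_page = input_items_url(after="item_foo") # follow-up using the last item ID
```

Each subsequent request passes the final item ID of the previous page as `after` until the listing reports no further items.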
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ResponseItemList |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Threads
Create thread
POST {endpoint}/openai/v1/threads
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| messages | array of OpenAI.CreateMessageRequest | A list of messages to start the thread with. | No | |
| metadata | OpenAI.Metadata or null | No | ||
| tool_resources | OpenAI.CreateThreadRequestToolResources or null | No |
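All three body fields are optional except the message contents you choose to seed. A minimal sketch with an illustrative starting message and metadata:

```python
import json

# Sketch of a Create thread body seeded with one user message.
# The message text and metadata values are illustrative.
thread_body = {
    "messages": [
        {"role": "user", "content": "Summarize the attached report."}
    ],
    "metadata": {"session": "demo-001"},   # free-form key/value tags on the thread
}
payload = json.dumps(thread_body)
```

POST the payload to `{endpoint}/openai/v1/threads` to receive an `OpenAI.ThreadObject` in return.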
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ThreadObject |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Create thread and run
POST {endpoint}/openai/v1/threads/runs
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| assistant_id | string | The ID of the assistant to use to execute this run. | Yes | |
| instructions | string or null | Override the default system message of the assistant. This is useful for modifying the behavior on a per-run basis. | No | |
| max_completion_tokens | integer or null | The maximum number of completion tokens that may be used over the course of the run. The run will make a best effort to use only the number of completion tokens specified, across multiple turns of the run. If the run exceeds the number of completion tokens specified, the run will end with status incomplete. See incomplete_details for more info. | No | |
| max_prompt_tokens | integer or null | The maximum number of prompt tokens that may be used over the course of the run. The run will make a best effort to use only the number of prompt tokens specified, across multiple turns of the run. If the run exceeds the number of prompt tokens specified, the run will end with status incomplete. See incomplete_details for more info. | No | |
| metadata | OpenAI.Metadata or null | No | ||
| model | string | The ID of the Model to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used. | No | |
| parallel_tool_calls | OpenAI.ParallelToolCalls | Whether to enable parallel function calling during tool use. | No | |
| response_format | OpenAI.AssistantsApiResponseFormatOption | Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106. Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Setting to { "type": "json_object" } enables JSON mode, which ensures the message the model generates is valid JSON. Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length. | No | |
| stream | boolean or null | If true, returns a stream of events that happen during the Run as server-sent events, terminating when the Run enters a terminal state with a data: [DONE] message. | No | |
| temperature | number or null | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. | No | |
| thread | OpenAI.CreateThreadRequest | Options to create a new thread. If no thread is provided when running a request, an empty thread will be created. | No | |
| tool_choice | OpenAI.AssistantsApiToolChoiceOption | Controls which (if any) tool is called by the model. none means the model will not call any tools and instead generates a message. auto is the default value and means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools before responding to the user. Specifying a particular tool like {"type": "file_search"} or {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool. | No | |
| tool_resources | OpenAI.CreateThreadAndRunRequestToolResources or null | A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs. | No | |
| tools | array of OpenAI.AssistantTool | Override the tools the assistant can use for this run. This is useful for modifying the behavior on a per-run basis. | No | |
| top_p | number or null | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. | No | |
| truncation_strategy | OpenAI.TruncationObject | Controls for how a thread will be truncated prior to the run. Use this to control the initial context window of the run. | No |
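Only assistant_id is required; everything else overrides per-run behavior. A sketch combining the thread seed with a few of the overrides above, where the assistant ID and message text are placeholders:

```python
import json

# Sketch of a Create thread and run body; "asst_abc123" is a placeholder ID.
body = {
    "assistant_id": "asst_abc123",         # required: the assistant executing the run
    "thread": {
        "messages": [
            {"role": "user", "content": "Explain vector stores briefly."}
        ]
    },
    "instructions": "Answer concisely.",   # per-run override of the assistant's system message
    "max_completion_tokens": 512,          # run ends with status incomplete if exceeded
    "stream": True,                        # server-sent events until `data: [DONE]`
    "tool_choice": "auto",                 # model decides between a message and tool calls
}
payload = json.dumps(body)
```

POST the payload to `{endpoint}/openai/v1/threads/runs`; with streaming disabled the call returns an `OpenAI.RunObject` directly.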
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.RunObject |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Delete thread
DELETE {endpoint}/openai/v1/threads/{thread_id}
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| thread_id | path | Yes | string |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.DeleteThreadResponse |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Retrieve thread
GET {endpoint}/openai/v1/threads/{thread_id}
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| thread_id | path | Yes | string |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ThreadObject |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Modify thread
POST {endpoint}/openai/v1/threads/{thread_id}
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| thread_id | path | Yes | string |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| metadata | OpenAI.Metadata or null | No | ||
| tool_resources | OpenAI.ModifyThreadRequestToolResources or null | No |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ThreadObject |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
List messages
GET {endpoint}/openai/v1/threads/{thread_id}/messages
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| thread_id | path | Yes | string | |
| limit | query | No | integer | |
| order | query | No | string Possible values: asc, desc | |
| after | query | No | string | |
| before | query | No | string | |
| run_id | query | No | string |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ListMessagesResponse |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Create message
POST {endpoint}/openai/v1/threads/{thread_id}/messages
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| thread_id | path | Yes | string |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| attachments | array of OpenAI.CreateMessageRequestAttachments or null | No | ||
| content | string or array of OpenAI.MessageContentImageFileObject or OpenAI.MessageContentImageUrlObject or OpenAI.MessageRequestContentTextObject | Yes | ||
| metadata | OpenAI.Metadata or null | No | ||
| role | enum | The role of the entity that is creating the message. Allowed values: user indicates the message is sent by an actual user and should be used in most cases to represent user-generated messages; assistant indicates the message is generated by the assistant, used to insert messages from the assistant into the conversation. Possible values: user, assistant | Yes | |
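The required fields are role and content; attachments are optional. A sketch with a placeholder file ID attached for file search:

```python
import json

# Sketch of a Create message body; "file_abc123" is a placeholder file ID.
message_body = {
    "role": "user",                            # required: user or assistant
    "content": "What does this file contain?", # required: string or content-part array
    "attachments": [
        {"file_id": "file_abc123", "tools": [{"type": "file_search"}]}
    ],
}
payload = json.dumps(message_body)
```

POST the payload to `{endpoint}/openai/v1/threads/{thread_id}/messages` to append the message to the thread.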
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.MessageObject |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Delete message
DELETE {endpoint}/openai/v1/threads/{thread_id}/messages/{message_id}
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| thread_id | path | Yes | string | |
| message_id | path | Yes | string |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.DeleteMessageResponse |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Retrieve message
GET {endpoint}/openai/v1/threads/{thread_id}/messages/{message_id}
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| thread_id | path | Yes | string | |
| message_id | path | Yes | string |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.MessageObject |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Modify message
POST {endpoint}/openai/v1/threads/{thread_id}/messages/{message_id}
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| thread_id | path | Yes | string | |
| message_id | path | Yes | string |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| metadata | OpenAI.Metadata or null | No |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.MessageObject |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Create run
POST {endpoint}/openai/v1/threads/{thread_id}/runs
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| thread_id | path | Yes | string |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| additional_instructions | string or null | Appends additional instructions at the end of the instructions for the run. This is useful for modifying the behavior on a per-run basis without overriding other instructions. | No | |
| additional_messages | array of OpenAI.CreateMessageRequest or null | Adds additional messages to the thread before creating the run. | No | |
| assistant_id | string | The ID of the assistant to use to execute this run. | Yes | |
| instructions | string or null | Overrides the instructions of the assistant. This is useful for modifying the behavior on a per-run basis. | No | |
| max_completion_tokens | integer or null | The maximum number of completion tokens that may be used over the course of the run. The run will make a best effort to use only the number of completion tokens specified, across multiple turns of the run. If the run exceeds the number of completion tokens specified, the run will end with status incomplete. See incomplete_details for more info. | No | |
| max_prompt_tokens | integer or null | The maximum number of prompt tokens that may be used over the course of the run. The run will make a best effort to use only the number of prompt tokens specified, across multiple turns of the run. If the run exceeds the number of prompt tokens specified, the run will end with status incomplete. See incomplete_details for more info. | No | |
| metadata | OpenAI.Metadata or null | | No | |
| model | string | The ID of the Model to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used. | No | |
| parallel_tool_calls | OpenAI.ParallelToolCalls | Whether to enable parallel function calling during tool use. | No | |
| reasoning_effort | OpenAI.ReasoningEffort | Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. gpt-5.1 defaults to none, which does not perform reasoning; the supported reasoning values for gpt-5.1 are none, low, medium, and high, and tool calls are supported for all reasoning values. All models before gpt-5.1 default to medium reasoning effort and do not support none. The gpt-5-pro model defaults to (and only supports) high reasoning effort. xhigh is supported for all models after gpt-5.1-codex-max. | No | |
| response_format | OpenAI.AssistantsApiResponseFormatOption | Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106. Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Setting to { "type": "json_object" } enables JSON mode, which ensures the message the model generates is valid JSON. Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length. | No | |
| stream | boolean or null | If true, returns a stream of events that happen during the Run as server-sent events, terminating when the Run enters a terminal state with a data: [DONE] message. | No | |
| temperature | number or null | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. | No | |
| tool_choice | OpenAI.AssistantsApiToolChoiceOption | Controls which (if any) tool is called by the model. none means the model will not call any tools and instead generates a message. auto is the default value and means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools before responding to the user. Specifying a particular tool like {"type": "file_search"} or {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool. | No | |
| tools | array of OpenAI.AssistantTool | Override the tools the assistant can use for this run. This is useful for modifying the behavior on a per-run basis. | No | |
| top_p | number or null | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. | No | |
| truncation_strategy | OpenAI.TruncationObject | Controls for how a thread will be truncated prior to the run. Use this to control the initial context window of the run. | No |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.RunObject |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
List runs
GET {endpoint}/openai/v1/threads/{thread_id}/runs
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| thread_id | path | Yes | string | |
| limit | query | No | integer | |
| order | query | No | string Possible values: asc, desc | |
| after | query | No | string | |
| before | query | No | string | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ListRunsResponse |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Retrieve run
GET {endpoint}/openai/v1/threads/{thread_id}/runs/{run_id}
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| thread_id | path | Yes | string | |
| run_id | path | Yes | string |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.RunObject |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Modify run
POST {endpoint}/openai/v1/threads/{thread_id}/runs/{run_id}
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| thread_id | path | Yes | string | |
| run_id | path | Yes | string |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| metadata | OpenAI.Metadata or null | | No | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.RunObject |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Cancel run
POST {endpoint}/openai/v1/threads/{thread_id}/runs/{run_id}/cancel
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| thread_id | path | Yes | string | |
| run_id | path | Yes | string |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.RunObject |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
List run steps
GET {endpoint}/openai/v1/threads/{thread_id}/runs/{run_id}/steps
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| thread_id | path | Yes | string | |
| run_id | path | Yes | string | |
| limit | query | No | integer | |
| order | query | No | string Possible values: asc, desc | |
| after | query | No | string | |
| before | query | No | string | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ListRunStepsResponse |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Get run step
GET {endpoint}/openai/v1/threads/{thread_id}/runs/{run_id}/steps/{step_id}
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| thread_id | path | Yes | string | |
| run_id | path | Yes | string | |
| step_id | path | Yes | string |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.RunStepObject |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Submit tool outputs to run
POST {endpoint}/openai/v1/threads/{thread_id}/runs/{run_id}/submit_tool_outputs
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| thread_id | path | Yes | string | |
| run_id | path | Yes | string |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| stream | boolean or null | If true, returns a stream of events that happen during the Run as server-sent events, terminating when the Run enters a terminal state with a data: [DONE] message. | No | |
| tool_outputs | array of OpenAI.SubmitToolOutputsRunRequestToolOutputs | A list of tools for which the outputs are being submitted. | Yes |
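A small sketch of building the submit_tool_outputs body. The per-item field names (`tool_call_id`, `output`) are assumed from the upstream OpenAI SubmitToolOutputs schema, since this reference only names the wrapper type.

```python
def build_tool_outputs_body(outputs: dict[str, str], stream: bool = False) -> dict:
    """Map {tool_call_id: output_string} pairs into the request body.
    stream is omitted unless enabled, matching its nullable type above."""
    body: dict = {
        "tool_outputs": [
            {"tool_call_id": call_id, "output": output}
            for call_id, output in outputs.items()
        ]
    }
    if stream:
        body["stream"] = True
    return body
```

Tool outputs are typically JSON-encoded strings; the model resumes the run once every pending tool call in the required-action step has an output.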
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.RunObject |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Vector Stores
List vector stores
GET {endpoint}/openai/v1/vector_stores
Returns a list of vector stores.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| limit | query | No | integer | A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20. |
| order | query | No | string Possible values: asc, desc | Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order. |
| after | query | No | string | A cursor for use in pagination. after is an object ID that defines your place in the list.For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list. |
| before | query | No | string | A cursor for use in pagination. before is an object ID that defines your place in the list.For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ListVectorStoresResponse |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Create vector store
POST {endpoint}/openai/v1/vector_stores
Creates a vector store.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| chunking_strategy | OpenAI.ChunkingStrategyRequestParam | The chunking strategy used to chunk the file(s). If not set, will use the auto strategy. Only applicable if file_ids is non-empty. | No | |
| description | string | A description for the vector store. Can be used to describe the vector store's purpose. | No | |
| expires_after | OpenAI.VectorStoreExpirationAfter | The expiration policy for a vector store. | No | |
| file_ids | array of string | A list of File IDs that the vector store should use. Useful for tools like file_search that can access files. | No | |
| metadata | OpenAI.Metadata or null | | No | |
| name | string | The name of the vector store. | No |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.VectorStoreObject |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Get vector store
GET {endpoint}/openai/v1/vector_stores/{vector_store_id}
Retrieves a vector store.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | The ID of the vector store to retrieve. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.VectorStoreObject |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Modify vector store
POST {endpoint}/openai/v1/vector_stores/{vector_store_id}
Modifies a vector store.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | The ID of the vector store to modify. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| expires_after | OpenAI.VectorStoreExpirationAfter | The expiration policy for a vector store. | No | |
| metadata | OpenAI.Metadata or null | | No | |
| name | string or null | The name of the vector store. | No |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.VectorStoreObject |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Delete vector store
DELETE {endpoint}/openai/v1/vector_stores/{vector_store_id}
Delete a vector store.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | The ID of the vector store to delete. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.DeleteVectorStoreResponse |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Create vector store file batch
POST {endpoint}/openai/v1/vector_stores/{vector_store_id}/file_batches
Create a vector store file batch.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | The ID of the vector store for which to create a file batch. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| attributes | OpenAI.VectorStoreFileAttributes or null | | No | |
| chunking_strategy | OpenAI.ChunkingStrategyRequestParam | The chunking strategy used to chunk the file(s). If not set, will use the auto strategy. Only applicable if file_ids is non-empty. | No | |
| file_ids | array of string | A list of File IDs that the vector store should use. Useful for tools like file_search that can access files. If attributes or chunking_strategy are provided, they will be applied to all files in the batch. Mutually exclusive with files. | No | |
| files | array of OpenAI.CreateVectorStoreFileRequest | A list of objects that each include a file_id plus optional attributes or chunking_strategy. Use this when you need to override metadata for specific files. The global attributes or chunking_strategy will be ignored and must be specified for each file. Mutually exclusive with file_ids. | No | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.VectorStoreFileBatchObject |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Get vector store file batch
GET {endpoint}/openai/v1/vector_stores/{vector_store_id}/file_batches/{batch_id}
Retrieves a vector store file batch.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | The ID of the vector store that the file batch belongs to. |
| batch_id | path | Yes | string | The ID of the file batch being retrieved. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.VectorStoreFileBatchObject |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Cancel vector store file batch
POST {endpoint}/openai/v1/vector_stores/{vector_store_id}/file_batches/{batch_id}/cancel
Cancel a vector store file batch. This attempts to cancel the processing of files in this batch as soon as possible.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | The ID of the vector store that the file batch belongs to. |
| batch_id | path | Yes | string | The ID of the file batch to cancel. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.VectorStoreFileBatchObject |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
List files in vector store batch
GET {endpoint}/openai/v1/vector_stores/{vector_store_id}/file_batches/{batch_id}/files
Returns a list of vector store files in a batch.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | The ID of the vector store that the file batch belongs to. |
| batch_id | path | Yes | string | The ID of the file batch that the files belong to. |
| limit | query | No | integer | A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20. |
| order | query | No | string Possible values: asc, desc | Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order. |
| after | query | No | string | A cursor for use in pagination. after is an object ID that defines your place in the list.For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list. |
| before | query | No | string | A cursor for use in pagination. before is an object ID that defines your place in the list.For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list. |
| filter | query | No | string Possible values: in_progress, completed, failed, cancelled | Filter by file status. One of in_progress, completed, failed, cancelled. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ListVectorStoreFilesResponse |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
List vector store files
GET {endpoint}/openai/v1/vector_stores/{vector_store_id}/files
Returns a list of vector store files.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | The ID of the vector store that the files belong to. |
| limit | query | No | integer | A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20. |
| order | query | No | string Possible values: asc, desc | Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order. |
| after | query | No | string | A cursor for use in pagination. after is an object ID that defines your place in the list.For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list. |
| before | query | No | string | A cursor for use in pagination. before is an object ID that defines your place in the list.For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list. |
| filter | query | No | string Possible values: in_progress, completed, failed, cancelled | Filter by file status. One of in_progress, completed, failed, cancelled. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.ListVectorStoreFilesResponse |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Create vector store file
POST {endpoint}/openai/v1/vector_stores/{vector_store_id}/files
Create a vector store file by attaching a File to a vector store.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | The ID of the vector store for which to create a File. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| attributes | OpenAI.VectorStoreFileAttributes or null | | No | |
| chunking_strategy | OpenAI.ChunkingStrategyRequestParam | The chunking strategy used to chunk the file(s). If not set, will use the auto strategy. Only applicable if file_ids is non-empty. | No | |
| file_id | string | A File ID that the vector store should use. Useful for tools like file_search that can access files. | Yes | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.VectorStoreFileObject |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Get vector store file
GET {endpoint}/openai/v1/vector_stores/{vector_store_id}/files/{file_id}
Retrieves a vector store file.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | The ID of the vector store that the file belongs to. |
| file_id | path | Yes | string | The ID of the file being retrieved. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.VectorStoreFileObject |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Update vector store file attributes
POST {endpoint}/openai/v1/vector_stores/{vector_store_id}/files/{file_id}
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | The ID of the vector store that the file belongs to. |
| file_id | path | Yes | string | The ID of the file whose attributes to update. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| attributes | OpenAI.VectorStoreFileAttributes or null | | Yes | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.VectorStoreFileObject |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Delete vector store file
DELETE {endpoint}/openai/v1/vector_stores/{vector_store_id}/files/{file_id}
Delete a vector store file. This will remove the file from the vector store but the file itself will not be deleted. To delete the file, use the delete file endpoint.
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | The ID of the vector store that the file belongs to. |
| file_id | path | Yes | string | The ID of the file to delete. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.DeleteVectorStoreFileResponse |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
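As noted above, this DELETE only detaches the file from the vector store; the underlying File object survives. A minimal sketch of building the detach URL, with hypothetical IDs (no request is sent):

```python
def delete_vector_store_file_url(endpoint: str, vector_store_id: str,
                                 file_id: str) -> str:
    """URL for DELETE .../vector_stores/{vs}/files/{file}.

    Deleting here removes only the vector-store association; the File
    itself must be deleted separately via the delete file endpoint.
    """
    return (f"{endpoint}/openai/v1/vector_stores/{vector_store_id}"
            f"/files/{file_id}?api-version=v1")
```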
Retrieve vector store file content
GET {endpoint}/openai/v1/vector_stores/{vector_store_id}/files/{file_id}/content
Retrieve vector store file content
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | The ID of the vector store to search. |
| file_id | path | Yes | string | The ID of the file to retrieve content for. |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.VectorStoreSearchResultsPage |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Search vector store
POST {endpoint}/openai/v1/vector_stores/{vector_store_id}/search
Search vector store
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| endpoint | path | Yes | string | Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace "aoairesource" with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com |
| api-version | query | No | string | The explicit Azure AI Foundry Models API version to use for this request.v1 if not otherwise specified. |
| vector_store_id | path | Yes | string | The ID of the vector store to search. |
Request Body
Content-Type: application/json
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| filters | OpenAI.ComparisonFilter or OpenAI.CompoundFilter | A filter to apply based on file attributes. | No | |
| max_num_results | integer | The maximum number of results to return. This number should be between 1 and 50 inclusive. Constraints: min: 1, max: 50 | No | 10 |
| query | string or array of string | A query string for a search. | Yes | |
| ranking_options | OpenAI.VectorStoreSearchRequestRankingOptions | | No | |
| └─ ranker | enum | Possible values: none, auto, default-2024-11-15 | No | |
| └─ score_threshold | number | Constraints: min: 0, max: 1 | No | |
| rewrite_query | boolean | Whether to rewrite the natural language query for vector search. | No | |
Responses
Status Code: 200
Description: The request has succeeded.
| Content-Type | Type | Description |
|---|---|---|
| application/json | OpenAI.VectorStoreSearchResultsPage |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Status Code: default
Description: An unexpected error response.
| Content-Type | Type | Description |
|---|---|---|
| application/json | object |
Response Headers:
| Header | Type | Description |
|---|---|---|
| apim-request-id | string | A request ID used for troubleshooting purposes. |
Components
AudioSegment
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| avg_logprob | number | The average log probability associated with this audio segment. | Yes | |
| compression_ratio | number | The compression ratio of this audio segment. | Yes | |
| end | number | The time at which this segment ended relative to the beginning of the translated audio. | Yes | |
| id | integer | The 0-based index of this segment within a translation. | Yes | |
| no_speech_prob | number | The probability of no speech detection within this audio segment. | Yes | |
| seek | integer | The seek position associated with the processing of this audio segment. Seek positions are expressed as hundredths of seconds. The model may process several segments from a single seek position, so while the seek position will never represent a later time than the segment's start, the segment's start may represent a significantly later time than the segment's associated seek position. | Yes | |
| start | number | The time at which this segment started relative to the beginning of the translated audio. | Yes | |
| temperature | number | The temperature score associated with this audio segment. | Yes | |
| text | string | The translated text that was part of this audio segment. | Yes | |
| tokens | array of integer | The token IDs matching the translated text in this audio segment. | Yes |
AudioTaskLabel
Defines the possible descriptors for available audio operation responses.
| Property | Value |
|---|---|
| Description | Defines the possible descriptors for available audio operation responses. |
| Type | string |
| Values | transcribe, translate |
AudioTranslationSegment
Extended information about a single segment of translated audio data. Segments generally represent roughly 5-10 seconds of speech. Segment boundaries typically occur between words but not necessarily sentences.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| avg_logprob | number | The average log probability associated with this audio segment. | Yes | |
| compression_ratio | number | The compression ratio of this audio segment. | Yes | |
| end | number | The time at which this segment ended relative to the beginning of the translated audio. | Yes | |
| id | integer | The 0-based index of this segment within a translation. | Yes | |
| no_speech_prob | number | The probability of no speech detection within this audio segment. | Yes | |
| seek | integer | The seek position associated with the processing of this audio segment. Seek positions are expressed as hundredths of seconds. The model may process several segments from a single seek position, so while the seek position will never represent a later time than the segment's start, the segment's start may represent a significantly later time than the segment's associated seek position. | Yes | |
| start | number | The time at which this segment started relative to the beginning of the translated audio. | Yes | |
| temperature | number | The temperature score associated with this audio segment. | Yes | |
| text | string | The translated text that was part of this audio segment. | Yes | |
| tokens | array of integer | The token IDs matching the translated text in this audio segment. | Yes |
AzureAIFoundryModelsApiVersion
| Property | Value |
|---|---|
| Type | string |
| Values | v1, preview |
AzureAudioTranscriptionResponse
Result information for an operation that transcribed spoken audio into written text.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| duration | number | The total duration of the audio processed to produce accompanying transcription information. | No | |
| language | string | The spoken language that was detected in the transcribed audio data. This is expressed as a two-letter ISO-639-1 language code like 'en' or 'fr'. | No | |
| segments | array of OpenAI.TranscriptionSegment | A collection of information about the timing, probabilities, and other detail of each processed audio segment. | No | |
| task | AudioTaskLabel | Defines the possible descriptors for available audio operation responses. | No | |
| text | string | The transcribed text for the provided audio data. | Yes | |
| words | array of OpenAI.TranscriptionWord | A collection of information about the timing of each processed word. | No |
AzureAudioTranslationResponse
Result information for an operation that translated spoken audio into written text.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| duration | number | The total duration of the audio processed to produce accompanying translation information. | No | |
| language | string | The spoken language that was detected in the translated audio data. This is expressed as a two-letter ISO-639-1 language code like 'en' or 'fr'. | No | |
| segments | array of AudioTranslationSegment | A collection of information about the timing, probabilities, and other detail of each processed audio segment. | No | |
| task | AudioTaskLabel | Defines the possible descriptors for available audio operation responses. | No | |
| text | string | The translated text for the provided audio data. | Yes |
AzureCompletionsSamplingParams
Sampling parameters for controlling the behavior of completions.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| max_completion_tokens | integer | | No | |
| max_tokens | integer | The maximum number of tokens in the generated output. | No | |
| reasoning_effort | OpenAI.ReasoningEffort | Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. gpt-5.1 defaults to none, which does not perform reasoning; the supported reasoning values for gpt-5.1 are none, low, medium, and high, and tool calls are supported for all of them. All models before gpt-5.1 default to medium reasoning effort and do not support none. The gpt-5-pro model defaults to (and only supports) high reasoning effort. xhigh is supported for all models after gpt-5.1-codex-max. | No | |
| response_format | OpenAI.ResponseFormatText or OpenAI.ResponseFormatJsonSchema or OpenAI.ResponseFormatJsonObject | | No | |
| seed | integer | A seed value initializes the randomness during sampling. | No | 42 |
| temperature | number | A higher temperature increases randomness in the outputs. | No | 1 |
| tools | array of OpenAI.ChatCompletionTool | | No | |
| top_p | number | An alternative to temperature for nucleus sampling; 1.0 includes all tokens. | No | 1 |
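The table above can be sketched as a concrete payload fragment. The field names and defaults (seed 42, temperature 1, top_p 1) come from the table; the specific values chosen for `max_tokens` and `reasoning_effort` are illustrative only:

```python
import json

# Illustrative sampling-parameters fragment using the documented field
# names and defaults from the AzureCompletionsSamplingParams table.
sampling_params = {
    "max_tokens": 256,            # cap on generated output tokens (example value)
    "seed": 42,                   # documented default; fixes sampling randomness
    "temperature": 1,             # documented default; higher = more random
    "top_p": 1,                   # nucleus sampling; 1.0 includes all tokens
    "reasoning_effort": "medium", # default for models before gpt-5.1
}
payload = json.dumps(sampling_params)
```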
AzureContentFilterBlocklistIdResult
A content filter result item that associates an existing custom blocklist ID with a value indicating whether or not the corresponding blocklist resulted in content being filtered.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| filtered | boolean | Whether the associated blocklist resulted in the content being filtered. | Yes | |
| id | string | The ID of the custom blocklist associated with the filtered status. | Yes |
AzureContentFilterBlocklistResult
A collection of true/false filtering results for configured custom blocklists.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| details | array of object | The pairs of individual blocklist IDs and whether they resulted in a filtering action. | No | |
| filtered | boolean | A value indicating whether any of the detailed blocklists resulted in a filtering action. | Yes |
AzureContentFilterCompletionTextSpan
A representation of a span of completion text as used by Azure OpenAI content filter results.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| completion_end_offset | integer | Offset of the first UTF-32 code point which is excluded from the span. This field is always equal to completion_start_offset for empty spans. This field is always larger than completion_start_offset for non-empty spans. | Yes | |
| completion_start_offset | integer | Offset of the UTF-32 code point which begins the span. | Yes | |
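Because the offsets count UTF-32 code points, and Python strings are indexed by code point, the offsets map one-to-one onto Python slice indices. A minimal sketch (the helper name is illustrative):

```python
def span_text(completion: str, start: int, end: int) -> str:
    """Extract the flagged span from the completion text.

    start/end are UTF-32 code-point offsets; Python indexes strings by
    code point, so they translate directly into a slice. An empty span
    has end == start; a non-empty span has end > start.
    """
    return completion[start:end]
```

Note that a character like an emoji counts as one offset unit here even though it occupies four bytes in UTF-8.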
AzureContentFilterCompletionTextSpanDetectionResult
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| details | array of AzureContentFilterCompletionTextSpan | Detailed information about the detected completion text spans. | Yes | |
| detected | boolean | Whether the labeled content category was detected in the content. | Yes | |
| filtered | boolean | Whether the content detection resulted in a content filtering action. | Yes |
AzureContentFilterCustomTopicIdResult
A content filter result item that associates an existing custom topic ID with a value indicating whether or not the corresponding topic resulted in content being detected.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| detected | boolean | Whether the associated custom topic resulted in the content being detected. | Yes | |
| id | string | The ID of the custom topic associated with the detected status. | Yes |
AzureContentFilterCustomTopicResult
A collection of true/false filtering results for configured custom topics.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| details | array of object | The pairs of individual topic IDs and whether they are detected. | No | |
| filtered | boolean | A value indicating whether any of the detailed topics resulted in a filtering action. | Yes |
AzureContentFilterDetectionResult
A labeled content filter result item that indicates whether the content was detected and whether the content was filtered.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| detected | boolean | Whether the labeled content category was detected in the content. | Yes | |
| filtered | boolean | Whether the content detection resulted in a content filtering action. | Yes |
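The `detected` and `filtered` flags are independent: content can be detected (annotated) without triggering a filtering action, depending on the content filter configuration. A small illustrative interpreter for this result shape (the function and labels are not part of the API):

```python
def interpret(result: dict) -> str:
    """Classify an AzureContentFilterDetectionResult-shaped dict.

    filtered=True means a filtering action occurred; detected=True with
    filtered=False means the category was flagged but not blocked.
    """
    if result["filtered"]:
        return "blocked"
    if result["detected"]:
        return "detected-only"
    return "clean"
```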
AzureContentFilterForResponsesAPI
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| blocked | boolean | Indicate if the response is blocked. | Yes | |
| content_filter_offsets | AzureContentFilterResultOffsets | | Yes | |
| content_filter_results | AzureContentFilterResultsForResponsesAPI | | Yes | |
| └─ custom_blocklists | AzureContentFilterBlocklistResult | A collection of binary filtering outcomes for configured custom blocklists. | No | |
| └─ custom_topics | AzureContentFilterCustomTopicResult | A collection of binary filtering outcomes for configured custom topics. | No | |
| └─ error | object | If present, details about an error that prevented content filtering from completing its evaluation. | No | |
| └─ code | integer | A distinct, machine-readable code associated with the error. | Yes | |
| └─ message | string | A human-readable message associated with the error. | Yes | |
| └─ hate | AzureContentFilterSeverityResult | A content filter category that can refer to any content that attacks or uses pejorative or discriminatory language with reference to a person or identity group based on certain differentiating attributes of these groups, including but not limited to race, ethnicity, nationality, gender identity and expression, sexual orientation, religion, immigration status, ability status, personal appearance, and body size. | No | |
| └─ indirect_attack | AzureContentFilterDetectionResult | A detection result that describes attacks on systems powered by Generative AI models that can happen every time an application processes information that wasn’t directly authored by either the developer of the application or the user. | No | |
| └─ jailbreak | AzureContentFilterDetectionResult | A detection result that describes user prompt injection attacks, where malicious users deliberately exploit system vulnerabilities to elicit unauthorized behavior from the LLM. This could lead to inappropriate content generation or violations of system-imposed restrictions. | Yes | |
| └─ personally_identifiable_information | AzureContentFilterPersonallyIdentifiableInformationResult | A detection result that describes matches against Personally Identifiable Information with configurable subcategories. | No | |
| └─ profanity | AzureContentFilterDetectionResult | A detection result that identifies whether crude, vulgar, or otherwise objectionable language is present in the content. | No | |
| └─ protected_material_code | object | A detection result that describes a match against licensed code or other protected source material. | No | |
| └─ citation | object | If available, the citation details describing the associated license and its location. | No | |
| └─ URL | string | The URL associated with the license. | No | |
| └─ license | string | The name or identifier of the license associated with the detection. | No | |
| └─ detected | boolean | Whether the labeled content category was detected in the content. | Yes | |
| └─ filtered | boolean | Whether the content detection resulted in a content filtering action. | Yes | |
| └─ protected_material_text | AzureContentFilterDetectionResult | A detection result that describes a match against text protected under copyright or other status. | No | |
| └─ self_harm | AzureContentFilterSeverityResult | A content filter category that describes language related to physical actions intended to purposely hurt, injure, or damage one's body, or kill oneself. | No | |
| └─ sexual | AzureContentFilterSeverityResult | A content filter category for language related to anatomical organs and genitals, romantic relationships, acts portrayed in erotic or affectionate terms, pregnancy, physical sexual acts, including those portrayed as an assault or a forced sexual violent act against one's will, prostitution, pornography, and abuse. | No | |
| └─ task_adherence | AzureContentFilterDetectionResult | A detection result that indicates whether the execution flow still adheres to the plan. | Yes | |
| └─ ungrounded_material | AzureContentFilterCompletionTextSpanDetectionResult | | No | |
| └─ violence | AzureContentFilterSeverityResult | A content filter category for language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes weapons, guns, and related entities, such as manufacturers, associations, legislation, and so on. | No | |
| source_type | string | The name of the source type of the message. | Yes |
AzureContentFilterHarmExtensions
Extensions for harm categories, providing additional configuration options.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| pii_sub_categories | array of AzurePiiSubCategory | Configuration for PIIHarmSubCategory(s). | No |
AzureContentFilterImagePromptResults
A content filter result for an image generation operation's input request content.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| custom_blocklists | AzureContentFilterBlocklistResult | A collection of true/false filtering results for configured custom blocklists. | No | |
| └─ details | array of object | The pairs of individual blocklist IDs and whether they resulted in a filtering action. | No | |
| └─ filtered | boolean | A value indicating whether the blocklist produced a filtering action. | Yes | |
| └─ id | string | The ID of the custom blocklist evaluated. | Yes | |
| └─ filtered | boolean | A value indicating whether any of the detailed blocklists resulted in a filtering action. | Yes | |
| custom_topics | AzureContentFilterCustomTopicResult | A collection of true/false filtering results for configured custom topics. | No | |
| └─ details | array of object | The pairs of individual topic IDs and whether they are detected. | No | |
| └─ detected | boolean | A value indicating whether the topic is detected. | Yes | |
| └─ id | string | The ID of the custom topic evaluated. | Yes | |
| └─ filtered | boolean | A value indicating whether any of the detailed topics resulted in a filtering action. | Yes | |
| hate | AzureContentFilterSeverityResult | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | Yes | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | Yes | |
| jailbreak | AzureContentFilterDetectionResult | A labeled content filter result item that indicates whether the content was detected and whether the content was filtered. | Yes | |
| └─ detected | boolean | Whether the labeled content category was detected in the content. | Yes | |
| └─ filtered | boolean | Whether the content detection resulted in a content filtering action. | Yes | |
| profanity | AzureContentFilterDetectionResult | A labeled content filter result item that indicates whether the content was detected and whether the content was filtered. | No | |
| └─ detected | boolean | Whether the labeled content category was detected in the content. | Yes | |
| └─ filtered | boolean | Whether the content detection resulted in a content filtering action. | Yes | |
| self_harm | AzureContentFilterSeverityResult | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | Yes | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | Yes | |
| sexual | AzureContentFilterSeverityResult | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | Yes | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | Yes | |
| violence | AzureContentFilterSeverityResult | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | Yes | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | Yes |
AzureContentFilterImageResponseResults
A content filter result for an image generation operation's output response content.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| hate | AzureContentFilterSeverityResult | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | Yes | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | Yes | |
| self_harm | AzureContentFilterSeverityResult | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | Yes | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | Yes | |
| sexual | AzureContentFilterSeverityResult | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | Yes | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | Yes | |
| violence | AzureContentFilterSeverityResult | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | Yes | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | Yes |
AzureContentFilterPersonallyIdentifiableInformationResult
A content filter detection result for Personally Identifiable Information that includes harm extensions.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| detected | boolean | Whether the labeled content category was detected in the content. | Yes | |
| filtered | boolean | Whether the content detection resulted in a content filtering action. | Yes | |
| redacted_text | string | The redacted text with PII information removed or masked. | No | |
| sub_categories | array of AzurePiiSubCategoryResult | Detailed results for individual PIIHarmSubCategory(s). | No |
AzureContentFilterResultForChoice
A content filter result for a single response item produced by a generative AI system.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| custom_blocklists | AzureContentFilterBlocklistResult | A collection of true/false filtering results for configured custom blocklists. | No | |
| └─ details | array of object | The pairs of individual blocklist IDs and whether they resulted in a filtering action. | No | |
| └─ filtered | boolean | A value indicating whether the blocklist produced a filtering action. | Yes | |
| └─ id | string | The ID of the custom blocklist evaluated. | Yes | |
| └─ filtered | boolean | A value indicating whether any of the detailed blocklists resulted in a filtering action. | Yes | |
| custom_topics | AzureContentFilterCustomTopicResult | A collection of true/false filtering results for configured custom topics. | No | |
| └─ details | array of object | The pairs of individual topic IDs and whether they are detected. | No | |
| └─ detected | boolean | A value indicating whether the topic is detected. | Yes | |
| └─ id | string | The ID of the custom topic evaluated. | Yes | |
| └─ filtered | boolean | A value indicating whether any of the detailed topics resulted in a filtering action. | Yes | |
| error | object | If present, details about an error that prevented content filtering from completing its evaluation. | No | |
| └─ code | integer | A distinct, machine-readable code associated with the error. | Yes | |
| └─ message | string | A human-readable message associated with the error. | Yes | |
| hate | AzureContentFilterSeverityResult | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | Yes | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | Yes | |
| personally_identifiable_information | AzureContentFilterPersonallyIdentifiableInformationResult | A content filter detection result for Personally Identifiable Information that includes harm extensions. | No | |
| └─ redacted_text | string | The redacted text with PII information removed or masked. | No | |
| └─ sub_categories | array of AzurePiiSubCategoryResult | Detailed results for individual PIIHarmSubCategory(s). | No | |
| profanity | AzureContentFilterDetectionResult | A labeled content filter result item that indicates whether the content was detected and whether the content was filtered. | No | |
| └─ detected | boolean | Whether the labeled content category was detected in the content. | Yes | |
| └─ filtered | boolean | Whether the content detection resulted in a content filtering action. | Yes | |
| protected_material_code | object | A detection result that describes a match against licensed code or other protected source material. | No | |
| └─ citation | object | If available, the citation details describing the associated license and its location. | No | |
| └─ URL | string | The URL associated with the license. | No | |
| └─ license | string | The name or identifier of the license associated with the detection. | No | |
| └─ detected | boolean | Whether the labeled content category was detected in the content. | Yes | |
| └─ filtered | boolean | Whether the content detection resulted in a content filtering action. | Yes | |
| protected_material_text | AzureContentFilterDetectionResult | A labeled content filter result item that indicates whether the content was detected and whether the content was filtered. | No | |
| └─ detected | boolean | Whether the labeled content category was detected in the content. | Yes | |
| └─ filtered | boolean | Whether the content detection resulted in a content filtering action. | Yes | |
| self_harm | AzureContentFilterSeverityResult | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | Yes | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | Yes | |
| sexual | AzureContentFilterSeverityResult | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | Yes | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | Yes | |
| ungrounded_material | AzureContentFilterCompletionTextSpanDetectionResult | | No | |
| violence | AzureContentFilterSeverityResult | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | Yes | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | Yes | |
AzureContentFilterResultForPrompt
A content filter result associated with a single input prompt item into a generative AI system.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content_filter_results | object | The content filter category details for the result. | No | |
| └─ custom_blocklists | AzureContentFilterBlocklistResult | A collection of true/false filtering results for configured custom blocklists. | No | |
| └─ details | array of object | The pairs of individual blocklist IDs and whether they resulted in a filtering action. | No | |
| └─ filtered | boolean | A value indicating whether the blocklist produced a filtering action. | Yes | |
| └─ id | string | The ID of the custom blocklist evaluated. | Yes | |
| └─ filtered | boolean | A value indicating whether any of the detailed blocklists resulted in a filtering action. | Yes | |
| └─ custom_topics | AzureContentFilterCustomTopicResult | A collection of true/false filtering results for configured custom topics. | No | |
| └─ details | array of object | The pairs of individual topic IDs and whether they are detected. | No | |
| └─ detected | boolean | A value indicating whether the topic is detected. | Yes | |
| └─ id | string | The ID of the custom topic evaluated. | Yes | |
| └─ filtered | boolean | A value indicating whether any of the detailed topics resulted in a filtering action. | Yes | |
| └─ error | object | If present, details about an error that prevented content filtering from completing its evaluation. | No | |
| └─ code | integer | A distinct, machine-readable code associated with the error. | Yes | |
| └─ message | string | A human-readable message associated with the error. | Yes | |
| └─ hate | AzureContentFilterSeverityResult | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | Yes | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | Yes | |
| └─ indirect_attack | AzureContentFilterDetectionResult | A labeled content filter result item that indicates whether the content was detected and whether the content was filtered. | Yes | |
| └─ detected | boolean | Whether the labeled content category was detected in the content. | Yes | |
| └─ filtered | boolean | Whether the content detection resulted in a content filtering action. | Yes | |
| └─ jailbreak | AzureContentFilterDetectionResult | A labeled content filter result item that indicates whether the content was detected and whether the content was filtered. | Yes | |
| └─ detected | boolean | Whether the labeled content category was detected in the content. | Yes | |
| └─ filtered | boolean | Whether the content detection resulted in a content filtering action. | Yes | |
| └─ profanity | AzureContentFilterDetectionResult | A labeled content filter result item that indicates whether the content was detected and whether the content was filtered. | No | |
| └─ detected | boolean | Whether the labeled content category was detected in the content. | Yes | |
| └─ filtered | boolean | Whether the content detection resulted in a content filtering action. | Yes | |
| └─ self_harm | AzureContentFilterSeverityResult | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | Yes | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | Yes | |
| └─ sexual | AzureContentFilterSeverityResult | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | Yes | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | Yes | |
| └─ violence | AzureContentFilterSeverityResult | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | Yes | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | Yes | |
| prompt_index | integer | The index of the input prompt associated with the accompanying content filter result categories. | No |
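As a hedged illustration of consuming this schema, the following Python sketch inspects a prompt filter result shaped like the table above. The payload values are invented for the example; only the field names come from the schema.

```python
# Hypothetical AzureContentFilterResultForPrompt payload; field names follow
# the schema above, values are invented for illustration.
sample_prompt_result = {
    "prompt_index": 0,
    "content_filter_results": {
        "jailbreak": {"detected": True, "filtered": True},
        "indirect_attack": {"detected": False, "filtered": False},
        "hate": {"filtered": False, "severity": "safe"},
    },
}

def prompt_was_blocked(result: dict) -> bool:
    """Return True if any category in the prompt result triggered filtering."""
    categories = result.get("content_filter_results", {})
    return any(v.get("filtered") for v in categories.values() if isinstance(v, dict))
```

The helper treats every category uniformly: both detection results and severity results expose a boolean `filtered` field, so a single scan suffices.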
AzureContentFilterResultOffsets
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| check_offset | integer | | Yes | |
| end_offset | integer | | Yes | |
| start_offset | integer | | Yes | |
AzureContentFilterResultsForResponsesAPI
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| custom_blocklists | AzureContentFilterBlocklistResult | A collection of true/false filtering results for configured custom blocklists. | No | |
| └─ details | array of object | The pairs of individual blocklist IDs and whether they resulted in a filtering action. | No | |
| └─ filtered | boolean | A value indicating whether the blocklist produced a filtering action. | Yes | |
| └─ id | string | The ID of the custom blocklist evaluated. | Yes | |
| └─ filtered | boolean | A value indicating whether any of the detailed blocklists resulted in a filtering action. | Yes | |
| custom_topics | AzureContentFilterCustomTopicResult | A collection of true/false filtering results for configured custom topics. | No | |
| └─ details | array of object | The pairs of individual topic IDs and whether they are detected. | No | |
| └─ detected | boolean | A value indicating whether the topic is detected. | Yes | |
| └─ id | string | The ID of the custom topic evaluated. | Yes | |
| └─ filtered | boolean | A value indicating whether any of the detailed topics resulted in a filtering action. | Yes | |
| error | object | If present, details about an error that prevented content filtering from completing its evaluation. | No | |
| └─ code | integer | A distinct, machine-readable code associated with the error. | Yes | |
| └─ message | string | A human-readable message associated with the error. | Yes | |
| hate | AzureContentFilterSeverityResult | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | Yes | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | Yes | |
| indirect_attack | AzureContentFilterDetectionResult | A labeled content filter result item that indicates whether the content was detected and whether the content was filtered. | No | |
| └─ detected | boolean | Whether the labeled content category was detected in the content. | Yes | |
| └─ filtered | boolean | Whether the content detection resulted in a content filtering action. | Yes | |
| jailbreak | AzureContentFilterDetectionResult | A labeled content filter result item that indicates whether the content was detected and whether the content was filtered. | Yes | |
| └─ detected | boolean | Whether the labeled content category was detected in the content. | Yes | |
| └─ filtered | boolean | Whether the content detection resulted in a content filtering action. | Yes | |
| personally_identifiable_information | AzureContentFilterPersonallyIdentifiableInformationResult | A content filter detection result for Personally Identifiable Information that includes harm extensions. | No | |
| └─ redacted_text | string | The redacted text with PII information removed or masked. | No | |
| └─ sub_categories | array of AzurePiiSubCategoryResult | Detailed results for individual PIIHarmSubCategory(s). | No | |
| profanity | AzureContentFilterDetectionResult | A labeled content filter result item that indicates whether the content was detected and whether the content was filtered. | No | |
| └─ detected | boolean | Whether the labeled content category was detected in the content. | Yes | |
| └─ filtered | boolean | Whether the content detection resulted in a content filtering action. | Yes | |
| protected_material_code | object | A detection result that describes a match against licensed code or other protected source material. | No | |
| └─ citation | object | If available, the citation details describing the associated license and its location. | No | |
| └─ URL | string | The URL associated with the license. | No | |
| └─ license | string | The name or identifier of the license associated with the detection. | No | |
| └─ detected | boolean | Whether the labeled content category was detected in the content. | Yes | |
| └─ filtered | boolean | Whether the content detection resulted in a content filtering action. | Yes | |
| protected_material_text | AzureContentFilterDetectionResult | A labeled content filter result item that indicates whether the content was detected and whether the content was filtered. | No | |
| └─ detected | boolean | Whether the labeled content category was detected in the content. | Yes | |
| └─ filtered | boolean | Whether the content detection resulted in a content filtering action. | Yes | |
| self_harm | AzureContentFilterSeverityResult | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | Yes | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | Yes | |
| sexual | AzureContentFilterSeverityResult | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | Yes | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | Yes | |
| task_adherence | AzureContentFilterDetectionResult | A labeled content filter result item that indicates whether the content was detected and whether the content was filtered. | Yes | |
| └─ detected | boolean | Whether the labeled content category was detected in the content. | Yes | |
| └─ filtered | boolean | Whether the content detection resulted in a content filtering action. | Yes | |
| ungrounded_material | AzureContentFilterCompletionTextSpanDetectionResult | | No | |
| violence | AzureContentFilterSeverityResult | A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category. | No | |
| └─ filtered | boolean | Whether the content severity resulted in a content filtering action. | Yes | |
| └─ severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | Yes | |
AzureContentFilterSeverityResult
A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| filtered | boolean | Whether the content severity resulted in a content filtering action. | Yes | |
| severity | enum | The labeled severity of the content. Possible values: safe, low, medium, high | Yes | |
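The severity values form a natural ordering from safe to high, so a client can compare a result against a threshold. A minimal sketch, assuming a result dict shaped like this schema:

```python
# Ordering assumed from the documented enum: safe < low < medium < high.
SEVERITY_ORDER = {"safe": 0, "low": 1, "medium": 2, "high": 3}

def at_or_above(result: dict, threshold: str) -> bool:
    """True if the labeled severity meets or exceeds the given threshold."""
    return SEVERITY_ORDER[result["severity"]] >= SEVERITY_ORDER[threshold]
```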
AzureFileExpiryAnchor
| Property | Value |
|---|---|
| Type | string |
| Values | created_at |
AzureFineTuneReinforcementMethod
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| grader | OpenAI.GraderStringCheck or OpenAI.GraderTextSimilarity or OpenAI.GraderScoreModel or OpenAI.GraderMulti or GraderEndpoint | | Yes | |
| hyperparameters | OpenAI.FineTuneReinforcementHyperparameters | The hyperparameters used for the reinforcement fine-tuning job. | No | |
| response_format | ResponseFormatJSONSchemaRequest | | No | |
| └─ json_schema | object | JSON Schema for the response format | Yes | |
| └─ type | enum | Type of response format. Possible values: json_schema | Yes | |
AzurePiiSubCategory
Configuration for individual PIIHarmSubCategory(s) within the harm extensions framework.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| detect | boolean | Whether detection is enabled for this subcategory. | Yes | |
| filter | boolean | Whether content containing this subcategory should be blocked. | Yes | |
| redact | boolean | Whether content containing this subcategory should be redacted. | Yes | |
| sub_category | string | The PIIHarmSubCategory being configured. | Yes |
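A hypothetical configuration entry matching the AzurePiiSubCategory schema, with a simple validity check. The subcategory name below is illustrative, not taken from the API's value list.

```python
# Illustrative AzurePiiSubCategory entry; keys mirror the table above.
pii_subcategory = {
    "sub_category": "email_address",  # placeholder subcategory name
    "detect": True,
    "filter": False,
    "redact": True,
}

def validate_pii_subcategory(cfg: dict) -> bool:
    """Check that all four required fields are present with the right types."""
    return (isinstance(cfg.get("sub_category"), str)
            and all(isinstance(cfg.get(k), bool) for k in ("detect", "filter", "redact")))
```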
AzurePiiSubCategoryResult
Result details for individual PIIHarmSubCategory(s).
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| detected | boolean | Whether the labeled content subcategory was detected in the content. | Yes | |
| filtered | boolean | Whether the content detection resulted in a content filtering action for this subcategory. | Yes | |
| redacted | boolean | Whether the content was redacted for this subcategory. | Yes | |
| sub_category | string | The PIIHarmSubCategory that was evaluated. | Yes |
AzureResponsesSamplingParams
Sampling parameters for controlling the behavior of responses.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| max_tokens | integer | The maximum number of tokens in the generated output. | No | |
| reasoning_effort | OpenAI.ReasoningEffort | Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. gpt-5.1 defaults to none, which does not perform reasoning; its supported values are none, low, medium, and high, and tool calls are supported for all of them. All models before gpt-5.1 default to medium reasoning effort and do not support none. The gpt-5-pro model defaults to (and only supports) high. xhigh is supported for all models after gpt-5.1-codex-max. | No | |
| seed | integer | A seed value initializes the randomness during sampling. | No | 42 |
| temperature | number | A higher temperature increases randomness in the outputs. | No | 1 |
| text | OpenAI.CreateEvalResponsesRunDataSourceSamplingParamsText | | No | |
| tools | array of OpenAI.Tool | | No | |
| top_p | number | An alternative to temperature for nucleus sampling; 1.0 includes all tokens. | No | 1 |
AzureUserSecurityContext
User security context contains several parameters that describe the application itself, and the end user that interacts with the application. These fields assist your security operations teams to investigate and mitigate security incidents by providing a comprehensive approach to protecting your AI applications. Learn more about protecting AI applications using Microsoft Defender for Cloud.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| application_name | string | The name of the application. Sensitive personal information should not be included in this field. | No | |
| end_user_id | string | This identifier is the Microsoft Entra ID (formerly Azure Active Directory) user object ID used to authenticate end-users within the generative AI application. Sensitive personal information should not be included in this field. | No | |
| end_user_tenant_id | string | The Microsoft 365 tenant ID the end user belongs to. It's required when the generative AI application is multitenant. | No | |
| source_ip | string | Captures the original client's IP address. | No |
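A sketch of assembling this object in Python, omitting optional fields that are not supplied. All values here are placeholders; only the field names come from the schema.

```python
# Build an AzureUserSecurityContext-shaped dict, skipping absent fields.
def build_user_security_context(app_name, user_object_id=None,
                                tenant_id=None, source_ip=None) -> dict:
    ctx = {"application_name": app_name}
    if user_object_id:
        ctx["end_user_id"] = user_object_id
    if tenant_id:
        ctx["end_user_tenant_id"] = tenant_id
    if source_ip:
        ctx["source_ip"] = source_ip
    return ctx
```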
CopiedAccountDetails
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| destinationResourceId | string | The ID of the destination resource where the model was copied to. | Yes | |
| region | string | The region where the model was copied to. | Yes | |
| status | enum | The status of the copy operation. Possible values: Completed, Failed, InProgress | Yes | |
CopyModelRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| destinationResourceId | string | The ID of the destination Resource to copy. | Yes | |
| region | string | The region to copy the model to. | Yes |
CopyModelResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| checkpointedModelName | string | The ID of the copied model. | Yes | |
| copiedAccountDetails | array of CopiedAccountDetails | The details of the destination resources the model was copied to. | Yes | |
| fineTuningJobId | string | The ID of the fine-tuning job that the checkpoint was copied from. | Yes |
CreateVideoBody
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| model | string | The name of the deployment to use for this request. | Yes | |
| prompt | string | Text prompt that describes the video to generate. Constraints: minLength: 1 | Yes | |
| seconds | VideoSeconds | Supported clip durations, measured in seconds. | No | 4 |
| size | VideoSize | Output dimensions formatted as {width}x{height}. | No | 720x1280 |
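A hedged sketch of building a CreateVideoBody payload that applies the documented defaults (seconds: 4, size: 720x1280) and the minLength constraint on the prompt. The deployment name is a placeholder.

```python
# Assemble a CreateVideoBody-shaped dict with the documented defaults.
def make_video_body(model: str, prompt: str, seconds: int = 4,
                    size: str = "720x1280") -> dict:
    if len(prompt) < 1:
        raise ValueError("prompt must satisfy minLength: 1")
    return {"model": model, "prompt": prompt, "seconds": seconds, "size": size}
```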
CreateVideoBodyWithInputReference
The properties of a video generation job request with media files.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input_reference | object | Optional image reference that guides generation. | Yes | |
| model | object | The name of the deployment to use for this request. | Yes | |
| prompt | object | Text prompt that describes the video to generate. | Yes | |
| seconds | object | Clip duration in seconds. Defaults to 4 seconds. | No | |
| size | object | Output resolution formatted as width x height. Defaults to 720x1280. | No |
CreateVideoRemixBody
Parameters for remixing an existing generated video.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| prompt | string | Updated text prompt that directs the remix generation. Constraints: minLength: 1 | Yes | |
DeletedVideoResource
Confirmation payload returned after deleting a video.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| deleted | boolean | Indicates that the video resource was deleted. | Yes | True |
| id | string | Identifier of the deleted video. | Yes | |
| object | string | The object type that signals the deletion response. | Yes | video.deleted |
Error
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| code | string | | Yes | |
| message | string | | Yes | |
EvalGraderEndpoint
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| headers | object or null | Optional HTTP headers to include in requests to the endpoint | No | |
| name | string | The name of the grader | Yes | |
| pass_threshold | number or null | Optional threshold score above which the grade is considered passing. If not specified, all scores are considered valid. | No | |
| rate_limit | integer or null | Optional rate limit for requests per second to the endpoint. Must be a positive integer. | No | |
| type | enum | Possible values: endpoint | Yes | |
| url | string | The HTTPS URL of the endpoint to call for grading. Constraints: pattern: ^https:// | Yes | |
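A sketch of constructing this grader configuration while enforcing the documented ^https:// pattern and the positive-integer rate_limit constraint. The grader name and URL are illustrative.

```python
import re

# Build an EvalGraderEndpoint-shaped dict, validating the documented constraints.
def make_endpoint_grader(name: str, url: str, pass_threshold=None,
                         rate_limit=None) -> dict:
    if not re.match(r"^https://", url):
        raise ValueError("url must match the pattern ^https://")
    grader = {"type": "endpoint", "name": name, "url": url}
    if pass_threshold is not None:
        grader["pass_threshold"] = pass_threshold
    if rate_limit is not None:
        if not (isinstance(rate_limit, int) and rate_limit > 0):
            raise ValueError("rate_limit must be a positive integer")
        grader["rate_limit"] = rate_limit
    return grader
```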
GraderEndpoint
Endpoint grader configuration for external HTTP endpoint evaluation
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| headers | object or null | Optional HTTP headers to include in requests to the endpoint | No | |
| name | string | The name of the grader | Yes | |
| pass_threshold | number or null | Optional threshold score above which the grade is considered passing. If not specified, all scores are considered valid. | No | |
| rate_limit | integer or null | Optional rate limit for requests per second to the endpoint. Must be a positive integer. | No | |
| type | enum | Possible values: endpoint | Yes | |
| url | string | The HTTPS URL of the endpoint to call for grading. Constraints: pattern: ^https:// | Yes | |
OpenAI.Annotation
An annotation that applies to a span of output text.
Discriminator for OpenAI.Annotation
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| file_citation | OpenAI.FileCitationBody |
| url_citation | OpenAI.UrlCitationBody |
| container_file_citation | OpenAI.ContainerFileCitationBody |
| file_path | OpenAI.FilePath |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.AnnotationType | Yes |
OpenAI.AnnotationType
| Property | Value |
|---|---|
| Type | string |
| Values | file_citation, url_citation, container_file_citation, file_path |
OpenAI.ApplyPatchCallOutputStatus
| Property | Value |
|---|---|
| Type | string |
| Values | completed, failed |
OpenAI.ApplyPatchCallStatus
| Property | Value |
|---|---|
| Type | string |
| Values | in_progress, completed |
OpenAI.ApplyPatchCreateFileOperation
Instruction describing how to create a file via the apply_patch tool.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| diff | string | Diff to apply. | Yes | |
| path | string | Path of the file to create. | Yes | |
| type | enum | Create a new file with the provided diff. Possible values: create_file | Yes | |
OpenAI.ApplyPatchDeleteFileOperation
Instruction describing how to delete a file via the apply_patch tool.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| path | string | Path of the file to delete. | Yes | |
| type | enum | Delete the specified file. Possible values: delete_file | Yes | |
OpenAI.ApplyPatchFileOperation
One of the create_file, delete_file, or update_file operations applied via apply_patch.
Discriminator for OpenAI.ApplyPatchFileOperation
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| create_file | OpenAI.ApplyPatchCreateFileOperation |
| delete_file | OpenAI.ApplyPatchDeleteFileOperation |
| update_file | OpenAI.ApplyPatchUpdateFileOperation |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.ApplyPatchFileOperationType | Yes |
OpenAI.ApplyPatchFileOperationType
| Property | Value |
|---|---|
| Type | string |
| Values | create_file, delete_file, update_file |
OpenAI.ApplyPatchToolParam
Allows the assistant to create, delete, or update files using unified diffs.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The type of the tool. Always apply_patch. Possible values: apply_patch | Yes | |
OpenAI.ApplyPatchUpdateFileOperation
Instruction describing how to update a file via the apply_patch tool.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| diff | string | Diff to apply. | Yes | |
| path | string | Path of the file to update. | Yes | |
| type | enum | Update an existing file with the provided diff. Possible values: update_file | Yes | |
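The three apply_patch operations share the type discriminator, so a client can dispatch on it. An illustrative sketch; the handler bodies are placeholders, not part of the API.

```python
# Dispatch on the apply_patch file-operation discriminator (type).
def describe_operation(op: dict) -> str:
    kind = op["type"]
    if kind == "create_file":
        return f"create {op['path']} from diff"
    if kind == "delete_file":
        return f"delete {op['path']}"
    if kind == "update_file":
        return f"update {op['path']} with diff"
    raise ValueError(f"unknown apply_patch operation type: {kind}")
```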
OpenAI.ApproximateLocation
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| city | string or null | No | ||
| country | string or null | No | ||
| region | string or null | No | ||
| timezone | string or null | No | ||
| type | enum | The type of location approximation. Always approximate. Possible values: approximate | Yes | |
OpenAI.AssistantTool
Discriminator for OpenAI.AssistantTool
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| code_interpreter | OpenAI.AssistantToolsCode |
| file_search | OpenAI.AssistantToolsFileSearch |
| function | OpenAI.AssistantToolsFunction |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.AssistantToolType | Yes |
OpenAI.AssistantToolType
| Property | Value |
|---|---|
| Type | string |
| Values | code_interpreter, file_search, function |
OpenAI.AssistantToolsCode
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The type of tool being defined: code_interpreter. Possible values: code_interpreter | Yes | |
OpenAI.AssistantToolsFileSearch
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_search | OpenAI.AssistantToolsFileSearchFileSearch | No | ||
| └─ max_num_results | integer | Constraints: min: 1, max: 50 | No | |
| └─ ranking_options | OpenAI.FileSearchRankingOptions | The ranking options for the file search. If not specified, the file search tool will use the auto ranker and a score_threshold of 0. See the file search tool documentation for more information. | No | |
| type | enum | The type of tool being defined: file_search. Possible values: file_search | Yes | |
OpenAI.AssistantToolsFileSearchFileSearch
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| max_num_results | integer | Constraints: min: 1, max: 50 | No | |
| ranking_options | OpenAI.FileSearchRankingOptions | The ranking options for the file search. If not specified, the file search tool will use the auto ranker and a score_threshold of 0. See the file search tool documentation for more information. | No | |
OpenAI.AssistantToolsFileSearchTypeOnly
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The type of tool being defined: file_search. Possible values: file_search | Yes | |
OpenAI.AssistantToolsFunction
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| function | OpenAI.FunctionObject | Yes | ||
| type | enum | The type of tool being defined: function. Possible values: function | Yes | |
OpenAI.AssistantsApiResponseFormatOption
Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.
Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.
Setting to { "type": "json_object" } enables JSON mode, which ensures the message the model generates is valid JSON.
*Important:* when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.
Type: string or OpenAI.ResponseFormatText or OpenAI.ResponseFormatJsonObject or OpenAI.ResponseFormatJsonSchema
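As a hedged illustration, the two response-format shapes described above can be written as plain dicts. The schema name and fields under json_schema are invented for the example.

```python
# JSON mode: the model must emit valid JSON, but no particular schema.
json_mode = {"type": "json_object"}

# Structured Outputs: the model must match the supplied JSON schema.
structured_output = {
    "type": "json_schema",
    "json_schema": {
        "name": "city_info",  # illustrative schema name
        "schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}
```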
OpenAI.AssistantsApiToolChoiceOption
Controls which (if any) tool is called by the model.
none means the model will not call any tools and instead generates a message.
auto is the default value and means the model can pick between generating a message or calling one or more tools.
required means the model must call one or more tools before responding to the user.
Specifying a particular tool like {"type": "file_search"} or {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.
Type: string or OpenAI.AssistantsNamedToolChoice
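The accepted tool_choice values above can be sketched as payload fragments (these mirror the option list; no request is sent):

```python
# String forms of tool_choice:
tool_choice_none = "none"          # never call tools; generate a message
tool_choice_auto = "auto"          # default: model decides
tool_choice_required = "required"  # must call one or more tools

# Named-tool-choice objects force a specific tool:
force_file_search = {"type": "file_search"}
force_my_function = {"type": "function", "function": {"name": "my_function"}}
```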
OpenAI.AssistantsNamedToolChoice
Specifies a tool the model should use. Use to force the model to call a specific tool.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| function | OpenAI.AssistantsNamedToolChoiceFunction | | No | |
| type | enum | The type of the tool. If type is function, the function name must be set.<br>Possible values: function, code_interpreter, file_search | Yes | |
OpenAI.AssistantsNamedToolChoiceFunction
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| name | string | Yes |
OpenAI.AudioTranscription
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| language | string | The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency. | No | |
| model | string | The model to use for transcription. Current options are whisper-1, gpt-4o-mini-transcribe, gpt-4o-mini-transcribe-2025-12-15, gpt-4o-transcribe, and gpt-4o-transcribe-diarize. Use gpt-4o-transcribe-diarize when you need diarization with speaker labels. | No | |
| prompt | string | An optional text to guide the model's style or continue a previous audio segment. For whisper-1, the prompt is a list of keywords. For gpt-4o-transcribe models (excluding gpt-4o-transcribe-diarize), the prompt is a free text string, for example "expect words related to technology". | No | |
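A sketch of an OpenAI.AudioTranscription parameter set using the fields above (the values are assumptions; the audio file itself is sent separately as multipart form data):

```python
# Transcription parameters for a whisper-1 request (hypothetical values).
transcription_params = {
    "model": "whisper-1",
    "language": "en",  # ISO-639-1 code; improves accuracy and latency
    # For whisper-1 the prompt acts as a keyword list:
    "prompt": "Azure, OpenAI, diarization",
}
```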
OpenAI.AutoChunkingStrategyRequestParam
The default strategy. This strategy currently uses a max_chunk_size_tokens of 800 and chunk_overlap_tokens of 400.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Always auto.<br>Possible values: auto | Yes | |
OpenAI.Batch
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| cancelled_at | integer | The Unix timestamp (in seconds) for when the batch was cancelled. | No | |
| cancelling_at | integer | The Unix timestamp (in seconds) for when the batch started cancelling. | No | |
| completed_at | integer | The Unix timestamp (in seconds) for when the batch was completed. | No | |
| completion_window | string | The time frame within which the batch should be processed. | Yes | |
| created_at | integer | The Unix timestamp (in seconds) for when the batch was created. | Yes | |
| endpoint | string | The OpenAI API endpoint used by the batch. | Yes | |
| error_file_id | string | The ID of the file containing the outputs of requests with errors. | No | |
| errors | OpenAI.BatchErrors | No | ||
| expired_at | integer | The Unix timestamp (in seconds) for when the batch expired. | No | |
| expires_at | integer | The Unix timestamp (in seconds) for when the batch will expire. | No | |
| failed_at | integer | The Unix timestamp (in seconds) for when the batch failed. | No | |
| finalizing_at | integer | The Unix timestamp (in seconds) for when the batch started finalizing. | No | |
| id | string | Yes | ||
| in_progress_at | integer | The Unix timestamp (in seconds) for when the batch started processing. | No | |
| input_file_id | string or null | No | ||
| metadata | OpenAI.Metadata or null | No | ||
| model | string | Model ID used to process the batch, like gpt-5-2025-08-07. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models. | No | |
| object | enum | The object type, which is always batch.<br>Possible values: batch | Yes | |
| output_file_id | string | The ID of the file containing the outputs of successfully executed requests. | No | |
| request_counts | OpenAI.BatchRequestCounts | The request counts for different statuses within the batch. | No | |
| status | enum | The current status of the batch.<br>Possible values: validating, failed, in_progress, finalizing, completed, expired, cancelling, cancelled | Yes | |
| usage | OpenAI.BatchUsage | No | ||
| └─ input_tokens | integer | Yes | ||
| └─ input_tokens_details | OpenAI.BatchUsageInputTokensDetails | Yes | ||
| └─ output_tokens | integer | Yes | ||
| └─ output_tokens_details | OpenAI.BatchUsageOutputTokensDetails | Yes | ||
| └─ total_tokens | integer | Yes |
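A batch's input file is a .jsonl file with one request per line. A minimal sketch of one such line for the /v1/chat/completions endpoint (the custom_id, model, and message content are hypothetical):

```python
import json

# One line of a batch input .jsonl file.
batch_line = {
    "custom_id": "request-1",          # your own correlation ID
    "method": "POST",
    "url": "/v1/chat/completions",     # must match the batch's endpoint
    "body": {
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Say hello."}],
    },
}
jsonl_line = json.dumps(batch_line)
```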
OpenAI.BatchError
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| code | string | An error code identifying the error type. | No | |
| line | integer or null | No | ||
| message | string | A human-readable message providing more details about the error. | No | |
| param | string or null | No |
OpenAI.BatchErrors
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array of OpenAI.BatchError | | No | |
| object | string | | No | |
OpenAI.BatchRequestCounts
The request counts for different statuses within the batch.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| completed | integer | Number of requests that have been completed successfully. | Yes | |
| failed | integer | Number of requests that have failed. | Yes | |
| total | integer | Total number of requests in the batch. | Yes |
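The request_counts fields above can be combined into a simple progress measure; a sketch (the sample numbers are made up):

```python
def batch_progress(request_counts: dict) -> float:
    """Fraction of requests finished (completed + failed) out of total."""
    total = request_counts["total"]
    if total == 0:
        return 0.0
    return (request_counts["completed"] + request_counts["failed"]) / total

counts = {"completed": 75, "failed": 5, "total": 100}
```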
OpenAI.BatchUsage
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input_tokens | integer | Yes | ||
| input_tokens_details | OpenAI.BatchUsageInputTokensDetails | Yes | ||
| output_tokens | integer | Yes | ||
| output_tokens_details | OpenAI.BatchUsageOutputTokensDetails | Yes | ||
| total_tokens | integer | Yes |
OpenAI.BatchUsageInputTokensDetails
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| cached_tokens | integer | Yes |
OpenAI.BatchUsageOutputTokensDetails
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| reasoning_tokens | integer | Yes |
OpenAI.ChatCompletionAllowedTools
Constrains the tools available to the model to a predefined set.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| mode | enum | Constrains the tools available to the model to a predefined set. auto allows the model to pick from among the allowed tools and generate a message. required requires the model to call one or more of the allowed tools.<br>Possible values: auto, required | Yes | |
| tools | array of object | A list of tool definitions that the model should be allowed to call. For the Chat Completions API, the list of tool definitions might look like: [{ "type": "function", "function": { "name": "get_weather" } }, { "type": "function", "function": { "name": "get_time" } }] | Yes | |
OpenAI.ChatCompletionAllowedToolsChoice
Constrains the tools available to the model to a predefined set.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| allowed_tools | OpenAI.ChatCompletionAllowedTools | Constrains the tools available to the model to a predefined set. | Yes | |
| type | enum | Allowed tool configuration type. Always allowed_tools.<br>Possible values: allowed_tools | Yes | |
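A sketch of a tool_choice using the allowed_tools configuration described above (the tool names are hypothetical):

```python
# Restrict the model to a predefined subset of tools.
tool_choice = {
    "type": "allowed_tools",
    "allowed_tools": {
        "mode": "auto",  # or "required" to force a call to one of these tools
        "tools": [
            {"type": "function", "function": {"name": "get_weather"}},
            {"type": "function", "function": {"name": "get_time"}},
        ],
    },
}
```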
OpenAI.ChatCompletionFunctionCallOption
Specifying a particular function via {"name": "my_function"} forces the model to call that function.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| name | string | The name of the function to call. | Yes |
OpenAI.ChatCompletionFunctions
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| description | string | A description of what the function does, used by the model to choose when and how to call the function. | No | |
| name | string | The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64. | Yes | |
| parameters | OpenAI.FunctionParameters | The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format. Omitting parameters defines a function with an empty parameter list. | No | |
OpenAI.ChatCompletionMessageCustomToolCall
A call to a custom tool created by the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| custom | OpenAI.ChatCompletionMessageCustomToolCallCustom | Yes | ||
| └─ input | string | Yes | ||
| └─ name | string | Yes | ||
| id | string | The ID of the tool call. | Yes | |
| type | enum | The type of the tool. Always custom.<br>Possible values: custom | Yes | |
OpenAI.ChatCompletionMessageCustomToolCallCustom
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input | string | Yes | ||
| name | string | Yes |
OpenAI.ChatCompletionMessageToolCall
A call to a function tool created by the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| function | OpenAI.ChatCompletionMessageToolCallFunction | Yes | ||
| └─ arguments | string | Yes | ||
| └─ name | string | Yes | ||
| id | string | The ID of the tool call. | Yes | |
| type | enum | The type of the tool. Currently, only function is supported.<br>Possible values: function | Yes | |
OpenAI.ChatCompletionMessageToolCallChunk
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| function | OpenAI.ChatCompletionMessageToolCallChunkFunction | No | ||
| id | string | The ID of the tool call. | No | |
| index | integer | Yes | ||
| type | enum | The type of the tool. Currently, only function is supported.<br>Possible values: function | No | |
OpenAI.ChatCompletionMessageToolCallChunkFunction
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| arguments | string | No | ||
| name | string | No |
OpenAI.ChatCompletionMessageToolCallFunction
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| arguments | string | Yes | ||
| name | string | Yes |
OpenAI.ChatCompletionMessageToolCalls
The tool calls generated by the model, such as function calls.
OpenAI.ChatCompletionMessageToolCallsItem
The tool calls generated by the model, such as function calls.
OpenAI.ChatCompletionNamedToolChoice
Specifies a tool the model should use. Use to force the model to call a specific function.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| function | OpenAI.ChatCompletionNamedToolChoiceFunction | Yes | ||
| type | enum | For function calling, the type is always function.<br>Possible values: function | Yes | |
OpenAI.ChatCompletionNamedToolChoiceCustom
Specifies a tool the model should use. Use to force the model to call a specific custom tool.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| custom | OpenAI.ChatCompletionNamedToolChoiceCustomCustom | Yes | ||
| type | enum | For custom tool calling, the type is always custom.<br>Possible values: custom | Yes | |
OpenAI.ChatCompletionNamedToolChoiceCustomCustom
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| name | string | Yes |
OpenAI.ChatCompletionNamedToolChoiceFunction
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| name | string | Yes |
OpenAI.ChatCompletionRequestAssistantMessage
Messages sent by the model in response to user messages.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| audio | OpenAI.ChatCompletionRequestAssistantMessageAudio or null | Data about a previous audio response from the model. | No | |
| content | string or array of OpenAI.ChatCompletionRequestAssistantMessageContentPart or null | No | ||
| function_call | OpenAI.ChatCompletionRequestAssistantMessageFunctionCall or null | No | ||
| name | string | An optional name for the participant. Provides the model information to differentiate between participants of the same role. | No | |
| refusal | string or null | No | ||
| role | enum | The role of the messages author, in this case assistant.<br>Possible values: assistant | Yes | |
| tool_calls | OpenAI.ChatCompletionMessageToolCalls | The tool calls generated by the model, such as function calls. | No |
OpenAI.ChatCompletionRequestAssistantMessageAudio
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| id | string | Yes |
OpenAI.ChatCompletionRequestAssistantMessageContentPart
Discriminator for OpenAI.ChatCompletionRequestAssistantMessageContentPart
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| refusal | OpenAI.ChatCompletionRequestMessageContentPartRefusal |
| text | OpenAI.ChatCompletionRequestAssistantMessageContentPartChatCompletionRequestMessageContentPartText |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.ChatCompletionRequestAssistantMessageContentPartType | Yes |
OpenAI.ChatCompletionRequestAssistantMessageContentPartChatCompletionRequestMessageContentPartText
Learn about text inputs.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| text | string | The text content. | Yes | |
| type | enum | The type of the content part.<br>Possible values: text | Yes | |
OpenAI.ChatCompletionRequestAssistantMessageContentPartType
| Property | Value |
|---|---|
| Type | string |
| Values | text, refusal |
OpenAI.ChatCompletionRequestAssistantMessageFunctionCall
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| arguments | string | Yes | ||
| name | string | Yes |
OpenAI.ChatCompletionRequestDeveloperMessage
Developer-provided instructions that the model should follow, regardless of
messages sent by the user. With o1 models and newer, developer messages
replace the previous system messages.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | string or array of OpenAI.ChatCompletionRequestMessageContentPartText | The contents of the developer message. | Yes | |
| name | string | An optional name for the participant. Provides the model information to differentiate between participants of the same role. | No | |
| role | enum | The role of the messages author, in this case developer.<br>Possible values: developer | Yes | |
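A sketch of a message list using the developer role described above (the instruction text is hypothetical):

```python
# With o1 models and newer, the developer role replaces the system role for
# instructions the model should follow regardless of user messages.
messages = [
    {"role": "developer", "content": "Answer in formal English only."},
    {"role": "user", "content": "What is the capital of Denmark?"},
]
```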
OpenAI.ChatCompletionRequestFunctionMessage
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | string or null | Yes | ||
| name | string | The name of the function to call. | Yes | |
| role | enum | The role of the messages author, in this case function.<br>Possible values: function | Yes | |
OpenAI.ChatCompletionRequestMessage
Discriminator for OpenAI.ChatCompletionRequestMessage
This component uses the property role to discriminate between different types:
| Type Value | Schema |
|---|---|
| assistant | OpenAI.ChatCompletionRequestAssistantMessage |
| developer | OpenAI.ChatCompletionRequestDeveloperMessage |
| function | OpenAI.ChatCompletionRequestFunctionMessage |
| system | OpenAI.ChatCompletionRequestSystemMessage |
| user | OpenAI.ChatCompletionRequestUserMessage |
| tool | OpenAI.ChatCompletionRequestToolMessage |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| role | OpenAI.ChatCompletionRequestMessageType | Yes |
OpenAI.ChatCompletionRequestMessageContentPartAudio
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input_audio | OpenAI.ChatCompletionRequestMessageContentPartAudioInputAudio | Yes | ||
| type | enum | The type of the content part. Always input_audio.<br>Possible values: input_audio | Yes | |
OpenAI.ChatCompletionRequestMessageContentPartAudioInputAudio
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | string | Yes | ||
| format | enum | Possible values: wav, mp3 | Yes | |
OpenAI.ChatCompletionRequestMessageContentPartFile
Learn about file inputs for text generation.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file | OpenAI.ChatCompletionRequestMessageContentPartFileFile | Yes | ||
| └─ file_data | string | No | ||
| └─ file_id | string | No | ||
| └─ filename | string | No | ||
| type | enum | The type of the content part. Always file.<br>Possible values: file | Yes | |
OpenAI.ChatCompletionRequestMessageContentPartFileFile
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_data | string | No | ||
| file_id | string | No | ||
| filename | string | No |
OpenAI.ChatCompletionRequestMessageContentPartImage
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| image_url | OpenAI.ChatCompletionRequestMessageContentPartImageImageUrl | Yes | ||
| type | enum | The type of the content part.<br>Possible values: image_url | Yes | |
OpenAI.ChatCompletionRequestMessageContentPartImageImageUrl
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| detail | enum | Possible values: auto, low, high | No | |
| url | string | Yes |
OpenAI.ChatCompletionRequestMessageContentPartRefusal
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| refusal | string | The refusal message generated by the model. | Yes | |
| type | enum | The type of the content part.<br>Possible values: refusal | Yes | |
OpenAI.ChatCompletionRequestMessageContentPartText
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| text | string | The text content. | Yes | |
| type | enum | The type of the content part.<br>Possible values: text | Yes | |
OpenAI.ChatCompletionRequestMessageType
| Property | Value |
|---|---|
| Type | string |
| Values | developer, system, user, assistant, tool, function |
OpenAI.ChatCompletionRequestSystemMessage
Developer-provided instructions that the model should follow, regardless of
messages sent by the user. With o1 models and newer, use developer messages
for this purpose instead.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | string or array of OpenAI.ChatCompletionRequestSystemMessageContentPart | The contents of the system message. | Yes | |
| name | string | An optional name for the participant. Provides the model information to differentiate between participants of the same role. | No | |
| role | enum | The role of the messages author, in this case system.<br>Possible values: system | Yes | |
OpenAI.ChatCompletionRequestSystemMessageContentPart
References: OpenAI.ChatCompletionRequestMessageContentPartText
OpenAI.ChatCompletionRequestToolMessage
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | string or array of OpenAI.ChatCompletionRequestToolMessageContentPart | The contents of the tool message. | Yes | |
| role | enum | The role of the messages author, in this case tool.<br>Possible values: tool | Yes | |
| tool_call_id | string | Tool call that this message is responding to. | Yes |
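The tool message answers an earlier tool call via tool_call_id. A sketch of that round trip (the call ID, function name, and values are hypothetical):

```python
# Assistant message carrying a tool call...
assistant_msg = {
    "role": "assistant",
    "tool_calls": [{
        "id": "call_abc123",
        "type": "function",
        "function": {
            "name": "get_weather",
            "arguments": '{"city": "Copenhagen"}',
        },
    }],
}

# ...and the tool message responding to it, matched by tool_call_id.
tool_msg = {
    "role": "tool",
    "tool_call_id": "call_abc123",
    "content": '{"temp_c": 12}',
}
```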
OpenAI.ChatCompletionRequestToolMessageContentPart
References: OpenAI.ChatCompletionRequestMessageContentPartText
OpenAI.ChatCompletionRequestUserMessage
Messages sent by an end user, containing prompts or additional context information.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | string or array of OpenAI.ChatCompletionRequestUserMessageContentPart | The contents of the user message. | Yes | |
| name | string | An optional name for the participant. Provides the model information to differentiate between participants of the same role. | No | |
| role | enum | The role of the messages author, in this case user.<br>Possible values: user | Yes | |
OpenAI.ChatCompletionRequestUserMessageContentPart
Discriminator for OpenAI.ChatCompletionRequestUserMessageContentPart
This component uses the property type to discriminate between different types:
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.ChatCompletionRequestUserMessageContentPartType | Yes |
OpenAI.ChatCompletionRequestUserMessageContentPartChatCompletionRequestMessageContentPartText
Learn about text inputs.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| text | string | The text content. | Yes | |
| type | enum | The type of the content part.<br>Possible values: text | Yes | |
OpenAI.ChatCompletionRequestUserMessageContentPartType
| Property | Value |
|---|---|
| Type | string |
| Values | text, image_url, input_audio, file |
OpenAI.ChatCompletionResponseMessage
A chat completion message generated by the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| annotations | array of OpenAI.ChatCompletionResponseMessageAnnotations | Annotations for the message, when applicable, as when using the web search tool. | No | |
| audio | OpenAI.ChatCompletionResponseMessageAudio or null | No | ||
| content | string or null | Yes | ||
| function_call | OpenAI.ChatCompletionResponseMessageFunctionCall | No | ||
| └─ arguments | string | Yes | ||
| └─ name | string | Yes | ||
| reasoning_content | string | An Azure-specific extension property containing generated reasoning content from supported models. | No | |
| refusal | string or null | Yes | ||
| role | enum | The role of the author of this message.<br>Possible values: assistant | Yes | |
| tool_calls | OpenAI.ChatCompletionMessageToolCallsItem | The tool calls generated by the model, such as function calls. | No |
OpenAI.ChatCompletionResponseMessageAnnotations
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: url_citation | Yes | |
| url_citation | OpenAI.ChatCompletionResponseMessageAnnotationsUrlCitation | | Yes | |
OpenAI.ChatCompletionResponseMessageAnnotationsUrlCitation
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| end_index | integer | Yes | ||
| start_index | integer | Yes | ||
| title | string | Yes | ||
| url | string | Yes |
OpenAI.ChatCompletionResponseMessageAudio
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | string | Yes | ||
| expires_at | integer | Yes | ||
| id | string | Yes | ||
| transcript | string | Yes |
OpenAI.ChatCompletionResponseMessageFunctionCall
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| arguments | string | Yes | ||
| name | string | Yes |
OpenAI.ChatCompletionStreamOptions
Options for streaming response. Only set this when you set stream: true.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| include_obfuscation | boolean | When true, stream obfuscation will be enabled. Stream obfuscation adds random characters to an obfuscation field on streaming delta events to normalize payload sizes as a mitigation to certain side-channel attacks. These obfuscation fields are included by default, but add a small amount of overhead to the data stream. You can set include_obfuscation to false to optimize for bandwidth if you trust the network links between your application and the OpenAI API. | No | |
| include_usage | boolean | If set, an additional chunk will be streamed before the data: [DONE] message. The usage field on this chunk shows the token usage statistics for the entire request, and the choices field will always be an empty array. All other chunks will also include a usage field, but with a null value. NOTE: If the stream is interrupted, you may not receive the final usage chunk which contains the total token usage for the request. | No | |
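A sketch of a request fragment using the stream options above (only meaningful alongside stream: true; no request is sent):

```python
# Request usage totals on a streamed chat completion.
request_fragment = {
    "stream": True,
    "stream_options": {
        "include_usage": True,         # final chunk before data: [DONE] carries usage
        "include_obfuscation": False,  # skip padding if you trust the network path
    },
}
```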
OpenAI.ChatCompletionStreamResponseDelta
A chat completion delta generated by streamed model responses.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | string or null | No | ||
| function_call | OpenAI.ChatCompletionStreamResponseDeltaFunctionCall | No | ||
| └─ arguments | string | No | ||
| └─ name | string | No | ||
| reasoning_content | string | An Azure-specific extension property containing generated reasoning content from supported models. | No | |
| refusal | string or null | No | ||
| role | enum | The role of the author of this message.<br>Possible values: developer, system, user, assistant, tool | No | |
| tool_calls | array of OpenAI.ChatCompletionMessageToolCallChunk | No |
OpenAI.ChatCompletionStreamResponseDeltaFunctionCall
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| arguments | string | No | ||
| name | string | No |
OpenAI.ChatCompletionTokenLogprob
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| bytes | array of integer or null | Yes | ||
| logprob | number | The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely. | Yes | |
| token | string | The token. | Yes | |
| top_logprobs | array of OpenAI.ChatCompletionTokenLogprobTopLogprobs | List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned. | Yes | |
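A sketch of converting a ChatCompletionTokenLogprob entry into a probability, using the -9999.0 sentinel documented above (the sample entry is made up):

```python
import math

# Sample logprob entry as returned per token position.
entry = {"token": "hello", "logprob": -0.1, "bytes": [104, 101, 108, 108, 111]}

def token_probability(logprob: float) -> float:
    """Map a log probability to a probability; -9999.0 marks a very unlikely token."""
    return 0.0 if logprob <= -9999.0 else math.exp(logprob)
```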
OpenAI.ChatCompletionTokenLogprobTopLogprobs
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| bytes | array of integer or null | Yes | ||
| logprob | number | Yes | ||
| token | string | Yes |
OpenAI.ChatCompletionTool
A function tool that can be used to generate a response.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| function | OpenAI.FunctionObject | Yes | ||
| type | enum | The type of the tool. Currently, only function is supported.<br>Possible values: function | Yes | |
OpenAI.ChatCompletionToolChoiceOption
Controls which (if any) tool is called by the model.
none means the model will not call any tool and instead generates a message.
auto means the model can pick between generating a message or calling one or more tools.
required means the model must call one or more tools.
Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.
none is the default when no tools are present. auto is the default if tools are present.
Type: string or OpenAI.ChatCompletionAllowedToolsChoice or OpenAI.ChatCompletionNamedToolChoice or OpenAI.ChatCompletionNamedToolChoiceCustom
OpenAI.ChunkingStrategyRequestParam
The chunking strategy used to chunk the file(s). If not set, will use the auto strategy. Only applicable if file_ids is non-empty.
Discriminator for OpenAI.ChunkingStrategyRequestParam
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| auto | OpenAI.AutoChunkingStrategyRequestParam |
| static | OpenAI.StaticChunkingStrategyRequestParam |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.ChunkingStrategyRequestParamType | Yes |
OpenAI.ChunkingStrategyRequestParamType
| Property | Value |
|---|---|
| Type | string |
| Values | auto, static |
OpenAI.ChunkingStrategyResponse
The strategy used to chunk the file.
Discriminator for OpenAI.ChunkingStrategyResponse
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| static | OpenAI.StaticChunkingStrategyResponseParam |
| other | OpenAI.OtherChunkingStrategyResponseParam |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.ChunkingStrategyResponseType | Yes |
OpenAI.ChunkingStrategyResponseType
| Property | Value |
|---|---|
| Type | string |
| Values | static, other |
OpenAI.ClickButtonType
| Property | Value |
|---|---|
| Type | string |
| Values | left, right, wheel, back, forward |
OpenAI.ClickParam
A click action.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| button | OpenAI.ClickButtonType | Yes | ||
| type | enum | Specifies the event type. For a click action, this property is always click.<br>Possible values: click | Yes | |
| x | integer | The x-coordinate where the click occurred. | Yes | |
| y | integer | The y-coordinate where the click occurred. | Yes |
OpenAI.CodeInterpreterContainerAuto
Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_ids | array of string | An optional list of uploaded files to make available to your code. | No | |
| memory_limit | OpenAI.ContainerMemoryLimit or null | No | ||
| type | enum | Always auto.<br>Possible values: auto | Yes | |
OpenAI.CodeInterpreterOutputImage
The image output from the code interpreter.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The type of the output. Always image.<br>Possible values: image | Yes | |
| url | string | The URL of the image output from the code interpreter. | Yes |
OpenAI.CodeInterpreterOutputLogs
The logs output from the code interpreter.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| logs | string | The logs output from the code interpreter. | Yes | |
| type | enum | The type of the output. Always logs.<br>Possible values: logs | Yes | |
OpenAI.CodeInterpreterTool
A tool that runs Python code to help generate a response to a prompt.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| container | string or OpenAI.CodeInterpreterContainerAuto | The code interpreter container. Can be a container ID or an object that specifies uploaded file IDs to make available to your code, along with an optional memory_limit setting. | Yes | |
| type | enum | The type of the code interpreter tool. Always code_interpreter.<br>Possible values: code_interpreter | Yes | |
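The container field accepts either form described above; a sketch (the container ID and file IDs are hypothetical):

```python
# Reuse an existing container by ID...
by_container_id = {"type": "code_interpreter", "container": "cntr_abc123"}

# ...or let the service create one, with optional files made available.
with_auto_container = {
    "type": "code_interpreter",
    "container": {"type": "auto", "file_ids": ["file-abc123"]},
}
```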
OpenAI.ComparisonFilter
A filter used to compare a specified attribute key to a given value using a defined comparison operation.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| key | string | The key to compare against the value. | Yes | |
| type | enum | Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin. eq: equals; ne: not equal; gt: greater than; gte: greater than or equal; lt: less than; lte: less than or equal; in: in; nin: not in.<br>Possible values: eq, ne, gt, gte, lt, lte | Yes | |
| value | string or number or boolean or array of OpenAI.ComparisonFilterValueItems | The value to compare against the attribute key; supports string, number, or boolean types. | Yes |
OpenAI.ComparisonFilterValueItems
This schema accepts one of the following types:
- string
- number
OpenAI.CompletionUsage
Usage statistics for the completion request.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| completion_tokens | integer | Number of tokens in the generated completion. | Yes | |
| completion_tokens_details | OpenAI.CompletionUsageCompletionTokensDetails | | No | |
| └─ accepted_prediction_tokens | integer | | No | |
| └─ audio_tokens | integer | | No | |
| └─ reasoning_tokens | integer | | No | |
| └─ rejected_prediction_tokens | integer | | No | |
| prompt_tokens | integer | Number of tokens in the prompt. | Yes | |
| prompt_tokens_details | OpenAI.CompletionUsagePromptTokensDetails | | No | |
| └─ audio_tokens | integer | | No | |
| └─ cached_tokens | integer | | No | |
| total_tokens | integer | Total number of tokens used in the request (prompt + completion). | Yes |
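A minimal sketch of reading a CompletionUsage payload. Field names come from the schema above; the token counts are made up for illustration:

```python
usage = {
    "prompt_tokens": 120,
    "completion_tokens": 80,
    "total_tokens": 200,
    "completion_tokens_details": {"reasoning_tokens": 32, "audio_tokens": 0},
    "prompt_tokens_details": {"cached_tokens": 64, "audio_tokens": 0},
}

# total_tokens is prompt + completion.
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]

# cached_tokens is a subset of prompt_tokens, not an addition to it.
uncached_prompt = usage["prompt_tokens"] - usage["prompt_tokens_details"]["cached_tokens"]
```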
OpenAI.CompletionUsageCompletionTokensDetails
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| accepted_prediction_tokens | integer | | No | |
| audio_tokens | integer | | No | |
| reasoning_tokens | integer | | No | |
| rejected_prediction_tokens | integer | | No | |
OpenAI.CompletionUsagePromptTokensDetails
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| audio_tokens | integer | | No | |
| cached_tokens | integer | | No | |
OpenAI.CompoundFilter
Combine multiple filters using and or or.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| filters | array of OpenAI.ComparisonFilter or object | Array of filters to combine. Items can be ComparisonFilter or CompoundFilter. | Yes | |
| type | enum | Type of operation: and or or. Possible values: and, or | Yes | |
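The filter schemas above compose as follows: ComparisonFilter objects test one attribute each, and a CompoundFilter joins them. The attribute keys and values here are hypothetical:

```python
# Two ComparisonFilter objects (type is the comparison operator).
region_filter = {"type": "eq", "key": "region", "value": "emea"}
year_filter = {"type": "gte", "key": "year", "value": 2023}

# A CompoundFilter combining them; type is "and" or "or".
compound = {
    "type": "and",
    "filters": [region_filter, year_filter],
}
```

CompoundFilter items may themselves be CompoundFilter objects, so filters nest to arbitrary depth.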
OpenAI.ComputerAction
Discriminator for OpenAI.ComputerAction
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| click | OpenAI.ClickParam |
| double_click | OpenAI.DoubleClickAction |
| drag | OpenAI.Drag |
| keypress | OpenAI.KeyPressAction |
| move | OpenAI.Move |
| screenshot | OpenAI.Screenshot |
| scroll | OpenAI.Scroll |
| type | OpenAI.Type |
| wait | OpenAI.Wait |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.ComputerActionType | | Yes | |
OpenAI.ComputerActionType
| Property | Value |
|---|---|
| Type | string |
| Values | click, double_click, drag, keypress, move, screenshot, scroll, type, wait |
OpenAI.ComputerCallSafetyCheckParam
A pending safety check for the computer call.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| code | string or null | | No | |
| id | string | The ID of the pending safety check. | Yes | |
| message | string or null | | No | |
OpenAI.ComputerEnvironment
| Property | Value |
|---|---|
| Type | string |
| Values | windows, mac, linux, ubuntu, browser |
OpenAI.ComputerScreenshotContent
A screenshot of a computer.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_id | string or null | | Yes | |
| image_url | string or null | | Yes | |
| type | enum | Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot. Possible values: computer_screenshot | Yes | |
OpenAI.ComputerScreenshotImage
A computer screenshot image used with the computer use tool.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_id | string | The identifier of an uploaded file that contains the screenshot. | No | |
| image_url | string | The URL of the screenshot image. | No | |
| type | enum | Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot. Possible values: computer_screenshot | Yes | |
OpenAI.ComputerUsePreviewTool
A tool that controls a virtual computer.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| display_height | integer | The height of the computer display. | Yes | |
| display_width | integer | The width of the computer display. | Yes | |
| environment | OpenAI.ComputerEnvironment | | Yes | |
| type | enum | The type of the computer use tool. Always computer_use_preview. Possible values: computer_use_preview | Yes | |
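A ComputerUsePreviewTool definition assembled from the fields above. The display dimensions are illustrative; environment must be one of the OpenAI.ComputerEnvironment values:

```python
computer_tool = {
    "type": "computer_use_preview",
    "display_width": 1280,
    "display_height": 800,
    "environment": "browser",  # windows, mac, linux, ubuntu, or browser
}
```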
OpenAI.ContainerFileCitationBody
A citation for a container file used to generate a model response.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| container_id | string | The ID of the container file. | Yes | |
| end_index | integer | The index of the last character of the container file citation in the message. | Yes | |
| file_id | string | The ID of the file. | Yes | |
| filename | string | The filename of the container file cited. | Yes | |
| start_index | integer | The index of the first character of the container file citation in the message. | Yes | |
| type | enum | The type of the container file citation. Always container_file_citation. Possible values: container_file_citation | Yes | |
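The start_index and end_index fields locate the citation within the message text, as character offsets. A sketch with made-up IDs and text:

```python
message_text = "See the summary in the attached report for details."
citation = {
    "type": "container_file_citation",
    "container_id": "cntr_123",   # hypothetical IDs
    "file_id": "cfile_456",
    "filename": "report.txt",
    "start_index": 23,
    "end_index": 38,
}

# Slice the message text to recover the cited span.
cited_span = message_text[citation["start_index"]:citation["end_index"]]
```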
OpenAI.ContainerFileListResource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array of OpenAI.ContainerFileResource | A list of container files. | Yes | |
| first_id | string | The ID of the first file in the list. | Yes | |
| has_more | boolean | Whether there are more files available. | Yes | |
| last_id | string | The ID of the last file in the list. | Yes | |
| object | enum | The type of object returned, must be list. Possible values: list | Yes | |
OpenAI.ContainerFileResource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| bytes | integer | Size of the file in bytes. | Yes | |
| container_id | string | The container this file belongs to. | Yes | |
| created_at | integer | Unix timestamp (in seconds) when the file was created. | Yes | |
| id | string | Unique identifier for the file. | Yes | |
| object | enum | The type of this object (container.file). Possible values: container.file | Yes | |
| path | string | Path of the file in the container. | Yes | |
| source | string | Source of the file (for example, user, assistant). | Yes | |
OpenAI.ContainerListResource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array of OpenAI.ContainerResource | A list of containers. | Yes | |
| first_id | string | The ID of the first container in the list. | Yes | |
| has_more | boolean | Whether there are more containers available. | Yes | |
| last_id | string | The ID of the last container in the list. | Yes | |
| object | enum | The type of object returned, must be list. Possible values: list | Yes | |
OpenAI.ContainerMemoryLimit
| Property | Value |
|---|---|
| Type | string |
| Values | 1g, 4g, 16g, 64g |
OpenAI.ContainerResource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_at | integer | Unix timestamp (in seconds) when the container was created. | Yes | |
| expires_after | OpenAI.ContainerResourceExpiresAfter | | No | |
| └─ anchor | enum | Possible values: last_active_at | No | |
| └─ minutes | integer | | No | |
| id | string | Unique identifier for the container. | Yes | |
| last_active_at | integer | Unix timestamp (in seconds) when the container was last active. | No | |
| memory_limit | enum | The memory limit configured for the container. Possible values: 1g, 4g, 16g, 64g | No | |
| name | string | Name of the container. | Yes | |
| object | string | The type of this object. | Yes | |
| status | string | Status of the container (for example, active, deleted). | Yes |
OpenAI.ContainerResourceExpiresAfter
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| anchor | enum | Possible values: last_active_at | No | |
| minutes | integer | | No | |
OpenAI.ConversationItem
A single item within a conversation. The set of possible types is the same as the output types of a Response object.
Discriminator for OpenAI.ConversationItem
This component uses the property type to discriminate between different types:
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.ConversationItemType | | Yes | |
OpenAI.ConversationItemApplyPatchToolCall
A tool call that applies file diffs by creating, deleting, or updating files.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| call_id | string | The unique ID of the apply patch tool call generated by the model. | Yes | |
| created_by | string | The ID of the entity that created this tool call. | No | |
| id | string | The unique ID of the apply patch tool call. Populated when this item is returned via API. | Yes | |
| operation | OpenAI.ApplyPatchFileOperation | One of the create_file, delete_file, or update_file operations applied via apply_patch. | Yes | |
| └─ type | OpenAI.ApplyPatchFileOperationType | | Yes | |
| status | OpenAI.ApplyPatchCallStatus | | Yes | |
| type | enum | The type of the item. Always apply_patch_call. Possible values: apply_patch_call | Yes | |
OpenAI.ConversationItemApplyPatchToolCallOutput
The output emitted by an apply patch tool call.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| call_id | string | The unique ID of the apply patch tool call generated by the model. | Yes | |
| created_by | string | The ID of the entity that created this tool call output. | No | |
| id | string | The unique ID of the apply patch tool call output. Populated when this item is returned via API. | Yes | |
| output | string or null | | No | |
| status | OpenAI.ApplyPatchCallOutputStatus | | Yes | |
| type | enum | The type of the item. Always apply_patch_call_output. Possible values: apply_patch_call_output | Yes | |
OpenAI.ConversationItemCodeInterpreterToolCall
A tool call to run code.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| code | string or null | | Yes | |
| container_id | string | The ID of the container used to run the code. | Yes | |
| id | string | The unique ID of the code interpreter tool call. | Yes | |
| outputs | array of OpenAI.CodeInterpreterOutputLogs or OpenAI.CodeInterpreterOutputImage or null | | Yes | |
| status | enum | The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed. Possible values: in_progress, completed, incomplete, interpreting, failed | Yes | |
| type | enum | The type of the code interpreter tool call. Always code_interpreter_call. Possible values: code_interpreter_call | Yes | |
OpenAI.ConversationItemComputerToolCall
A tool call to a computer use tool. See the computer use guide for more information.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| action | OpenAI.ComputerAction | | Yes | |
| call_id | string | An identifier used when responding to the tool call with output. | Yes | |
| id | string | The unique ID of the computer call. | Yes | |
| pending_safety_checks | array of OpenAI.ComputerCallSafetyCheckParam | The pending safety checks for the computer call. | Yes | |
| status | enum | The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API. Possible values: in_progress, completed, incomplete | Yes | |
| type | enum | The type of the computer call. Always computer_call. Possible values: computer_call | Yes | |
OpenAI.ConversationItemComputerToolCallOutputResource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| acknowledged_safety_checks | array of OpenAI.ComputerCallSafetyCheckParam | The safety checks reported by the API that have been acknowledged by the developer. | No | |
| call_id | string | The ID of the computer tool call that produced the output. | Yes | |
| id | string | The ID of the computer tool call output. | No | |
| output | OpenAI.ComputerScreenshotImage | A computer screenshot image used with the computer use tool. | Yes | |
| status | enum | The status of the message input. One of in_progress, completed, or incomplete. Populated when input items are returned via API. Possible values: in_progress, completed, incomplete | No | |
| type | enum | The type of the computer tool call output. Always computer_call_output. Possible values: computer_call_output | Yes | |
OpenAI.ConversationItemCustomToolCall
A call to a custom tool created by the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| call_id | string | An identifier used to map this custom tool call to a tool call output. | Yes | |
| id | string | The unique ID of the custom tool call in the OpenAI platform. | No | |
| input | string | The input for the custom tool call generated by the model. | Yes | |
| name | string | The name of the custom tool being called. | Yes | |
| type | enum | The type of the custom tool call. Always custom_tool_call. Possible values: custom_tool_call | Yes | |
OpenAI.ConversationItemCustomToolCallOutput
The output of a custom tool call from your code, being sent back to the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| call_id | string | The call ID, used to map this custom tool call output to a custom tool call. | Yes | |
| id | string | The unique ID of the custom tool call output in the OpenAI platform. | No | |
| output | string or array of OpenAI.FunctionAndCustomToolCallOutput | The output from the custom tool call generated by your code. Can be a string or a list of output content. | Yes | |
| type | enum | The type of the custom tool call output. Always custom_tool_call_output. Possible values: custom_tool_call_output | Yes | |
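The call_id field is what ties a custom_tool_call from the model to the custom_tool_call_output you send back. A sketch, with a hypothetical tool name and IDs:

```python
# Custom tool call received from the model.
tool_call = {
    "type": "custom_tool_call",
    "call_id": "call_789",
    "name": "lookup_order",      # hypothetical custom tool
    "input": "order #1234",
}

# After running your own code on tool_call["input"], echo the call_id back.
tool_output = {
    "type": "custom_tool_call_output",
    "call_id": tool_call["call_id"],
    "output": "Order #1234 shipped on 2024-05-01.",
}
```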
OpenAI.ConversationItemFileSearchToolCall
The results of a file search tool call. See the file search guide for more information.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| id | string | The unique ID of the file search tool call. | Yes | |
| queries | array of string | The queries used to search for files. | Yes | |
| results | array of OpenAI.FileSearchToolCallResults or null | | No | |
| status | enum | The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed. Possible values: in_progress, searching, completed, incomplete, failed | Yes | |
| type | enum | The type of the file search tool call. Always file_search_call. Possible values: file_search_call | Yes | |
OpenAI.ConversationItemFunctionShellCall
A tool call that executes one or more shell commands in a managed environment.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| action | OpenAI.FunctionShellAction | Execute a shell command. | Yes | |
| └─ commands | array of string | | Yes | |
| └─ max_output_length | integer or null | | Yes | |
| └─ timeout_ms | integer or null | | Yes | |
| call_id | string | The unique ID of the shell tool call generated by the model. | Yes | |
| created_by | string | The ID of the entity that created this tool call. | No | |
| id | string | The unique ID of the shell tool call. Populated when this item is returned via API. | Yes | |
| status | OpenAI.LocalShellCallStatus | | Yes | |
| type | enum | The type of the item. Always shell_call. Possible values: shell_call | Yes | |
OpenAI.ConversationItemFunctionShellCallOutput
The output of a shell tool call that was emitted.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| call_id | string | The unique ID of the shell tool call generated by the model. | Yes | |
| created_by | string | The identifier of the actor that created the item. | No | |
| id | string | The unique ID of the shell call output. Populated when this item is returned via API. | Yes | |
| max_output_length | integer or null | | Yes | |
| output | array of OpenAI.FunctionShellCallOutputContent | An array of shell call output contents. | Yes | |
| type | enum | The type of the shell call output. Always shell_call_output. Possible values: shell_call_output | Yes | |
OpenAI.ConversationItemFunctionToolCallOutputResource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| call_id | string | The unique ID of the function tool call generated by the model. | Yes | |
| id | string | The unique ID of the function tool call output. Populated when this item is returned via API. | No | |
| output | string or array of OpenAI.FunctionAndCustomToolCallOutput | The output from the function call generated by your code. Can be a string or a list of output content. | Yes | |
| status | enum | The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API. Possible values: in_progress, completed, incomplete | No | |
| type | enum | The type of the function tool call output. Always function_call_output. Possible values: function_call_output | Yes | |
OpenAI.ConversationItemFunctionToolCallResource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| arguments | string | A JSON string of the arguments to pass to the function. | Yes | |
| call_id | string | The unique ID of the function tool call generated by the model. | Yes | |
| id | string | The unique ID of the function tool call. | No | |
| name | string | The name of the function to run. | Yes | |
| status | enum | The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API. Possible values: in_progress, completed, incomplete | No | |
| type | enum | The type of the function tool call. Always function_call. Possible values: function_call | Yes | |
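The arguments field of a function_call item is a JSON string, not a JSON object, so it must be parsed before invoking your function. A sketch, with a hypothetical function name and arguments:

```python
import json

function_call = {
    "type": "function_call",
    "call_id": "call_abc",
    "name": "get_weather",
    "arguments": "{\"city\": \"Copenhagen\", \"unit\": \"celsius\"}",
}

# Decode the JSON string into a dict before dispatching to your function.
args = json.loads(function_call["arguments"])
```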
OpenAI.ConversationItemImageGenToolCall
An image generation request made by the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| id | string | The unique ID of the image generation call. | Yes | |
| result | string or null | | Yes | |
| status | enum | The status of the image generation call. Possible values: in_progress, completed, generating, failed | Yes | |
| type | enum | The type of the image generation call. Always image_generation_call. Possible values: image_generation_call | Yes | |
OpenAI.ConversationItemList
A list of Conversation items.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array of OpenAI.ConversationItem | A list of conversation items. | Yes | |
| first_id | string | The ID of the first item in the list. | Yes | |
| has_more | boolean | Whether there are more items available. | Yes | |
| last_id | string | The ID of the last item in the list. | Yes | |
| object | enum | The type of object returned, must be list. Possible values: list | Yes | |
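The list resources in this reference (conversation items, containers, container files) share the same cursor pattern: page forward with the last_id of the previous page while has_more is true. A sketch, where fetch_page is a hypothetical stand-in for the actual list call:

```python
def collect_all(fetch_page):
    """Accumulate every item by following the has_more / last_id cursor."""
    items, after = [], None
    while True:
        page = fetch_page(after=after)   # returns a *ListResource-shaped dict
        items.extend(page["data"])
        if not page["has_more"]:
            return items
        after = page["last_id"]          # cursor for the next page
```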
OpenAI.ConversationItemLocalShellToolCall
A tool call to run a command on the local shell.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| action | OpenAI.LocalShellExecAction | Execute a shell command on the server. | Yes | |
| call_id | string | The unique ID of the local shell tool call generated by the model. | Yes | |
| id | string | The unique ID of the local shell call. | Yes | |
| status | enum | The status of the local shell call. Possible values: in_progress, completed, incomplete | Yes | |
| type | enum | The type of the local shell call. Always local_shell_call. Possible values: local_shell_call | Yes | |
OpenAI.ConversationItemLocalShellToolCallOutput
The output of a local shell tool call.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| id | string | The unique ID of the local shell tool call generated by the model. | Yes | |
| output | string | A JSON string of the output of the local shell tool call. | Yes | |
| status | string or null | | No | |
| type | enum | The type of the local shell tool call output. Always local_shell_call_output. Possible values: local_shell_call_output | Yes | |
OpenAI.ConversationItemMcpApprovalRequest
A request for human approval of a tool invocation.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| arguments | string | A JSON string of arguments for the tool. | Yes | |
| id | string | The unique ID of the approval request. | Yes | |
| name | string | The name of the tool to run. | Yes | |
| server_label | string | The label of the MCP server making the request. | Yes | |
| type | enum | The type of the item. Always mcp_approval_request. Possible values: mcp_approval_request | Yes | |
OpenAI.ConversationItemMcpApprovalResponseResource
A response to an MCP approval request.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| approval_request_id | string | The ID of the approval request being answered. | Yes | |
| approve | boolean | Whether the request was approved. | Yes | |
| id | string | The unique ID of the approval response. | Yes | |
| reason | string or null | | No | |
| type | enum | The type of the item. Always mcp_approval_response. Possible values: mcp_approval_response | Yes | |
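An mcp_approval_response answers an mcp_approval_request by echoing the request's id in approval_request_id. The IDs, server label, and tool here are hypothetical:

```python
approval_request = {
    "type": "mcp_approval_request",
    "id": "mcpr_001",
    "server_label": "deploy-tools",
    "name": "restart_service",
    "arguments": "{\"service\": \"web\"}",
}

# Deny the request, referencing the request item's id.
approval_response = {
    "type": "mcp_approval_response",
    "approval_request_id": approval_request["id"],
    "approve": False,
    "reason": "Restarts require a maintenance window.",
}
```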
OpenAI.ConversationItemMcpListTools
A list of tools available on an MCP server.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| error | string or null | | No | |
| id | string | The unique ID of the list. | Yes | |
| server_label | string | The label of the MCP server. | Yes | |
| tools | array of OpenAI.MCPListToolsTool | The tools available on the server. | Yes | |
| type | enum | The type of the item. Always mcp_list_tools. Possible values: mcp_list_tools | Yes | |
OpenAI.ConversationItemMcpToolCall
An invocation of a tool on an MCP server.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| approval_request_id | string or null | | No | |
| arguments | string | A JSON string of the arguments passed to the tool. | Yes | |
| error | string or null | | No | |
| id | string | The unique ID of the tool call. | Yes | |
| name | string | The name of the tool that was run. | Yes | |
| output | string or null | | No | |
| server_label | string | The label of the MCP server running the tool. | Yes | |
| status | OpenAI.MCPToolCallStatus | | No | |
| type | enum | The type of the item. Always mcp_call. Possible values: mcp_call | Yes | |
OpenAI.ConversationItemMessage
A message to or from the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | array of OpenAI.InputTextContent or OpenAI.OutputTextContent or OpenAI.TextContent or OpenAI.SummaryTextContent or OpenAI.ReasoningTextContent or OpenAI.RefusalContent or OpenAI.InputImageContent or OpenAI.ComputerScreenshotContent or OpenAI.InputFileContent | The content of the message. | Yes | |
| id | string | The unique ID of the message. | Yes | |
| role | OpenAI.MessageRole | | Yes | |
| status | OpenAI.MessageStatus | | Yes | |
| type | enum | The type of the message. Always set to message. Possible values: message | Yes | |
OpenAI.ConversationItemReasoningItem
A description of the chain of thought used by a reasoning model while generating
a response. Be sure to include these items in your input to the Responses API
for subsequent turns of a conversation if you are manually
managing context.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | array of OpenAI.ReasoningTextContent | Reasoning text content. | No | |
| encrypted_content | string or null | | No | |
| id | string | The unique identifier of the reasoning content. | Yes | |
| status | enum | The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API. Possible values: in_progress, completed, incomplete | No | |
| summary | array of OpenAI.Summary | Reasoning summary content. | Yes | |
| type | enum | The type of the object. Always reasoning. Possible values: reasoning | Yes | |
OpenAI.ConversationItemType
| Property | Value |
|---|---|
| Type | string |
| Values | message, function_call, function_call_output, file_search_call, web_search_call, image_generation_call, computer_call, computer_call_output, reasoning, code_interpreter_call, local_shell_call, local_shell_call_output, shell_call, shell_call_output, apply_patch_call, apply_patch_call_output, mcp_list_tools, mcp_approval_request, mcp_approval_response, mcp_call, custom_tool_call, custom_tool_call_output |
OpenAI.ConversationItemWebSearchToolCall
The results of a web search tool call. See the web search guide for more information.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| action | OpenAI.WebSearchActionSearch or OpenAI.WebSearchActionOpenPage or OpenAI.WebSearchActionFind | An object describing the specific action taken in this web search call. Includes details on how the model used the web (search, open_page, find). | Yes | |
| id | string | The unique ID of the web search tool call. | Yes | |
| status | enum | The status of the web search tool call. Possible values: in_progress, searching, completed, failed | Yes | |
| type | enum | The type of the web search tool call. Always web_search_call. Possible values: web_search_call | Yes | |
OpenAI.ConversationParam
The conversation that this response belongs to. Items from this conversation are prepended to input_items for this response request.
Input items and output items from this response are automatically added to this conversation after this response completes.
Type: string or OpenAI.ConversationParam-2
OpenAI.ConversationParam-2
The conversation that this response belongs to.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| id | string | The unique ID of the conversation. | Yes |
OpenAI.ConversationReference
The conversation that this response belonged to. Input items and output items from this response were automatically added to this conversation.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| id | string | The unique ID of the conversation that this response was associated with. | Yes |
OpenAI.ConversationResource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_at | integer | The time at which the conversation was created, measured in seconds since the Unix epoch. | Yes | |
| id | string | The unique ID of the conversation. | Yes | |
| metadata | | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | Yes | |
| object | enum | The object type, which is always conversation. Possible values: conversation | Yes | |
OpenAI.CreateChatCompletionRequestAudio
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| format | enum | Possible values: wav, aac, mp3, flac, opus, pcm16 | Yes | |
| voice | OpenAI.VoiceIdsShared | | Yes | |
OpenAI.CreateChatCompletionRequestResponseFormat
An object specifying the format that the model must output.
Setting to { "type": "json_schema", "json_schema": {...} } enables
Structured Outputs which ensure the model will match your supplied JSON
schema. Learn more in the Structured Outputs
guide.
Setting to { "type": "json_object" } enables the older JSON mode, which
ensures the message the model generates is valid JSON. Using json_schema
is preferred for models that support it.
Discriminator for OpenAI.CreateChatCompletionRequestResponseFormat
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| json_schema | OpenAI.ResponseFormatJsonSchema |
| text | OpenAI.CreateChatCompletionRequestResponseFormatResponseFormatText |
| json_object | OpenAI.CreateChatCompletionRequestResponseFormatResponseFormatJsonObject |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.CreateChatCompletionRequestResponseFormatType | | Yes | |
OpenAI.CreateChatCompletionRequestResponseFormatResponseFormatJsonObject
JSON object response format. An older method of generating JSON responses.
Using json_schema is recommended for models that support it. Note that the
model will not generate JSON without a system or user message instructing it
to do so.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The type of response format being defined. Always json_object.Possible values: json_object |
Yes |
OpenAI.CreateChatCompletionRequestResponseFormatResponseFormatText
Default response format. Used to generate text responses.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The type of response format being defined. Always text.Possible values: text |
Yes |
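The three response_format variants described above can be sketched as request fragments. The schema name and payload under json_schema are illustrative, not a required shape:

```python
# Default: plain text responses.
format_text = {"type": "text"}

# Older JSON mode; requires a system or user message instructing JSON output.
format_json_object = {"type": "json_object"}

# Structured Outputs: the model's output must match the supplied JSON Schema.
format_json_schema = {
    "type": "json_schema",
    "json_schema": {
        "name": "city_info",   # hypothetical schema name
        "schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}
```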
OpenAI.CreateChatCompletionRequestResponseFormatType
| Property | Value |
|---|---|
| Type | string |
| Values | text, json_schema, json_object |
OpenAI.CreateChatCompletionResponseChoices
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content_filter_results | AzureContentFilterResultForChoice | A content filter result for a single response item produced by a generative AI system. | No | |
| finish_reason | enum | Possible values: stop, length, tool_calls, content_filter, function_call | Yes | |
| index | integer | | Yes | |
| logprobs | OpenAI.CreateChatCompletionResponseChoicesLogprobs or null | | Yes | |
| message | OpenAI.ChatCompletionResponseMessage | A chat completion message generated by the model. | Yes | |
OpenAI.CreateChatCompletionResponseChoicesLogprobs
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | array of OpenAI.ChatCompletionTokenLogprob or null | | Yes | |
| refusal | array of OpenAI.ChatCompletionTokenLogprob or null | | Yes | |
OpenAI.CreateChatCompletionStreamResponseChoices
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| delta | OpenAI.ChatCompletionStreamResponseDelta | A chat completion delta generated by streamed model responses. | Yes | |
| finish_reason | string or null | | Yes | |
| index | integer | | Yes | |
| logprobs | OpenAI.CreateChatCompletionStreamResponseChoicesLogprobs or null | | No | |
OpenAI.CreateChatCompletionStreamResponseChoicesLogprobs
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | array of OpenAI.ChatCompletionTokenLogprob or null | | Yes | |
| refusal | array of OpenAI.ChatCompletionTokenLogprob or null | | Yes | |
OpenAI.CreateCompletionResponseChoices
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content_filter_results | AzureContentFilterResultForChoice | A content filter result for a single response item produced by a generative AI system. | No | |
| finish_reason | enum | Possible values: stop, length, content_filter | Yes | |
| index | integer | | Yes | |
| logprobs | OpenAI.CreateCompletionResponseChoicesLogprobs or null | | Yes | |
| text | string | | Yes | |
OpenAI.CreateCompletionResponseChoicesLogprobs
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| text_offset | array of integer | | No | |
| token_logprobs | array of number | | No | |
| tokens | array of string | | No | |
| top_logprobs | array of object | | No | |
OpenAI.CreateContainerBody
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| expires_after | OpenAI.CreateContainerBodyExpiresAfter | | No | |
| └─ anchor | enum | Possible values: last_active_at | Yes | |
| └─ minutes | integer | | Yes | |
| file_ids | array of string | IDs of files to copy to the container. | No | |
| memory_limit | enum | Optional memory limit for the container. Defaults to "1g". Possible values: 1g, 4g, 16g, 64g | No | 1g |
| name | string | Name of the container to create. | Yes |
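A CreateContainerBody payload sketched from the schema above. The container name and file ID are hypothetical; last_active_at is the only documented anchor value:

```python
create_container = {
    "name": "analysis-sandbox",
    "file_ids": ["file-1"],           # optional: files to copy into the container
    "memory_limit": "4g",             # optional; defaults to "1g"
    "expires_after": {"anchor": "last_active_at", "minutes": 30},
}
```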
OpenAI.CreateContainerBodyExpiresAfter
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| anchor | enum | Possible values: last_active_at | Yes | |
| minutes | integer | | Yes | |
OpenAI.CreateContainerFileBody
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file | | The File object (not file name) to be uploaded. | No | |
| file_id | string | Name of the file to create. | No |
OpenAI.CreateConversationBody
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| items | array of OpenAI.InputItem or null | | No | |
| metadata | OpenAI.Metadata or null | | No | |
OpenAI.CreateConversationItemsParametersBody
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| items | array of OpenAI.InputItem | | Yes | |
OpenAI.CreateEmbeddingRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| dimensions | integer | The number of dimensions the resulting output embeddings should have. Only supported in text-embedding-3 and later models. Constraints: min: 1 | No | |
| encoding_format | enum | The format to return the embeddings in. Can be either float or base64. Possible values: float, base64 | No | |
| input | string or array of string or array of integer or array of array | Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or an array of token arrays. The input must not exceed the max input tokens for the model (8,192 tokens for all embedding models), cannot be an empty string, and any array must be 2,048 dimensions or less. In addition to the per-input token limit, all embedding models enforce a maximum of 300,000 tokens summed across all inputs in a single request. | Yes | |
| model | string | ID of the model to use. You can use the List models API to see all of your available models, or see the model overview for descriptions of them. | Yes | |
| user | string | Learn more. | No |
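As a minimal sketch, an embedding request body built from the fields above could look like this. The model name and input strings are placeholders; adjust them to your deployment:

```python
# Sketch of an OpenAI.CreateEmbeddingRequest body, per the table above.
# The model name and inputs are illustrative placeholders.
embedding_request = {
    "model": "text-embedding-3-small",
    "input": ["first passage to embed", "second passage to embed"],  # string or array
    "encoding_format": "float",   # or "base64"
    "dimensions": 256,            # text-embedding-3 and later only; must be >= 1
}

# Shape checks against the constraints described in the table.
assert embedding_request["dimensions"] >= 1
assert isinstance(embedding_request["input"], (str, list))
assert len(embedding_request["input"]) <= 2048  # arrays are capped at 2,048 entries
```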
OpenAI.CreateEmbeddingResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array of OpenAI.Embedding | The list of embeddings generated by the model. | Yes | |
| model | string | The name of the model used to generate the embedding. | Yes | |
| object | enum | The object type, which is always "list". Possible values: list | Yes | |
| usage | OpenAI.CreateEmbeddingResponseUsage | | Yes | |
| └─ prompt_tokens | integer | | Yes | |
| └─ total_tokens | integer | | Yes | |
OpenAI.CreateEmbeddingResponseUsage
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| prompt_tokens | integer | | Yes | |
| total_tokens | integer | | Yes | |
OpenAI.CreateEvalCompletionsRunDataSource
A CompletionsRunDataSource object describing a model sampling configuration.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input_messages | OpenAI.CreateEvalCompletionsRunDataSourceInputMessagesTemplate or OpenAI.CreateEvalCompletionsRunDataSourceInputMessagesItemReference | Used when sampling from a model. Dictates the structure of the messages passed into the model. Can either be a reference to a prebuilt trajectory (i.e., item.input_trajectory), or a template with variable references to the item namespace. | No | |
| model | string | The name of the model to use for generating completions (for example "o3-mini"). | No | |
| sampling_params | AzureCompletionsSamplingParams | Sampling parameters for controlling the behavior of completions. | No | |
| source | OpenAI.EvalJsonlFileContentSource or OpenAI.EvalJsonlFileIdSource or OpenAI.EvalStoredCompletionsSource | Determines what populates the item namespace in this run's data source. | Yes | |
| type | enum | The type of run data source. Always completions. Possible values: completions | Yes | |
OpenAI.CreateEvalCompletionsRunDataSourceInputMessagesItemReference
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_reference | string | | Yes | |
| type | enum | Possible values: item_reference | Yes | |
OpenAI.CreateEvalCompletionsRunDataSourceInputMessagesTemplate
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| template | array of OpenAI.EasyInputMessage or OpenAI.EvalItem | | Yes | |
| type | enum | Possible values: template | Yes | |
OpenAI.CreateEvalCompletionsRunDataSourceSamplingParams
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| max_completion_tokens | integer | | No | |
| reasoning_effort | OpenAI.ReasoningEffort | Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. - gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1. - All models before gpt-5.1 default to medium reasoning effort, and do not support none. - The gpt-5-pro model defaults to (and only supports) high reasoning effort. - xhigh is supported for all models after gpt-5.1-codex-max. | No | |
| response_format | OpenAI.ResponseFormatText or OpenAI.ResponseFormatJsonSchema or OpenAI.ResponseFormatJsonObject | | No | |
| seed | integer | A seed value initializes the randomness during sampling. | No | 42 |
| temperature | number | A higher temperature increases randomness in the outputs. | No | 1 |
| tools | array of OpenAI.ChatCompletionTool | | No | |
| top_p | number | An alternative to temperature for nucleus sampling; 1.0 includes all tokens. | No | 1 |
OpenAI.CreateEvalCustomDataSourceConfig
A CustomDataSourceConfig object that defines the schema for the data source used for the evaluation runs. This schema defines the shape of the data that will be:
- used to define your testing criteria, and
- required when creating a run.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| include_sample_schema | boolean | Whether the eval should expect you to populate the sample namespace (i.e., by generating responses off of your data source). | No | |
| item_schema | object | The JSON schema for each row in the data source. | Yes | |
| type | enum | The type of data source. Always custom. Possible values: custom | Yes | |
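A custom data source config built from the fields above might look like the following sketch. The question/answer item schema is a hypothetical example, not part of this reference:

```python
# Sketch of an OpenAI.CreateEvalCustomDataSourceConfig, per the table above.
# The item schema shown here is a hypothetical question/answer shape.
data_source_config = {
    "type": "custom",                 # always "custom" for this config
    "item_schema": {                  # JSON schema for each row in the data source
        "type": "object",
        "properties": {
            "question": {"type": "string"},
            "expected_answer": {"type": "string"},
        },
        "required": ["question", "expected_answer"],
    },
    "include_sample_schema": True,    # expect a populated "sample" namespace
}

assert data_source_config["type"] == "custom"
assert "question" in data_source_config["item_schema"]["properties"]
```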
OpenAI.CreateEvalItem
A chat message that makes up the prompt or context. May include variable references to the item namespace, i.e., {{item.name}}.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | string | The content of the message. | Yes | |
| role | string | The role of the message (for example "system", "assistant", "user"). | Yes |
OpenAI.CreateEvalJsonlRunDataSource
A JsonlRunDataSource object that specifies a JSONL file matching the eval.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| source | OpenAI.EvalJsonlFileContentSource or OpenAI.EvalJsonlFileIdSource | Determines what populates the item namespace in the data source. | Yes | |
| type | enum | The type of data source. Always jsonl. Possible values: jsonl | Yes | |
OpenAI.CreateEvalLabelModelGrader
A LabelModelGrader object which uses a model to assign labels to each item in the evaluation.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input | array of OpenAI.CreateEvalItem | A list of chat messages forming the prompt or context. May include variable references to the item namespace, i.e., {{item.name}}. | Yes | |
| labels | array of string | The labels to assign to each item in the evaluation. | Yes | |
| model | string | The model to use for the evaluation. Must support structured outputs. | Yes | |
| name | string | The name of the grader. | Yes | |
| passing_labels | array of string | The labels that indicate a passing result. Must be a subset of labels. | Yes | |
| type | enum | The object type, which is always label_model. Possible values: label_model | Yes | |
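A grader built from the fields above could be sketched as follows. The grader name, model, labels, and the {{item.reply}} variable are illustrative placeholders:

```python
# Sketch of an OpenAI.CreateEvalLabelModelGrader, per the table above.
# Name, model, labels, and the item field referenced are hypothetical.
grader = {
    "type": "label_model",
    "name": "sentiment-grader",
    "model": "gpt-4o-mini",           # must support structured outputs
    "input": [
        {"role": "system", "content": "Classify the sentiment of the reply."},
        {"role": "user", "content": "{{item.reply}}"},  # variable reference into the item namespace
    ],
    "labels": ["positive", "neutral", "negative"],
    "passing_labels": ["positive", "neutral"],
}

# passing_labels must be a subset of labels, per the table.
assert set(grader["passing_labels"]) <= set(grader["labels"])
```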
OpenAI.CreateEvalLogsDataSourceConfig
A data source config which specifies the metadata property of your logs query.
This is usually metadata like usecase=chatbot or prompt-version=v2, etc.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| metadata | object | Metadata filters for the logs data source. | No | |
| type | enum | The type of data source. Always logs. Possible values: logs | Yes | |
OpenAI.CreateEvalResponsesRunDataSource
A ResponsesRunDataSource object describing a model sampling configuration.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input_messages | OpenAI.CreateEvalResponsesRunDataSourceInputMessagesTemplate or OpenAI.CreateEvalResponsesRunDataSourceInputMessagesItemReference | Used when sampling from a model. Dictates the structure of the messages passed into the model. Can either be a reference to a prebuilt trajectory (i.e., item.input_trajectory), or a template with variable references to the item namespace. | No | |
| model | string | The name of the model to use for generating completions (for example "o3-mini"). | No | |
| sampling_params | AzureResponsesSamplingParams | Sampling parameters for controlling the behavior of responses. | No | |
| source | OpenAI.EvalJsonlFileContentSource or OpenAI.EvalJsonlFileIdSource or OpenAI.EvalResponsesSource | Determines what populates the item namespace in this run's data source. | Yes | |
| type | enum | The type of run data source. Always responses. Possible values: responses | Yes | |
OpenAI.CreateEvalResponsesRunDataSourceInputMessagesItemReference
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_reference | string | | Yes | |
| type | enum | Possible values: item_reference | Yes | |
OpenAI.CreateEvalResponsesRunDataSourceInputMessagesTemplate
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| template | array of object or OpenAI.EvalItem | | Yes | |
| type | enum | Possible values: template | Yes | |
OpenAI.CreateEvalResponsesRunDataSourceSamplingParams
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| reasoning_effort | OpenAI.ReasoningEffort | Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. - gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1. - All models before gpt-5.1 default to medium reasoning effort, and do not support none. - The gpt-5-pro model defaults to (and only supports) high reasoning effort. - xhigh is supported for all models after gpt-5.1-codex-max. | No | |
| seed | integer | A seed value initializes the randomness during sampling. | No | 42 |
| temperature | number | A higher temperature increases randomness in the outputs. | No | 1 |
| text | OpenAI.CreateEvalResponsesRunDataSourceSamplingParamsText | | No | |
| tools | array of OpenAI.Tool | | No | |
| top_p | number | An alternative to temperature for nucleus sampling; 1.0 includes all tokens. | No | 1 |
OpenAI.CreateEvalResponsesRunDataSourceSamplingParamsText
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| format | OpenAI.TextResponseFormatConfiguration | An object specifying the format that the model must output. Configuring { "type": "json_schema" } enables Structured Outputs, which ensures the model will match your supplied JSON schema. The default format is { "type": "text" } with no additional options. Not recommended for gpt-4o and newer models: setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it. | No | |
OpenAI.CreateEvalRunRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data_source | OpenAI.CreateEvalJsonlRunDataSource or OpenAI.CreateEvalCompletionsRunDataSource or OpenAI.CreateEvalResponsesRunDataSource | Details about the run's data source. | Yes | |
| metadata | OpenAI.Metadata or null | | No | |
| name | string | The name of the run. | No | |
OpenAI.CreateEvalStoredCompletionsDataSourceConfig
Deprecated in favor of LogsDataSourceConfig.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| metadata | object | Metadata filters for the stored completions data source. | No | |
| type | enum | The type of data source. Always stored_completions. Possible values: stored_completions | Yes | |
OpenAI.CreateFileRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| expires_after | object | | Yes | |
| └─ anchor | AzureFileExpiryAnchor | | Yes | |
| └─ seconds | integer | | Yes | |
| file | | The File object (not file name) to be uploaded. | Yes | |
| purpose | enum | The intended purpose of the uploaded file. One of: - assistants: Used in the Assistants API - batch: Used in the Batch API - fine-tune: Used for fine-tuning - evals: Used for eval data sets. Possible values: assistants, batch, fine-tune, evals | Yes | |
OpenAI.CreateFineTuningCheckpointPermissionRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| project_ids | array of string | The project identifiers to grant access to. | Yes |
OpenAI.CreateFineTuningJobRequest
Valid models:
- babbage-002
- davinci-002
- gpt-3.5-turbo
- gpt-4o-mini
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| hyperparameters | OpenAI.CreateFineTuningJobRequestHyperparameters | | No | |
| └─ batch_size | string or integer | | No | auto |
| └─ learning_rate_multiplier | string or number | | No | |
| └─ n_epochs | string or integer | | No | auto |
| integrations | array of OpenAI.CreateFineTuningJobRequestIntegrations or null | A list of integrations to enable for your fine-tuning job. | No | |
| metadata | OpenAI.Metadata or null | | No | |
| method | OpenAI.FineTuneMethod | The method used for fine-tuning. | No | |
| model | string (see the list of valid models above) | The name of the model to fine-tune. You can select one of the supported models. | Yes | |
| seed | integer or null | The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases. If a seed is not specified, one will be generated for you. | No | |
| suffix | string or null | A string of up to 64 characters that will be added to your fine-tuned model name. For example, a suffix of "custom-model-name" would produce a model name like ft:gpt-4o-mini:openai:custom-model-name:7p4lURel. | No | |
| training_file | string | The ID of an uploaded file that contains training data. See upload file for how to upload a file. Your dataset must be formatted as a JSONL file. Additionally, you must upload your file with the purpose fine-tune. The contents of the file should differ depending on whether the model uses the chat or completions format, or whether the fine-tuning method uses the preference format. See the fine-tuning guide for more details. | Yes | |
| validation_file | string or null | The ID of an uploaded file that contains validation data. If you provide this file, the data is used to generate validation metrics periodically during fine-tuning. These metrics can be viewed in the fine-tuning results file. The same data should not be present in both train and validation files. Your dataset must be formatted as a JSONL file. You must upload your file with the purpose fine-tune. See the fine-tuning guide for more details. | No | |
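A fine-tuning job request combining the fields above could be sketched as follows. The file IDs and suffix are hypothetical placeholders; the training file must already be uploaded with purpose fine-tune:

```python
# Sketch of an OpenAI.CreateFineTuningJobRequest, per the table above.
# File IDs are placeholders; training data must be JSONL uploaded with purpose "fine-tune".
fine_tune_request = {
    "model": "gpt-4o-mini",
    "training_file": "file-abc123",
    "validation_file": "file-def456",  # optional; must not overlap with training data
    "suffix": "custom-model-name",     # up to 64 chars, appended to the fine-tuned model name
    "seed": 42,                        # optional; improves reproducibility
    "hyperparameters": {
        "n_epochs": "auto",            # "auto" or an integer
        "batch_size": "auto",
        "learning_rate_multiplier": 1.5,
    },
}

# Shape checks against the constraints described in the table.
assert len(fine_tune_request["suffix"]) <= 64
assert fine_tune_request["training_file"] != fine_tune_request["validation_file"]
```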
OpenAI.CreateFineTuningJobRequestHyperparameters
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| batch_size | string or integer | | No | |
| learning_rate_multiplier | string or number | | No | |
| n_epochs | string or integer | | No | |
OpenAI.CreateFineTuningJobRequestIntegrations
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: wandb | Yes | |
| wandb | OpenAI.CreateFineTuningJobRequestIntegrationsWandb | | Yes | |
OpenAI.CreateFineTuningJobRequestIntegrationsWandb
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| entity | string or null | | No | |
| name | string or null | | No | |
| project | string | | Yes | |
| tags | array of string | | No | |
OpenAI.CreateMessageRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| attachments | array of OpenAI.CreateMessageRequestAttachments or null | | No | |
| content | string or array of OpenAI.MessageContentImageFileObject or OpenAI.MessageContentImageUrlObject or OpenAI.MessageRequestContentTextObject | | Yes | |
| metadata | OpenAI.Metadata or null | | No | |
| role | enum | The role of the entity that is creating the message. Allowed values include: - user: Indicates the message is sent by an actual user and should be used in most cases to represent user-generated messages. - assistant: Indicates the message is generated by the assistant. Use this value to insert messages from the assistant into the conversation. Possible values: user, assistant | Yes | |
OpenAI.CreateMessageRequestAttachments
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_id | string | | No | |
| tools | array of OpenAI.AssistantToolsCode or OpenAI.AssistantToolsFileSearchTypeOnly | | No | |
OpenAI.CreateResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| background | boolean or null | | No | |
| conversation | OpenAI.ConversationParam or null | | No | |
| include | array of OpenAI.IncludeEnum or null | | No | |
| input | OpenAI.InputParam | Text, image, or file inputs to the model, used to generate a response. Learn more: - Text inputs and outputs - Image inputs - File inputs - Conversation state - Function calling | No | |
| instructions | string or null | | No | |
| max_output_tokens | integer or null | | No | |
| max_tool_calls | integer or null | | No | |
| metadata | OpenAI.Metadata or null | | No | |
| model | string | Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models. | No | |
| parallel_tool_calls | boolean or null | | No | |
| previous_response_id | string or null | | No | |
| prompt | OpenAI.Prompt | Reference to a prompt template and its variables. Learn more. | No | |
| prompt_cache_key | string | Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more. | No | |
| prompt_cache_retention | string or null | | No | |
| reasoning | OpenAI.Reasoning or null | | No | |
| safety_identifier | string | A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. Learn more. | No | |
| store | boolean or null | | No | |
| stream | boolean or null | | No | |
| stream_options | OpenAI.ResponseStreamOptions or null | | No | |
| temperature | number or null | | No | |
| text | OpenAI.ResponseTextParam | Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more: - Text inputs and outputs - Structured Outputs | No | |
| tool_choice | OpenAI.ToolChoiceParam | How the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call. | No | |
| tools | OpenAI.ToolsArray | An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter. We support the following categories of tools: - Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools. - MCP Tools: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about MCP Tools. - Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about function calling. | No | |
| top_logprobs | integer or null | | No | |
| top_p | number or null | | No | |
| truncation | string or null | | No | |
| user | string (deprecated) | This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations. A stable identifier for your end-users. Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more. | No | |
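A CreateResponse body exercising a few of the fields above could be sketched as follows. The model name, prompt_cache_key, and the file_lookup function tool are hypothetical placeholders:

```python
# Sketch of an OpenAI.CreateResponse request body, per the table above.
# Model name, cache key, and the custom function tool are illustrative.
response_request = {
    "model": "gpt-4o",
    "input": "Summarize the attached meeting notes in three bullets.",
    "instructions": "You are a concise note-taking assistant.",
    "max_output_tokens": 500,
    "temperature": 0.2,
    "prompt_cache_key": "notes-summarizer",  # replaces the deprecated user field for caching
    "tools": [
        {
            "type": "function",
            "name": "file_lookup",           # hypothetical custom function (a "custom tool")
            "parameters": {
                "type": "object",
                "properties": {"file_id": {"type": "string"}},
                "required": ["file_id"],
            },
        }
    ],
    "tool_choice": "auto",
}

assert 0 <= response_request["temperature"] <= 2
assert "user" not in response_request  # prefer safety_identifier / prompt_cache_key
```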
OpenAI.CreateRunRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| additional_instructions | string or null | Appends additional instructions at the end of the instructions for the run. This is useful for modifying the behavior on a per-run basis without overriding other instructions. | No | |
| additional_messages | array of OpenAI.CreateMessageRequest or null | Adds additional messages to the thread before creating the run. | No | |
| assistant_id | string | The ID of the assistant to use to execute this run. | Yes | |
| instructions | string or null | Overrides the instructions of the assistant. This is useful for modifying the behavior on a per-run basis. | No | |
| max_completion_tokens | integer or null | The maximum number of completion tokens that may be used over the course of the run. The run will make a best effort to use only the number of completion tokens specified, across multiple turns of the run. If the run exceeds the number of completion tokens specified, the run will end with status incomplete. See incomplete_details for more info. | No | |
| max_prompt_tokens | integer or null | The maximum number of prompt tokens that may be used over the course of the run. The run will make a best effort to use only the number of prompt tokens specified, across multiple turns of the run. If the run exceeds the number of prompt tokens specified, the run will end with status incomplete. See incomplete_details for more info. | No | |
| metadata | OpenAI.Metadata or null | | No | |
| model | string | The ID of the Model to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used. | No | |
| parallel_tool_calls | OpenAI.ParallelToolCalls | Whether to enable parallel function calling during tool use. | No | |
| reasoning_effort | OpenAI.ReasoningEffort | Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. - gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1. - All models before gpt-5.1 default to medium reasoning effort, and do not support none. - The gpt-5-pro model defaults to (and only supports) high reasoning effort. - xhigh is supported for all models after gpt-5.1-codex-max. | No | |
| response_format | OpenAI.AssistantsApiResponseFormatOption | Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106. Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide. Setting to { "type": "json_object" } enables JSON mode, which ensures the message the model generates is valid JSON. Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length. | No | |
| stream | boolean or null | If true, returns a stream of events that happen during the Run as server-sent events, terminating when the Run enters a terminal state with a data: [DONE] message. | No | |
| temperature | number or null | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. | No | |
| tool_choice | OpenAI.AssistantsApiToolChoiceOption | Controls which (if any) tool is called by the model. none means the model will not call any tools and instead generates a message. auto is the default value and means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools before responding to the user. Specifying a particular tool like {"type": "file_search"} or {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool. | No | |
| tools | array of OpenAI.AssistantTool | Override the tools the assistant can use for this run. This is useful for modifying the behavior on a per-run basis. | No | |
| top_p | number or null | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. | No | |
| truncation_strategy | OpenAI.TruncationObject | Controls for how a thread will be truncated prior to the run. Use this to control the initial context window of the run. | No | |
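A run request illustrating the JSON-mode caveat from the response_format row could be sketched as follows. The assistant ID and instruction text are hypothetical placeholders:

```python
# Sketch of an OpenAI.CreateRunRequest, per the table above.
# When response_format is json_object, the instructions must also ask for JSON,
# otherwise the model may emit whitespace until it hits the token limit.
run_request = {
    "assistant_id": "asst_abc123",   # placeholder ID
    "instructions": "Answer strictly as a JSON object with keys 'answer' and 'confidence'.",
    "response_format": {"type": "json_object"},
    "max_completion_tokens": 1024,
    "temperature": 0.3,
    "tool_choice": "auto",
    "stream": False,
}

# JSON mode without a JSON instruction risks a "stuck" whitespace-only generation.
assert "JSON" in run_request["instructions"]
assert run_request["response_format"]["type"] == "json_object"
```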
OpenAI.CreateThreadAndRunRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| assistant_id | string | The ID of the assistant to use to execute this run. | Yes | |
| instructions | string or null | Override the default system message of the assistant. This is useful for modifying the behavior on a per-run basis. | No | |
| max_completion_tokens | integer or null | The maximum number of completion tokens that may be used over the course of the run. The run will make a best effort to use only the number of completion tokens specified, across multiple turns of the run. If the run exceeds the number of completion tokens specified, the run will end with status incomplete. See incomplete_details for more info. | No | |
| max_prompt_tokens | integer or null | The maximum number of prompt tokens that may be used over the course of the run. The run will make a best effort to use only the number of prompt tokens specified, across multiple turns of the run. If the run exceeds the number of prompt tokens specified, the run will end with status incomplete. See incomplete_details for more info. | No | |
| metadata | OpenAI.Metadata or null | | No | |
| model | string | The ID of the Model to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used. | No | |
| parallel_tool_calls | OpenAI.ParallelToolCalls | Whether to enable parallel function calling during tool use. | No | |
| response_format | OpenAI.AssistantsApiResponseFormatOption | Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106. Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide. Setting to { "type": "json_object" } enables JSON mode, which ensures the message the model generates is valid JSON. Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length. | No | |
| stream | boolean or null | If true, returns a stream of events that happen during the Run as server-sent events, terminating when the Run enters a terminal state with a data: [DONE] message. | No | |
| temperature | number or null | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. | No | |
| thread | OpenAI.CreateThreadRequest | Options to create a new thread. If no thread is provided when running a request, an empty thread will be created. | No | |
| tool_choice | OpenAI.AssistantsApiToolChoiceOption | Controls which (if any) tool is called by the model. none means the model will not call any tools and instead generates a message. auto is the default value and means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools before responding to the user. Specifying a particular tool like {"type": "file_search"} or {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool. | No | |
| tool_resources | OpenAI.CreateThreadAndRunRequestToolResources or null | A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs. | No | |
| tools | array of OpenAI.AssistantTool | Override the tools the assistant can use for this run. This is useful for modifying the behavior on a per-run basis. | No | |
| top_p | number or null | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. | No | |
| truncation_strategy | OpenAI.TruncationObject | Controls for how a thread will be truncated prior to the run. Use this to control the initial context window of the run. | No | |
OpenAI.CreateThreadAndRunRequestToolResources
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| code_interpreter | OpenAI.CreateThreadAndRunRequestToolResourcesCodeInterpreter | | No | |
| file_search | OpenAI.CreateThreadAndRunRequestToolResourcesFileSearch | | No | |
OpenAI.CreateThreadAndRunRequestToolResourcesCodeInterpreter
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_ids | array of string | | No | [] |
OpenAI.CreateThreadAndRunRequestToolResourcesFileSearch
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| vector_store_ids | array of string | | No | |
OpenAI.CreateThreadRequest
Options to create a new thread. If no thread is provided when running a request, an empty thread will be created.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| messages | array of OpenAI.CreateMessageRequest | A list of messages to start the thread with. | No | |
| metadata | OpenAI.Metadata or null | | No | |
| tool_resources | OpenAI.CreateThreadRequestToolResources or null | | No | |
OpenAI.CreateThreadRequestToolResources
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| code_interpreter | OpenAI.CreateThreadRequestToolResourcesCodeInterpreter | | No | |
| file_search | object or object | | No | |
OpenAI.CreateThreadRequestToolResourcesCodeInterpreter
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_ids | array of string | | No | |
OpenAI.CreateVectorStoreFileBatchRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| attributes | OpenAI.VectorStoreFileAttributes or null | No | ||
| chunking_strategy | OpenAI.ChunkingStrategyRequestParam | The chunking strategy used to chunk the file(s). If not set, will use the auto strategy. Only applicable if file_ids is non-empty. | No | |
| file_ids | array of string | A list of File IDs that the vector store should use. Useful for tools like file_search that can access files. If attributes or chunking_strategy are provided, they will be applied to all files in the batch. Mutually exclusive with files. | No | |
| files | array of OpenAI.CreateVectorStoreFileRequest | A list of objects that each include a file_id plus optional attributes or chunking_strategy. Use this when you need to override metadata for specific files. The global attributes or chunking_strategy will be ignored and must be specified for each file. Mutually exclusive with file_ids. | No | |
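Since file_ids and files are mutually exclusive, a batch request takes one of two shapes. A sketch of both, with placeholder file IDs and illustrative attribute keys:

```python
# Two alternative OpenAI.CreateVectorStoreFileBatchRequest bodies.
# Per the table, file_ids and files are mutually exclusive.
batch_by_ids = {
    "file_ids": ["file-abc", "file-def"],
    "chunking_strategy": {"type": "auto"},  # applied to every file in the batch
}

batch_by_files = {
    "files": [
        # Per-file overrides; any global attributes/chunking_strategy are ignored.
        {"file_id": "file-abc", "attributes": {"team": "docs"}},
        {"file_id": "file-def", "chunking_strategy": {"type": "auto"}},
    ],
}
```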
OpenAI.CreateVectorStoreFileRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| attributes | OpenAI.VectorStoreFileAttributes or null | No | ||
| chunking_strategy | OpenAI.ChunkingStrategyRequestParam | The chunking strategy used to chunk the file(s). If not set, will use the auto strategy. Only applicable if file_ids is non-empty. | No | |
| file_id | string | A File ID that the vector store should use. Useful for tools like file_search that can access files. | Yes | |
OpenAI.CreateVectorStoreRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| chunking_strategy | OpenAI.ChunkingStrategyRequestParam | The chunking strategy used to chunk the file(s). If not set, will use the auto strategy. Only applicable if file_ids is non-empty. | No | |
| description | string | A description for the vector store. Can be used to describe the vector store's purpose. | No | |
| expires_after | OpenAI.VectorStoreExpirationAfter | The expiration policy for a vector store. | No | |
| file_ids | array of string | A list of File IDs that the vector store should use. Useful for tools like file_search that can access files. | No | |
| metadata | OpenAI.Metadata or null | No | ||
| name | string | The name of the vector store. | No |
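A minimal CreateVectorStoreRequest body can be sketched directly from the table above; the name, description, file ID, and expiration policy below are illustrative placeholders, not values defined by this reference.

```python
# Sketch of an OpenAI.CreateVectorStoreRequest body.
create_vector_store_request = {
    "name": "support-docs",
    "description": "Documents used by the file_search tool.",
    "file_ids": ["file-abc"],  # placeholder file ID
    # OpenAI.VectorStoreExpirationAfter (anchor/days shape is an assumption here)
    "expires_after": {"anchor": "last_active_at", "days": 7},
    "metadata": {"env": "test"},
}
```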
OpenAI.CustomGrammarFormatParam
A grammar defined by the user.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| definition | string | The grammar definition. | Yes | |
| syntax | OpenAI.GrammarSyntax1 | Yes | ||
| type | enum | Grammar format. Always grammar. Possible values: grammar | Yes | |
OpenAI.CustomTextFormatParam
Unconstrained free-form text.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Unconstrained text format. Always text. Possible values: text | Yes | |
OpenAI.CustomToolChatCompletions
A custom tool that processes input using a specified format.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| custom | OpenAI.CustomToolChatCompletionsCustom | Yes | ||
| └─ description | string | No | ||
| └─ format | OpenAI.CustomToolChatCompletionsCustomFormatText or OpenAI.CustomToolChatCompletionsCustomFormatGrammar | No | ||
| └─ name | string | Yes | ||
| type | enum | The type of the custom tool. Always custom. Possible values: custom | Yes | |
OpenAI.CustomToolChatCompletionsCustom
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| description | string | No | ||
| format | OpenAI.CustomToolChatCompletionsCustomFormatText or OpenAI.CustomToolChatCompletionsCustomFormatGrammar | No | ||
| name | string | Yes |
OpenAI.CustomToolChatCompletionsCustomFormatGrammar
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| grammar | OpenAI.CustomToolChatCompletionsCustomFormatGrammarGrammar | Yes | ||
| └─ definition | string | Yes | ||
| └─ syntax | enum | Possible values: lark, regex | Yes | |
| type | enum | Possible values: grammar | Yes | |
OpenAI.CustomToolChatCompletionsCustomFormatGrammarGrammar
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| definition | string | Yes | ||
| syntax | enum | Possible values: lark, regex | Yes | |
OpenAI.CustomToolChatCompletionsCustomFormatText
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: text | Yes | |
OpenAI.CustomToolParam
A custom tool that processes input using a specified format.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| description | string | Optional description of the custom tool, used to provide more context. | No | |
| format | OpenAI.CustomToolParamFormat | The input format for the custom tool. Default is unconstrained text. | No | |
| └─ type | OpenAI.CustomToolParamFormatType | Yes | ||
| name | string | The name of the custom tool, used to identify it in tool calls. | Yes | |
| type | enum | The type of the custom tool. Always custom. Possible values: custom | Yes | |
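A CustomToolParam with a grammar-constrained input format can be sketched from the tables above. The tool name and regex are hypothetical; per the schemas, syntax must be lark or regex, and the format type discriminates between text and grammar.

```python
import re

# Sketch of an OpenAI.CustomToolParam whose input format is a grammar.
custom_tool = {
    "type": "custom",
    "name": "phone_lookup",  # hypothetical tool name
    "description": "Looks up a US phone extension.",
    "format": {
        "type": "grammar",           # OpenAI.CustomGrammarFormatParam
        "syntax": "regex",           # "lark" or "regex"
        "definition": r"\d{3}-\d{4}",
    },
}

# When syntax is "regex", the grammar definition can be sanity-checked locally.
pattern = re.compile(custom_tool["format"]["definition"])
```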
OpenAI.CustomToolParamFormat
The input format for the custom tool. Default is unconstrained text.
Discriminator for OpenAI.CustomToolParamFormat
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| text | OpenAI.CustomTextFormatParam |
| grammar | OpenAI.CustomGrammarFormatParam |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.CustomToolParamFormatType | Yes |
OpenAI.CustomToolParamFormatType
| Property | Value |
|---|---|
| Type | string |
| Values | text, grammar |
OpenAI.DeleteFileResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| deleted | boolean | Yes | ||
| id | string | Yes | ||
| object | enum | Possible values: file | Yes | |
OpenAI.DeleteFineTuningCheckpointPermissionResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| deleted | boolean | Whether the fine-tuned model checkpoint permission was successfully deleted. | Yes | |
| id | string | The ID of the fine-tuned model checkpoint permission that was deleted. | Yes | |
| object | enum | The object type, which is always "checkpoint.permission". Possible values: checkpoint.permission | Yes | |
OpenAI.DeleteMessageResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| deleted | boolean | Yes | ||
| id | string | Yes | ||
| object | enum | Possible values: thread.message.deleted | Yes | |
OpenAI.DeleteModelResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| deleted | boolean | Yes | ||
| id | string | Yes | ||
| object | string | Yes |
OpenAI.DeleteThreadResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| deleted | boolean | Yes | ||
| id | string | Yes | ||
| object | enum | Possible values: thread.deleted | Yes | |
OpenAI.DeleteVectorStoreFileResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| deleted | boolean | Yes | ||
| id | string | Yes | ||
| object | enum | Possible values: vector_store.file.deleted | Yes | |
OpenAI.DeleteVectorStoreResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| deleted | boolean | Yes | ||
| id | string | Yes | ||
| object | enum | Possible values: vector_store.deleted | Yes | |
OpenAI.DeletedConversationResource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| deleted | boolean | Yes | ||
| id | string | Yes | ||
| object | enum | Possible values: conversation.deleted | Yes | |
OpenAI.DoubleClickAction
A double click action.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Specifies the event type. For a double click action, this property is always set to double_click. Possible values: double_click | Yes | |
| x | integer | The x-coordinate where the double click occurred. | Yes | |
| y | integer | The y-coordinate where the double click occurred. | Yes |
OpenAI.Drag
A drag action.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| path | array of OpenAI.DragPoint | An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g. [{ x: 100, y: 200 }, { x: 200, y: 300 }]. | Yes | |
| type | enum | Specifies the event type. For a drag action, this property is always set to drag. Possible values: drag | Yes | |
OpenAI.DragPoint
An x/y coordinate pair, e.g. { x: 100, y: 200 }.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| x | integer | The x-coordinate. | Yes | |
| y | integer | The y-coordinate. | Yes |
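The Drag and DragPoint schemas compose as follows; this sketch reuses the example path from the Drag table:

```python
# A Drag action payload: path is a list of OpenAI.DragPoint {x, y} pairs.
drag_action = {
    "type": "drag",
    "path": [
        {"x": 100, "y": 200},
        {"x": 200, "y": 300},
    ],
}
```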
OpenAI.EasyInputMessage
A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | string or OpenAI.InputMessageContentList | Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses. | Yes | |
| role | enum | The role of the message input. One of user, assistant, system, or developer. Possible values: user, assistant, system, developer | Yes | |
| type | enum | The type of the message input. Always message. Possible values: message | Yes | |
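The role hierarchy above can be illustrated with two EasyInputMessage items; the content strings are placeholders.

```python
# EasyInputMessage items. Instructions given with the developer or system role
# take precedence over instructions given with the user role.
input_messages = [
    {"type": "message", "role": "developer", "content": "Answer in one sentence."},
    {"type": "message", "role": "user", "content": "What does a vector store do?"},
]
```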
OpenAI.Embedding
Represents an embedding vector returned by the embeddings endpoint.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| embedding | array of number | The embedding vector, which is a list of floats. The length of vector depends on the model as listed in the embedding guide. | Yes | |
| index | integer | The index of the embedding in the list of embeddings. | Yes | |
| object | enum | The object type, which is always "embedding". Possible values: embedding | Yes | |
OpenAI.Eval
An Eval object with a data source config and testing criteria. An Eval represents a task to be done for your LLM integration. Like:
- Improve the quality of my chatbot
- See how well my chatbot handles customer support
- Check if o4-mini is better at my use case than gpt-4o
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_at | integer | The Unix timestamp (in seconds) for when the eval was created. | Yes | |
| data_source_config | OpenAI.EvalCustomDataSourceConfig or OpenAI.EvalLogsDataSourceConfig or OpenAI.EvalStoredCompletionsDataSourceConfig | Configuration of data sources used in runs of the evaluation. | Yes | |
| id | string | Unique identifier for the evaluation. | Yes | |
| metadata | OpenAI.Metadata or null | Yes | ||
| name | string | The name of the evaluation. | Yes | |
| object | enum | The object type. Possible values: eval | Yes | |
| testing_criteria | array of OpenAI.CreateEvalLabelModelGrader or OpenAI.EvalGraderStringCheck or OpenAI.EvalGraderTextSimilarity or OpenAI.EvalGraderPython or OpenAI.EvalGraderScoreModel or EvalGraderEndpoint | A list of testing criteria. | Yes |
OpenAI.EvalApiError
An object representing an error response from the Eval API.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| code | string | The error code. | Yes | |
| message | string | The error message. | Yes |
OpenAI.EvalCustomDataSourceConfig
A CustomDataSourceConfig which specifies the schema of your item and optionally sample namespaces.
The response schema defines the shape of the data that will be:
- Used to define your testing criteria and
- What data is required when creating a run
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| schema | object | The JSON schema for the run data source items. | Yes | |
| type | enum | The type of data source. Always custom. Possible values: custom | Yes | |
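A custom data source config can be sketched from the table above; the item fields (question, expected) are illustrative, not part of this reference.

```python
# A custom eval data source config. "schema" holds a JSON Schema describing
# the shape of each run data source item.
data_source_config = {
    "type": "custom",
    "schema": {
        "type": "object",
        "properties": {
            "question": {"type": "string"},
            "expected": {"type": "string"},
        },
        "required": ["question", "expected"],
    },
}
```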
OpenAI.EvalGraderPython
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| image_tag | string | The image tag to use for the python script. | No | |
| name | string | The name of the grader. | Yes | |
| pass_threshold | number | The threshold for the score. | No | |
| source | string | The source code of the python script. | Yes | |
| type | enum | The object type, which is always python. Possible values: python | Yes | |
OpenAI.EvalGraderScoreModel
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input | array of OpenAI.EvalItem | The input messages evaluated by the grader. Supports text, output text, input image, and input audio content blocks, and may include template strings. | Yes | |
| model | string | The model to use for the evaluation. | Yes | |
| name | string | The name of the grader. | Yes | |
| pass_threshold | number | The threshold for the score. | No | |
| range | array of number | The range of the score. Defaults to [0, 1]. | No | |
| sampling_params | OpenAI.EvalGraderScoreModelSamplingParams | No | ||
| └─ max_completions_tokens | integer or null | No | ||
| └─ reasoning_effort | OpenAI.ReasoningEffort | Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. gpt-5.1 defaults to none, which does not perform reasoning; its supported values are none, low, medium, and high, and tool calls are supported for all of them. All models before gpt-5.1 default to medium reasoning effort and do not support none. The gpt-5-pro model defaults to (and only supports) high reasoning effort. xhigh is supported for all models after gpt-5.1-codex-max. | No | |
| └─ seed | integer or null | No | ||
| └─ temperature | number or null | No | ||
| └─ top_p | number or null | No | 1 | |
| type | enum | The object type, which is always score_model. Possible values: score_model | Yes | |
OpenAI.EvalGraderScoreModelSamplingParams
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| max_completions_tokens | integer or null | No | ||
| reasoning_effort | OpenAI.ReasoningEffort | Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. gpt-5.1 defaults to none, which does not perform reasoning; its supported values are none, low, medium, and high, and tool calls are supported for all of them. All models before gpt-5.1 default to medium reasoning effort and do not support none. The gpt-5-pro model defaults to (and only supports) high reasoning effort. xhigh is supported for all models after gpt-5.1-codex-max. | No | |
| seed | integer or null | No | ||
| temperature | number or null | No | ||
| top_p | number or null | No |
OpenAI.EvalGraderStringCheck
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input | string | The input text. This may include template strings. | Yes | |
| name | string | The name of the grader. | Yes | |
| operation | enum | The string check operation to perform. One of eq, ne, like, or ilike. Possible values: eq, ne, like, ilike | Yes | |
| reference | string | The reference text. This may include template strings. | Yes | |
| type | enum | The object type, which is always string_check. Possible values: string_check | Yes | |
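A string-check grader body can be sketched from the table above. The {{...}} template strings are resolved per run item; the specific field names (sample.output_text, item.expected) are illustrative assumptions.

```python
# Sketch of an OpenAI.EvalGraderStringCheck body.
string_check_grader = {
    "type": "string_check",
    "name": "exact-match",
    "input": "{{sample.output_text}}",   # template string (illustrative)
    "reference": "{{item.expected}}",    # template string (illustrative)
    "operation": "eq",                   # one of eq, ne, like, ilike
}
```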
OpenAI.EvalGraderTextSimilarity
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| evaluation_metric | enum | The evaluation metric to use. One of cosine, fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l. Possible values: cosine, fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, rouge_l | Yes | |
| input | string | The text being graded. | Yes | |
| name | string | The name of the grader. | Yes | |
| pass_threshold | number | The threshold for the score. | Yes | |
| reference | string | The text being graded against. | Yes | |
| type | enum | The type of grader. Possible values: text_similarity | Yes | |
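A text-similarity grader follows the same shape with an added metric and threshold; the template strings and threshold value below are illustrative.

```python
# Sketch of an OpenAI.EvalGraderTextSimilarity body using fuzzy_match.
text_similarity_grader = {
    "type": "text_similarity",
    "name": "fuzzy-answer-match",
    "input": "{{sample.output_text}}",   # template string (illustrative)
    "reference": "{{item.expected}}",    # template string (illustrative)
    "evaluation_metric": "fuzzy_match",  # see the metric list above
    "pass_threshold": 0.8,
}
```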
OpenAI.EvalItem
A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | OpenAI.EvalItemContent | Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items. | Yes | |
| role | enum | The role of the message input. One of user, assistant, system, or developer. Possible values: user, assistant, system, developer | Yes | |
| type | enum | The type of the message input. Always message. Possible values: message | No | |
OpenAI.EvalItemContent
Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
Type: OpenAI.EvalItemContentItem or OpenAI.EvalItemContentArray
Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
OpenAI.EvalItemContentArray
A list of inputs, each of which may be either an input text, output text, input image, or input audio object.
Array of: OpenAI.EvalItemContentItem
OpenAI.EvalItemContentItem
A single content item: input text, output text, input image, or input audio.
Type: OpenAI.EvalItemContentText or OpenAI.EvalItemContentItemObject
A single content item: input text, output text, input image, or input audio.
OpenAI.EvalItemContentItemObject
A single content item: input text, output text, input image, or input audio.
Discriminator for OpenAI.EvalItemContentItemObject
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| output_text | OpenAI.EvalItemContentOutputText |
| input_image | OpenAI.EvalItemInputImage |
| input_audio | OpenAI.InputAudio |
| input_text | OpenAI.EvalItemContentItemObjectInputTextContent |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.EvalItemContentItemObjectType | Yes |
OpenAI.EvalItemContentItemObjectInputTextContent
A text input to the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| text | string | The text input to the model. | Yes | |
| type | enum | The type of the input item. Always input_text. Possible values: input_text | Yes | |
OpenAI.EvalItemContentItemObjectType
| Property | Value |
|---|---|
| Type | string |
| Values | input_text, output_text, input_image, input_audio |
OpenAI.EvalItemContentOutputText
A text output from the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| text | string | The text output from the model. | Yes | |
| type | enum | The type of the output text. Always output_text. Possible values: output_text | Yes | |
OpenAI.EvalItemContentText
A text input to the model.
Type: string
OpenAI.EvalItemInputImage
An image input block used within EvalItem content arrays.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| detail | string | The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto. | No | |
| image_url | string | The URL of the image input. | Yes | |
| type | enum | The type of the image input. Always input_image. Possible values: input_image | Yes | |
OpenAI.EvalJsonlFileContentSource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | array of OpenAI.EvalJsonlFileContentSourceContent | The content of the jsonl file. | Yes | |
| type | enum | The type of jsonl source. Always file_content. Possible values: file_content | Yes | |
OpenAI.EvalJsonlFileContentSourceContent
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item | object | Yes | ||
| sample | object | No |
OpenAI.EvalJsonlFileIdSource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| id | string | The identifier of the file. | Yes | |
| type | enum | The type of jsonl source. Always file_id. Possible values: file_id | Yes | |
OpenAI.EvalList
An object representing a list of evals.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array of OpenAI.Eval | An array of eval objects. | Yes | |
| first_id | string | The identifier of the first eval in the data array. | Yes | |
| has_more | boolean | Indicates whether there are more evals available. | Yes | |
| last_id | string | The identifier of the last eval in the data array. | Yes | |
| object | enum | The type of this object. It is always set to "list". Possible values: list | Yes | |
OpenAI.EvalLogsDataSourceConfig
A LogsDataSourceConfig which specifies the metadata property of your logs query.
This is usually metadata like usecase=chatbot or prompt-version=v2, etc.
The schema returned by this data source config is used to define what variables are available in your evals.
item and sample are both defined when using this data source config.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| metadata | OpenAI.Metadata or null | No | ||
| schema | object | The JSON schema for the run data source items. | Yes | |
| type | enum | The type of data source. Always logs. Possible values: logs | Yes | |
OpenAI.EvalResponsesSource
An EvalResponsesSource object describing a run data source configuration.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_after | integer or null | No | ||
| created_before | integer or null | No | ||
| instructions_search | string or null | No | ||
| metadata | object or null | No | ||
| model | string or null | No | ||
| reasoning_effort | OpenAI.ReasoningEffort or null | No | ||
| temperature | number or null | No | ||
| tools | array of string or null | No | ||
| top_p | number or null | No | ||
| type | enum | The type of run data source. Always responses. Possible values: responses | Yes | |
| users | array of string or null | No |
OpenAI.EvalRun
A schema representing an evaluation run.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_at | integer | Unix timestamp (in seconds) when the evaluation run was created. | Yes | |
| data_source | OpenAI.CreateEvalJsonlRunDataSource or OpenAI.CreateEvalCompletionsRunDataSource or OpenAI.CreateEvalResponsesRunDataSource | Information about the run's data source. | Yes | |
| error | OpenAI.EvalApiError | An object representing an error response from the Eval API. | Yes | |
| eval_id | string | The identifier of the associated evaluation. | Yes | |
| id | string | Unique identifier for the evaluation run. | Yes | |
| metadata | OpenAI.Metadata or null | Yes | ||
| model | string | The model that is evaluated, if applicable. | Yes | |
| name | string | The name of the evaluation run. | Yes | |
| object | enum | The type of the object. Always "eval.run". Possible values: eval.run | Yes | |
| per_model_usage | array of OpenAI.EvalRunPerModelUsage | Usage statistics for each model during the evaluation run. | Yes | |
| per_testing_criteria_results | array of OpenAI.EvalRunPerTestingCriteriaResults | Results per testing criteria applied during the evaluation run. | Yes | |
| report_url | string | The URL to the rendered evaluation run report on the UI dashboard. | Yes | |
| result_counts | OpenAI.EvalRunResultCounts | Yes | ||
| └─ errored | integer | Yes | ||
| └─ failed | integer | Yes | ||
| └─ passed | integer | Yes | ||
| └─ total | integer | Yes | ||
| status | string | The status of the evaluation run. | Yes |
OpenAI.EvalRunList
An object representing a list of runs for an evaluation.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array of OpenAI.EvalRun | An array of eval run objects. | Yes | |
| first_id | string | The identifier of the first eval run in the data array. | Yes | |
| has_more | boolean | Indicates whether there are more evals available. | Yes | |
| last_id | string | The identifier of the last eval run in the data array. | Yes | |
| object | enum | The type of this object. It is always set to "list". Possible values: list | Yes | |
OpenAI.EvalRunOutputItem
A schema representing an evaluation run output item.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_at | integer | Unix timestamp (in seconds) when the evaluation run was created. | Yes | |
| datasource_item | object | Details of the input data source item. | Yes | |
| datasource_item_id | integer | The identifier for the data source item. | Yes | |
| eval_id | string | The identifier of the evaluation group. | Yes | |
| id | string | Unique identifier for the evaluation run output item. | Yes | |
| object | enum | The type of the object. Always "eval.run.output_item". Possible values: eval.run.output_item | Yes | |
| results | array of OpenAI.EvalRunOutputItemResult | A list of grader results for this output item. | Yes | |
| run_id | string | The identifier of the evaluation run associated with this output item. | Yes | |
| sample | OpenAI.EvalRunOutputItemSample | Yes | ||
| └─ error | OpenAI.EvalApiError | An object representing an error response from the Eval API. | Yes | |
| └─ finish_reason | string | Yes | ||
| └─ input | array of OpenAI.EvalRunOutputItemSampleInput | Yes | ||
| └─ max_completion_tokens | integer | Yes | ||
| └─ model | string | Yes | ||
| └─ output | array of OpenAI.EvalRunOutputItemSampleOutput | Yes | ||
| └─ seed | integer | Yes | ||
| └─ temperature | number | Yes | ||
| └─ top_p | number | Yes | ||
| └─ usage | OpenAI.EvalRunOutputItemSampleUsage | Yes | ||
| status | string | The status of the evaluation run. | Yes |
OpenAI.EvalRunOutputItemList
An object representing a list of output items for an evaluation run.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array of OpenAI.EvalRunOutputItem | An array of eval run output item objects. | Yes | |
| first_id | string | The identifier of the first eval run output item in the data array. | Yes | |
| has_more | boolean | Indicates whether there are more eval run output items available. | Yes | |
| last_id | string | The identifier of the last eval run output item in the data array. | Yes | |
| object | enum | The type of this object. It is always set to "list". Possible values: list | Yes | |
OpenAI.EvalRunOutputItemResult
A single grader result for an evaluation run output item.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| name | string | The name of the grader. | Yes | |
| passed | boolean | Whether the grader considered the output a pass. | Yes | |
| sample | object or null | Optional sample or intermediate data produced by the grader. | No | |
| score | number | The numeric score produced by the grader. | Yes | |
| type | string | The grader type (for example, "string-check-grader"). | No |
OpenAI.EvalRunOutputItemSample
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| error | OpenAI.EvalApiError | An object representing an error response from the Eval API. | Yes | |
| finish_reason | string | Yes | ||
| input | array of OpenAI.EvalRunOutputItemSampleInput | Yes | ||
| max_completion_tokens | integer | Yes | ||
| model | string | Yes | ||
| output | array of OpenAI.EvalRunOutputItemSampleOutput | Yes | ||
| seed | integer | Yes | ||
| temperature | number | Yes | ||
| top_p | number | Yes | ||
| usage | OpenAI.EvalRunOutputItemSampleUsage | Yes |
OpenAI.EvalRunOutputItemSampleInput
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | string | Yes | ||
| role | string | Yes |
OpenAI.EvalRunOutputItemSampleOutput
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | string | No | ||
| role | string | No |
OpenAI.EvalRunOutputItemSampleUsage
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| cached_tokens | integer | Yes | ||
| completion_tokens | integer | Yes | ||
| prompt_tokens | integer | Yes | ||
| total_tokens | integer | Yes |
OpenAI.EvalRunPerModelUsage
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| cached_tokens | integer | Yes | ||
| completion_tokens | integer | Yes | ||
| invocation_count | integer | Yes | ||
| model_name | string | Yes | ||
| prompt_tokens | integer | Yes | ||
| total_tokens | integer | Yes |
OpenAI.EvalRunPerTestingCriteriaResults
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| failed | integer | Yes | ||
| passed | integer | Yes | ||
| testing_criteria | string | Yes |
OpenAI.EvalRunResultCounts
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| errored | integer | Yes | ||
| failed | integer | Yes | ||
| passed | integer | Yes | ||
| total | integer | Yes |
OpenAI.EvalStoredCompletionsDataSourceConfig
Deprecated in favor of LogsDataSourceConfig.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| metadata | OpenAI.Metadata or null | No | ||
| schema | object | The JSON schema for the run data source items. | Yes | |
| type | enum | The type of data source. Always stored_completions. Possible values: stored_completions | Yes | |
OpenAI.EvalStoredCompletionsSource
A StoredCompletionsRunDataSource configuration describing a set of filters.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_after | integer or null | No | ||
| created_before | integer or null | No | ||
| limit | integer or null | No | ||
| metadata | OpenAI.Metadata or null | No | ||
| model | string or null | No | ||
| type | enum | The type of source. Always stored_completions. Possible values: stored_completions | Yes | |
OpenAI.FileCitationBody
A citation to a file.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_id | string | The ID of the file. | Yes | |
| filename | string | The filename of the file cited. | Yes | |
| index | integer | The index of the file in the list of files. | Yes | |
| type | enum | The type of the file citation. Always file_citation. Possible values: file_citation | Yes | |
OpenAI.FilePath
A path to a file.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_id | string | The ID of the file. | Yes | |
| index | integer | The index of the file in the list of files. | Yes | |
| type | enum | The type of the file path. Always file_path. Possible values: file_path | Yes | |
OpenAI.FileSearchRanker
The ranker to use for the file search. If not specified, the auto ranker will be used.
| Property | Value |
|---|---|
| Type | string |
| Values | auto, default_2024_08_21 |
OpenAI.FileSearchRankingOptions
The ranking options for the file search. If not specified, the file search tool will use the auto ranker and a score_threshold of 0.
See the file search tool documentation for more information.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| ranker | OpenAI.FileSearchRanker | The ranker to use for the file search. If not specified, the auto ranker will be used. | No | |
| score_threshold | number | The score threshold for the file search. All values must be a floating point number between 0 and 1. Constraints: min: 0, max: 1 | Yes | |
OpenAI.FileSearchTool
A tool that searches for relevant content from uploaded files.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| filters | OpenAI.Filters or null | | No | |
| max_num_results | integer | The maximum number of results to return. This number should be between 1 and 50 inclusive. | No | |
| ranking_options | OpenAI.RankingOptions | | No | |
| └─ hybrid_search | OpenAI.HybridSearchOptions | Weights that control how reciprocal rank fusion balances semantic embedding matches versus sparse keyword matches when hybrid search is enabled. | No | |
| └─ ranker | OpenAI.RankerVersionType | The ranker to use for the file search. | No | |
| └─ score_threshold | number | The score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results. | No | |
| type | enum | The type of the file search tool. Always file_search. Possible values: file_search | Yes | |
| vector_store_ids | array of string | The IDs of the vector stores to search. | Yes | |
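As a sketch, a file_search tool entry built against the OpenAI.FileSearchTool schema above might look like the following; the vector store ID is a placeholder, and only type and vector_store_ids are required.

```python
# Illustrative OpenAI.FileSearchTool payload; IDs and values are placeholders.
file_search_tool = {
    "type": "file_search",                  # required; always "file_search"
    "vector_store_ids": ["vs_example123"],  # required; vector stores to search
    "max_num_results": 10,                  # optional; 1-50 inclusive
    "ranking_options": {
        "score_threshold": 0.5,             # optional; between 0 and 1
    },
}

assert 1 <= file_search_tool["max_num_results"] <= 50
```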
OpenAI.FileSearchToolCallResults
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| attributes | OpenAI.VectorStoreFileAttributes or null | | No | |
| file_id | string | | No | |
| filename | string | | No | |
| score | number | | No | |
| text | string | | No | |
OpenAI.Filters
Type: OpenAI.ComparisonFilter or OpenAI.CompoundFilter
OpenAI.FineTuneDPOHyperparameters
The hyperparameters used for the DPO fine-tuning job.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| batch_size | string or integer | Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance. | No | |
| beta | string or number | The beta value for the DPO method. A higher beta value will increase the weight of the penalty between the policy and reference model. | No | |
| learning_rate_multiplier | string or number | Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting. | No | |
| n_epochs | string or integer | The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. | No |
OpenAI.FineTuneDPOMethod
Configuration for the DPO fine-tuning method.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| hyperparameters | OpenAI.FineTuneDPOHyperparameters | The hyperparameters used for the DPO fine-tuning job. | No |
OpenAI.FineTuneMethod
The method used for fine-tuning.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| dpo | OpenAI.FineTuneDPOMethod | Configuration for the DPO fine-tuning method. | No | |
| reinforcement | AzureFineTuneReinforcementMethod | | No | |
| supervised | OpenAI.FineTuneSupervisedMethod | Configuration for the supervised fine-tuning method. | No | |
| type | enum | The type of method. Is either supervised, dpo, or reinforcement. Possible values: supervised, dpo, reinforcement | Yes | |
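A minimal OpenAI.FineTuneMethod sketch selecting supervised fine-tuning; the hyperparameter values are illustrative, and "auto" can stand in wherever the schema allows "string or integer" / "string or number".

```python
# Illustrative OpenAI.FineTuneMethod object; values are examples only.
method = {
    "type": "supervised",  # one of: supervised, dpo, reinforcement
    "supervised": {
        "hyperparameters": {
            "n_epochs": 3,                    # or "auto"
            "batch_size": "auto",
            "learning_rate_multiplier": 1.8,  # or "auto"
        }
    },
}

assert method["type"] in ("supervised", "dpo", "reinforcement")
```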
OpenAI.FineTuneReinforcementHyperparameters
The hyperparameters used for the reinforcement fine-tuning job.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| batch_size | string or integer | Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance. | No | |
| compute_multiplier | string or number | Multiplier on amount of compute used for exploring search space during training. | No | |
| eval_interval | string or integer | The number of training steps between evaluation runs. | No | |
| eval_samples | string or integer | Number of evaluation samples to generate per training step. | No | |
| learning_rate_multiplier | string or number | Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting. | No | |
| n_epochs | string or integer | The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. | No | |
| reasoning_effort | enum | Level of reasoning effort. Possible values: default, low, medium, high | No | |
OpenAI.FineTuneSupervisedHyperparameters
The hyperparameters used for the fine-tuning job.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| batch_size | string or integer | Number of examples in each batch. A larger batch size means that model parameters are updated less frequently, but with lower variance. | No | |
| learning_rate_multiplier | string or number | Scaling factor for the learning rate. A smaller learning rate may be useful to avoid overfitting. | No | |
| n_epochs | string or integer | The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. | No |
OpenAI.FineTuneSupervisedMethod
Configuration for the supervised fine-tuning method.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| hyperparameters | OpenAI.FineTuneSupervisedHyperparameters | The hyperparameters used for the fine-tuning job. | No |
OpenAI.FineTuningCheckpointPermission
The checkpoint.permission object represents a permission for a fine-tuned model checkpoint.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_at | integer | The Unix timestamp (in seconds) for when the permission was created. | Yes | |
| id | string | The permission identifier, which can be referenced in the API endpoints. | Yes | |
| object | enum | The object type, which is always "checkpoint.permission". Possible values: checkpoint.permission | Yes | |
| project_id | string | The project identifier that the permission is for. | Yes |
OpenAI.FineTuningIntegration
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The type of the integration being enabled for the fine-tuning job. Possible values: wandb | Yes | |
| wandb | OpenAI.FineTuningIntegrationWandb | | Yes | |
| └─ entity | string or null | | No | |
| └─ name | string or null | | No | |
| └─ project | string | | Yes | |
| └─ tags | array of string | | No | |
OpenAI.FineTuningIntegrationWandb
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| entity | string or null | | No | |
| name | string or null | | No | |
| project | string | | Yes | |
| tags | array of string | | No | |
OpenAI.FineTuningJob
The fine_tuning.job object represents a fine-tuning job that has been created through the API.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_at | integer | The Unix timestamp (in seconds) for when the fine-tuning job was created. | Yes | |
| error | OpenAI.FineTuningJobError or null | | Yes | |
| estimated_finish | string or null | | No | |
| fine_tuned_model | string or null | | Yes | |
| finished_at | string or null | | Yes | |
| hyperparameters | OpenAI.FineTuningJobHyperparameters | | Yes | |
| └─ batch_size | string or integer or null | | No | auto |
| └─ learning_rate_multiplier | string or number | | No | |
| └─ n_epochs | string or integer | | No | auto |
| id | string | The object identifier, which can be referenced in the API endpoints. | Yes | |
| integrations | array of OpenAI.FineTuningIntegration or null | | No | |
| metadata | OpenAI.Metadata or null | | No | |
| method | OpenAI.FineTuneMethod | The method used for fine-tuning. | No | |
| model | string | The base model that is being fine-tuned. | Yes | |
| object | enum | The object type, which is always "fine_tuning.job". Possible values: fine_tuning.job | Yes | |
| organization_id | string | The organization that owns the fine-tuning job. | Yes | |
| result_files | array of string | The compiled results file ID(s) for the fine-tuning job. You can retrieve the results with the Files API. | Yes | |
| seed | integer | The seed used for the fine-tuning job. | Yes | |
| status | enum | The current status of the fine-tuning job, which can be either validating_files, queued, running, succeeded, failed, or cancelled. Possible values: validating_files, queued, running, succeeded, failed, cancelled | Yes | |
| trained_tokens | integer or null | | Yes | |
| training_file | string | The file ID used for training. You can retrieve the training data with the Files API. | Yes | |
| validation_file | string or null | | Yes | |
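For orientation, a request body that produces a fine_tuning.job object like the one described above could be sketched as follows; the model name and file ID are placeholders, so substitute values valid for your resource.

```python
# Illustrative fine-tuning job creation body; model and file IDs are
# placeholders, not guaranteed to exist on any given resource.
create_job_body = {
    "model": "gpt-4o-mini-2024-07-18",  # base model to fine-tune (assumption)
    "training_file": "file-abc123",     # uploaded JSONL training data
    "seed": 42,                         # optional; for reproducibility
    "method": {
        "type": "supervised",
        "supervised": {"hyperparameters": {"n_epochs": "auto"}},
    },
}

assert "training_file" in create_job_body
```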
OpenAI.FineTuningJobCheckpoint
The fine_tuning.job.checkpoint object represents a model checkpoint for a fine-tuning job that is ready to use.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_at | integer | The Unix timestamp (in seconds) for when the checkpoint was created. | Yes | |
| fine_tuned_model_checkpoint | string | The name of the fine-tuned checkpoint model that is created. | Yes | |
| fine_tuning_job_id | string | The name of the fine-tuning job that this checkpoint was created from. | Yes | |
| id | string | The checkpoint identifier, which can be referenced in the API endpoints. | Yes | |
| metrics | OpenAI.FineTuningJobCheckpointMetrics | | Yes | |
| └─ full_valid_loss | number | | No | |
| └─ full_valid_mean_token_accuracy | number | | No | |
| └─ step | number | | No | |
| └─ train_loss | number | | No | |
| └─ train_mean_token_accuracy | number | | No | |
| └─ valid_loss | number | | No | |
| └─ valid_mean_token_accuracy | number | | No | |
| object | enum | The object type, which is always "fine_tuning.job.checkpoint". Possible values: fine_tuning.job.checkpoint | Yes | |
| step_number | integer | The step number that the checkpoint was created at. | Yes | |
OpenAI.FineTuningJobCheckpointMetrics
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| full_valid_loss | number | | No | |
| full_valid_mean_token_accuracy | number | | No | |
| step | number | | No | |
| train_loss | number | | No | |
| train_mean_token_accuracy | number | | No | |
| valid_loss | number | | No | |
| valid_mean_token_accuracy | number | | No | |
OpenAI.FineTuningJobError
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| code | string | | Yes | |
| message | string | | Yes | |
| param | string or null | | Yes | |
OpenAI.FineTuningJobEvent
Fine-tuning job event object
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_at | integer | The Unix timestamp (in seconds) for when the fine-tuning job was created. | Yes | |
| data | OpenAI.FineTuningJobEventData | | No | |
| id | string | The object identifier. | Yes | |
| level | enum | The log level of the event. Possible values: info, warn, error | Yes | |
| message | string | The message of the event. | Yes | |
| object | enum | The object type, which is always "fine_tuning.job.event". Possible values: fine_tuning.job.event | Yes | |
| type | enum | The type of event. Possible values: message, metrics | No | |
OpenAI.FineTuningJobEventData
Type: object
OpenAI.FineTuningJobHyperparameters
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| batch_size | string or integer or null | | No | |
| learning_rate_multiplier | string or number | | No | |
| n_epochs | string or integer | | No | |
OpenAI.FunctionAndCustomToolCallOutput
Discriminator for OpenAI.FunctionAndCustomToolCallOutput
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| input_text | OpenAI.FunctionAndCustomToolCallOutputInputTextContent |
| input_image | OpenAI.FunctionAndCustomToolCallOutputInputImageContent |
| input_file | OpenAI.FunctionAndCustomToolCallOutputInputFileContent |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.FunctionAndCustomToolCallOutputType | | Yes | |
OpenAI.FunctionAndCustomToolCallOutputInputFileContent
A file input to the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_data | string | The content of the file to be sent to the model. | No | |
| file_id | string or null | | No | |
| file_url | string | The URL of the file to be sent to the model. | No | |
| filename | string | The name of the file to be sent to the model. | No | |
| type | enum | The type of the input item. Always input_file. Possible values: input_file | Yes | |
OpenAI.FunctionAndCustomToolCallOutputInputImageContent
An image input to the model. Learn about image inputs.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| detail | OpenAI.ImageDetail | | Yes | |
| file_id | string or null | | No | |
| image_url | string or null | | No | |
| type | enum | The type of the input item. Always input_image. Possible values: input_image | Yes | |
OpenAI.FunctionAndCustomToolCallOutputInputTextContent
A text input to the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| text | string | The text input to the model. | Yes | |
| type | enum | The type of the input item. Always input_text. Possible values: input_text | Yes | |
OpenAI.FunctionAndCustomToolCallOutputType
| Property | Value |
|---|---|
| Type | string |
| Values | input_text, input_image, input_file |
OpenAI.FunctionObject
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| description | string | A description of what the function does, used by the model to choose when and how to call the function. | No | |
| name | string | The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64. | Yes | |
| parameters | OpenAI.FunctionParameters | The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format. Omitting parameters defines a function with an empty parameter list. | No | |
| strict | boolean or null | | No | |
OpenAI.FunctionParameters
The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format.
Omitting parameters defines a function with an empty parameter list.
Type: object
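A minimal OpenAI.FunctionObject sketch: the parameters field is an ordinary JSON Schema object. The function name and its fields here are hypothetical.

```python
# Illustrative function definition; name and schema fields are made up.
get_weather = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
        "additionalProperties": False,
    },
    "strict": True,  # optional; requests exact schema adherence where supported
}

assert len(get_weather["name"]) <= 64
```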
OpenAI.FunctionShellAction
Execute a shell command.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| commands | array of string | | Yes | |
| max_output_length | integer or null | | Yes | |
| timeout_ms | integer or null | | Yes | |
OpenAI.FunctionShellCallOutputContent
The content of a shell tool call output that was emitted.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_by | string | The identifier of the actor that created the item. | No | |
| outcome | OpenAI.FunctionShellCallOutputOutcome | Represents either an exit outcome (with an exit code) or a timeout outcome for a shell call output chunk. | Yes | |
| └─ type | OpenAI.FunctionShellCallOutputOutcomeType | | Yes | |
| stderr | string | The standard error output that was captured. | Yes | |
| stdout | string | The standard output that was captured. | Yes |
OpenAI.FunctionShellCallOutputExitOutcome
Indicates that the shell commands finished and returned an exit code.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| exit_code | integer | Exit code from the shell process. | Yes | |
| type | enum | The outcome type. Always exit. Possible values: exit | Yes | |
OpenAI.FunctionShellCallOutputOutcome
Represents either an exit outcome (with an exit code) or a timeout outcome for a shell call output chunk.
Discriminator for OpenAI.FunctionShellCallOutputOutcome
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| timeout | OpenAI.FunctionShellCallOutputTimeoutOutcome |
| exit | OpenAI.FunctionShellCallOutputExitOutcome |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.FunctionShellCallOutputOutcomeType | | Yes | |
OpenAI.FunctionShellCallOutputOutcomeType
| Property | Value |
|---|---|
| Type | string |
| Values | timeout, exit |
OpenAI.FunctionShellCallOutputTimeoutOutcome
Indicates that the shell call exceeded its configured time limit.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The outcome type. Always timeout. Possible values: timeout | Yes | |
OpenAI.FunctionShellToolParam
A tool that allows the model to execute shell commands.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The type of the shell tool. Always shell. Possible values: shell | Yes | |
OpenAI.FunctionTool
Defines a function in your own code the model can choose to call.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| description | string or null | | No | |
| name | string | The name of the function to call. | Yes | |
| parameters | object or null | | Yes | |
| strict | boolean or null | | Yes | |
| type | enum | The type of the function tool. Always function. Possible values: function | Yes | |
OpenAI.GraderMulti
A MultiGrader object combines the output of multiple graders to produce a single score.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| calculate_output | string | A formula to calculate the output based on grader results. | Yes | |
| graders | OpenAI.GraderStringCheck or OpenAI.GraderTextSimilarity or OpenAI.GraderScoreModel or GraderEndpoint | | Yes | |
| name | string | The name of the grader. | Yes | |
| type | enum | The object type, which is always multi. Possible values: multi | Yes | |
OpenAI.GraderPython
A PythonGrader object that runs a python script on the input.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| image_tag | string | The image tag to use for the python script. | No | |
| name | string | The name of the grader. | Yes | |
| source | string | The source code of the python script. | Yes | |
| type | enum | The object type, which is always python. Possible values: python | Yes | |
OpenAI.GraderScoreModel
A ScoreModelGrader object that uses a model to assign a score to the input.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input | array of OpenAI.EvalItem | The input messages evaluated by the grader. Supports text, output text, input image, and input audio content blocks, and may include template strings. | Yes | |
| model | string | The model to use for the evaluation. | Yes | |
| name | string | The name of the grader. | Yes | |
| range | array of number | The range of the score. Defaults to [0, 1]. | No | |
| sampling_params | OpenAI.EvalGraderScoreModelSamplingParams | | No | |
| └─ max_completions_tokens | integer or null | | No | |
| └─ reasoning_effort | OpenAI.ReasoningEffort | Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. gpt-5.1 defaults to none, which does not perform reasoning; the supported reasoning values for gpt-5.1 are none, low, medium, and high, and tool calls are supported for all of them. All models before gpt-5.1 default to medium reasoning effort and do not support none. The gpt-5-pro model defaults to (and only supports) high reasoning effort. xhigh is supported for all models after gpt-5.1-codex-max. | No | |
| └─ seed | integer or null | | No | |
| └─ temperature | number or null | | No | |
| └─ top_p | number or null | | No | 1 |
| type | enum | The object type, which is always score_model. Possible values: score_model | Yes | |
OpenAI.GraderStringCheck
A StringCheckGrader object that performs a string comparison between input and reference using a specified operation.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input | string | The input text. This may include template strings. | Yes | |
| name | string | The name of the grader. | Yes | |
| operation | enum | The string check operation to perform. One of eq, ne, like, or ilike. Possible values: eq, ne, like, ilike | Yes | |
| reference | string | The reference text. This may include template strings. | Yes | |
| type | enum | The object type, which is always string_check. Possible values: string_check | Yes | |
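A minimal OpenAI.GraderStringCheck sketch comparing a templated model output against a templated reference; the template variable names are illustrative.

```python
# Illustrative string_check grader; template variables are placeholders.
grader = {
    "type": "string_check",
    "name": "exact-match",
    "input": "{{sample.output_text}}",  # templated input text
    "reference": "{{item.answer}}",     # templated reference text
    "operation": "eq",                  # one of: eq, ne, like, ilike
}

assert grader["operation"] in ("eq", "ne", "like", "ilike")
```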
OpenAI.GraderTextSimilarity
A TextSimilarityGrader object which grades text based on similarity metrics.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| evaluation_metric | enum | The evaluation metric to use. One of cosine, fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l. Possible values: cosine, fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, rouge_l | Yes | |
| input | string | The text being graded. | Yes | |
| name | string | The name of the grader. | Yes | |
| reference | string | The text being graded against. | Yes | |
| type | enum | The type of grader. Possible values: text_similarity | Yes | |
OpenAI.GrammarSyntax1
| Property | Value |
|---|---|
| Type | string |
| Values | lark, regex |
OpenAI.HybridSearchOptions
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| embedding_weight | number | The weight of the embedding in the reciprocal ranking fusion. | Yes | |
| text_weight | number | The weight of the text in the reciprocal ranking fusion. | Yes |
OpenAI.ImageDetail
| Property | Value |
|---|---|
| Type | string |
| Values | low, high, auto |
OpenAI.ImageGenTool
A tool that generates images using the GPT image models.
Valid models:
- gpt-image-1
- gpt-image-1-mini
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| background | enum | Background type for the generated image. One of transparent, opaque, or auto. Default: auto. Possible values: transparent, opaque, auto | No | |
| input_fidelity | OpenAI.InputFidelity or null | | No | |
| input_image_mask | OpenAI.ImageGenToolInputImageMask | | No | |
| └─ file_id | string | | No | |
| └─ image_url | string | | No | |
| model | string | The image generation model to use (see valid models above). | No | |
| moderation | enum | Moderation level for the generated image. Default: auto. Possible values: auto, low | No | |
| output_compression | integer | Compression level for the output image. Default: 100. Constraints: min: 0, max: 100 | No | 100 |
| output_format | enum | The output format of the generated image. One of png, webp, or jpeg. Default: png. Possible values: png, webp, jpeg | No | |
| partial_images | integer | Number of partial images to generate in streaming mode, from 0 (default value) to 3. Constraints: min: 0, max: 3 | No | |
| quality | enum | The quality of the generated image. One of low, medium, high, or auto. Default: auto. Possible values: low, medium, high, auto | No | |
| size | enum | The size of the generated image. One of 1024x1024, 1024x1536, 1536x1024, or auto. Default: auto. Possible values: 1024x1024, 1024x1536, 1536x1024, auto | No | |
| type | enum | The type of the image generation tool. Always image_generation. Possible values: image_generation | Yes | |
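Putting a few of the optional knobs together, an OpenAI.ImageGenTool entry could be sketched as follows; anything omitted falls back to the defaults in the table above.

```python
# Illustrative image_generation tool config; values are examples only.
image_tool = {
    "type": "image_generation",
    "model": "gpt-image-1",     # or gpt-image-1-mini
    "size": "1024x1024",
    "quality": "high",
    "output_format": "png",
    "output_compression": 100,  # 0-100
    "background": "transparent",
    "partial_images": 2,        # 0-3; streaming mode only
}

assert 0 <= image_tool["partial_images"] <= 3
```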
OpenAI.ImageGenToolInputImageMask
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_id | string | | No | |
| image_url | string | | No | |
OpenAI.IncludeEnum
Specify additional output data to include in the model response. Currently supported values are:
- web_search_call.action.sources: Include the sources of the web search tool call.
- code_interpreter_call.outputs: Include the outputs of Python code execution in code interpreter tool call items.
- computer_call_output.output.image_url: Include image URLs from the computer call output.
- file_search_call.results: Include the search results of the file search tool call.
- message.input_image.image_url: Include image URLs from the input message.
- message.output_text.logprobs: Include logprobs with assistant messages.
- reasoning.encrypted_content: Include an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (such as when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).

| Property | Value |
|---|---|
| Type | string |
| Values | file_search_call.results, web_search_call.results, web_search_call.action.sources, message.input_image.image_url, computer_call_output.output.image_url, code_interpreter_call.outputs, reasoning.encrypted_content, message.output_text.logprobs |
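As a sketch, the include values above are passed as an array in the request body; the model and input fields here are placeholders.

```python
# Illustrative request body using OpenAI.IncludeEnum values; model name is
# a placeholder and may differ on your deployment.
request_body = {
    "model": "gpt-4.1",
    "input": "What is in this file?",
    "include": [
        "file_search_call.results",
        "message.output_text.logprobs",
        "reasoning.encrypted_content",
    ],
}

assert isinstance(request_body["include"], list)
```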
OpenAI.InputAudio
An audio input to the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input_audio | OpenAI.InputAudioInputAudio | | Yes | |
| type | enum | The type of the input item. Always input_audio. Possible values: input_audio | Yes | |
OpenAI.InputAudioInputAudio
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | string | | Yes | |
| format | enum | Possible values: mp3, wav | Yes | |
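An OpenAI.InputAudio content part carries base64-encoded audio bytes with a format of mp3 or wav. The bytes below are a placeholder, not a real audio file.

```python
import base64

# Illustrative input_audio content part; the audio payload is fake.
fake_wav_bytes = b"RIFF....WAVEfmt "  # placeholder bytes, not a valid WAV
audio_part = {
    "type": "input_audio",
    "input_audio": {
        "data": base64.b64encode(fake_wav_bytes).decode("ascii"),
        "format": "wav",  # mp3 or wav
    },
}

assert isinstance(audio_part["input_audio"]["data"], str)
```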
OpenAI.InputContent
Discriminator for OpenAI.InputContent
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| input_text | OpenAI.InputContentInputTextContent |
| input_image | OpenAI.InputContentInputImageContent |
| input_file | OpenAI.InputContentInputFileContent |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.InputContentType | | Yes | |
OpenAI.InputContentInputFileContent
A file input to the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_data | string | The content of the file to be sent to the model. | No | |
| file_id | string or null | | No | |
| file_url | string | The URL of the file to be sent to the model. | No | |
| filename | string | The name of the file to be sent to the model. | No | |
| type | enum | The type of the input item. Always input_file. Possible values: input_file | Yes | |
OpenAI.InputContentInputImageContent
An image input to the model. Learn about image inputs.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| detail | OpenAI.ImageDetail | | Yes | |
| file_id | string or null | | No | |
| image_url | string or null | | No | |
| type | enum | The type of the input item. Always input_image. Possible values: input_image | Yes | |
OpenAI.InputContentInputTextContent
A text input to the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| text | string | The text input to the model. | Yes | |
| type | enum | The type of the input item. Always input_text. Possible values: input_text | Yes | |
OpenAI.InputContentType
| Property | Value |
|---|---|
| Type | string |
| Values | input_text, input_image, input_file |
OpenAI.InputFidelity
Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1. Unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.
| Property | Value |
|---|---|
| Type | string |
| Values | high, low |
OpenAI.InputFileContent
A file input to the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_data | string | The content of the file to be sent to the model. | No | |
| file_id | string or null | | No | |
| file_url | string | The URL of the file to be sent to the model. | No | |
| filename | string | The name of the file to be sent to the model. | No | |
| type | enum | The type of the input item. Always input_file. Possible values: input_file | Yes | |
OpenAI.InputImageContent
An image input to the model. Learn about image inputs.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| detail | OpenAI.ImageDetail | | Yes | |
| file_id | string or null | | No | |
| image_url | string or null | | No | |
| type | enum | The type of the input item. Always input_image. Possible values: input_image | Yes | |
OpenAI.InputItem
Discriminator for OpenAI.InputItem
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| message | OpenAI.EasyInputMessage |
| item_reference | OpenAI.ItemReferenceParam |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.InputItemType | | Yes | |
OpenAI.InputItemType
| Property | Value |
|---|---|
| Type | string |
| Values | message, item_reference |
OpenAI.InputMessageContentList
A list of one or many input items to the model, containing different content types.
Array of: OpenAI.InputContent
OpenAI.InputMessageResource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | OpenAI.InputMessageContentList | A list of one or many input items to the model, containing different content types. | Yes | |
| id | string | The unique ID of the message input. | Yes | |
| role | enum | The role of the message input. One of user, system, or developer. Possible values: user, system, developer | Yes | |
| status | enum | The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API. Possible values: in_progress, completed, incomplete | No | |
| type | enum | The type of the message input. Always set to message. Possible values: message | Yes | |
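A user message whose content is an OpenAI.InputMessageContentList mixing text and image parts can be sketched as follows; the image URL is a placeholder.

```python
# Illustrative input message with mixed content types; URL is fake.
message = {
    "type": "message",
    "role": "user",
    "content": [
        {"type": "input_text", "text": "Describe this image."},
        {
            "type": "input_image",
            "image_url": "https://example.com/cat.png",
            "detail": "auto",  # low, high, or auto
        },
    ],
}

assert message["role"] in ("user", "system", "developer")
```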
OpenAI.InputParam
Text, image, or file inputs to the model, used to generate a response.
Type: string or array of OpenAI.InputItem
OpenAI.InputTextContent
A text input to the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| text | string | The text input to the model. | Yes | |
| type | enum | The type of the input item. Always input_text. Possible values: input_text | Yes | |
OpenAI.ItemReferenceParam
An internal identifier for an item to reference.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| id | string | The ID of the item to reference. | Yes | |
| type | enum | The type of item to reference. Always item_reference. Possible values: item_reference | Yes | |
OpenAI.ItemResource
Content item used to generate a response.
Discriminator for OpenAI.ItemResource
This component uses the property type to discriminate between different types:
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.ItemResourceType | | Yes | |
OpenAI.ItemResourceApplyPatchToolCall
A tool call that applies file diffs by creating, deleting, or updating files.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| call_id | string | The unique ID of the apply patch tool call generated by the model. | Yes | |
| created_by | string | The ID of the entity that created this tool call. | No | |
| id | string | The unique ID of the apply patch tool call. Populated when this item is returned via API. | Yes | |
| operation | OpenAI.ApplyPatchFileOperation | One of the create_file, delete_file, or update_file operations applied via apply_patch. | Yes | |
| └─ type | OpenAI.ApplyPatchFileOperationType | | Yes | |
| status | OpenAI.ApplyPatchCallStatus | | Yes | |
| type | enum | The type of the item. Always apply_patch_call. Possible values: apply_patch_call | Yes | |
OpenAI.ItemResourceApplyPatchToolCallOutput
The output emitted by an apply patch tool call.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| call_id | string | The unique ID of the apply patch tool call generated by the model. | Yes | |
| created_by | string | The ID of the entity that created this tool call output. | No | |
| id | string | The unique ID of the apply patch tool call output. Populated when this item is returned via API. | Yes | |
| output | string or null | | No | |
| status | OpenAI.ApplyPatchCallOutputStatus | | Yes | |
| type | enum | The type of the item. Always apply_patch_call_output. Possible values: apply_patch_call_output | Yes | |
OpenAI.ItemResourceCodeInterpreterToolCall
A tool call to run code.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| code | string or null | | Yes | |
| container_id | string | The ID of the container used to run the code. | Yes | |
| id | string | The unique ID of the code interpreter tool call. | Yes | |
| outputs | array of OpenAI.CodeInterpreterOutputLogs or OpenAI.CodeInterpreterOutputImage or null | | Yes | |
| status | enum | The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed. Possible values: in_progress, completed, incomplete, interpreting, failed | Yes | |
| type | enum | The type of the code interpreter tool call. Always code_interpreter_call. Possible values: code_interpreter_call | Yes | |
OpenAI.ItemResourceComputerToolCall
A tool call to a computer use tool. See the computer use guide for more information.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| action | OpenAI.ComputerAction | | Yes | |
| call_id | string | An identifier used when responding to the tool call with output. | Yes | |
| id | string | The unique ID of the computer call. | Yes | |
| pending_safety_checks | array of OpenAI.ComputerCallSafetyCheckParam | The pending safety checks for the computer call. | Yes | |
| status | enum | The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API. Possible values: in_progress, completed, incomplete | Yes | |
| type | enum | The type of the computer call. Always computer_call. Possible values: computer_call | Yes | |
OpenAI.ItemResourceComputerToolCallOutputResource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| acknowledged_safety_checks | array of OpenAI.ComputerCallSafetyCheckParam | The safety checks reported by the API that have been acknowledged by the developer. | No | |
| call_id | string | The ID of the computer tool call that produced the output. | Yes | |
| id | string | The ID of the computer tool call output. | No | |
| output | OpenAI.ComputerScreenshotImage | A computer screenshot image used with the computer use tool. | Yes | |
| status | enum | The status of the message input. One of in_progress, completed, or incomplete. Populated when input items are returned via API. Possible values: in_progress, completed, incomplete | No | |
| type | enum | The type of the computer tool call output. Always computer_call_output. Possible values: computer_call_output | Yes | |
OpenAI.ItemResourceFileSearchToolCall
The results of a file search tool call. See the file search guide for more information.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| id | string | The unique ID of the file search tool call. | Yes | |
| queries | array of string | The queries used to search for files. | Yes | |
| results | array of OpenAI.FileSearchToolCallResults or null | | No | |
| status | enum | The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed. Possible values: in_progress, searching, completed, incomplete, failed | Yes | |
| type | enum | The type of the file search tool call. Always file_search_call. Possible values: file_search_call | Yes | |
OpenAI.ItemResourceFunctionShellCall
A tool call that executes one or more shell commands in a managed environment.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| action | OpenAI.FunctionShellAction | Execute a shell command. | Yes | |
| └─ commands | array of string | | Yes | |
| └─ max_output_length | integer or null | | Yes | |
| └─ timeout_ms | integer or null | | Yes | |
| call_id | string | The unique ID of the shell tool call generated by the model. | Yes | |
| created_by | string | The ID of the entity that created this tool call. | No | |
| id | string | The unique ID of the shell tool call. Populated when this item is returned via API. | Yes | |
| status | OpenAI.LocalShellCallStatus | | Yes | |
| type | enum | The type of the item. Always shell_call. Possible values: shell_call | Yes | |
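As the nested action fields above indicate, a shell_call item carries its commands plus output and timeout limits inside the action object. A minimal sketch, with illustrative IDs, commands, and limits:

```python
# Hypothetical shell_call item following the schema table above.
# Commands, limits, and IDs are illustrative, not taken from this reference.
shell_call = {
    "type": "shell_call",
    "id": "sh_001",
    "call_id": "call_sh_1",
    "status": "completed",
    "action": {
        "commands": ["ls -la", "cat README.md"],  # one string per command
        "max_output_length": 4096,  # integer or None
        "timeout_ms": 30000,        # integer or None
    },
}
```

Both limit fields are nullable per the table, so clients should treat `None` as "no limit" when reading them back.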
OpenAI.ItemResourceFunctionShellCallOutput
The output of a shell tool call that was emitted.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| call_id | string | The unique ID of the shell tool call generated by the model. | Yes | |
| created_by | string | The identifier of the actor that created the item. | No | |
| id | string | The unique ID of the shell call output. Populated when this item is returned via API. | Yes | |
| max_output_length | integer or null | | Yes | |
| output | array of OpenAI.FunctionShellCallOutputContent | An array of shell call output contents. | Yes | |
| type | enum | The type of the shell call output. Always shell_call_output. Possible values: shell_call_output | Yes | |
OpenAI.ItemResourceFunctionToolCallOutputResource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| call_id | string | The unique ID of the function tool call generated by the model. | Yes | |
| id | string | The unique ID of the function tool call output. Populated when this item is returned via API. | No | |
| output | string or array of OpenAI.FunctionAndCustomToolCallOutput | The output from the function call generated by your code. Can be a string or a list of output content. | Yes | |
| status | enum | The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API. Possible values: in_progress, completed, incomplete | No | |
| type | enum | The type of the function tool call output. Always function_call_output. Possible values: function_call_output | Yes | |
OpenAI.ItemResourceFunctionToolCallResource
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| arguments | string | A JSON string of the arguments to pass to the function. | Yes | |
| call_id | string | The unique ID of the function tool call generated by the model. | Yes | |
| id | string | The unique ID of the function tool call. | No | |
| name | string | The name of the function to run. | Yes | |
| status | enum | The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API. Possible values: in_progress, completed, incomplete | No | |
| type | enum | The type of the function tool call. Always function_call. Possible values: function_call | Yes | |
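As the arguments row states, a function_call item carries its arguments as a JSON string, not a JSON object, so client code must decode it before invoking the named function. A sketch with a hypothetical function name and arguments:

```python
import json

# Illustrative function_call item; the function name and arguments
# are hypothetical examples, not part of this reference.
function_call = {
    "type": "function_call",
    "call_id": "call_fn_1",
    "name": "get_weather",
    "arguments": "{\"city\": \"Copenhagen\", \"unit\": \"celsius\"}",
    "status": "completed",
}

# arguments is a JSON *string* per the schema: decode it before dispatching.
args = json.loads(function_call["arguments"])
```

When replying, the corresponding function_call_output item must echo the same `call_id` so the API can pair the output with this call.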
OpenAI.ItemResourceImageGenToolCall
An image generation request made by the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| id | string | The unique ID of the image generation call. | Yes | |
| result | string or null | | Yes | |
| status | enum | The status of the image generation call. Possible values: in_progress, completed, generating, failed | Yes | |
| type | enum | The type of the image generation call. Always image_generation_call. Possible values: image_generation_call | Yes | |
OpenAI.ItemResourceLocalShellToolCall
A tool call to run a command on the local shell.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| action | OpenAI.LocalShellExecAction | Execute a shell command on the server. | Yes | |
| call_id | string | The unique ID of the local shell tool call generated by the model. | Yes | |
| id | string | The unique ID of the local shell call. | Yes | |
| status | enum | The status of the local shell call. Possible values: in_progress, completed, incomplete | Yes | |
| type | enum | The type of the local shell call. Always local_shell_call. Possible values: local_shell_call | Yes | |
OpenAI.ItemResourceLocalShellToolCallOutput
The output of a local shell tool call.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| id | string | The unique ID of the local shell tool call generated by the model. | Yes | |
| output | string | A JSON string of the output of the local shell tool call. | Yes | |
| status | string or null | | No | |
| type | enum | The type of the local shell tool call output. Always local_shell_call_output. Possible values: local_shell_call_output | Yes | |
OpenAI.ItemResourceMcpApprovalRequest
A request for human approval of a tool invocation.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| arguments | string | A JSON string of arguments for the tool. | Yes | |
| id | string | The unique ID of the approval request. | Yes | |
| name | string | The name of the tool to run. | Yes | |
| server_label | string | The label of the MCP server making the request. | Yes | |
| type | enum | The type of the item. Always mcp_approval_request. Possible values: mcp_approval_request | Yes | |
OpenAI.ItemResourceMcpApprovalResponseResource
A response to an MCP approval request.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| approval_request_id | string | The ID of the approval request being answered. | Yes | |
| approve | boolean | Whether the request was approved. | Yes | |
| id | string | The unique ID of the approval response. | Yes | |
| reason | string or null | | No | |
| type | enum | The type of the item. Always mcp_approval_response. Possible values: mcp_approval_response | Yes | |
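The two schemas above form a pair: the response's approval_request_id must reference the id of the mcp_approval_request being answered. A sketch with illustrative IDs, server label, and tool name:

```python
# Hypothetical approval round-trip; IDs, server label, and tool name
# are illustrative examples, not values from this reference.
approval_request = {
    "type": "mcp_approval_request",
    "id": "mcpr_001",
    "server_label": "deepwiki",
    "name": "ask_question",
    "arguments": "{\"q\": \"What does this repo do?\"}",
}

approval_response = {
    "type": "mcp_approval_response",
    "id": "mcpa_001",
    "approval_request_id": approval_request["id"],  # must match the request id
    "approve": True,
    "reason": None,  # optional explanation, may be null
}
```

A rejected request would set `approve` to `False`, optionally with a human-readable `reason`.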
OpenAI.ItemResourceMcpListTools
A list of tools available on an MCP server.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| error | string or null | | No | |
| id | string | The unique ID of the list. | Yes | |
| server_label | string | The label of the MCP server. | Yes | |
| tools | array of OpenAI.MCPListToolsTool | The tools available on the server. | Yes | |
| type | enum | The type of the item. Always mcp_list_tools. Possible values: mcp_list_tools | Yes | |
OpenAI.ItemResourceMcpToolCall
An invocation of a tool on an MCP server.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| approval_request_id | string or null | | No | |
| arguments | string | A JSON string of the arguments passed to the tool. | Yes | |
| error | string or null | | No | |
| id | string | The unique ID of the tool call. | Yes | |
| name | string | The name of the tool that was run. | Yes | |
| output | string or null | | No | |
| server_label | string | The label of the MCP server running the tool. | Yes | |
| status | OpenAI.MCPToolCallStatus | | No | |
| type | enum | The type of the item. Always mcp_call. Possible values: mcp_call | Yes | |
OpenAI.ItemResourceOutputMessage
An output message from the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | array of OpenAI.OutputMessageContent | The content of the output message. | Yes | |
| id | string | The unique ID of the output message. | Yes | |
| role | enum | The role of the output message. Always assistant. Possible values: assistant | Yes | |
| status | enum | The status of the message input. One of in_progress, completed, or incomplete. Populated when input items are returned via API. Possible values: in_progress, completed, incomplete | Yes | |
| type | enum | The type of the output message. Always message. Possible values: output_message | Yes | |
OpenAI.ItemResourceType
| Property | Value |
|---|---|
| Type | string |
| Values | message, output_message, file_search_call, computer_call, computer_call_output, web_search_call, function_call, function_call_output, image_generation_call, code_interpreter_call, local_shell_call, local_shell_call_output, shell_call, shell_call_output, apply_patch_call, apply_patch_call_output, mcp_list_tools, mcp_approval_request, mcp_approval_response, mcp_call |
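Because every item resource carries a type discriminator drawn from this value list, a consumer can dispatch on that field. The handlers below are hypothetical stand-ins, not part of this API:

```python
# Minimal discriminator dispatch over item resources; the handlers are
# illustrative, covering only two of the type values listed above.
def handle_item(item: dict) -> str:
    handlers = {
        "function_call": lambda i: f"call {i['name']}",
        "mcp_approval_request": lambda i: f"approve {i['name']}?",
    }
    handler = handlers.get(item["type"])
    return handler(item) if handler else f"unhandled: {item['type']}"
```

An explicit fallback branch like the one above keeps clients forward-compatible when new type values are added to the enum.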
OpenAI.ItemResourceWebSearchToolCall
The results of a web search tool call. See the web search guide for more information.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| action | OpenAI.WebSearchActionSearch or OpenAI.WebSearchActionOpenPage or OpenAI.WebSearchActionFind | An object describing the specific action taken in this web search call. Includes details on how the model used the web (search, open_page, find). | Yes | |
| id | string | The unique ID of the web search tool call. | Yes | |
| status | enum | The status of the web search tool call. Possible values: in_progress, searching, completed, failed | Yes | |
| type | enum | The type of the web search tool call. Always web_search_call. Possible values: web_search_call | Yes | |
OpenAI.KeyPressAction
A collection of keypresses the model would like to perform.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| keys | array of string | The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key. | Yes | |
| type | enum | Specifies the event type. For a keypress action, this property is always set to keypress. Possible values: keypress | Yes | |
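Since the keys field is an array of strings with one entry per key, a client can build a keypress action from a conventional shortcut string. The helper and key names below are hypothetical; only the returned shape comes from the table above:

```python
def keypress_action(shortcut: str) -> dict:
    """Build a keypress action dict from a 'CTRL+C'-style shortcut string.

    Hypothetical helper: the {"type": "keypress", "keys": [...]} shape is
    from the schema table; the '+'-separated input convention is assumed.
    """
    return {"type": "keypress", "keys": shortcut.split("+")}
```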
OpenAI.ListBatchesResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array of OpenAI.Batch | | Yes | |
| first_id | string | | No | |
| has_more | boolean | | Yes | |
| last_id | string | | No | |
| object | enum | Possible values: list | Yes | |
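This and the other List* responses below share a cursor pattern: data holds the page, has_more signals whether to continue, and last_id is the cursor for the next request. A sketch of draining all pages, where fetch_page is a hypothetical client function standing in for a call such as GET {endpoint}/openai/v1/batches with an after cursor:

```python
# Hypothetical pagination loop over a List* response shape.
# fetch_page(after=...) stands in for an HTTP call; it is not a real SDK function.
def collect_all(fetch_page):
    items, cursor = [], None
    while True:
        page = fetch_page(after=cursor)
        items.extend(page["data"])
        if not page["has_more"]:
            return items
        cursor = page["last_id"]  # cursor for the next page
```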
OpenAI.ListFilesResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array of OpenAI.OpenAIFile | | Yes | |
| first_id | string | | Yes | |
| has_more | boolean | | Yes | |
| last_id | string | | Yes | |
| object | string | | Yes | |
OpenAI.ListFineTuningCheckpointPermissionResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array of OpenAI.FineTuningCheckpointPermission | | Yes | |
| first_id | string or null | | No | |
| has_more | boolean | | Yes | |
| last_id | string or null | | No | |
| object | enum | Possible values: list | Yes | |
OpenAI.ListFineTuningJobCheckpointsResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array of OpenAI.FineTuningJobCheckpoint | | Yes | |
| first_id | string or null | | No | |
| has_more | boolean | | Yes | |
| last_id | string or null | | No | |
| object | enum | Possible values: list | Yes | |
OpenAI.ListFineTuningJobEventsResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array of OpenAI.FineTuningJobEvent | | Yes | |
| has_more | boolean | | Yes | |
| object | enum | Possible values: list | Yes | |
OpenAI.ListMessagesResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array of OpenAI.MessageObject | | Yes | |
| first_id | string | | Yes | |
| has_more | boolean | | Yes | |
| last_id | string | | Yes | |
| object | string | | Yes | |
OpenAI.ListModelsResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array of OpenAI.Model | | Yes | |
| object | enum | Possible values: list | Yes | |
OpenAI.ListPaginatedFineTuningJobsResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array of OpenAI.FineTuningJob | | Yes | |
| has_more | boolean | | Yes | |
| object | enum | Possible values: list | Yes | |
OpenAI.ListRunStepsResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array of OpenAI.RunStepObject | | Yes | |
| first_id | string | | Yes | |
| has_more | boolean | | Yes | |
| last_id | string | | Yes | |
| object | string | | Yes | |
OpenAI.ListRunsResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array of OpenAI.RunObject | | Yes | |
| first_id | string | | Yes | |
| has_more | boolean | | Yes | |
| last_id | string | | Yes | |
| object | string | | Yes | |
OpenAI.ListVectorStoreFilesResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array of OpenAI.VectorStoreFileObject | | Yes | |
| first_id | string | | Yes | |
| has_more | boolean | | Yes | |
| last_id | string | | Yes | |
| object | string | | Yes | |
OpenAI.ListVectorStoresResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array of OpenAI.VectorStoreObject | | Yes | |
| first_id | string | | Yes | |
| has_more | boolean | | Yes | |
| last_id | string | | Yes | |
| object | string | | Yes | |
OpenAI.LocalShellCallStatus
| Property | Value |
|---|---|
| Type | string |
| Values | in_progress, completed, incomplete |
OpenAI.LocalShellExecAction
Execute a shell command on the server.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| command | array of string | The command to run. | Yes | |
| env | object | Environment variables to set for the command. | Yes | |
| timeout_ms | integer or null | | No | |
| type | enum | The type of the local shell action. Always exec. Possible values: exec | Yes | |
| user | string or null | | No | |
| working_directory | string or null | | No | |
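Putting the fields above together, an exec action carries an argv-style command array, an env mapping, and three nullable options. A sketch with illustrative values:

```python
# Hypothetical exec action following the OpenAI.LocalShellExecAction table.
# The command, env variable, and timeout shown are illustrative examples.
exec_action = {
    "type": "exec",
    "command": ["python", "-c", "print('hi')"],  # argv-style array of strings
    "env": {"PYTHONUNBUFFERED": "1"},            # environment variables to set
    "timeout_ms": 10000,   # or None
    "user": None,          # optional user to run as
    "working_directory": None,
}
```

Because command is an array rather than a single shell string, arguments with spaces need no quoting or escaping.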
OpenAI.LocalShellToolParam
A tool that allows the model to execute shell commands in a local environment.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The type of the local shell tool. Always local_shell. Possible values: local_shell | Yes | |
OpenAI.LogProb
The log probability of a token.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| bytes | array of integer | | Yes | |
| logprob | number | | Yes | |
| token | string | | Yes | |
| top_logprobs | array of OpenAI.TopLogProb | | Yes | |
OpenAI.MCPListToolsTool
A tool available on an MCP server.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| annotations | OpenAI.MCPListToolsToolAnnotations or null | | No | |
| description | string or null | | No | |
| input_schema | OpenAI.MCPListToolsToolInputSchema | | Yes | |
| name | string | The name of the tool. | Yes |
OpenAI.MCPListToolsToolAnnotations
Type: object
OpenAI.MCPListToolsToolInputSchema
Type: object
OpenAI.MCPTool
Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| allowed_tools | array of string or OpenAI.MCPToolFilter or null | | No | |
| authorization | string | An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here. | No | |
| connector_id | enum | Identifier for service connectors, like those available in ChatGPT. One of server_url or connector_id must be provided. Learn more about service connectors here. Currently supported connector_id values are: Dropbox (connector_dropbox), Gmail (connector_gmail), Google Calendar (connector_googlecalendar), Google Drive (connector_googledrive), Microsoft Teams (connector_microsoftteams), Outlook Calendar (connector_outlookcalendar), Outlook Email (connector_outlookemail), SharePoint (connector_sharepoint). Possible values: connector_dropbox, connector_gmail, connector_googlecalendar, connector_googledrive, connector_microsoftteams, connector_outlookcalendar, connector_outlookemail, connector_sharepoint | No | |
| headers | object or null | | No | |
| require_approval | OpenAI.MCPToolRequireApproval or string or null | | No | |
| server_description | string | Optional description of the MCP server, used to provide more context. | No | |
| server_label | string | A label for this MCP server, used to identify it in tool calls. | Yes | |
| server_url | string | The URL for the MCP server. One of server_url or connector_id must be provided. | No | |
| type | enum | The type of the MCP tool. Always mcp. Possible values: mcp | Yes | |
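The table requires that one of server_url or connector_id be provided. The sketch below defines an mcp tool and encodes that rule as an exactly-one check (an assumption on top of "one of ... must be provided"); the server label, URL, and tool name are illustrative:

```python
# Hypothetical mcp tool definition; label, URL, and allowed tool name are
# illustrative examples, not values from this reference.
mcp_tool = {
    "type": "mcp",
    "server_label": "deepwiki",
    "server_url": "https://example.com/mcp",
    "allowed_tools": ["ask_question"],
    "require_approval": "never",
}

def has_valid_target(tool: dict) -> bool:
    """Treat server_url and connector_id as mutually exclusive: exactly one
    must be present. (The schema says 'one of ... must be provided'; the
    exclusivity here is an assumption of this sketch.)"""
    return ("server_url" in tool) != ("connector_id" in tool)
```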
OpenAI.MCPToolCallStatus
| Property | Value |
|---|---|
| Type | string |
| Values | in_progress, completed, incomplete, calling, failed |
OpenAI.MCPToolFilter
A filter object to specify which tools are allowed.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| read_only | boolean | Indicates whether or not a tool modifies data or is read-only. If an MCP server is annotated with readOnlyHint, it will match this filter. | No | |
| tool_names | array of string | List of allowed tool names. | No |
OpenAI.MCPToolRequireApproval
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| always | OpenAI.MCPToolFilter | A filter object to specify which tools are allowed. | No | |
| never | OpenAI.MCPToolFilter | A filter object to specify which tools are allowed. | No |
OpenAI.MessageContent
Discriminator for OpenAI.MessageContent
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| image_url | OpenAI.MessageContentImageUrlObject |
| text | OpenAI.MessageContentTextObject |
| refusal | OpenAI.MessageContentRefusalObject |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.MessageContentType | | Yes | |
OpenAI.MessageContentImageFileObject
References an image File in the content of a message.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| image_file | OpenAI.MessageContentImageFileObjectImageFile | | Yes | |
| type | enum | Always image_file. Possible values: image_file | Yes | |
OpenAI.MessageContentImageFileObjectImageFile
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| detail | enum | Possible values: auto, low, high | No | |
| file_id | string | | Yes | |
OpenAI.MessageContentImageUrlObject
References an image URL in the content of a message.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| image_url | OpenAI.MessageContentImageUrlObjectImageUrl | | Yes | |
| type | enum | The type of the content part. Possible values: image_url | Yes | |
OpenAI.MessageContentImageUrlObjectImageUrl
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| detail | enum | Possible values: auto, low, high | No | |
| url | string | | Yes | |
OpenAI.MessageContentRefusalObject
The refusal content generated by the assistant.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| refusal | string | | Yes | |
| type | enum | Always refusal. Possible values: refusal | Yes | |
OpenAI.MessageContentTextAnnotationsFileCitationObject
A citation within the message that points to a specific quote from a specific File associated with the assistant or the message. Generated when the assistant uses the "file_search" tool to search files.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| end_index | integer | Constraints: min: 0 | Yes | |
| file_citation | OpenAI.MessageContentTextAnnotationsFileCitationObjectFileCitation | | Yes | |
| start_index | integer | Constraints: min: 0 | Yes | |
| text | string | The text in the message content that needs to be replaced. | Yes | |
| type | enum | Always file_citation. Possible values: file_citation | Yes | |
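The start_index and end_index above delimit the span of message text that the citation replaces. A minimal sketch of substituting footnote-style markers for those spans (the marker format is an assumption; only the index semantics come from the table):

```python
# Replace each annotated span with a numbered marker like "[1]".
# Processing from the highest start_index down keeps earlier indices valid.
def apply_citations(text, annotations):
    indexed = sorted(
        enumerate(annotations),
        key=lambda pair: pair[1]["start_index"],
        reverse=True,
    )
    for i, ann in indexed:
        text = text[:ann["start_index"]] + f"[{i + 1}]" + text[ann["end_index"]:]
    return text
```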
OpenAI.MessageContentTextAnnotationsFileCitationObjectFileCitation
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_id | string | | Yes | |
OpenAI.MessageContentTextAnnotationsFilePathObject
A URL for the file that's generated when the assistant uses the code_interpreter tool to generate a file.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| end_index | integer | Constraints: min: 0 | Yes | |
| file_path | OpenAI.MessageContentTextAnnotationsFilePathObjectFilePath | | Yes | |
| start_index | integer | Constraints: min: 0 | Yes | |
| text | string | The text in the message content that needs to be replaced. | Yes | |
| type | enum | Always file_path. Possible values: file_path | Yes | |
OpenAI.MessageContentTextAnnotationsFilePathObjectFilePath
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_id | string | | Yes | |
OpenAI.MessageContentTextObject
The text content that is part of a message.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| text | OpenAI.MessageContentTextObjectText | | Yes | |
| type | enum | Always text. Possible values: text | Yes | |
OpenAI.MessageContentTextObjectText
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| annotations | array of OpenAI.TextAnnotation | | Yes | |
| value | string | | Yes | |
OpenAI.MessageContentType
| Property | Value |
|---|---|
| Type | string |
| Values | image_file, image_url, text, refusal |
OpenAI.MessageObject
Represents a message within a thread.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| assistant_id | string or null | | Yes | |
| attachments | array of OpenAI.MessageObjectAttachments or null | | Yes | |
| completed_at | string or null | | Yes | |
| content | array of OpenAI.MessageContent | The content of the message in an array of text and/or images. | Yes | |
| created_at | integer | The Unix timestamp (in seconds) for when the message was created. | Yes | |
| id | string | The identifier, which can be referenced in API endpoints. | Yes | |
| incomplete_at | string or null | | Yes | |
| incomplete_details | OpenAI.MessageObjectIncompleteDetails or null | | Yes | |
| metadata | OpenAI.Metadata or null | | Yes | |
| object | enum | The object type, which is always thread.message. Possible values: thread.message | Yes | |
| role | enum | The entity that produced the message. One of user or assistant. Possible values: user, assistant | Yes | |
| run_id | string or null | | Yes | |
| status | enum | The status of the message, which can be either in_progress, incomplete, or completed. Possible values: in_progress, incomplete, completed | Yes | |
| thread_id | string | The thread ID that this message belongs to. | Yes | |
OpenAI.MessageObjectAttachments
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_id | string | | No | |
| tools | array of OpenAI.AssistantToolsCode or OpenAI.AssistantToolsFileSearchTypeOnly | | No | |
OpenAI.MessageObjectIncompleteDetails
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| reason | enum | Possible values: content_filter, max_tokens, run_cancelled, run_expired, run_failed | Yes | |
OpenAI.MessageRequestContentTextObject
The text content that is part of a message.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| text | string | Text content to be sent to the model. | Yes | |
| type | enum | Always text. Possible values: text | Yes | |
OpenAI.MessageRole
| Property | Value |
|---|---|
| Type | string |
| Values | unknown, user, assistant, system, critic, discriminator, developer, tool |
OpenAI.MessageStatus
| Property | Value |
|---|---|
| Type | string |
| Values | in_progress, completed, incomplete |
OpenAI.Metadata
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
Type: object
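The metadata description above gives concrete limits: at most 16 key-value pairs, keys of at most 64 characters, and values of at most 512 characters. A client-side pre-check sketching those rules (the helper name is hypothetical; the limits come from the description):

```python
# Validate a metadata dict against the documented limits before sending it:
# up to 16 pairs, string keys <= 64 chars, string values <= 512 chars.
def metadata_ok(metadata):
    return (
        len(metadata) <= 16
        and all(isinstance(k, str) and len(k) <= 64 for k in metadata)
        and all(isinstance(v, str) and len(v) <= 512 for v in metadata.values())
    )
```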
OpenAI.Model
Describes an OpenAI model offering that can be used with the API.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created | integer | The Unix timestamp (in seconds) when the model was created. | Yes | |
| id | string | The model identifier, which can be referenced in the API endpoints. | Yes | |
| object | enum | The object type, which is always model. Possible values: model | Yes | |
| owned_by | string | The organization that owns the model. | Yes |
OpenAI.ModifyMessageRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| metadata | OpenAI.Metadata or null | | No | |
OpenAI.ModifyRunRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| metadata | OpenAI.Metadata or null | | No | |
OpenAI.ModifyThreadRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| metadata | OpenAI.Metadata or null | | No | |
| tool_resources | OpenAI.ModifyThreadRequestToolResources or null | | No | |
OpenAI.ModifyThreadRequestToolResources
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| code_interpreter | OpenAI.ModifyThreadRequestToolResourcesCodeInterpreter | | No | |
| file_search | OpenAI.ModifyThreadRequestToolResourcesFileSearch | | No | |
OpenAI.ModifyThreadRequestToolResourcesCodeInterpreter
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_ids | array of string | | No | |
OpenAI.ModifyThreadRequestToolResourcesFileSearch
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| vector_store_ids | array of string | | No | |
OpenAI.Move
A mouse move action.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Specifies the event type. For a move action, this property is always set to move. Possible values: move | Yes | |
| x | integer | The x-coordinate to move to. | Yes | |
| y | integer | The y-coordinate to move to. | Yes |
OpenAI.NoiseReductionType
Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.
| Property | Value |
|---|---|
| Type | string |
| Values | near_field, far_field |
OpenAI.OpenAIFile
The File object represents a document that has been uploaded to OpenAI.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| bytes | integer | The size of the file, in bytes. | Yes | |
| created_at | integer | The Unix timestamp (in seconds) for when the file was created. | Yes | |
| expires_at | integer | The Unix timestamp (in seconds) for when the file will expire. | No | |
| filename | string | The name of the file. | Yes | |
| id | string | The file identifier, which can be referenced in the API endpoints. | Yes | |
| object | enum | The object type, which is always file. Possible values: file | Yes | |
| purpose | enum | The intended purpose of the file. Supported values are assistants, assistants_output, batch, batch_output, fine-tune, and fine-tune-results. Possible values: assistants, assistants_output, batch, batch_output, fine-tune, fine-tune-results, evals | Yes | |
| status | enum | Possible values: uploaded, pending, running, processed, error, deleting, deleted | Yes | |
| status_details | string (deprecated) | Deprecated. For details on why a fine-tuning training file failed validation, see the error field on fine_tuning.job. | No | |
OpenAI.OtherChunkingStrategyResponseParam
This is returned when the chunking strategy is unknown. Typically, this is because the file was indexed before the chunking_strategy concept was introduced in the API.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Always other. Possible values: other | Yes | |
OpenAI.OutputContent
Discriminator for OpenAI.OutputContent
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| output_text | OpenAI.OutputContentOutputTextContent |
| refusal | OpenAI.OutputContentRefusalContent |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.OutputContentType | | Yes | |
OpenAI.OutputContentOutputTextContent
A text output from the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| annotations | array of OpenAI.Annotation | The annotations of the text output. | Yes | |
| logprobs | array of OpenAI.LogProb | | No | |
| text | string | The text output from the model. | Yes | |
| type | enum | The type of the output text. Always output_text. Possible values: output_text | Yes | |
OpenAI.OutputContentRefusalContent
A refusal from the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| refusal | string | The refusal explanation from the model. | Yes | |
| type | enum | The type of the refusal. Always refusal. Possible values: refusal | Yes | |
OpenAI.OutputContentType
| Property | Value |
|---|---|
| Type | string |
| Values | output_text, refusal, reasoning_text |
OpenAI.OutputItem
Discriminator for OpenAI.OutputItem
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| output_message | OpenAI.OutputItemOutputMessage |
| file_search_call | OpenAI.OutputItemFileSearchToolCall |
| function_call | OpenAI.OutputItemFunctionToolCall |
| web_search_call | OpenAI.OutputItemWebSearchToolCall |
| computer_call | OpenAI.OutputItemComputerToolCall |
| reasoning | OpenAI.OutputItemReasoningItem |
| compaction | OpenAI.OutputItemCompactionBody |
| image_generation_call | OpenAI.OutputItemImageGenToolCall |
| code_interpreter_call | OpenAI.OutputItemCodeInterpreterToolCall |
| local_shell_call | OpenAI.OutputItemLocalShellToolCall |
| shell_call | OpenAI.OutputItemFunctionShellCall |
| shell_call_output | OpenAI.OutputItemFunctionShellCallOutput |
| apply_patch_call | OpenAI.OutputItemApplyPatchToolCall |
| apply_patch_call_output | OpenAI.OutputItemApplyPatchToolCallOutput |
| mcp_call | OpenAI.OutputItemMcpToolCall |
| mcp_list_tools | OpenAI.OutputItemMcpListTools |
| mcp_approval_request | OpenAI.OutputItemMcpApprovalRequest |
| custom_tool_call | OpenAI.OutputItemCustomToolCall |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.OutputItemType | | Yes | |
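Clients typically walk a response's output array by switching on the type discriminator. A minimal sketch (the item dicts below are hand-written stand-ins using the type values from the discriminator table above, not real API output):

```python
# Sketch: dispatch response output items on their "type" discriminator.
def summarize_output_item(item: dict) -> str:
    """Return a one-line summary of an output item, keyed on its type."""
    t = item.get("type")
    if t == "output_message":
        return f"message with {len(item['content'])} content part(s)"
    if t == "function_call":
        return f"function call to {item['name']}"
    if t == "reasoning":
        return f"reasoning item with {len(item['summary'])} summary part(s)"
    return f"unhandled item type: {t}"

items = [
    {"type": "output_message", "content": [{"type": "output_text", "text": "hi"}]},
    {"type": "function_call", "name": "get_weather", "arguments": "{}"},
]
summaries = [summarize_output_item(i) for i in items]
print(summaries)
```

A fallthrough branch for unhandled types is worth keeping, since the union gains new item types over API versions.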
OpenAI.OutputItemApplyPatchToolCall
A tool call that applies file diffs by creating, deleting, or updating files.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| call_id | string | The unique ID of the apply patch tool call generated by the model. | Yes | |
| created_by | string | The ID of the entity that created this tool call. | No | |
| id | string | The unique ID of the apply patch tool call. Populated when this item is returned via API. | Yes | |
| operation | OpenAI.ApplyPatchFileOperation | One of the create_file, delete_file, or update_file operations applied via apply_patch. | Yes | |
| └─ type | OpenAI.ApplyPatchFileOperationType | | Yes | |
| status | OpenAI.ApplyPatchCallStatus | | Yes | |
| type | enum | The type of the item. Always apply_patch_call. Possible values: apply_patch_call | Yes | |
OpenAI.OutputItemApplyPatchToolCallOutput
The output emitted by an apply patch tool call.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| call_id | string | The unique ID of the apply patch tool call generated by the model. | Yes | |
| created_by | string | The ID of the entity that created this tool call output. | No | |
| id | string | The unique ID of the apply patch tool call output. Populated when this item is returned via API. | Yes | |
| output | string or null | | No | |
| status | OpenAI.ApplyPatchCallOutputStatus | | Yes | |
| type | enum | The type of the item. Always apply_patch_call_output. Possible values: apply_patch_call_output | Yes | |
OpenAI.OutputItemCodeInterpreterToolCall
A tool call to run code.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| code | string or null | | Yes | |
| container_id | string | The ID of the container used to run the code. | Yes | |
| id | string | The unique ID of the code interpreter tool call. | Yes | |
| outputs | array of OpenAI.CodeInterpreterOutputLogs or OpenAI.CodeInterpreterOutputImage, or null | | Yes | |
| status | enum | The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed. Possible values: in_progress, completed, incomplete, interpreting, failed | Yes | |
| type | enum | The type of the code interpreter tool call. Always code_interpreter_call. Possible values: code_interpreter_call | Yes | |
OpenAI.OutputItemCompactionBody
A compaction item generated by the v1/responses/compact API.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_by | string | The identifier of the actor that created the item. | No | |
| encrypted_content | string | The encrypted content that was produced by compaction. | Yes | |
| id | string | The unique ID of the compaction item. | Yes | |
| type | enum | The type of the item. Always compaction. Possible values: compaction | Yes | |
OpenAI.OutputItemComputerToolCall
A tool call to a computer use tool. See the computer use guide for more information.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| action | OpenAI.ComputerAction | | Yes | |
| call_id | string | An identifier used when responding to the tool call with output. | Yes | |
| id | string | The unique ID of the computer call. | Yes | |
| pending_safety_checks | array of OpenAI.ComputerCallSafetyCheckParam | The pending safety checks for the computer call. | Yes | |
| status | enum | The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API. Possible values: in_progress, completed, incomplete | Yes | |
| type | enum | The type of the computer call. Always computer_call. Possible values: computer_call | Yes | |
OpenAI.OutputItemCustomToolCall
A call to a custom tool created by the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| call_id | string | An identifier used to map this custom tool call to a tool call output. | Yes | |
| id | string | The unique ID of the custom tool call in the OpenAI platform. | No | |
| input | string | The input for the custom tool call generated by the model. | Yes | |
| name | string | The name of the custom tool being called. | Yes | |
| type | enum | The type of the custom tool call. Always custom_tool_call. Possible values: custom_tool_call | Yes | |
OpenAI.OutputItemFileSearchToolCall
The results of a file search tool call. See the file search guide for more information.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| id | string | The unique ID of the file search tool call. | Yes | |
| queries | array of string | The queries used to search for files. | Yes | |
| results | array of OpenAI.FileSearchToolCallResults or null | | No | |
| status | enum | The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed. Possible values: in_progress, searching, completed, incomplete, failed | Yes | |
| type | enum | The type of the file search tool call. Always file_search_call. Possible values: file_search_call | Yes | |
OpenAI.OutputItemFunctionShellCall
A tool call that executes one or more shell commands in a managed environment.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| action | OpenAI.FunctionShellAction | Execute a shell command. | Yes | |
| └─ commands | array of string | | Yes | |
| └─ max_output_length | integer or null | | Yes | |
| └─ timeout_ms | integer or null | | Yes | |
| call_id | string | The unique ID of the shell tool call generated by the model. | Yes | |
| created_by | string | The ID of the entity that created this tool call. | No | |
| id | string | The unique ID of the shell tool call. Populated when this item is returned via API. | Yes | |
| status | OpenAI.LocalShellCallStatus | | Yes | |
| type | enum | The type of the item. Always shell_call. Possible values: shell_call | Yes | |
OpenAI.OutputItemFunctionShellCallOutput
The output emitted by a shell tool call.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| call_id | string | The unique ID of the shell tool call generated by the model. | Yes | |
| created_by | string | The identifier of the actor that created the item. | No | |
| id | string | The unique ID of the shell call output. Populated when this item is returned via API. | Yes | |
| max_output_length | integer or null | | Yes | |
| output | array of OpenAI.FunctionShellCallOutputContent | An array of shell call output contents. | Yes | |
| type | enum | The type of the shell call output. Always shell_call_output. Possible values: shell_call_output | Yes | |
OpenAI.OutputItemFunctionToolCall
A tool call to run a function. See the function calling guide for more information.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| arguments | string | A JSON string of the arguments to pass to the function. | Yes | |
| call_id | string | The unique ID of the function tool call generated by the model. | Yes | |
| id | string | The unique ID of the function tool call. | No | |
| name | string | The name of the function to run. | Yes | |
| status | enum | The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API. Possible values: in_progress, completed, incomplete | No | |
| type | enum | The type of the function tool call. Always function_call. Possible values: function_call | Yes | |
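Note that the arguments field of a function tool call is a JSON string, not a JSON object, so it must be parsed before invoking the local function. A minimal sketch (the item below is a hand-written sample with a hypothetical call_id, not real API output):

```python
import json

# "arguments" arrives as a JSON-encoded string and needs json.loads before use.
call = {
    "type": "function_call",
    "call_id": "call_abc123",  # hypothetical ID
    "name": "get_weather",
    "arguments": '{"city": "Copenhagen", "unit": "celsius"}',
}
args = json.loads(call["arguments"])
print(args["city"])  # → Copenhagen
```

The call_id is what you echo back when submitting the function's output on the next turn.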
OpenAI.OutputItemImageGenToolCall
An image generation request made by the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| id | string | The unique ID of the image generation call. | Yes | |
| result | string or null | | Yes | |
| status | enum | The status of the image generation call. Possible values: in_progress, completed, generating, failed | Yes | |
| type | enum | The type of the image generation call. Always image_generation_call. Possible values: image_generation_call | Yes | |
OpenAI.OutputItemLocalShellToolCall
A tool call to run a command on the local shell.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| action | OpenAI.LocalShellExecAction | Execute a shell command on the server. | Yes | |
| call_id | string | The unique ID of the local shell tool call generated by the model. | Yes | |
| id | string | The unique ID of the local shell call. | Yes | |
| status | enum | The status of the local shell call. Possible values: in_progress, completed, incomplete | Yes | |
| type | enum | The type of the local shell call. Always local_shell_call. Possible values: local_shell_call | Yes | |
OpenAI.OutputItemMcpApprovalRequest
A request for human approval of a tool invocation.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| arguments | string | A JSON string of arguments for the tool. | Yes | |
| id | string | The unique ID of the approval request. | Yes | |
| name | string | The name of the tool to run. | Yes | |
| server_label | string | The label of the MCP server making the request. | Yes | |
| type | enum | The type of the item. Always mcp_approval_request. Possible values: mcp_approval_request | Yes | |
OpenAI.OutputItemMcpListTools
A list of tools available on an MCP server.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| error | string or null | | No | |
| id | string | The unique ID of the list. | Yes | |
| server_label | string | The label of the MCP server. | Yes | |
| tools | array of OpenAI.MCPListToolsTool | The tools available on the server. | Yes | |
| type | enum | The type of the item. Always mcp_list_tools. Possible values: mcp_list_tools | Yes | |
OpenAI.OutputItemMcpToolCall
An invocation of a tool on an MCP server.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| approval_request_id | string or null | | No | |
| arguments | string | A JSON string of the arguments passed to the tool. | Yes | |
| error | string or null | | No | |
| id | string | The unique ID of the tool call. | Yes | |
| name | string | The name of the tool that was run. | Yes | |
| output | string or null | | No | |
| server_label | string | The label of the MCP server running the tool. | Yes | |
| status | OpenAI.MCPToolCallStatus | | No | |
| type | enum | The type of the item. Always mcp_call. Possible values: mcp_call | Yes | |
OpenAI.OutputItemOutputMessage
An output message from the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | array of OpenAI.OutputMessageContent | The content of the output message. | Yes | |
| id | string | The unique ID of the output message. | Yes | |
| role | enum | The role of the output message. Always assistant. Possible values: assistant | Yes | |
| status | enum | The status of the message. One of in_progress, completed, or incomplete. Populated when items are returned via API. Possible values: in_progress, completed, incomplete | Yes | |
| type | enum | The type of the output message. Always message. Possible values: output_message | Yes | |
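An output message's content array mixes output_text and refusal parts, so extracting the assistant's text means filtering on the part type. A minimal sketch (the message below is a hand-written sample for illustration):

```python
def collect_text(message: dict) -> str:
    """Concatenate the output_text parts of an output message, skipping refusals."""
    return "".join(
        part["text"]
        for part in message.get("content", [])
        if part.get("type") == "output_text"
    )

msg = {
    "type": "output_message",
    "role": "assistant",
    "content": [
        {"type": "output_text", "text": "Hello, ", "annotations": []},
        {"type": "refusal", "refusal": "declined part"},
        {"type": "output_text", "text": "world.", "annotations": []},
    ],
}
print(collect_text(msg))  # → Hello, world.
```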
OpenAI.OutputItemReasoningItem
A description of the chain of thought used by a reasoning model while generating
a response. Be sure to include these items in your input to the Responses API
for subsequent turns of a conversation if you are manually
managing context.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | array of OpenAI.ReasoningTextContent | Reasoning text content. | No | |
| encrypted_content | string or null | | No | |
| id | string | The unique identifier of the reasoning content. | Yes | |
| status | enum | The status of the item. One of in_progress, completed, or incomplete. Populated when items are returned via API. Possible values: in_progress, completed, incomplete | No | |
| summary | array of OpenAI.Summary | Reasoning summary content. | Yes | |
| type | enum | The type of the object. Always reasoning. Possible values: reasoning | Yes | |
OpenAI.OutputItemType
| Property | Value |
|---|---|
| Type | string |
| Values | output_message, file_search_call, function_call, web_search_call, computer_call, reasoning, compaction, image_generation_call, code_interpreter_call, local_shell_call, shell_call, shell_call_output, apply_patch_call, apply_patch_call_output, mcp_call, mcp_list_tools, mcp_approval_request, custom_tool_call |
OpenAI.OutputItemWebSearchToolCall
The results of a web search tool call. See the web search guide for more information.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| action | OpenAI.WebSearchActionSearch or OpenAI.WebSearchActionOpenPage or OpenAI.WebSearchActionFind | An object describing the specific action taken in this web search call. Includes details on how the model used the web (search, open_page, find). | Yes | |
| id | string | The unique ID of the web search tool call. | Yes | |
| status | enum | The status of the web search tool call. Possible values: in_progress, searching, completed, failed | Yes | |
| type | enum | The type of the web search tool call. Always web_search_call. Possible values: web_search_call | Yes | |
OpenAI.OutputMessageContent
Discriminator for OpenAI.OutputMessageContent
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| output_text | OpenAI.OutputMessageContentOutputTextContent |
| refusal | OpenAI.OutputMessageContentRefusalContent |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.OutputMessageContentType | | Yes | |
OpenAI.OutputMessageContentOutputTextContent
A text output from the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| annotations | array of OpenAI.Annotation | The annotations of the text output. | Yes | |
| logprobs | array of OpenAI.LogProb | | No | |
| text | string | The text output from the model. | Yes | |
| type | enum | The type of the output text. Always output_text. Possible values: output_text | Yes | |
OpenAI.OutputMessageContentRefusalContent
A refusal from the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| refusal | string | The refusal explanation from the model. | Yes | |
| type | enum | The type of the refusal. Always refusal. Possible values: refusal | Yes | |
OpenAI.OutputMessageContentType
| Property | Value |
|---|---|
| Type | string |
| Values | output_text, refusal |
OpenAI.OutputTextContent
A text output from the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| annotations | array of OpenAI.Annotation | The annotations of the text output. | Yes | |
| logprobs | array of OpenAI.LogProb | | No | |
| text | string | The text output from the model. | Yes | |
| type | enum | The type of the output text. Always output_text. Possible values: output_text | Yes | |
OpenAI.ParallelToolCalls
Whether to enable parallel function calling during tool use.
Type: boolean
OpenAI.PredictionContent
Static predicted output content, such as the content of a text file that is being regenerated.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | string or array of OpenAI.ChatCompletionRequestMessageContentPartText | The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly. | Yes | |
| type | enum | The type of the predicted content you want to provide. This type is currently always content. Possible values: content | Yes | |
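A sketch of a request carrying PredictionContent, for regenerating a file with small edits. The model name and file text are made-up examples; the field names come from the schema above:

```python
# Predicted output: supply the existing text so matching tokens can be
# returned much faster than they would be generated.
original_file = "def add(a, b):\n    return a + b\n"

request_body = {
    "model": "gpt-4o",  # hypothetical model/deployment name
    "messages": [
        {"role": "user", "content": "Rename the function add to plus."},
    ],
    "prediction": {
        "type": "content",          # currently always "content"
        "content": original_file,   # text the response is expected to largely match
    },
}
print(request_body["prediction"]["type"])  # → content
```

Prediction pays off when most of the response repeats the supplied content; tokens that diverge from it are generated at normal speed.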
OpenAI.Prompt
Reference to a prompt template and its variables. Learn more.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| id | string | The unique identifier of the prompt template to use. | Yes | |
| variables | OpenAI.ResponsePromptVariables or null | | No | |
| version | string or null | | No | |
OpenAI.RankerVersionType
| Property | Value |
|---|---|
| Type | string |
| Values | auto, default-2024-11-15 |
OpenAI.RankingOptions
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| hybrid_search | OpenAI.HybridSearchOptions | | No | |
| └─ embedding_weight | number | The weight of the embedding in the reciprocal ranking fusion. | Yes | |
| └─ text_weight | number | The weight of the text in the reciprocal ranking fusion. | Yes | |
| ranker | OpenAI.RankerVersionType | | No | |
| score_threshold | number | The score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results. | No |
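An illustrative RankingOptions payload for file search, built from the table above. The weights and threshold are example numbers, not recommendations:

```python
# File-search ranking options: a score threshold plus hybrid-search weights
# for reciprocal rank fusion of vector and keyword scores.
ranking_options = {
    "ranker": "auto",        # or "default-2024-11-15"
    "score_threshold": 0.6,  # drop results scoring below 0.6 (range 0-1)
    "hybrid_search": {
        "embedding_weight": 0.7,  # vector-similarity weight in the fusion
        "text_weight": 0.3,       # keyword-match weight in the fusion
    },
}
assert 0 <= ranking_options["score_threshold"] <= 1
print(ranking_options["ranker"])  # → auto
```

Raising the threshold toward 1 trades recall for precision: fewer, more relevant chunks reach the model.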
OpenAI.RealtimeAudioFormats
Discriminator for OpenAI.RealtimeAudioFormats
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| audio/pcm | OpenAI.RealtimeAudioFormatsAudioPcm |
| audio/pcmu | OpenAI.RealtimeAudioFormatsAudioPcmu |
| audio/pcma | OpenAI.RealtimeAudioFormatsAudioPcma |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.RealtimeAudioFormatsType | | Yes | |
OpenAI.RealtimeAudioFormatsAudioPcm
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| rate | enum | Possible values: 24000 | No | |
| type | enum | Possible values: audio/pcm | Yes | |
OpenAI.RealtimeAudioFormatsAudioPcma
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: audio/pcma | Yes | |
OpenAI.RealtimeAudioFormatsAudioPcmu
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: audio/pcmu | Yes | |
OpenAI.RealtimeAudioFormatsType
| Property | Value |
|---|---|
| Type | string |
| Values | audio/pcm, audio/pcmu, audio/pcma |
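The audio format union above is discriminated on type, so a session's audio configuration nests one of these format objects per direction. A minimal sketch (the surrounding input/output structure follows the session audio schemas later in this document):

```python
# Session audio configuration using the RealtimeAudioFormats union;
# the "type" field selects the concrete format schema.
audio_config = {
    "input": {
        "format": {"type": "audio/pcm", "rate": 24000},  # PCM input at 24 kHz
    },
    "output": {
        "format": {"type": "audio/pcmu"},  # G.711 mu-law output
    },
}
print(audio_config["input"]["format"]["type"])  # → audio/pcm
```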
OpenAI.RealtimeCallCreateRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| sdp | string | WebRTC Session Description Protocol (SDP) offer generated by the caller. | Yes | |
| session | OpenAI.RealtimeSessionCreateRequestGA | Realtime session object configuration. | No | |
| └─ audio | OpenAI.RealtimeSessionCreateRequestGAAudio | Configuration for input and output audio. | No | |
| └─ include | array of string | Additional fields to include in server outputs. item.input_audio_transcription.logprobs: Include logprobs for input audio transcription. | No | |
| └─ instructions | string | The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (for example "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (for example "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance on the desired behavior. Note that the server sets default instructions which will be used if this field is not set; these are visible in the session.created event at the start of the session. | No | |
| └─ max_output_tokens | integer or "inf" | Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf. | No | |
| └─ model | string | The Realtime model used for this session. | No | |
| └─ output_modalities | array of string | The set of modalities the model can respond with. Defaults to ["audio"], indicating that the model will respond with audio plus a transcript. ["text"] can be used to make the model respond with text only. It is not possible to request both text and audio at the same time. | No | ['audio'] |
| └─ prompt | OpenAI.Prompt | Reference to a prompt template and its variables. | No | |
| └─ tool_choice | OpenAI.ToolChoiceOptions or OpenAI.ToolChoiceFunction or OpenAI.ToolChoiceMCP | How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool. | No | auto |
| └─ tools | array of OpenAI.RealtimeFunctionTool or OpenAI.MCPTool | Tools available to the model. | No | |
| └─ tracing | string or OpenAI.RealtimeSessionCreateRequestGATracing or null | Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified. auto will create a trace for the session with default values for the workflow name, group ID, and metadata. | No | auto |
| └─ truncation | OpenAI.RealtimeTruncation | When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs. Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost. Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate. Truncation can be disabled entirely, in which case the server will never truncate but will instead return an error if the conversation exceeds the model's input token limit. | No | |
| └─ type | enum | The type of session to create. Always realtime for the Realtime API. Possible values: realtime | Yes | |
OpenAI.RealtimeCallReferRequest
Parameters required to transfer a SIP call to a new destination using the Realtime API.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| target_uri | string | URI that should appear in the SIP Refer-To header. Supports values like tel:+14155550123 or sip:agent@example.com. | Yes | |
OpenAI.RealtimeCallRejectRequest
Parameters used to decline an incoming SIP call handled by the Realtime API.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| status_code | integer | SIP response code to send back to the caller. Defaults to 603 (Decline) when omitted. | No | |
OpenAI.RealtimeCreateClientSecretRequest
Create a session and client secret for the Realtime API. The request can specify either a realtime or a transcription session configuration. Learn more about the Realtime API.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| expires_after | OpenAI.RealtimeCreateClientSecretRequestExpiresAfter | | No | |
| └─ anchor | enum | Possible values: created_at | No | |
| └─ seconds | integer | Constraints: min: 10, max: 7200 | No | 600 |
| session | OpenAI.RealtimeSessionCreateRequestUnion | | No | |
| └─ type | OpenAI.RealtimeSessionCreateRequestUnionType | | Yes | |
OpenAI.RealtimeCreateClientSecretRequestExpiresAfter
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| anchor | enum | Possible values: created_at | No | |
| seconds | integer | Constraints: min: 10, max: 7200 | No | 600 |
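A sketch of a RealtimeCreateClientSecretRequest body built from the schemas above. The model name is hypothetical; the expiry is anchored to creation time and must fall within [10, 7200] seconds:

```python
# Client secret request: mint a short-lived ephemeral credential for a
# realtime session.
body = {
    "expires_after": {
        "anchor": "created_at",  # the only documented anchor value
        "seconds": 600,          # the documented default; must be in [10, 7200]
    },
    "session": {
        "type": "realtime",
        "model": "gpt-realtime",  # hypothetical model name
    },
}
assert 10 <= body["expires_after"]["seconds"] <= 7200
print(body["expires_after"]["anchor"])  # → created_at
```

The returned value string is what a browser or mobile client uses to connect, so the real API key never leaves the server.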
OpenAI.RealtimeCreateClientSecretResponse
Response from creating a session and client secret for the Realtime API.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| expires_at | integer | Expiration timestamp for the client secret, in seconds since epoch. | Yes | |
| session | OpenAI.RealtimeSessionCreateResponseUnion | | Yes | |
| └─ type | OpenAI.RealtimeSessionCreateResponseUnionType | | Yes | |
| value | string | The generated client secret value. | Yes |
OpenAI.RealtimeFunctionTool
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| description | string | The description of the function, including guidance on when and how to call it, and guidance about what to tell the user when calling (if anything). | No | |
| name | string | The name of the function. | No | |
| parameters | OpenAI.RealtimeFunctionToolParameters | | No | |
| type | enum | The type of the tool, i.e. function. Possible values: function | No | |
OpenAI.RealtimeFunctionToolParameters
Type: object
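Since the parameters field is a plain object, a realtime function tool is declared with an inline JSON Schema. A minimal sketch (the function name and schema are made-up examples):

```python
# RealtimeFunctionTool definition; "parameters" is an ordinary JSON Schema
# object describing the function's arguments.
weather_tool = {
    "type": "function",
    "name": "get_weather",  # hypothetical function name
    "description": "Look up current weather for a city; tell the user you are checking.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}
print(weather_tool["name"])  # → get_weather
```

Per the schema above, the description should cover both when to call the function and what to say to the user while calling it, since the model speaks aloud in realtime sessions.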
OpenAI.RealtimeSessionCreateRequest
A new Realtime session configuration, with an ephemeral key. Default TTL for keys is one minute.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| client_secret | OpenAI.RealtimeSessionCreateRequestClientSecret | | Yes | |
| └─ expires_at | integer | | Yes | |
| └─ value | string | | Yes | |
| input_audio_format | string | The format of input audio. Options are pcm16, g711_ulaw, or g711_alaw. | No | |
| input_audio_transcription | OpenAI.RealtimeSessionCreateRequestInputAudioTranscription | | No | |
| └─ model | string | | No | |
| instructions | string | The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (for example "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (for example "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance on the desired behavior. Note that the server sets default instructions which will be used if this field is not set; these are visible in the session.created event at the start of the session. | No | |
| max_response_output_tokens | integer or "inf" | Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf. | No | |
| modalities | array of string | The set of modalities the model can respond with. To disable audio, set this to ["text"]. | No | ['text', 'audio'] |
| output_audio_format | string | The format of output audio. Options are pcm16, g711_ulaw, or g711_alaw. | No | |
| prompt | OpenAI.Prompt | Reference to a prompt template and its variables. | No | |
| speed | number | The speed of the model's spoken response. 1.0 is the default speed. 0.25 is the minimum speed. 1.5 is the maximum speed. This value can only be changed in between model turns, not while a response is in progress. Constraints: min: 0.25, max: 1.5 | No | 1 |
| temperature | number | Sampling temperature for the model, limited to [0.6, 1.2]. Defaults to 0.8. | No | |
| tool_choice | string | How the model chooses tools. Options are auto, none, required, or specify a function. | No | |
| tools | array of OpenAI.RealtimeSessionCreateRequestTools | Tools (functions) available to the model. | No | |
| tracing | string or object | Configuration options for tracing. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified. auto will create a trace for the session with default values for the workflow name, group ID, and metadata. | No | |
| truncation | OpenAI.RealtimeTruncation | When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs. Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost. Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate. Truncation can be disabled entirely, in which case the server will never truncate but will instead return an error if the conversation exceeds the model's input token limit. | No | |
| turn_detection | OpenAI.RealtimeSessionCreateRequestTurnDetection | | No | |
| └─ prefix_padding_ms | integer | | No | |
| └─ silence_duration_ms | integer | | No | |
| └─ threshold | number | | No | |
| └─ type | string | | No | |
| type | enum | Possible values: realtime | Yes | |
| voice | OpenAI.VoiceIdsShared | | No | |
OpenAI.RealtimeSessionCreateRequestClientSecret
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| expires_at | integer | | Yes | |
| value | string | | Yes | |
OpenAI.RealtimeSessionCreateRequestGA
Realtime session object configuration.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| audio | OpenAI.RealtimeSessionCreateRequestGAAudio | No | ||
| └─ input | OpenAI.RealtimeSessionCreateRequestGAAudioInput | No | ||
| └─ output | OpenAI.RealtimeSessionCreateRequestGAAudioOutput | No | ||
| include | array of string | Additional fields to include in server outputs.item.input_audio_transcription.logprobs: Include logprobs for input audio transcription. |
No | |
| instructions | string | The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format, (for example "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (for example "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior. Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session. |
No | |
| max_output_tokens | integer (see valid models below) | Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for agiven model. Defaults to inf. |
No | |
| model | string | The Realtime model used for this session. | No | |
| output_modalities | array of string | The set of modalities the model can respond with. It defaults to ["audio"], indicatingthat the model will respond with audio plus a transcript. ["text"] can be used to makethe model respond with text only. It is not possible to request both text and audio at the same time. |
No | ['audio'] |
| prompt | OpenAI.Prompt | Reference to a prompt template and its variables. Learn more. |
No | |
| tool_choice | OpenAI.ToolChoiceOptions or OpenAI.ToolChoiceFunction or OpenAI.ToolChoiceMCP | How the model chooses tools. Provide one of the string modes or force a specific function/MCP tool. |
No | |
| tools | array of OpenAI.RealtimeFunctionTool or OpenAI.MCPTool | Tools available to the model. | No | |
| tracing | string or OpenAI.RealtimeSessionCreateRequestGATracing or null | "" Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified. auto will create a trace for the session with default values for theworkflow name, group id, and metadata. |
No | |
| truncation | OpenAI.RealtimeTruncation | When the number of tokens in a conversation exceeds the model's input token limit, the conversation be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs. Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost. Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate. Truncation can be disabled entirely, which means the server will never truncate but would instead return an error if the conversation exceeds the model's input token limit. |
No | |
| type | enum | The type of session to create. Always realtime for the Realtime API.Possible values: realtime |
Yes |
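The fields above map directly onto a JSON request body. The following Python sketch builds a minimal realtime session create payload using only fields from the OpenAI.RealtimeSessionCreateRequest table; the specific values (text-only output, auto tool choice and truncation) are illustrative assumptions, not required settings.

```python
import json

# Minimal realtime session create body, assembled from the
# OpenAI.RealtimeSessionCreateRequest fields documented above.
# All values below are illustrative, not mandated defaults.
session_request = {
    "type": "realtime",             # required; always "realtime" for this schema
    "output_modalities": ["text"],  # text-only output (default is ["audio"])
    "tool_choice": "auto",          # one of the string tool-choice modes
    "truncation": "auto",           # see OpenAI.RealtimeTruncation
}

body = json.dumps(session_request)
```

The endpoint URL and authentication headers for sending this payload are deployment-specific; see the endpoint and authentication sections of this reference.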
OpenAI.RealtimeSessionCreateRequestGAAudio
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input | OpenAI.RealtimeSessionCreateRequestGAAudioInput | No | ||
| output | OpenAI.RealtimeSessionCreateRequestGAAudioOutput | No |
OpenAI.RealtimeSessionCreateRequestGAAudioInput
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| format | OpenAI.RealtimeAudioFormats | No | ||
| noise_reduction | OpenAI.RealtimeSessionCreateRequestGAAudioInputNoiseReduction | No | ||
| transcription | OpenAI.AudioTranscription | No | ||
| turn_detection | OpenAI.RealtimeTurnDetection | No |
OpenAI.RealtimeSessionCreateRequestGAAudioInputNoiseReduction
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.NoiseReductionType | Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones. |
No |
OpenAI.RealtimeSessionCreateRequestGAAudioOutput
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| format | OpenAI.RealtimeAudioFormats | No | ||
| speed | number | Constraints: min: 0.25, max: 1.5 | No | 1 |
| voice | OpenAI.VoiceIdsShared | No |
OpenAI.RealtimeSessionCreateRequestGATracing
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| group_id | string | No | ||
| metadata | object | No | ||
| workflow_name | string | No |
OpenAI.RealtimeSessionCreateRequestInputAudioTranscription
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| model | string | No |
OpenAI.RealtimeSessionCreateRequestTools
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| description | string | No | ||
| name | string | No | ||
| parameters | OpenAI.RealtimeSessionCreateRequestToolsParameters | No | ||
| type | enum | Possible values: function |
No |
OpenAI.RealtimeSessionCreateRequestToolsParameters
Type: object
OpenAI.RealtimeSessionCreateRequestTurnDetection
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| prefix_padding_ms | integer | No | ||
| silence_duration_ms | integer | No | ||
| threshold | number | No | ||
| type | string | No |
OpenAI.RealtimeSessionCreateRequestUnion
Discriminator for OpenAI.RealtimeSessionCreateRequestUnion
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| realtime | OpenAI.RealtimeSessionCreateRequest |
| transcription | OpenAI.RealtimeTranscriptionSessionCreateRequest |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.RealtimeSessionCreateRequestUnionType | Yes |
OpenAI.RealtimeSessionCreateRequestUnionType
| Property | Value |
|---|---|
| Type | string |
| Values | realtime, transcription |
OpenAI.RealtimeSessionCreateResponse
A Realtime session configuration object.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| audio | OpenAI.RealtimeSessionCreateResponseAudio | No | ||
| └─ input | OpenAI.RealtimeSessionCreateResponseAudioInput | No | ||
| └─ output | OpenAI.RealtimeSessionCreateResponseAudioOutput | No | ||
| expires_at | integer | Expiration timestamp for the session, in seconds since epoch. | No | |
| id | string | Unique identifier for the session that looks like sess_1234567890abcdef. |
No | |
| include | array of string | Additional fields to include in server outputs. - item.input_audio_transcription.logprobs: Include logprobs for input audio transcription. |
No | |
| instructions | string | The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (for example "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (for example "talk quickly", "inject emotion into your voice", "laugh frequently"). The instructions are not guaranteed to be followed by the model, but they provide guidance to the model on the desired behavior. Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session. |
No | |
| max_output_tokens | integer (see valid models below) | Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf. |
No | |
| model | string | The Realtime model used for this session. | No | |
| object | string | The object type. Always realtime.session. |
No | |
| output_modalities | array of string | The set of modalities the model can respond with. To disable audio, set this to ["text"]. |
No | |
| tool_choice | string | How the model chooses tools. Options are auto, none, required, or specify a function. |
No | |
| tools | array of OpenAI.RealtimeFunctionTool | Tools (functions) available to the model. | No | |
| tracing | string or object | Configuration options for tracing. Set to null to disable tracing. Once tracing is enabled for a session, the configuration cannot be modified. auto will create a trace for the session with default values for the workflow name, group id, and metadata. |
No | |
| turn_detection | OpenAI.RealtimeSessionCreateResponseTurnDetection | No | ||
| └─ prefix_padding_ms | integer | No | ||
| └─ silence_duration_ms | integer | No | ||
| └─ threshold | number | No | ||
| └─ type | string | No | ||
| type | enum | Possible values: realtime |
Yes |
OpenAI.RealtimeSessionCreateResponseAudio
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input | OpenAI.RealtimeSessionCreateResponseAudioInput | No | ||
| output | OpenAI.RealtimeSessionCreateResponseAudioOutput | No |
OpenAI.RealtimeSessionCreateResponseAudioInput
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| format | OpenAI.RealtimeAudioFormats | No | ||
| noise_reduction | OpenAI.RealtimeSessionCreateResponseAudioInputNoiseReduction | No | ||
| transcription | OpenAI.AudioTranscription | No | ||
| turn_detection | OpenAI.RealtimeSessionCreateResponseAudioInputTurnDetection | No |
OpenAI.RealtimeSessionCreateResponseAudioInputNoiseReduction
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.NoiseReductionType | Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones. |
No |
OpenAI.RealtimeSessionCreateResponseAudioInputTurnDetection
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| prefix_padding_ms | integer | No | ||
| silence_duration_ms | integer | No | ||
| threshold | number | No | ||
| type | string | No |
OpenAI.RealtimeSessionCreateResponseAudioOutput
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| format | OpenAI.RealtimeAudioFormats | No | ||
| speed | number | No | ||
| voice | OpenAI.VoiceIdsShared | No |
OpenAI.RealtimeSessionCreateResponseTurnDetection
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| prefix_padding_ms | integer | No | ||
| silence_duration_ms | integer | No | ||
| threshold | number | No | ||
| type | string | No |
OpenAI.RealtimeSessionCreateResponseUnion
Discriminator for OpenAI.RealtimeSessionCreateResponseUnion
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| realtime | OpenAI.RealtimeSessionCreateResponse |
| transcription | OpenAI.RealtimeTranscriptionSessionCreateResponse |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.RealtimeSessionCreateResponseUnionType | Yes |
OpenAI.RealtimeSessionCreateResponseUnionType
| Property | Value |
|---|---|
| Type | string |
| Values | realtime, transcription |
OpenAI.RealtimeTranscriptionSessionCreateRequest
Realtime transcription session object configuration.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| include | array of string | The set of items to include in the transcription. Currently available items are: item.input_audio_transcription.logprobs |
No | |
| input_audio_format | enum | The format of input audio. Options are pcm16, g711_ulaw, or g711_alaw. For pcm16, input audio must be 16-bit PCM at a 24-kHz sample rate, single channel (mono), and little-endian byte order. Possible values: pcm16, g711_ulaw, g711_alaw |
No | |
| input_audio_noise_reduction | OpenAI.RealtimeTranscriptionSessionCreateRequestInputAudioNoiseReduction | No | ||
| └─ type | OpenAI.NoiseReductionType | Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones. |
No | |
| input_audio_transcription | OpenAI.AudioTranscription | No | ||
| └─ language | string | The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency. |
No | |
| └─ model | string | The model to use for transcription. Current options are whisper-1, gpt-4o-mini-transcribe, gpt-4o-mini-transcribe-2025-12-15, gpt-4o-transcribe, and gpt-4o-transcribe-diarize. Use gpt-4o-transcribe-diarize when you need diarization with speaker labels. |
No | |
| └─ prompt | string | An optional text to guide the model's style or continue a previous audio segment. For whisper-1, the prompt is a list of keywords. For gpt-4o-transcribe models (excluding gpt-4o-transcribe-diarize), the prompt is a free text string, for example "expect words related to technology". |
No | |
| turn_detection | OpenAI.RealtimeTranscriptionSessionCreateRequestTurnDetection | No | ||
| └─ prefix_padding_ms | integer | No | ||
| └─ silence_duration_ms | integer | No | ||
| └─ threshold | number | No | ||
| └─ type | enum | Possible values: server_vad |
No | |
| type | enum | Possible values: transcription |
Yes |
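The transcription session schema above can likewise be expressed as a small JSON body. This sketch uses only fields from the OpenAI.RealtimeTranscriptionSessionCreateRequest table; the model choice, language, and VAD timing are illustrative assumptions.

```python
import json

# Sketch of a transcription session body per the
# OpenAI.RealtimeTranscriptionSessionCreateRequest table above.
# Model, language, and silence duration are illustrative values.
transcription_request = {
    "type": "transcription",        # required discriminator value
    "input_audio_format": "pcm16",  # 16-bit PCM, 24 kHz, mono, little-endian
    "input_audio_transcription": {
        "model": "whisper-1",
        "language": "en",           # ISO-639-1 code improves accuracy and latency
    },
    "turn_detection": {
        "type": "server_vad",       # the only value documented for this schema
        "silence_duration_ms": 500,
    },
}

body = json.dumps(transcription_request)
```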
OpenAI.RealtimeTranscriptionSessionCreateRequestInputAudioNoiseReduction
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.NoiseReductionType | Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones. |
No |
OpenAI.RealtimeTranscriptionSessionCreateRequestTurnDetection
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| prefix_padding_ms | integer | No | ||
| silence_duration_ms | integer | No | ||
| threshold | number | No | ||
| type | enum | Possible values: server_vad |
No |
OpenAI.RealtimeTranscriptionSessionCreateResponse
A new Realtime transcription session configuration. When a session is created on the server via REST API, the session object also contains an ephemeral key. Default TTL for keys is 10 minutes. This property is not present when a session is updated via the WebSocket API.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| client_secret | OpenAI.RealtimeTranscriptionSessionCreateResponseClientSecret | Yes | ||
| └─ expires_at | integer | Yes | ||
| └─ value | string | Yes | ||
| input_audio_format | string | The format of input audio. Options are pcm16, g711_ulaw, or g711_alaw. |
No | |
| input_audio_transcription | OpenAI.AudioTranscription | No | ||
| └─ language | string | The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency. |
No | |
| └─ model | string | The model to use for transcription. Current options are whisper-1, gpt-4o-mini-transcribe, gpt-4o-mini-transcribe-2025-12-15, gpt-4o-transcribe, and gpt-4o-transcribe-diarize. Use gpt-4o-transcribe-diarize when you need diarization with speaker labels. |
No | |
| └─ prompt | string | An optional text to guide the model's style or continue a previous audio segment. For whisper-1, the prompt is a list of keywords. For gpt-4o-transcribe models (excluding gpt-4o-transcribe-diarize), the prompt is a free text string, for example "expect words related to technology". |
No | |
| modalities | array of string | The set of modalities the model can respond with. To disable audio, set this to ["text"]. |
No | |
| turn_detection | OpenAI.RealtimeTranscriptionSessionCreateResponseTurnDetection | No | ||
| └─ prefix_padding_ms | integer | No | ||
| └─ silence_duration_ms | integer | No | ||
| └─ threshold | number | No | ||
| └─ type | string | No | ||
| type | enum | Possible values: transcription |
Yes |
OpenAI.RealtimeTranscriptionSessionCreateResponseClientSecret
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| expires_at | integer | Yes | ||
| value | string | Yes |
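Since the ephemeral key in client_secret has a short TTL (10 minutes by default, per the section above), clients typically want to check how long it remains valid. A minimal helper, assuming only the expires_at and value fields documented in this schema; the key value shown is hypothetical:

```python
import time

def seconds_until_expiry(client_secret: dict) -> float:
    """Seconds left before an ephemeral key expires.

    expires_at is a Unix timestamp in seconds, per the
    OpenAI.RealtimeTranscriptionSessionCreateResponseClientSecret schema.
    """
    return client_secret["expires_at"] - time.time()

# Hypothetical secret expiring 600 s from now (the default 10-minute TTL):
secret = {"value": "ek_example", "expires_at": int(time.time()) + 600}
remaining = seconds_until_expiry(secret)
```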
OpenAI.RealtimeTranscriptionSessionCreateResponseTurnDetection
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| prefix_padding_ms | integer | No | ||
| silence_duration_ms | integer | No | ||
| threshold | number | No | ||
| type | string | No |
OpenAI.RealtimeTruncation
When the number of tokens in a conversation exceeds the model's input token limit, the conversation will be truncated, meaning messages (starting from the oldest) will not be included in the model's context. A 32k context model with 4,096 max output tokens can only include 28,224 tokens in the context before truncation occurs. Clients can configure truncation behavior to truncate with a lower max token limit, which is an effective way to control token usage and cost. Truncation will reduce the number of cached tokens on the next turn (busting the cache), since messages are dropped from the beginning of the context. However, clients can also configure truncation to retain messages up to a fraction of the maximum context size, which will reduce the need for future truncations and thus improve the cache rate. Truncation can be disabled entirely, which means the server will never truncate but would instead return an error if the conversation exceeds the model's input token limit.
| Property | Value |
|---|---|
| Type | string |
| Values | auto, disabled |
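Because an invalid truncation mode is only rejected server-side, it can help to validate the string before sending a session config. A small sketch using the two values from the table above; the helper name is illustrative:

```python
VALID_TRUNCATION = {"auto", "disabled"}  # string values from the table above

def with_truncation(session: dict, mode: str) -> dict:
    """Return a copy of a session config with a validated truncation mode."""
    if mode not in VALID_TRUNCATION:
        raise ValueError(f"unsupported truncation mode: {mode!r}")
    return {**session, "truncation": mode}

cfg = with_truncation({"type": "realtime"}, "auto")
```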
OpenAI.RealtimeTurnDetection
Discriminator for OpenAI.RealtimeTurnDetection
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.RealtimeTurnDetectionType | Yes |
OpenAI.RealtimeTurnDetectionType
Type: string
OpenAI.Reasoning
gpt-5 and o-series models only. Configuration options for reasoning models.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| effort | OpenAI.ReasoningEffort | Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. - gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1. - All models before gpt-5.1 default to medium reasoning effort, and do not support none. - The gpt-5-pro model defaults to (and only supports) high reasoning effort. - xhigh is supported for all models after gpt-5.1-codex-max. |
No | |
| generate_summary | string or null | No | ||
| summary | string or null | No |
OpenAI.ReasoningEffort
Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, high, and xhigh. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort.
- xhigh is supported for all models after gpt-5.1-codex-max.
| Property | Value |
|---|---|
| Type | string |
| Values | none, minimal, low, medium, high, xhigh |
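In a request body, the effort value is passed as a string inside the reasoning object. A sketch of such a fragment; the model name is an example drawn from the description above, and "low" is just one of its supported values:

```python
# Documented reasoning effort values, in ascending order of effort:
REASONING_EFFORTS = ("none", "minimal", "low", "medium", "high", "xhigh")

# Example request fragment: a gpt-5.1-style model supports
# none/low/medium/high, so "low" is a valid choice here.
request_body = {
    "model": "gpt-5.1",
    "reasoning": {"effort": "low"},
}

assert request_body["reasoning"]["effort"] in REASONING_EFFORTS
```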
OpenAI.ReasoningTextContent
Reasoning text from the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| text | string | The reasoning text from the model. | Yes | |
| type | enum | The type of the reasoning text. Always reasoning_text.Possible values: reasoning_text |
Yes |
OpenAI.RefusalContent
A refusal from the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| refusal | string | The refusal explanation from the model. | Yes | |
| type | enum | The type of the refusal. Always refusal.Possible values: refusal |
Yes |
OpenAI.Response
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| background | boolean or null | No | ||
| completed_at | string or null | No | ||
| content_filters | array of AzureContentFilterForResponsesAPI | The content filter results from RAI. | Yes | |
| conversation | OpenAI.ConversationReference or null | No | ||
| created_at | integer | Unix timestamp (in seconds) of when this Response was created. | Yes | |
| error | OpenAI.ResponseError or null | Yes | ||
| id | string | Unique identifier for this Response. | Yes | |
| incomplete_details | OpenAI.ResponseIncompleteDetails or null | Yes | ||
| instructions | string or array of OpenAI.InputItem or null | Yes | ||
| max_output_tokens | integer or null | No | ||
| max_tool_calls | integer or null | No | ||
| metadata | OpenAI.Metadata or null | No | ||
| model | string | Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models. |
No | |
| object | enum | The object type of this resource - always set to response.Possible values: response |
Yes | |
| output | array of OpenAI.OutputItem | An array of content items generated by the model. - The length and order of items in the output array is dependent on the model's response. - Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs. |
Yes | |
| output_text | string or null | No | ||
| parallel_tool_calls | boolean | Whether to allow the model to run tool calls in parallel. | Yes | True |
| previous_response_id | string or null | No | ||
| prompt | OpenAI.Prompt | Reference to a prompt template and its variables. Learn more. |
No | |
| prompt_cache_key | string | Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more. |
No | |
| prompt_cache_retention | string or null | No | ||
| reasoning | OpenAI.Reasoning or null | No | ||
| safety_identifier | string | A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. Learn more. |
No | |
| status | enum | The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete. Possible values: completed, failed, in_progress, cancelled, queued, incomplete |
No | |
| temperature | number or null | No | ||
| text | OpenAI.ResponseTextParam | Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more: - Text inputs and outputs - Structured Outputs |
No | |
| tool_choice | OpenAI.ToolChoiceParam | How the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call. |
No | |
| tools | OpenAI.ToolsArray | An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter. We support the following categories of tools: - Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools. - MCP Tools: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about MCP Tools. - Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about function calling. You can also use custom tools to call your own code. |
No | |
| top_logprobs | integer or null | No | ||
| top_p | number or null | No | ||
| truncation | string or null | No | ||
| usage | OpenAI.ResponseUsage | Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. |
No | |
| user | string (deprecated) | This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations. A stable identifier for your end-users. Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more. |
No |
OpenAI.ResponseAudioDeltaEvent
Emitted when there is a partial audio response.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| delta | string | A chunk of Base64 encoded response audio bytes. | Yes | |
| sequence_number | integer | A sequence number for this chunk of the stream response. | Yes | |
| type | enum | The type of the event. Always response.audio.delta.Possible values: response.audio.delta |
Yes |
OpenAI.ResponseAudioTranscriptDeltaEvent
Emitted when there is a partial transcript of audio.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| delta | string | The partial transcript of the audio response. | Yes | |
| sequence_number | integer | The sequence number of this event. | Yes | |
| type | enum | The type of the event. Always response.audio.transcript.delta.Possible values: response.audio.transcript.delta |
Yes |
OpenAI.ResponseCodeInterpreterCallCodeDeltaEvent
Emitted when a partial code snippet is streamed by the code interpreter.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| delta | string | The partial code snippet being streamed by the code interpreter. | Yes | |
| item_id | string | The unique identifier of the code interpreter tool call item. | Yes | |
| output_index | integer | The index of the output item in the response for which the code is being streamed. | Yes | |
| sequence_number | integer | The sequence number of this event, used to order streaming events. | Yes | |
| type | enum | The type of the event. Always response.code_interpreter_call_code.delta.Possible values: response.code_interpreter_call_code.delta |
Yes |
OpenAI.ResponseCodeInterpreterCallInProgressEvent
Emitted when a code interpreter call is in progress.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The unique identifier of the code interpreter tool call item. | Yes | |
| output_index | integer | The index of the output item in the response for which the code interpreter call is in progress. | Yes | |
| sequence_number | integer | The sequence number of this event, used to order streaming events. | Yes | |
| type | enum | The type of the event. Always response.code_interpreter_call.in_progress.Possible values: response.code_interpreter_call.in_progress |
Yes |
OpenAI.ResponseCodeInterpreterCallInterpretingEvent
Emitted when the code interpreter is actively interpreting the code snippet.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The unique identifier of the code interpreter tool call item. | Yes | |
| output_index | integer | The index of the output item in the response for which the code interpreter is interpreting code. | Yes | |
| sequence_number | integer | The sequence number of this event, used to order streaming events. | Yes | |
| type | enum | The type of the event. Always response.code_interpreter_call.interpreting.Possible values: response.code_interpreter_call.interpreting |
Yes |
OpenAI.ResponseContentPartAddedEvent
Emitted when a new content part is added.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content_index | integer | The index of the content part that was added. | Yes | |
| item_id | string | The ID of the output item that the content part was added to. | Yes | |
| output_index | integer | The index of the output item that the content part was added to. | Yes | |
| part | OpenAI.OutputContent | Yes | ||
| └─ type | OpenAI.OutputContentType | Yes | ||
| sequence_number | integer | The sequence number of this event. | Yes | |
| type | enum | The type of the event. Always response.content_part.added.Possible values: response.content_part.added |
Yes |
OpenAI.ResponseCreatedEvent
An event that is emitted when a response is created.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| response | OpenAI.Response | Yes | ||
| └─ background | boolean or null | No | ||
| └─ completed_at | string or null | No | ||
| └─ content_filters | array of AzureContentFilterForResponsesAPI | The content filter results from RAI. | Yes | |
| └─ conversation | OpenAI.ConversationReference or null | No | ||
| └─ created_at | integer | Unix timestamp (in seconds) of when this Response was created. | Yes | |
| └─ error | OpenAI.ResponseError or null | Yes | ||
| └─ id | string | Unique identifier for this Response. | Yes | |
| └─ incomplete_details | OpenAI.ResponseIncompleteDetails or null | Yes | ||
| └─ instructions | string or array of OpenAI.InputItem or null | Yes | ||
| └─ max_output_tokens | integer or null | No | ||
| └─ max_tool_calls | integer or null | No | ||
| └─ metadata | OpenAI.Metadata or null | No | ||
| └─ model | string | Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models. |
No | |
| └─ object | enum | The object type of this resource - always set to response.Possible values: response |
Yes | |
| └─ output | array of OpenAI.OutputItem | An array of content items generated by the model. - The length and order of items in the output array is dependent on the model's response. - Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs. |
Yes | |
| └─ output_text | string or null | No | ||
| └─ parallel_tool_calls | boolean | Whether to allow the model to run tool calls in parallel. | Yes | True |
| └─ previous_response_id | string or null | No | ||
| └─ prompt | OpenAI.Prompt | Reference to a prompt template and its variables. Learn more. |
No | |
| └─ prompt_cache_key | string | Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more. |
No | |
| └─ prompt_cache_retention | string or null | No | ||
| └─ reasoning | OpenAI.Reasoning or null | No | ||
| └─ safety_identifier | string | A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. Learn more. |
No | |
| └─ status | enum | The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete. Possible values: completed, failed, in_progress, cancelled, queued, incomplete |
No | |
| └─ temperature | number or null | No | 1 | |
| └─ text | OpenAI.ResponseTextParam | Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more: - Text inputs and outputs - Structured Outputs |
No | |
| └─ tool_choice | OpenAI.ToolChoiceParam | How the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call. |
No | |
| └─ tools | OpenAI.ToolsArray | An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter. We support the following categories of tools: - Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools. - MCP Tools: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about MCP Tools. - Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about function calling. You can also use custom tools to call your own code. |
No | |
| └─ top_logprobs | integer or null | No | ||
| └─ top_p | number or null | No | 1 | |
| └─ truncation | string or null | No | disabled | |
| └─ usage | OpenAI.ResponseUsage | Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. |
No | |
| └─ user | string (deprecated) | This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations. A stable identifier for your end-users. Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more. |
No | |
| sequence_number | integer | The sequence number for this event. | Yes | |
| type | enum | The type of the event. Always response.created.Possible values: response.created |
Yes |
OpenAI.ResponseCustomToolCallInputDeltaEvent
Event representing a delta (partial update) to the input of a custom tool call.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| delta | string | The incremental input data (delta) for the custom tool call. | Yes | |
| item_id | string | Unique identifier for the API item associated with this event. | Yes | |
| output_index | integer | The index of the output this delta applies to. | Yes | |
| sequence_number | integer | The sequence number of this event. | Yes | |
| type | enum | The event type identifier. Possible values: response.custom_tool_call_input.delta |
Yes |
OpenAI.ResponseError
An error object returned when the model fails to generate a Response.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| code | OpenAI.ResponseErrorCode | The error code for the response. | Yes | |
| message | string | A human-readable description of the error. | Yes |
OpenAI.ResponseErrorCode
The error code for the response.
| Property | Value |
|---|---|
| Type | string |
| Values | server_error, rate_limit_exceeded, invalid_prompt, vector_store_timeout, invalid_image, invalid_image_format, invalid_base64_image, invalid_image_url, image_too_large, image_too_small, image_parse_error, image_content_policy_violation, invalid_image_mode, image_file_too_large, unsupported_image_media_type, empty_image_file, failed_to_download_image, image_file_not_found |
OpenAI.ResponseErrorEvent
Emitted when an error occurs.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| code | string or null | Yes | ||
| message | string | The error message. | Yes | |
| param | string or null | Yes | ||
| sequence_number | integer | The sequence number of this event. | Yes | |
| type | enum | The type of the event. Always error. Possible values: error |
Yes |
OpenAI.ResponseFailedEvent
An event that is emitted when a response fails.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| response | OpenAI.Response | Yes | ||
| └─ background | boolean or null | No | ||
| └─ completed_at | string or null | No | ||
| └─ content_filters | array of AzureContentFilterForResponsesAPI | The content filter results from RAI. | Yes | |
| └─ conversation | OpenAI.ConversationReference or null | No | ||
| └─ created_at | integer | Unix timestamp (in seconds) of when this Response was created. | Yes | |
| └─ error | OpenAI.ResponseError or null | Yes | ||
| └─ id | string | Unique identifier for this Response. | Yes | |
| └─ incomplete_details | OpenAI.ResponseIncompleteDetails or null | Yes | ||
| └─ instructions | string or array of OpenAI.InputItem or null | Yes | ||
| └─ max_output_tokens | integer or null | No | ||
| └─ max_tool_calls | integer or null | No | ||
| └─ metadata | OpenAI.Metadata or null | No | ||
| └─ model | string | Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models. |
No | |
| └─ object | enum | The object type of this resource - always set to response. Possible values: response |
Yes | |
| └─ output | array of OpenAI.OutputItem | An array of content items generated by the model. - The length and order of items in the output array is dependent on the model's response. - Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs. |
Yes | |
| └─ output_text | string or null | No | ||
| └─ parallel_tool_calls | boolean | Whether to allow the model to run tool calls in parallel. | Yes | True |
| └─ previous_response_id | string or null | No | ||
| └─ prompt | OpenAI.Prompt | Reference to a prompt template and its variables. Learn more. |
No | |
| └─ prompt_cache_key | string | Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more. |
No | |
| └─ prompt_cache_retention | string or null | No | ||
| └─ reasoning | OpenAI.Reasoning or null | No | ||
| └─ safety_identifier | string | A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. Learn more. |
No | |
| └─ status | enum | The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete. Possible values: completed, failed, in_progress, cancelled, queued, incomplete |
No | |
| └─ temperature | number or null | No | 1 | |
| └─ text | OpenAI.ResponseTextParam | Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more: - Text inputs and outputs - Structured Outputs |
No | |
| └─ tool_choice | OpenAI.ToolChoiceParam | How the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call. |
No | |
| └─ tools | OpenAI.ToolsArray | An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter. We support the following categories of tools: - Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools. - MCP Tools: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about MCP Tools. - Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about function calling. You can also use custom tools to call your own code. |
No | |
| └─ top_logprobs | integer or null | No | ||
| └─ top_p | number or null | No | 1 | |
| └─ truncation | string or null | No | disabled | |
| └─ usage | OpenAI.ResponseUsage | Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. |
No | |
| └─ user | string (deprecated) | This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations. A stable identifier for your end-users. Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more. |
No | |
| sequence_number | integer | The sequence number of this event. | Yes | |
| type | enum | The type of the event. Always response.failed. Possible values: response.failed |
Yes |
OpenAI.ResponseFileSearchCallInProgressEvent
Emitted when a file search call is initiated.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The ID of the output item on which the file search call was initiated. | Yes | |
| output_index | integer | The index of the output item on which the file search call was initiated. | Yes | |
| sequence_number | integer | The sequence number of this event. | Yes | |
| type | enum | The type of the event. Always response.file_search_call.in_progress. Possible values: response.file_search_call.in_progress |
Yes |
OpenAI.ResponseFileSearchCallSearchingEvent
Emitted when a file search is currently searching.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The ID of the output item on which the file search call was initiated. | Yes | |
| output_index | integer | The index of the output item on which the file search call is searching. | Yes | |
| sequence_number | integer | The sequence number of this event. | Yes | |
| type | enum | The type of the event. Always response.file_search_call.searching. Possible values: response.file_search_call.searching |
Yes |
OpenAI.ResponseFormatJsonObject
JSON object response format. An older method of generating JSON responses.
Using json_schema is recommended for models that support it. Note that the
model will not generate JSON without a system or user message instructing it
to do so.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The type of response format being defined. Always json_object. Possible values: json_object |
Yes |
OpenAI.ResponseFormatJsonSchema
JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| json_schema | OpenAI.ResponseFormatJsonSchemaJsonSchema | Yes | ||
| └─ description | string | No | ||
| └─ name | string | Yes | ||
| └─ schema | OpenAI.ResponseFormatJsonSchemaSchema | The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here. |
No | |
| └─ strict | boolean or null | No | ||
| type | enum | The type of response format being defined. Always json_schema. Possible values: json_schema |
Yes |
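Assembled from the fields above, a json_schema response format looks like the following. This is a minimal sketch; the schema contents (the calendar_event name and its properties) are illustrative, not part of this spec.

```python
# A json_schema response format payload, built from the fields documented
# above. Names inside "schema" are invented for illustration.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "calendar_event",                    # required
        "description": "An event pulled from text",  # optional
        "strict": True,                              # optional; boolean or null
        "schema": {                                  # a JSON Schema object
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "date": {"type": "string"},
            },
            "required": ["title", "date"],
            "additionalProperties": False,
        },
    },
}
```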
OpenAI.ResponseFormatJsonSchemaJsonSchema
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| description | string | No | ||
| name | string | Yes | ||
| schema | OpenAI.ResponseFormatJsonSchemaSchema | The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here. |
No | |
| strict | boolean or null | No |
OpenAI.ResponseFormatJsonSchemaSchema
The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.
Type: object
OpenAI.ResponseFormatText
Default response format. Used to generate text responses.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The type of response format being defined. Always text. Possible values: text |
Yes |
OpenAI.ResponseFunctionCallArgumentsDeltaEvent
Emitted when there is a partial function-call arguments delta.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| delta | string | The function-call arguments delta that is added. | Yes | |
| item_id | string | The ID of the output item that the function-call arguments delta is added to. | Yes | |
| output_index | integer | The index of the output item that the function-call arguments delta is added to. | Yes | |
| sequence_number | integer | The sequence number of this event. | Yes | |
| type | enum | The type of the event. Always response.function_call_arguments.delta. Possible values: response.function_call_arguments.delta |
Yes |
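Because each event carries only a fragment of the arguments JSON, consumers typically concatenate deltas per item_id and parse the result once the stream completes. A minimal sketch, assuming events arrive as dicts shaped like the table above (the sample payloads are illustrative):

```python
import json

def accumulate_arguments(events):
    """Concatenate delta strings per item_id, then parse the assembled JSON."""
    buffers = {}
    for ev in sorted(events, key=lambda e: e["sequence_number"]):
        if ev["type"] == "response.function_call_arguments.delta":
            buffers.setdefault(ev["item_id"], []).append(ev["delta"])
    return {item_id: json.loads("".join(parts))
            for item_id, parts in buffers.items()}

# Two illustrative delta events splitting one arguments object.
events = [
    {"type": "response.function_call_arguments.delta",
     "item_id": "fc_1", "output_index": 0, "sequence_number": 1,
     "delta": '{"city": "Co'},
    {"type": "response.function_call_arguments.delta",
     "item_id": "fc_1", "output_index": 0, "sequence_number": 2,
     "delta": 'penhagen"}'},
]
args = accumulate_arguments(events)
```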
OpenAI.ResponseImageGenCallGeneratingEvent
Emitted when an image generation tool call is actively generating an image (intermediate state).
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The unique identifier of the image generation item being processed. | Yes | |
| output_index | integer | The index of the output item in the response's output array. | Yes | |
| sequence_number | integer | The sequence number of the image generation item being processed. | Yes | |
| type | enum | The type of the event. Always 'response.image_generation_call.generating'. Possible values: response.image_generation_call.generating |
Yes |
OpenAI.ResponseImageGenCallInProgressEvent
Emitted when an image generation tool call is in progress.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The unique identifier of the image generation item being processed. | Yes | |
| output_index | integer | The index of the output item in the response's output array. | Yes | |
| sequence_number | integer | The sequence number of the image generation item being processed. | Yes | |
| type | enum | The type of the event. Always 'response.image_generation_call.in_progress'. Possible values: response.image_generation_call.in_progress |
Yes |
OpenAI.ResponseImageGenCallPartialImageEvent
Emitted when a partial image is available during image generation streaming.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The unique identifier of the image generation item being processed. | Yes | |
| output_index | integer | The index of the output item in the response's output array. | Yes | |
| partial_image_b64 | string | Base64-encoded partial image data, suitable for rendering as an image. | Yes | |
| partial_image_index | integer | 0-based index for the partial image (backend is 1-based, but this is 0-based for the user). | Yes | |
| sequence_number | integer | The sequence number of the image generation item being processed. | Yes | |
| type | enum | The type of the event. Always 'response.image_generation_call.partial_image'. Possible values: response.image_generation_call.partial_image |
Yes |
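Since partial_image_b64 is standard base64, rendering a preview is a single decode. A sketch with an illustrative event payload (a real event carries actual PNG/JPEG bytes):

```python
import base64

def decode_partial_image(event):
    """Return the raw image bytes from a partial_image event's base64 payload."""
    return base64.b64decode(event["partial_image_b64"])

# Illustrative event; field names follow the table above.
event = {
    "type": "response.image_generation_call.partial_image",
    "item_id": "ig_1",
    "output_index": 0,
    "sequence_number": 7,
    "partial_image_index": 0,  # 0-based for the user, per the table above
    "partial_image_b64": base64.b64encode(b"\x89PNG fake bytes").decode(),
}
chunk = decode_partial_image(event)
```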
OpenAI.ResponseInProgressEvent
Emitted when the response is in progress.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| response | OpenAI.Response | Yes | ||
| └─ background | boolean or null | No | ||
| └─ completed_at | string or null | No | ||
| └─ content_filters | array of AzureContentFilterForResponsesAPI | The content filter results from RAI. | Yes | |
| └─ conversation | OpenAI.ConversationReference or null | No | ||
| └─ created_at | integer | Unix timestamp (in seconds) of when this Response was created. | Yes | |
| └─ error | OpenAI.ResponseError or null | Yes | ||
| └─ id | string | Unique identifier for this Response. | Yes | |
| └─ incomplete_details | OpenAI.ResponseIncompleteDetails or null | Yes | ||
| └─ instructions | string or array of OpenAI.InputItem or null | Yes | ||
| └─ max_output_tokens | integer or null | No | ||
| └─ max_tool_calls | integer or null | No | ||
| └─ metadata | OpenAI.Metadata or null | No | ||
| └─ model | string | Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models. |
No | |
| └─ object | enum | The object type of this resource - always set to response. Possible values: response |
Yes | |
| └─ output | array of OpenAI.OutputItem | An array of content items generated by the model. - The length and order of items in the output array is dependent on the model's response. - Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs. |
Yes | |
| └─ output_text | string or null | No | ||
| └─ parallel_tool_calls | boolean | Whether to allow the model to run tool calls in parallel. | Yes | True |
| └─ previous_response_id | string or null | No | ||
| └─ prompt | OpenAI.Prompt | Reference to a prompt template and its variables. Learn more. |
No | |
| └─ prompt_cache_key | string | Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more. |
No | |
| └─ prompt_cache_retention | string or null | No | ||
| └─ reasoning | OpenAI.Reasoning or null | No | ||
| └─ safety_identifier | string | A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. Learn more. |
No | |
| └─ status | enum | The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete. Possible values: completed, failed, in_progress, cancelled, queued, incomplete |
No | |
| └─ temperature | number or null | No | 1 | |
| └─ text | OpenAI.ResponseTextParam | Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more: - Text inputs and outputs - Structured Outputs |
No | |
| └─ tool_choice | OpenAI.ToolChoiceParam | How the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call. |
No | |
| └─ tools | OpenAI.ToolsArray | An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter. We support the following categories of tools: - Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools. - MCP Tools: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about MCP Tools. - Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about function calling. You can also use custom tools to call your own code. |
No | |
| └─ top_logprobs | integer or null | No | ||
| └─ top_p | number or null | No | 1 | |
| └─ truncation | string or null | No | disabled | |
| └─ usage | OpenAI.ResponseUsage | Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. |
No | |
| └─ user | string (deprecated) | This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations. A stable identifier for your end-users. Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more. |
No | |
| sequence_number | integer | The sequence number of this event. | Yes | |
| type | enum | The type of the event. Always response.in_progress. Possible values: response.in_progress |
Yes |
OpenAI.ResponseIncompleteDetails
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| reason | enum | Possible values: max_output_tokens, content_filter |
No |
OpenAI.ResponseIncompleteEvent
An event that is emitted when a response finishes as incomplete.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| response | OpenAI.Response | Yes | ||
| └─ background | boolean or null | No | ||
| └─ completed_at | string or null | No | ||
| └─ content_filters | array of AzureContentFilterForResponsesAPI | The content filter results from RAI. | Yes | |
| └─ conversation | OpenAI.ConversationReference or null | No | ||
| └─ created_at | integer | Unix timestamp (in seconds) of when this Response was created. | Yes | |
| └─ error | OpenAI.ResponseError or null | Yes | ||
| └─ id | string | Unique identifier for this Response. | Yes | |
| └─ incomplete_details | OpenAI.ResponseIncompleteDetails or null | Yes | ||
| └─ instructions | string or array of OpenAI.InputItem or null | Yes | ||
| └─ max_output_tokens | integer or null | No | ||
| └─ max_tool_calls | integer or null | No | ||
| └─ metadata | OpenAI.Metadata or null | No | ||
| └─ model | string | Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models. |
No | |
| └─ object | enum | The object type of this resource - always set to response. Possible values: response |
Yes | |
| └─ output | array of OpenAI.OutputItem | An array of content items generated by the model. - The length and order of items in the output array is dependent on the model's response. - Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs. |
Yes | |
| └─ output_text | string or null | No | ||
| └─ parallel_tool_calls | boolean | Whether to allow the model to run tool calls in parallel. | Yes | True |
| └─ previous_response_id | string or null | No | ||
| └─ prompt | OpenAI.Prompt | Reference to a prompt template and its variables. Learn more. |
No | |
| └─ prompt_cache_key | string | Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more. |
No | |
| └─ prompt_cache_retention | string or null | No | ||
| └─ reasoning | OpenAI.Reasoning or null | No | ||
| └─ safety_identifier | string | A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. Learn more. |
No | |
| └─ status | enum | The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete. Possible values: completed, failed, in_progress, cancelled, queued, incomplete |
No | |
| └─ temperature | number or null | No | 1 | |
| └─ text | OpenAI.ResponseTextParam | Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more: - Text inputs and outputs - Structured Outputs |
No | |
| └─ tool_choice | OpenAI.ToolChoiceParam | How the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call. |
No | |
| └─ tools | OpenAI.ToolsArray | An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter. We support the following categories of tools: - Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools. - MCP Tools: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about MCP Tools. - Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about function calling. You can also use custom tools to call your own code. |
No | |
| └─ top_logprobs | integer or null | No | ||
| └─ top_p | number or null | No | 1 | |
| └─ truncation | string or null | No | disabled | |
| └─ usage | OpenAI.ResponseUsage | Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. |
No | |
| └─ user | string (deprecated) | This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations. A stable identifier for your end-users. Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more. |
No | |
| sequence_number | integer | The sequence number of this event. | Yes | |
| type | enum | The type of the event. Always response.incomplete. Possible values: response.incomplete |
Yes |
OpenAI.ResponseItemList
A list of Response items.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array of OpenAI.ItemResource | A list of items used to generate this response. | Yes | |
| first_id | string | The ID of the first item in the list. | Yes | |
| has_more | boolean | Whether there are more items available. | Yes | |
| last_id | string | The ID of the last item in the list. | Yes | |
| object | enum | The type of object returned, must be list. Possible values: list |
Yes |
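The has_more and last_id fields support cursor pagination: request the next page starting after the last item until has_more is false. A sketch, with fetch_page standing in for a real list call (the two simulated pages are illustrative):

```python
def iter_items(fetch_page):
    """Walk a paginated ResponseItemList using has_more and last_id as a cursor."""
    after = None
    while True:
        page = fetch_page(after=after)
        yield from page["data"]
        if not page["has_more"]:
            return
        after = page["last_id"]

# Two simulated pages standing in for real API responses.
pages = {
    None: {"object": "list", "data": [{"id": "item_a"}, {"id": "item_b"}],
           "first_id": "item_a", "last_id": "item_b", "has_more": True},
    "item_b": {"object": "list", "data": [{"id": "item_c"}],
               "first_id": "item_c", "last_id": "item_c", "has_more": False},
}
ids = [item["id"] for item in iter_items(lambda after=None: pages[after])]
```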
OpenAI.ResponseLogProb
A logprob is the logarithmic probability that the model assigns to producing a particular token at a given position in the sequence. Less-negative (higher) logprob values indicate greater model confidence in that token choice.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| logprob | number | The log probability of this token. | Yes | |
| token | string | A possible text token. | Yes | |
| top_logprobs | array of OpenAI.ResponseLogProbTopLogprobs | The log probabilities of the top 20 most likely tokens. | No |
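Since a logprob is a natural-log probability, exponentiating it recovers the plain probability. A minimal illustration:

```python
import math

def to_probability(logprob):
    """Convert a natural-log probability to a plain probability in [0, 1]."""
    return math.exp(logprob)

certain = to_probability(0.0)      # a logprob of 0 means probability 1
likely = to_probability(-0.10536)  # roughly a 90% probability
```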
OpenAI.ResponseLogProbTopLogprobs
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| logprob | number | No | ||
| token | string | No |
OpenAI.ResponseMCPCallArgumentsDeltaEvent
Emitted when there is a delta (partial update) to the arguments of an MCP tool call.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| delta | string | A JSON string containing the partial update to the arguments for the MCP tool call. | Yes | |
| item_id | string | The unique identifier of the MCP tool call item being processed. | Yes | |
| output_index | integer | The index of the output item in the response's output array. | Yes | |
| sequence_number | integer | The sequence number of this event. | Yes | |
| type | enum | The type of the event. Always 'response.mcp_call_arguments.delta'. Possible values: response.mcp_call_arguments.delta |
Yes |
OpenAI.ResponseMCPCallFailedEvent
Emitted when an MCP tool call has failed.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The ID of the MCP tool call item that failed. | Yes | |
| output_index | integer | The index of the output item that failed. | Yes | |
| sequence_number | integer | The sequence number of this event. | Yes | |
| type | enum | The type of the event. Always 'response.mcp_call.failed'. Possible values: response.mcp_call.failed |
Yes |
OpenAI.ResponseMCPCallInProgressEvent
Emitted when an MCP tool call is in progress.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The unique identifier of the MCP tool call item being processed. | Yes | |
| output_index | integer | The index of the output item in the response's output array. | Yes | |
| sequence_number | integer | The sequence number of this event. | Yes | |
| type | enum | The type of the event. Always 'response.mcp_call.in_progress'. Possible values: response.mcp_call.in_progress |
Yes |
OpenAI.ResponseMCPListToolsFailedEvent
Emitted when the attempt to list available MCP tools has failed.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The ID of the MCP tool call item that failed. | Yes | |
| output_index | integer | The index of the output item that failed. | Yes | |
| sequence_number | integer | The sequence number of this event. | Yes | |
| type | enum | The type of the event. Always 'response.mcp_list_tools.failed'. Possible values: response.mcp_list_tools.failed |
Yes |
OpenAI.ResponseMCPListToolsInProgressEvent
Emitted when the system is in the process of retrieving the list of available MCP tools.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The ID of the MCP tool call item that is being processed. | Yes | |
| output_index | integer | The index of the output item that is being processed. | Yes | |
| sequence_number | integer | The sequence number of this event. | Yes | |
| type | enum | The type of the event. Always 'response.mcp_list_tools.in_progress'. Possible values: response.mcp_list_tools.in_progress |
Yes |
OpenAI.ResponseModalities
Output types that you would like the model to generate.
Most models are capable of generating text, which is the default:
["text"]
The gpt-4o-audio-preview model can also be used to
generate audio. To request that this model generate
both text and audio responses, you can use:
["text", "audio"]
This schema accepts one of the following types:
- array
- null
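A minimal request fragment selecting both modalities might look like this; only the modalities value comes from this schema, and the surrounding fields are illustrative:

```python
# Hypothetical request fragment; "modalities" follows the schema above.
request_fragment = {
    "model": "gpt-4o-audio-preview",
    "modalities": ["text", "audio"],
}
```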
OpenAI.ResponseOutputItemAddedEvent
Emitted when a new output item is added.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item | OpenAI.OutputItem | Yes | ||
| └─ type | OpenAI.OutputItemType | Yes | ||
| output_index | integer | The index of the output item that was added. | Yes | |
| sequence_number | integer | The sequence number of this event. | Yes | |
| type | enum | The type of the event. Always response.output_item.added. Possible values: response.output_item.added |
Yes |
OpenAI.ResponseOutputTextAnnotationAddedEvent
Emitted when an annotation is added to output text content.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| annotation | OpenAI.Annotation | An annotation that applies to a span of output text. | Yes | |
| └─ type | OpenAI.AnnotationType | Yes | ||
| annotation_index | integer | The index of the annotation within the content part. | Yes | |
| content_index | integer | The index of the content part within the output item. | Yes | |
| item_id | string | The unique identifier of the item to which the annotation is being added. | Yes | |
| output_index | integer | The index of the output item in the response's output array. | Yes | |
| sequence_number | integer | The sequence number of this event. | Yes | |
| type | enum | The type of the event. Always 'response.output_text.annotation.added'. Possible values: response.output_text.annotation.added |
Yes |
OpenAI.ResponsePromptVariables
Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files.
Type: object
OpenAI.ResponseQueuedEvent
Emitted when a response is queued and waiting to be processed.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| response | OpenAI.Response | Yes | ||
| └─ background | boolean or null | No | ||
| └─ completed_at | string or null | No | ||
| └─ content_filters | array of AzureContentFilterForResponsesAPI | The content filter results from RAI. | Yes | |
| └─ conversation | OpenAI.ConversationReference or null | No | ||
| └─ created_at | integer | Unix timestamp (in seconds) of when this Response was created. | Yes | |
| └─ error | OpenAI.ResponseError or null | Yes | ||
| └─ id | string | Unique identifier for this Response. | Yes | |
| └─ incomplete_details | OpenAI.ResponseIncompleteDetails or null | Yes | ||
| └─ instructions | string or array of OpenAI.InputItem or null | Yes | ||
| └─ max_output_tokens | integer or null | No | ||
| └─ max_tool_calls | integer or null | No | ||
| └─ metadata | OpenAI.Metadata or null | No | ||
| └─ model | string | Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models. |
No | |
| └─ object | enum | The object type of this resource - always set to response. Possible values: response |
Yes | |
| └─ output | array of OpenAI.OutputItem | An array of content items generated by the model. - The length and order of items in the output array is dependent on the model's response. - Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs. |
Yes | |
| └─ output_text | string or null | No | ||
| └─ parallel_tool_calls | boolean | Whether to allow the model to run tool calls in parallel. | Yes | True |
| └─ previous_response_id | string or null | No | ||
| └─ prompt | OpenAI.Prompt | Reference to a prompt template and its variables. Learn more. |
No | |
| └─ prompt_cache_key | string | Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more. |
No | |
| └─ prompt_cache_retention | string or null | No | ||
| └─ reasoning | OpenAI.Reasoning or null | No | ||
| └─ safety_identifier | string | A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. Learn more. |
No | |
| └─ status | enum | The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete. Possible values: completed, failed, in_progress, cancelled, queued, incomplete |
No | |
| └─ temperature | number or null | No | 1 | |
| └─ text | OpenAI.ResponseTextParam | Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more: - Text inputs and outputs - Structured Outputs |
No | |
| └─ tool_choice | OpenAI.ToolChoiceParam | How the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call. |
No | |
| └─ tools | OpenAI.ToolsArray | An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter. We support the following categories of tools: - Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools. - MCP Tools: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about MCP Tools. - Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about function calling. You can also use custom tools to call your own code. |
No | |
| └─ top_logprobs | integer or null | No | ||
| └─ top_p | number or null | No | 1 | |
| └─ truncation | string or null | No | disabled | |
| └─ usage | OpenAI.ResponseUsage | Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. |
No | |
| └─ user | string (deprecated) | This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations.A stable identifier for your end-users. Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more. |
No | |
| sequence_number | integer | The sequence number for this event. | Yes | |
| type | enum | The type of the event. Always 'response.queued'. Possible values: response.queued |
Yes |
OpenAI.ResponseReasoningSummaryPartAddedEvent
Emitted when a new reasoning summary part is added.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | The ID of the item this summary part is associated with. | Yes | |
| output_index | integer | The index of the output item this summary part is associated with. | Yes | |
| part | OpenAI.ResponseReasoningSummaryPartAddedEventPart | | Yes | |
| └─ text | string | | Yes | |
| └─ type | enum | Possible values: summary_text | Yes | |
| sequence_number | integer | The sequence number of this event. | Yes | |
| summary_index | integer | The index of the summary part within the reasoning summary. | Yes | |
| type | enum | The type of the event. Always response.reasoning_summary_part.added. Possible values: response.reasoning_summary_part.added | Yes | |
OpenAI.ResponseReasoningSummaryPartAddedEventPart
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| text | string | | Yes | |
| type | enum | Possible values: summary_text | Yes | |
OpenAI.ResponseReasoningSummaryTextDeltaEvent
Emitted when a delta is added to a reasoning summary text.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| delta | string | The text delta that was added to the summary. | Yes | |
| item_id | string | The ID of the item this summary text delta is associated with. | Yes | |
| output_index | integer | The index of the output item this summary text delta is associated with. | Yes | |
| sequence_number | integer | The sequence number of this event. | Yes | |
| summary_index | integer | The index of the summary part within the reasoning summary. | Yes | |
| type | enum | The type of the event. Always response.reasoning_summary_text.delta. Possible values: response.reasoning_summary_text.delta | Yes | |
OpenAI.ResponseReasoningTextDeltaEvent
Emitted when a delta is added to a reasoning text.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content_index | integer | The index of the reasoning content part this delta is associated with. | Yes | |
| delta | string | The text delta that was added to the reasoning content. | Yes | |
| item_id | string | The ID of the item this reasoning text delta is associated with. | Yes | |
| output_index | integer | The index of the output item this reasoning text delta is associated with. | Yes | |
| sequence_number | integer | The sequence number of this event. | Yes | |
| type | enum | The type of the event. Always response.reasoning_text.delta. Possible values: response.reasoning_text.delta | Yes | |
OpenAI.ResponseRefusalDeltaEvent
Emitted when there is a partial refusal text.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content_index | integer | The index of the content part that the refusal text is added to. | Yes | |
| delta | string | The refusal text that is added. | Yes | |
| item_id | string | The ID of the output item that the refusal text is added to. | Yes | |
| output_index | integer | The index of the output item that the refusal text is added to. | Yes | |
| sequence_number | integer | The sequence number of this event. | Yes | |
| type | enum | The type of the event. Always response.refusal.delta. Possible values: response.refusal.delta | Yes | |
OpenAI.ResponseStreamOptions
Options for streaming responses. Only set this when you set stream: true.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| include_obfuscation | boolean | When true, stream obfuscation will be enabled. Stream obfuscation adds random characters to an obfuscation field on streaming delta events to normalize payload sizes as a mitigation to certain side-channel attacks. These obfuscation fields are included by default, but add a small amount of overhead to the data stream. You can set include_obfuscation to false to optimize for bandwidth if you trust the network links between your application and the OpenAI API. | No | |
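As a rough illustration of the option above, a streaming request body might carry `stream_options` alongside `stream` like this; the model name and input are placeholders, and the field placement assumes a Responses API-style payload:

```python
# Sketch of a streaming request body that disables stream obfuscation to
# save bandwidth on a trusted network. "my-model-deployment" is a
# placeholder; substitute your own deployment name.
request_body = {
    "model": "my-model-deployment",
    "input": "Hello",
    "stream": True,
    "stream_options": {"include_obfuscation": False},
}

# stream_options is only meaningful when stream is true.
assert request_body["stream"] is True
assert request_body["stream_options"]["include_obfuscation"] is False
```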
OpenAI.ResponseTextDeltaEvent
Emitted when there is an additional text delta.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content_index | integer | The index of the content part that the text delta was added to. | Yes | |
| delta | string | The text delta that was added. | Yes | |
| item_id | string | The ID of the output item that the text delta was added to. | Yes | |
| logprobs | array of OpenAI.ResponseLogProb | The log probabilities of the tokens in the delta. | Yes | |
| output_index | integer | The index of the output item that the text delta was added to. | Yes | |
| sequence_number | integer | The sequence number for this event. | Yes | |
| type | enum | The type of the event. Always response.output_text.delta. Possible values: response.output_text.delta | Yes | |
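A client typically concatenates these delta events, in `sequence_number` order, to rebuild the full output text. A minimal sketch over already-parsed event dictionaries (the event values below are invented for illustration):

```python
# Reassemble output text from response.output_text.delta events.
# sequence_number gives a total order over events in the stream.
events = [
    {"type": "response.output_text.delta", "sequence_number": 2, "delta": " world"},
    {"type": "response.output_text.delta", "sequence_number": 1, "delta": "Hello"},
]

deltas = sorted(
    (e for e in events if e["type"] == "response.output_text.delta"),
    key=lambda e: e["sequence_number"],
)
text = "".join(e["delta"] for e in deltas)
assert text == "Hello world"
```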
OpenAI.ResponseTextParam
Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| format | OpenAI.TextResponseFormatConfiguration | An object specifying the format that the model must output. Configuring { "type": "json_schema" } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide. The default format is { "type": "text" } with no additional options. Not recommended for gpt-4o and newer models: Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it. | No | |
| verbosity | OpenAI.Verbosity | Constrains the verbosity of the model's response. Lower values will result in more concise responses, while higher values will result in more verbose responses. Currently supported values are low, medium, and high. | No | |
OpenAI.ResponseUsage
Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input_tokens | integer | The number of input tokens. | Yes | |
| input_tokens_details | OpenAI.ResponseUsageInputTokensDetails | | Yes | |
| └─ cached_tokens | integer | | Yes | |
| output_tokens | integer | The number of output tokens. | Yes | |
| output_tokens_details | OpenAI.ResponseUsageOutputTokensDetails | | Yes | |
| └─ reasoning_tokens | integer | | Yes | |
| total_tokens | integer | The total number of tokens used. | Yes |
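The fields above obey a simple invariant: the total is the sum of input and output tokens, and the detail counts are subsets of their parents. A sketch with illustrative (not real) numbers:

```python
# A usage payload shaped like OpenAI.ResponseUsage; the counts are
# invented for illustration, not real billing data.
usage = {
    "input_tokens": 120,
    "input_tokens_details": {"cached_tokens": 100},
    "output_tokens": 30,
    "output_tokens_details": {"reasoning_tokens": 10},
    "total_tokens": 150,
}

# total_tokens is the sum of input and output tokens.
assert usage["total_tokens"] == usage["input_tokens"] + usage["output_tokens"]
# cached and reasoning tokens are subsets of their parent counts.
assert usage["input_tokens_details"]["cached_tokens"] <= usage["input_tokens"]
assert usage["output_tokens_details"]["reasoning_tokens"] <= usage["output_tokens"]
```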
OpenAI.ResponseUsageInputTokensDetails
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| cached_tokens | integer | | Yes | |
OpenAI.ResponseUsageOutputTokensDetails
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| reasoning_tokens | integer | | Yes | |
OpenAI.ResponseWebSearchCallInProgressEvent
Note: web_search is not yet available via Azure OpenAI.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | Unique ID for the output item associated with the web search call. | Yes | |
| output_index | integer | The index of the output item that the web search call is associated with. | Yes | |
| sequence_number | integer | The sequence number of the web search call being processed. | Yes | |
| type | enum | The type of the event. Always response.web_search_call.in_progress. Possible values: response.web_search_call.in_progress | Yes | |
OpenAI.ResponseWebSearchCallSearchingEvent
Note: web_search is not yet available via Azure OpenAI.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| item_id | string | Unique ID for the output item associated with the web search call. | Yes | |
| output_index | integer | The index of the output item that the web search call is associated with. | Yes | |
| sequence_number | integer | The sequence number of the web search call being processed. | Yes | |
| type | enum | The type of the event. Always response.web_search_call.searching. Possible values: response.web_search_call.searching | Yes | |
OpenAI.RunCompletionUsage
Usage statistics related to the run. This value will be null if the run is not in a terminal state (e.g. in_progress, queued, etc.).
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| completion_tokens | integer | Number of completion tokens used over the course of the run. | Yes | |
| prompt_tokens | integer | Number of prompt tokens used over the course of the run. | Yes | |
| total_tokens | integer | Total number of tokens used (prompt + completion). | Yes |
OpenAI.RunGraderRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| grader | OpenAI.GraderStringCheck or OpenAI.GraderTextSimilarity or OpenAI.GraderPython or OpenAI.GraderScoreModel or OpenAI.GraderMulti or GraderEndpoint | The grader used for the fine-tuning job. | Yes | |
| item | OpenAI.RunGraderRequestItem | | No | |
| model_sample | string | The model sample to be evaluated. This value will be used to populate the sample namespace. See the guide for more details. The output_json variable will be populated if the model sample is a valid JSON string. | Yes | |
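A request body for this schema can be sketched as follows; this assumes a `string_check` grader (one of the grader types listed above), and the grader name, template variables, and values are invented for illustration:

```python
# Hedged sketch of an OpenAI.RunGraderRequest body using a string_check
# grader. Field names follow the table above; the grader configuration
# itself is hypothetical.
run_grader_request = {
    "grader": {
        "type": "string_check",
        "name": "exact_match",          # illustrative grader name
        "input": "{{sample.output_text}}",
        "reference": "{{item.expected}}",
        "operation": "eq",
    },
    "item": {"expected": "42"},          # populates the item namespace
    "model_sample": "42",                # populates the sample namespace
}

assert run_grader_request["grader"]["operation"] == "eq"
```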
OpenAI.RunGraderRequestItem
Type: object
OpenAI.RunGraderResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| metadata | OpenAI.RunGraderResponseMetadata | | Yes | |
| model_grader_token_usage_per_model | object | | Yes | |
| reward | number | | Yes | |
| sub_rewards | object | | Yes | |
OpenAI.RunGraderResponseMetadata
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| errors | OpenAI.RunGraderResponseMetadataErrors | | Yes | |
| execution_time | number | | Yes | |
| name | string | | Yes | |
| sampled_model_name | string or null | | Yes | |
| scores | object | | Yes | |
| token_usage | integer or null | | Yes | |
| type | string | | Yes | |
OpenAI.RunGraderResponseMetadataErrors
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| formula_parse_error | boolean | | Yes | |
| invalid_variable_error | boolean | | Yes | |
| model_grader_parse_error | boolean | | Yes | |
| model_grader_refusal_error | boolean | | Yes | |
| model_grader_server_error | boolean | | Yes | |
| model_grader_server_error_details | string or null | | Yes | |
| other_error | boolean | | Yes | |
| python_grader_runtime_error | boolean | | Yes | |
| python_grader_runtime_error_details | string or null | | Yes | |
| python_grader_server_error | boolean | | Yes | |
| python_grader_server_error_type | string or null | | Yes | |
| sample_parse_error | boolean | | Yes | |
| truncated_observation_error | boolean | | Yes | |
| unresponsive_reward_error | boolean | | Yes | |
OpenAI.RunObject
Represents an execution run on a thread.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| assistant_id | string | The ID of the assistant used for execution of this run. | Yes | |
| cancelled_at | integer or null | The Unix timestamp (in seconds) for when the run was cancelled. | Yes | |
| completed_at | integer or null | The Unix timestamp (in seconds) for when the run was completed. | Yes | |
| created_at | integer | The Unix timestamp (in seconds) for when the run was created. | Yes | |
| expires_at | integer or null | The Unix timestamp (in seconds) for when the run will expire. | Yes | |
| failed_at | integer or null | The Unix timestamp (in seconds) for when the run failed. | Yes | |
| id | string | The identifier, which can be referenced in API endpoints. | Yes | |
| incomplete_details | OpenAI.RunObjectIncompleteDetails or null | Details on why the run is incomplete. Will be null if the run is not incomplete. | Yes | |
| instructions | string | The instructions that the assistant used for this run. | Yes | |
| last_error | OpenAI.RunObjectLastError or null | The last error associated with this run. Will be null if there are no errors. | Yes | |
| max_completion_tokens | integer or null | The maximum number of completion tokens specified to have been used over the course of the run. | Yes | |
| max_prompt_tokens | integer or null | The maximum number of prompt tokens specified to have been used over the course of the run. | Yes | |
| metadata | OpenAI.Metadata or null | | Yes | |
| model | string | The model that the assistant used for this run. | Yes | |
| object | enum | The object type, which is always thread.run. Possible values: thread.run | Yes | |
| parallel_tool_calls | OpenAI.ParallelToolCalls | Whether to enable parallel function calling during tool use. | Yes | |
| required_action | OpenAI.RunObjectRequiredAction or null | Details on the action required to continue the run. Will be null if no action is required. | Yes | |
| response_format | OpenAI.AssistantsApiResponseFormatOption | Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106. Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs which ensure the model will match your supplied JSON schema. Learn more in the Structured Outputs guide. Setting to { "type": "json_object" } enables JSON mode, which ensures the message the model generates is valid JSON. Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length. | Yes | |
| started_at | integer or null | The Unix timestamp (in seconds) for when the run was started. | Yes | |
| status | OpenAI.RunStatus | The status of the run, which can be either queued, in_progress, requires_action, cancelling, cancelled, failed, completed, incomplete, or expired. | Yes | |
| temperature | number or null | The sampling temperature used for this run. If not set, defaults to 1. | No | |
| thread_id | string | The ID of the thread that was executed on as a part of this run. | Yes | |
| tool_choice | OpenAI.AssistantsApiToolChoiceOption | Controls which (if any) tool is called by the model. none means the model will not call any tools and instead generates a message. auto is the default value and means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools before responding to the user. Specifying a particular tool like {"type": "file_search"} or {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool. | Yes | |
| tools | array of OpenAI.AssistantTool | The list of tools that the assistant used for this run. | Yes | [] |
| top_p | number or null | The nucleus sampling value used for this run. If not set, defaults to 1. | No | |
| truncation_strategy | OpenAI.TruncationObject | Controls for how a thread will be truncated prior to the run. Use this to control the initial context window of the run. | Yes | |
| usage | OpenAI.RunCompletionUsage or null | | Yes | |
OpenAI.RunObjectIncompleteDetails
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| reason | enum | Possible values: max_completion_tokens, max_prompt_tokens | No | |
OpenAI.RunObjectLastError
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| code | enum | Possible values: server_error, rate_limit_exceeded, invalid_prompt | Yes | |
| message | string | | Yes | |
OpenAI.RunObjectRequiredAction
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| submit_tool_outputs | OpenAI.RunObjectRequiredActionSubmitToolOutputs | | Yes | |
| type | enum | Possible values: submit_tool_outputs | Yes | |
OpenAI.RunObjectRequiredActionSubmitToolOutputs
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| tool_calls | array of OpenAI.RunToolCallObject | | Yes | |
OpenAI.RunStatus
The status of the run, which can be either queued, in_progress, requires_action, cancelling, cancelled, failed, completed, incomplete, or expired.
| Property | Value |
|---|---|
| Type | string |
| Values | queued, in_progress, requires_action, cancelling, cancelled, failed, completed, incomplete, expired |
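Clients polling a run usually split these values into terminal and non-terminal sets; a small sketch of such a classifier (the helper name is ours, not part of the API):

```python
# Classify OpenAI.RunStatus values for a polling loop that waits until
# a run can no longer change state.
TERMINAL = {"cancelled", "failed", "completed", "incomplete", "expired"}
NON_TERMINAL = {"queued", "in_progress", "requires_action", "cancelling"}

def is_terminal(status: str) -> bool:
    """Return True once a run has reached a final state."""
    return status in TERMINAL

assert is_terminal("completed")
assert not is_terminal("requires_action")
# Together the two sets cover all nine documented RunStatus values.
assert len(TERMINAL | NON_TERMINAL) == 9
```

Note that `requires_action` is non-terminal but still needs client intervention (submitting tool outputs) before the run can proceed.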
OpenAI.RunStepCompletionUsage
Usage statistics related to the run step. This value will be null while the run step's status is in_progress.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| completion_tokens | integer | Number of completion tokens used over the course of the run step. | Yes | |
| prompt_tokens | integer | Number of prompt tokens used over the course of the run step. | Yes | |
| total_tokens | integer | Total number of tokens used (prompt + completion). | Yes |
OpenAI.RunStepDetailsMessageCreationObject
Details of the message creation by the run step.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| message_creation | OpenAI.RunStepDetailsMessageCreationObjectMessageCreation | | Yes | |
| type | enum | Always message_creation. Possible values: message_creation | Yes | |
OpenAI.RunStepDetailsMessageCreationObjectMessageCreation
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| message_id | string | | Yes | |
OpenAI.RunStepDetailsToolCall
Discriminator for OpenAI.RunStepDetailsToolCall
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| code_interpreter | OpenAI.RunStepDetailsToolCallsCodeObject |
| file_search | OpenAI.RunStepDetailsToolCallsFileSearchObject |
| function | OpenAI.RunStepDetailsToolCallsFunctionObject |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.RunStepDetailsToolCallType | | Yes | |
OpenAI.RunStepDetailsToolCallType
| Property | Value |
|---|---|
| Type | string |
| Values | code_interpreter, file_search, function |
OpenAI.RunStepDetailsToolCallsCodeObject
Details of the Code Interpreter tool call the run step was involved in.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| code_interpreter | OpenAI.RunStepDetailsToolCallsCodeObjectCodeInterpreter | | Yes | |
| └─ input | string | | Yes | |
| └─ outputs | array of OpenAI.RunStepDetailsToolCallsCodeOutputLogsObject or OpenAI.RunStepDetailsToolCallsCodeOutputImageObject | | Yes | |
| id | string | The ID of the tool call. | Yes | |
| type | enum | The type of tool call. This is always going to be code_interpreter for this type of tool call. Possible values: code_interpreter | Yes | |
OpenAI.RunStepDetailsToolCallsCodeObjectCodeInterpreter
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| input | string | | Yes | |
| outputs | array of OpenAI.RunStepDetailsToolCallsCodeOutputLogsObject or OpenAI.RunStepDetailsToolCallsCodeOutputImageObject | | Yes | |
OpenAI.RunStepDetailsToolCallsCodeOutputImageObject
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| image | OpenAI.RunStepDetailsToolCallsCodeOutputImageObjectImage | | Yes | |
| type | enum | Always image. Possible values: image | Yes | |
OpenAI.RunStepDetailsToolCallsCodeOutputImageObjectImage
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_id | string | | Yes | |
OpenAI.RunStepDetailsToolCallsCodeOutputLogsObject
Text output from the Code Interpreter tool call as part of a run step.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| logs | string | The text output from the Code Interpreter tool call. | Yes | |
| type | enum | Always logs. Possible values: logs | Yes | |
OpenAI.RunStepDetailsToolCallsFileSearchObject
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_search | OpenAI.RunStepDetailsToolCallsFileSearchObjectFileSearch | | Yes | |
| └─ ranking_options | OpenAI.RunStepDetailsToolCallsFileSearchRankingOptionsObject | The ranking options for the file search. | No | |
| └─ results | array of OpenAI.RunStepDetailsToolCallsFileSearchResultObject | | No | |
| id | string | The ID of the tool call object. | Yes | |
| type | enum | The type of tool call. This is always going to be file_search for this type of tool call. Possible values: file_search | Yes | |
OpenAI.RunStepDetailsToolCallsFileSearchObjectFileSearch
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| ranking_options | OpenAI.RunStepDetailsToolCallsFileSearchRankingOptionsObject | The ranking options for the file search. | No | |
| results | array of OpenAI.RunStepDetailsToolCallsFileSearchResultObject | | No | |
OpenAI.RunStepDetailsToolCallsFileSearchRankingOptionsObject
The ranking options for the file search.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| ranker | OpenAI.FileSearchRanker | The ranker to use for the file search. If not specified will use the auto ranker. | Yes | |
| score_threshold | number | The score threshold for the file search. All values must be a floating point number between 0 and 1. Constraints: min: 0, max: 1 | Yes | |
OpenAI.RunStepDetailsToolCallsFileSearchResultObject
A result instance of the file search.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | array of OpenAI.RunStepDetailsToolCallsFileSearchResultObjectContent | The content of the result that was found. The content is only included if requested via the include query parameter. | No | |
| file_id | string | The ID of the file that result was found in. | Yes | |
| file_name | string | The name of the file that result was found in. | Yes | |
| score | number | The score of the result. All values must be a floating point number between 0 and 1. Constraints: min: 0, max: 1 | Yes | |
OpenAI.RunStepDetailsToolCallsFileSearchResultObjectContent
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| text | string | | No | |
| type | enum | Possible values: text | No | |
OpenAI.RunStepDetailsToolCallsFunctionObject
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| function | OpenAI.RunStepDetailsToolCallsFunctionObjectFunction | | Yes | |
| └─ arguments | string | | Yes | |
| └─ name | string | | Yes | |
| └─ output | string or null | | Yes | |
| id | string | The ID of the tool call object. | Yes | |
| type | enum | The type of tool call. This is always going to be function for this type of tool call. Possible values: function | Yes | |
OpenAI.RunStepDetailsToolCallsFunctionObjectFunction
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| arguments | string | | Yes | |
| name | string | | Yes | |
| output | string or null | | Yes | |
OpenAI.RunStepDetailsToolCallsObject
Details of the tool call.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| tool_calls | array of OpenAI.RunStepDetailsToolCall | An array of tool calls the run step was involved in. These can be associated with one of three types of tools: code_interpreter, file_search, or function. | Yes | |
| type | enum | Always tool_calls. Possible values: tool_calls | Yes | |
OpenAI.RunStepObject
Represents a step in execution of a run.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| assistant_id | string | The ID of the assistant associated with the run step. | Yes | |
| cancelled_at | integer or null | | Yes | |
| completed_at | integer or null | | Yes | |
| created_at | integer | The Unix timestamp (in seconds) for when the run step was created. | Yes | |
| expired_at | integer or null | | Yes | |
| failed_at | integer or null | | Yes | |
| id | string | The identifier of the run step, which can be referenced in API endpoints. | Yes | |
| last_error | OpenAI.RunStepObjectLastError or null | | Yes | |
| metadata | OpenAI.Metadata or null | | Yes | |
| object | enum | The object type, which is always thread.run.step. Possible values: thread.run.step | Yes | |
| run_id | string | The ID of the run that this run step is a part of. | Yes | |
| status | enum | The status of the run step, which can be either in_progress, cancelled, failed, completed, or expired. Possible values: in_progress, cancelled, failed, completed, expired | Yes | |
| step_details | OpenAI.RunStepDetailsMessageCreationObject or OpenAI.RunStepDetailsToolCallsObject | The details of the run step. | Yes | |
| thread_id | string | The ID of the thread that was run. | Yes | |
| type | enum | The type of run step, which can be either message_creation or tool_calls. Possible values: message_creation, tool_calls | Yes | |
| usage | OpenAI.RunStepCompletionUsage | Usage statistics related to the run step. This value will be null while the run step's status is in_progress. | Yes | |
OpenAI.RunStepObjectLastError
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| code | enum | Possible values: server_error, rate_limit_exceeded | Yes | |
| message | string | | Yes | |
OpenAI.RunToolCallObject
Tool call objects
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| function | OpenAI.RunToolCallObjectFunction | | Yes | |
| └─ arguments | string | | Yes | |
| └─ name | string | | Yes | |
| id | string | The ID of the tool call. This ID must be referenced when you submit the tool outputs using the Submit tool outputs to run endpoint. | Yes | |
| type | enum | The type of tool call the output is required for. For now, this is always function. Possible values: function | Yes | |
OpenAI.RunToolCallObjectFunction
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| arguments | string | | Yes | |
| name | string | | Yes | |
OpenAI.Screenshot
A screenshot action.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Specifies the event type. For a screenshot action, this property is always set to screenshot. Possible values: screenshot | Yes | |
OpenAI.Scroll
A scroll action.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| scroll_x | integer | The horizontal scroll distance. | Yes | |
| scroll_y | integer | The vertical scroll distance. | Yes | |
| type | enum | Specifies the event type. For a scroll action, this property is always set to scroll. Possible values: scroll | Yes | |
| x | integer | The x-coordinate where the scroll occurred. | Yes | |
| y | integer | The y-coordinate where the scroll occurred. | Yes |
OpenAI.SearchContextSize
| Property | Value |
|---|---|
| Type | string |
| Values | low, medium, high |
OpenAI.SpecificApplyPatchParam
Forces the model to call the apply_patch tool when executing a tool call.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The tool to call. Always apply_patch. Possible values: apply_patch | Yes | |
OpenAI.SpecificFunctionShellParam
Forces the model to call the shell tool when a tool call is required.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The tool to call. Always shell. Possible values: shell | Yes | |
OpenAI.StaticChunkingStrategy
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| chunk_overlap_tokens | integer | The number of tokens that overlap between chunks. The default value is 400. Note that the overlap must not exceed half of max_chunk_size_tokens. | Yes | |
| max_chunk_size_tokens | integer | The maximum number of tokens in each chunk. The default value is 800. The minimum value is 100 and the maximum value is 4096. Constraints: min: 100, max: 4096 | Yes | |
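The two constraints above can be checked client-side before sending a request; a minimal sketch (the validator function is ours, not part of the API):

```python
# Client-side validation of OpenAI.StaticChunkingStrategy constraints:
# max_chunk_size_tokens must be in [100, 4096], and the overlap must
# not exceed half of the chunk size.
def validate_static_chunking(chunk_overlap_tokens: int,
                             max_chunk_size_tokens: int) -> None:
    if not 100 <= max_chunk_size_tokens <= 4096:
        raise ValueError("max_chunk_size_tokens must be between 100 and 4096")
    if chunk_overlap_tokens > max_chunk_size_tokens // 2:
        raise ValueError("chunk_overlap_tokens must not exceed half of "
                         "max_chunk_size_tokens")

validate_static_chunking(400, 800)  # the documented defaults pass
try:
    validate_static_chunking(500, 800)  # overlap above half is rejected
except ValueError:
    pass
```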
OpenAI.StaticChunkingStrategyRequestParam
Customize your own chunking strategy by setting chunk size and chunk overlap.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| static | OpenAI.StaticChunkingStrategy | | Yes | |
| type | enum | Always static. Possible values: static | Yes | |
OpenAI.StaticChunkingStrategyResponseParam
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| static | OpenAI.StaticChunkingStrategy | | Yes | |
| type | enum | Always static. Possible values: static | Yes | |
OpenAI.StopConfiguration
Not supported with latest reasoning models o3 and o4-mini.
Up to four sequences where the API will stop generating further tokens. The
returned text will not contain the stop sequence.
This schema accepts one of the following types:
- array
- null
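A small sketch of a client-side normalizer for this schema, enforcing the array-or-null shape and the four-sequence limit (the helper name is ours):

```python
# Normalize an OpenAI.StopConfiguration value: either null or an array
# of up to four stop sequences.
def normalize_stop(stop):
    if stop is None:
        return None
    if isinstance(stop, list):
        if not 1 <= len(stop) <= 4:
            raise ValueError("between 1 and 4 stop sequences are allowed")
        return stop
    raise TypeError("stop must be an array of strings or null")

assert normalize_stop(None) is None
assert normalize_stop(["END"]) == ["END"]
```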
OpenAI.SubmitToolOutputsRunRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| stream | boolean or null | | No | |
| tool_outputs | array of OpenAI.SubmitToolOutputsRunRequestToolOutputs | A list of tools for which the outputs are being submitted. | Yes |
OpenAI.SubmitToolOutputsRunRequestToolOutputs
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| output | string | | No | |
| tool_call_id | string | | No | |
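Putting the two schemas together, a submit-tool-outputs body is typically built from the tool calls surfaced in a run's required_action; a sketch where `run_tool` and the call data are hypothetical stand-ins for your own dispatcher:

```python
# Build an OpenAI.SubmitToolOutputsRunRequest body from required tool
# calls. run_tool is a hypothetical stand-in for your own function
# dispatcher; the call ID and arguments below are invented.
def run_tool(name: str, arguments: str) -> str:
    return '{"ok": true}'  # placeholder result

required_tool_calls = [
    {"id": "call_abc", "type": "function",
     "function": {"name": "get_weather", "arguments": '{"city": "Oslo"}'}},
]

payload = {
    "tool_outputs": [
        {"tool_call_id": call["id"],
         "output": run_tool(call["function"]["name"],
                            call["function"]["arguments"])}
        for call in required_tool_calls
    ],
    "stream": False,
}

assert payload["tool_outputs"][0]["tool_call_id"] == "call_abc"
```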
OpenAI.Summary
A summary text from the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| text | string | A summary of the reasoning output from the model so far. | Yes | |
| type | enum | The type of the object. Always summary_text. Possible values: summary_text | Yes | |
OpenAI.SummaryTextContent
A summary text from the model.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| text | string | A summary of the reasoning output from the model so far. | Yes | |
| type | enum | The type of the object. Always summary_text. Possible values: summary_text | Yes | |
OpenAI.TextAnnotation
Discriminator for OpenAI.TextAnnotation
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| file_citation | OpenAI.MessageContentTextAnnotationsFileCitationObject |
| file_path | OpenAI.MessageContentTextAnnotationsFilePathObject |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.TextAnnotationType | | Yes | |
OpenAI.TextAnnotationType
| Property | Value |
|---|---|
| Type | string |
| Values | file_citation, file_path |
OpenAI.TextContent
A text content.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| text | string | | Yes | |
| type | enum | Possible values: text | Yes | |
OpenAI.TextResponseFormatConfiguration
An object specifying the format that the model must output.
Configuring { "type": "json_schema" } enables Structured Outputs,
which ensures the model will match your supplied JSON schema. Learn more in the
Structured Outputs guide.
The default format is { "type": "text" } with no additional options.
Not recommended for gpt-4o and newer models:
Setting to { "type": "json_object" } enables the older JSON mode, which
ensures the message the model generates is valid JSON. Using json_schema
is preferred for models that support it.
Discriminator for OpenAI.TextResponseFormatConfiguration
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| json_schema | OpenAI.TextResponseFormatJsonSchema |
| text | OpenAI.TextResponseFormatConfigurationResponseFormatText |
| json_object | OpenAI.TextResponseFormatConfigurationResponseFormatJsonObject |
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.TextResponseFormatConfigurationType | | Yes | |
OpenAI.TextResponseFormatConfigurationResponseFormatJsonObject
JSON object response format. An older method of generating JSON responses.
Using json_schema is recommended for models that support it. Note that the
model will not generate JSON without a system or user message instructing it
to do so.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The type of response format being defined. Always json_object. Possible values: json_object | Yes | |
OpenAI.TextResponseFormatConfigurationResponseFormatText
Default response format. Used to generate text responses.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The type of response format being defined. Always text. Possible values: text | Yes | |
OpenAI.TextResponseFormatConfigurationType
| Property | Value |
|---|---|
| Type | string |
| Values | text, json_schema, json_object |
OpenAI.TextResponseFormatJsonSchema
JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| description | string | A description of what the response format is for, used by the model to determine how to respond in the format. | No | |
| name | string | The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64. | Yes | |
| schema | OpenAI.ResponseFormatJsonSchemaSchema | The schema for the response format, described as a JSON Schema object. | Yes | |
| strict | boolean or null | | No | |
| type | enum | The type of response format being defined. Always json_schema. Possible values: json_schema | Yes | |
OpenAI.ThreadObject
Represents a thread that contains messages.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_at | integer | The Unix timestamp (in seconds) for when the thread was created. | Yes | |
| id | string | The identifier, which can be referenced in API endpoints. | Yes | |
| metadata | OpenAI.Metadata or null | | Yes | |
| object | enum | The object type, which is always thread. Possible values: thread | Yes | |
| tool_resources | OpenAI.ThreadObjectToolResources or null | | Yes | |
OpenAI.ThreadObjectToolResources
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| code_interpreter | OpenAI.ThreadObjectToolResourcesCodeInterpreter | | No | |
| file_search | OpenAI.ThreadObjectToolResourcesFileSearch | | No | |
OpenAI.ThreadObjectToolResourcesCodeInterpreter
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| file_ids | array of string | | No | |
OpenAI.ThreadObjectToolResourcesFileSearch
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| vector_store_ids | array of string | | No | |
OpenAI.TokenLimits
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| post_instructions | integer | Constraints: min: 0 | No | |
OpenAI.Tool
A tool that can be used to generate a response.
Discriminator for OpenAI.Tool
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| code_interpreter | OpenAI.CodeInterpreterTool |
| function | OpenAI.FunctionTool |
| file_search | OpenAI.FileSearchTool |
| computer_use_preview | OpenAI.ComputerUsePreviewTool |
| web_search | OpenAI.WebSearchTool |
| mcp | OpenAI.MCPTool |
| image_generation | OpenAI.ImageGenTool |
| local_shell | OpenAI.LocalShellToolParam |
| shell | OpenAI.FunctionShellToolParam |
| custom | OpenAI.CustomToolParam |
| web_search_preview | OpenAI.WebSearchPreviewTool |
| apply_patch | OpenAI.ApplyPatchToolParam |

| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.ToolType | | Yes | |
OpenAI.ToolChoiceAllowed
Constrains the tools available to the model to a predefined set.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| mode | enum | Constrains the tools available to the model to a predefined set. auto allows the model to pick from among the allowed tools and generate a message. required requires the model to call one or more of the allowed tools. Possible values: auto, required | Yes | |
| tools | array of object | A list of tool definitions that the model should be allowed to call. For the Responses API, the list of tool definitions might look like: [{ "type": "function", "name": "get_weather" }, { "type": "mcp", "server_label": "deepwiki" }, { "type": "image_generation" }] | Yes | |
| type | enum | Allowed tool configuration type. Always allowed_tools. Possible values: allowed_tools | Yes | |
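Put together, a tool_choice payload of this shape might look like the sketch below. The tool names and server label are taken from the illustrative list in the table above; only the type, mode, and tools fields are documented.

```python
# Hypothetical allowed_tools payload following the OpenAI.ToolChoiceAllowed table.
tool_choice = {
    "type": "allowed_tools",
    "mode": "auto",  # "required" would force a call to one of the listed tools
    "tools": [
        {"type": "function", "name": "get_weather"},
        {"type": "mcp", "server_label": "deepwiki"},
        {"type": "image_generation"},
    ],
}

# Basic shape checks against the documented constraints.
assert tool_choice["mode"] in ("auto", "required")
assert all("type" in tool for tool in tool_choice["tools"])
```

With mode set to auto, the model may still answer with a plain message instead of calling any of the allowed tools.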
OpenAI.ToolChoiceCodeInterpreter
Indicates that the model should use a built-in tool to generate a response. Learn more about built-in tools.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: code_interpreter | Yes | |
OpenAI.ToolChoiceComputerUsePreview
Indicates that the model should use a built-in tool to generate a response. Learn more about built-in tools.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: computer_use_preview | Yes | |
OpenAI.ToolChoiceCustom
Use this option to force the model to call a specific custom tool.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| name | string | The name of the custom tool to call. | Yes | |
| type | enum | For custom tool calling, the type is always custom. Possible values: custom | Yes | |
OpenAI.ToolChoiceFileSearch
Indicates that the model should use a built-in tool to generate a response. Learn more about built-in tools.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: file_search | Yes | |
OpenAI.ToolChoiceFunction
Use this option to force the model to call a specific function.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| name | string | The name of the function to call. | Yes | |
| type | enum | For function calling, the type is always function. Possible values: function | Yes | |
OpenAI.ToolChoiceImageGeneration
Indicates that the model should use a built-in tool to generate a response. Learn more about built-in tools.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: image_generation | Yes | |
OpenAI.ToolChoiceMCP
Use this option to force the model to call a specific tool on a remote MCP server.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| name | string or null | | No | |
| server_label | string | The label of the MCP server to use. | Yes | |
| type | enum | For MCP tools, the type is always mcp. Possible values: mcp | Yes | |
OpenAI.ToolChoiceOptions
Controls which (if any) tool is called by the model.
none means the model will not call any tool and instead generates a message.
auto means the model can pick between generating a message or calling one or
more tools.
required means the model must call one or more tools.
| Property | Value |
|---|---|
| Type | string |
| Values | none, auto, required |
OpenAI.ToolChoiceParam
How the model should select which tool (or tools) to use when generating
a response. See the tools parameter to see how to specify which tools
the model can call.
Discriminator for OpenAI.ToolChoiceParam
This component uses the property type to discriminate between different types:
| Type Value | Schema |
|---|---|
| allowed_tools | OpenAI.ToolChoiceAllowed |
| mcp | OpenAI.ToolChoiceMCP |
| custom | OpenAI.ToolChoiceCustom |
| apply_patch | OpenAI.SpecificApplyPatchParam |
| shell | OpenAI.SpecificFunctionShellParam |
| file_search | OpenAI.ToolChoiceFileSearch |
| web_search_preview | OpenAI.ToolChoiceWebSearchPreview |
| computer_use_preview | OpenAI.ToolChoiceComputerUsePreview |
| web_search_preview_2025_03_11 | OpenAI.ToolChoiceWebSearchPreview20250311 |
| image_generation | OpenAI.ToolChoiceImageGeneration |
| code_interpreter | OpenAI.ToolChoiceCodeInterpreter |

| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | OpenAI.ToolChoiceParamType | | Yes | |
OpenAI.ToolChoiceParamType
| Property | Value |
|---|---|
| Type | string |
| Values | allowed_tools, function, mcp, custom, apply_patch, shell, file_search, web_search_preview, computer_use_preview, web_search_preview_2025_03_11, image_generation, code_interpreter |
OpenAI.ToolChoiceWebSearchPreview
Note: web_search is not yet available via Azure OpenAI.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: web_search_preview | Yes | |
OpenAI.ToolChoiceWebSearchPreview20250311
Indicates that the model should use a built-in tool to generate a response. Learn more about built-in tools.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: web_search_preview_2025_03_11 | Yes | |
OpenAI.ToolType
| Property | Value |
|---|---|
| Type | string |
| Values | function, file_search, computer_use_preview, web_search, mcp, code_interpreter, image_generation, local_shell, shell, custom, web_search_preview, apply_patch |
OpenAI.ToolsArray
An array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.
We support the following categories of tools:
- Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools.
- MCP Tools: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about MCP Tools.
- Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about function calling. You can also use custom tools to call your own code.
Array of: OpenAI.Tool
OpenAI.TopLogProb
The top log probability of a token.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| bytes | array of integer | | Yes | |
| logprob | number | | Yes | |
| token | string | | Yes | |
OpenAI.TranscriptionSegment
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| avg_logprob | number | Average logprob of the segment. If the value is lower than -1, consider the logprobs failed. | Yes | |
| compression_ratio | number | Compression ratio of the segment. If the value is greater than 2.4, consider the compression failed. | Yes | |
| end | number | End time of the segment in seconds. | Yes | |
| id | integer | Unique identifier of the segment. | Yes | |
| no_speech_prob | number | Probability of no speech in the segment. If the value is higher than 1.0 and the avg_logprob is below -1, consider this segment silent. | Yes | |
| seek | integer | Seek offset of the segment. | Yes | |
| start | number | Start time of the segment in seconds. | Yes | |
| temperature | number | Temperature parameter used for generating the segment. | Yes | |
| text | string | Text content of the segment. | Yes | |
| tokens | array of integer | Array of token IDs for the text content. | Yes | |
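The quality heuristics documented above can be encoded in a small filter. This helper is illustrative, not part of any SDK; it only applies the thresholds stated in the table.

```python
def segment_quality_flags(seg: dict) -> list[str]:
    """Flag a transcription segment using the documented thresholds."""
    flags = []
    if seg["avg_logprob"] < -1.0:
        flags.append("logprobs_failed")       # logprobs considered failed
    if seg["compression_ratio"] > 2.4:
        flags.append("compression_failed")    # compression considered failed
    if seg["no_speech_prob"] > 1.0 and seg["avg_logprob"] < -1.0:
        flags.append("silent")                # segment considered silent
    return flags

# Sample segments (only the fields the heuristics read).
good = {"avg_logprob": -0.3, "compression_ratio": 1.4, "no_speech_prob": 0.05}
bad = {"avg_logprob": -1.6, "compression_ratio": 2.9, "no_speech_prob": 1.2}
```

A typical use is dropping flagged segments before further processing of the transcript.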
OpenAI.TranscriptionWord
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| end | number | End time of the word in seconds. | Yes | |
| start | number | Start time of the word in seconds. | Yes | |
| word | string | The text content of the word. | Yes |
OpenAI.TruncationObject
Controls for how a thread will be truncated prior to the run. Use this to control the initial context window of the run.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| last_messages | integer or null | | No | |
| type | enum | The truncation strategy to use for the thread. The default is auto. If set to last_messages, the thread will be truncated to the n most recent messages in the thread. When set to auto, messages in the middle of the thread will be dropped to fit the context length of the model, max_prompt_tokens. Possible values: auto, last_messages | Yes | |
OpenAI.Type
An action to type in text.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| text | string | The text to type. | Yes | |
| type | enum | Specifies the event type. For a type action, this property is always set to type. Possible values: type | Yes | |
OpenAI.UpdateConversationBody
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| metadata | OpenAI.Metadata or null | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | Yes | |
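The metadata limits stated above (at most 16 pairs, 64-character string keys, 512-character string values) can be checked client-side before sending a request. This validator is an illustrative sketch, not part of any SDK.

```python
def check_metadata(metadata: dict) -> None:
    """Validate the documented constraints on OpenAI.Metadata."""
    if len(metadata) > 16:
        raise ValueError("metadata may hold at most 16 key-value pairs")
    for key, value in metadata.items():
        if not isinstance(key, str) or len(key) > 64:
            raise ValueError(f"invalid metadata key: {key!r}")
        if not isinstance(value, str) or len(value) > 512:
            raise ValueError(f"invalid metadata value for key {key!r}")

check_metadata({"project": "demo", "owner": "data-team"})  # passes
```

Validating locally surfaces constraint violations before the API rejects the request.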
OpenAI.UpdateVectorStoreFileAttributesRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| attributes | OpenAI.VectorStoreFileAttributes or null | | Yes | |
OpenAI.UpdateVectorStoreRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| expires_after | OpenAI.VectorStoreExpirationAfter | The expiration policy for a vector store. | No | |
| metadata | OpenAI.Metadata or null | | No | |
| name | string or null | The name of the vector store. | No | |
OpenAI.UrlCitationBody
A citation for a web resource used to generate a model response.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| end_index | integer | The index of the last character of the URL citation in the message. | Yes | |
| start_index | integer | The index of the first character of the URL citation in the message. | Yes | |
| title | string | The title of the web resource. | Yes | |
| type | enum | The type of the URL citation. Always url_citation. Possible values: url_citation | Yes | |
| url | string | The URL of the web resource. | Yes | |
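Since start_index and end_index are character offsets into the message, the cited span can be recovered with a slice. The message text and annotation below are illustrative.

```python
def cited_text(message_text: str, annotation: dict) -> str:
    """Slice the span of a message covered by a url_citation annotation."""
    assert annotation["type"] == "url_citation"
    return message_text[annotation["start_index"]:annotation["end_index"]]

msg = "See the Kyoto travel guide for details."
ann = {"type": "url_citation", "start_index": 8, "end_index": 26,
       "title": "Kyoto travel guide", "url": "https://example.com/kyoto"}
```

This is useful for rendering the cited portion of a response as a hyperlink to the annotation's url.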
OpenAI.ValidateGraderResponse
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| grader | OpenAI.GraderStringCheck or OpenAI.GraderTextSimilarity or OpenAI.GraderPython or OpenAI.GraderScoreModel or OpenAI.GraderMulti or GraderEndpoint | The grader used for the fine-tuning job. | No |
OpenAI.VectorStoreExpirationAfter
The expiration policy for a vector store.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| anchor | enum | Anchor timestamp after which the expiration policy applies. Supported anchors: last_active_at. Possible values: last_active_at | Yes | |
| days | integer | The number of days after the anchor time that the vector store will expire. Constraints: min: 1, max: 365 | Yes | |
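The policy is an anchor plus a day count, so the effective expiry can be derived from the store's last-activity timestamp. Both helpers are illustrative; only the anchor value, the 1..365 range, and the "days after the anchor" semantics come from the table.

```python
def expiration_policy(days: int) -> dict:
    """Build an OpenAI.VectorStoreExpirationAfter payload (illustrative helper)."""
    if not 1 <= days <= 365:
        raise ValueError("days must be between 1 and 365")
    return {"anchor": "last_active_at", "days": days}

def expires_at(last_active_at: int, policy: dict) -> int:
    """Expiry as a Unix timestamp: the anchor plus `days` in seconds."""
    return last_active_at + policy["days"] * 86_400
```

For example, a 7-day policy expires the store one week after its last activity.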
OpenAI.VectorStoreFileAttributes
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.
Type: object
OpenAI.VectorStoreFileBatchObject
A batch of files attached to a vector store.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_at | integer | The Unix timestamp (in seconds) for when the vector store files batch was created. | Yes | |
| file_counts | OpenAI.VectorStoreFileBatchObjectFileCounts | | Yes | |
| id | string | The identifier, which can be referenced in API endpoints. | Yes | |
| object | enum | The object type, which is always vector_store.file_batch. Possible values: vector_store.files_batch | Yes | |
| status | enum | The status of the vector store files batch, which can be either in_progress, completed, cancelled or failed. Possible values: in_progress, completed, cancelled, failed | Yes | |
| vector_store_id | string | The ID of the vector store that the File is attached to. | Yes | |
OpenAI.VectorStoreFileBatchObjectFileCounts
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| cancelled | integer | | Yes | |
| completed | integer | | Yes | |
| failed | integer | | Yes | |
| in_progress | integer | | Yes | |
| total | integer | | Yes | |
OpenAI.VectorStoreFileObject
A list of files attached to a vector store.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| attributes | OpenAI.VectorStoreFileAttributes or null | | No | |
| chunking_strategy | OpenAI.ChunkingStrategyResponse | The strategy used to chunk the file. | No | |
| created_at | integer | The Unix timestamp (in seconds) for when the vector store file was created. | Yes | |
| id | string | The identifier, which can be referenced in API endpoints. | Yes | |
| last_error | OpenAI.VectorStoreFileObjectLastError or null | | Yes | |
| object | enum | The object type, which is always vector_store.file. Possible values: vector_store.file | Yes | |
| status | enum | The status of the vector store file, which can be either in_progress, completed, cancelled, or failed. The status completed indicates that the vector store file is ready for use. Possible values: in_progress, completed, cancelled, failed | Yes | |
| usage_bytes | integer | The total vector store usage in bytes. Note that this may be different from the original file size. | Yes | |
| vector_store_id | string | The ID of the vector store that the File is attached to. | Yes | |
OpenAI.VectorStoreFileObjectLastError
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| code | enum | Possible values: server_error, unsupported_file, invalid_file | Yes | |
| message | string | | Yes | |
OpenAI.VectorStoreObject
A vector store is a collection of processed files that can be used by the file_search tool.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| created_at | integer | The Unix timestamp (in seconds) for when the vector store was created. | Yes | |
| expires_after | OpenAI.VectorStoreExpirationAfter | The expiration policy for a vector store. | No | |
| expires_at | string or null | | No | |
| file_counts | OpenAI.VectorStoreObjectFileCounts | | Yes | |
| id | string | The identifier, which can be referenced in API endpoints. | Yes | |
| last_active_at | string or null | | Yes | |
| metadata | OpenAI.Metadata or null | | Yes | |
| name | string | The name of the vector store. | Yes | |
| object | enum | The object type, which is always vector_store. Possible values: vector_store | Yes | |
| status | enum | The status of the vector store, which can be either expired, in_progress, or completed. A status of completed indicates that the vector store is ready for use. Possible values: expired, in_progress, completed | Yes | |
| usage_bytes | integer | The total number of bytes used by the files in the vector store. | Yes | |
OpenAI.VectorStoreObjectFileCounts
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| cancelled | integer | | Yes | |
| completed | integer | | Yes | |
| failed | integer | | Yes | |
| in_progress | integer | | Yes | |
| total | integer | | Yes | |
OpenAI.VectorStoreSearchRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| filters | OpenAI.ComparisonFilter or OpenAI.CompoundFilter | A filter to apply based on file attributes. | No | |
| max_num_results | integer | The maximum number of results to return. This number should be between 1 and 50 inclusive. Constraints: min: 1, max: 50 | No | 10 |
| query | string or array of string | A query string for a search. | Yes | |
| ranking_options | OpenAI.VectorStoreSearchRequestRankingOptions | | No | |
| └─ ranker | enum | Possible values: none, auto, default-2024-11-15 | No | |
| └─ score_threshold | number | Constraints: min: 0, max: 1 | No | |
| rewrite_query | boolean | Whether to rewrite the natural language query for vector search. | No | |
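A request body of this shape might look like the sketch below; the query text and threshold are illustrative, while the field names and the 1..50 and 0..1 ranges come from the table above.

```python
# Hypothetical vector store search request body.
search_body = {
    "query": "quarterly revenue guidance",
    "max_num_results": 5,  # must stay within the documented 1..50 range
    "ranking_options": {"ranker": "auto", "score_threshold": 0.5},
    "rewrite_query": True,
}

# Sanity checks against the documented constraints.
assert 1 <= search_body["max_num_results"] <= 50
assert 0 <= search_body["ranking_options"]["score_threshold"] <= 1
```

Raising score_threshold trades recall for precision: results scoring below the threshold are excluded from the page.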
OpenAI.VectorStoreSearchRequestRankingOptions
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| ranker | enum | Possible values: none, auto, default-2024-11-15 | No | |
| score_threshold | number | Constraints: min: 0, max: 1 | No | |
OpenAI.VectorStoreSearchResultContentObject
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| text | string | The text content returned from search. | Yes | |
| type | enum | The type of content. Possible values: text | Yes | |
OpenAI.VectorStoreSearchResultItem
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| attributes | OpenAI.VectorStoreFileAttributes or null | | Yes | |
| content | array of OpenAI.VectorStoreSearchResultContentObject | Content chunks from the file. | Yes | |
| file_id | string | The ID of the vector store file. | Yes | |
| filename | string | The name of the vector store file. | Yes | |
| score | number | The similarity score for the result. Constraints: min: 0, max: 1 | Yes | |
OpenAI.VectorStoreSearchResultsPage
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array of OpenAI.VectorStoreSearchResultItem | The list of search result items. | Yes | |
| has_more | boolean | Indicates if there are more results to fetch. | Yes | |
| next_page | string or null | | Yes | |
| object | enum | The object type, which is always vector_store.search_results.page. Possible values: vector_store.search_results.page | Yes | |
| search_query | array of string | | Yes | |
OpenAI.Verbosity
Constrains the verbosity of the model's response. Lower values will result in
more concise responses, while higher values will result in more verbose responses.
Currently supported values are low, medium, and high.
| Property | Value |
|---|---|
| Type | string |
| Values | low, medium, high |
OpenAI.VoiceIdsShared
| Property | Value |
|---|---|
| Type | string |
| Values | alloy, ash, ballad, coral, echo, sage, shimmer, verse, marin, cedar |
OpenAI.Wait
A wait action.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Specifies the event type. For a wait action, this property is always set to wait. Possible values: wait | Yes | |
OpenAI.WebSearchActionFind
Action type "find": Searches for a pattern within a loaded page.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| pattern | string | The pattern or text to search for within the page. | Yes | |
| type | enum | The action type. Possible values: find_in_page | Yes | |
| url | string | The URL of the page searched for the pattern. | Yes | |
OpenAI.WebSearchActionOpenPage
Action type "open_page" - Opens a specific URL from search results.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | The action type. Possible values: open_page | Yes | |
| url | string | The URL opened by the model. | Yes | |
OpenAI.WebSearchActionSearch
Action type "search" - Performs a web search query.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| queries | array of string | The search queries. | No | |
| query | string (deprecated) | [DEPRECATED] The search query. | Yes | |
| sources | array of OpenAI.WebSearchActionSearchSources | The sources used in the search. | No | |
| type | enum | The action type. Possible values: search | Yes | |
OpenAI.WebSearchActionSearchSources
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| type | enum | Possible values: url | Yes | |
| url | string | | Yes | |
OpenAI.WebSearchApproximateLocation
The approximate location of the user.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| city | string or null | | No | |
| country | string or null | | No | |
| region | string or null | | No | |
| timezone | string or null | | No | |
| type | enum | The type of location approximation. Always approximate. Possible values: approximate | No | |
OpenAI.WebSearchPreviewTool
Note: web_search is not yet available via Azure OpenAI.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| search_context_size | OpenAI.SearchContextSize | | No | |
| type | enum | The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11. Possible values: web_search_preview | Yes | |
| user_location | OpenAI.ApproximateLocation or null | | No | |
OpenAI.WebSearchTool
Note: web_search is not yet available via Azure OpenAI.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| filters | OpenAI.WebSearchToolFilters or null | | No | |
| search_context_size | enum | High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default. Possible values: low, medium, high | No | |
| type | enum | The type of the web search tool. One of web_search or web_search_2025_08_26. Possible values: web_search | Yes | |
| user_location | OpenAI.WebSearchApproximateLocation or null | | No | |
OpenAI.WebSearchToolFilters
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| allowed_domains | array of string or null | | No | |
Order
| Property | Value |
|---|---|
| Type | string |
| Values | asc, desc |
ResponseFormatJSONSchemaRequest
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| json_schema | object | JSON Schema for the response format | Yes | |
| type | enum | Type of response format. Possible values: json_schema | Yes | |
SpeechGenerationResponse
A representation of a response for a text-to-speech operation.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| audio | string | The generated audio, in the requested output format. | Yes | |
SpeechGenerationResponseFormat
The supported audio output formats for text-to-speech.
This component is a string with one of the following values: mp3, opus, aac, flac, wav, pcm.
SpeechVoice
The available voices for text-to-speech.
| Property | Value |
|---|---|
| Description | The available voices for text-to-speech. |
| Type | string |
| Values | alloy, echo, fable, onyx, nova, shimmer |
VideoContent
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| content | string | | Yes | |
VideoContentVariant
Selectable asset variants for downloaded content.
| Property | Value |
|---|---|
| Description | Selectable asset variants for downloaded content. |
| Type | string |
| Values | video, thumbnail, spritesheet |
VideoIdParameter
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| video-id | string | The ID of the video to use for the Azure OpenAI request. | Yes |
VideoList
A list of video generation jobs.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| data | array of VideoResource | The list of video generation jobs. | Yes | |
| first_id | string | The ID of the first video in the current page, if available. | No | |
| has_more | boolean | A flag indicating whether there are more jobs available after the list. | Yes | |
| last_id | string | The ID of the last video in the current page, if available. | No | |
| object | enum | Possible values: list | Yes | |
VideoResource
Structured information describing a generated video job.
| Name | Type | Description | Required | Default |
|---|---|---|---|---|
| completed_at | integer | Unix timestamp (seconds) for when the job completed, if finished. | No | |
| created_at | integer | Unix timestamp (seconds) for when the job was created. | Yes | |
| error | Error | | No | |
| └─ code | string | | Yes | |
| └─ message | string | | Yes | |
| expires_at | integer | Unix timestamp (seconds) for when the video generation expires (and will be deleted). | No | |
| id | string | Unique identifier for the video job. | Yes | |
| model | string | The video generation model deployment that produced the job. | Yes | |
| object | string | The object type, which is always video. | Yes | |
| progress | integer | Approximate completion percentage for the generation task. | Yes | |
| remixed_from_video_id | string | Identifier of the source video if this video is a remix. | No | |
| seconds | VideoSeconds | Supported clip durations, measured in seconds. | Yes | |
| size | VideoSize | Output dimensions formatted as {width}x{height}. | Yes | |
| status | VideoStatus | Lifecycle state of a generated video. | Yes | |
VideoSeconds
Supported clip durations, measured in seconds.
| Property | Value |
|---|---|
| Description | Supported clip durations, measured in seconds. |
| Type | string |
| Values | 4, 8, 12 |
VideoSize
Output dimensions formatted as {width}x{height}.
| Property | Value |
|---|---|
| Description | Output dimensions formatted as {width}x{height}. |
| Type | string |
| Values | 720x1280, 1280x720, 1024x1792, 1792x1024 |
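Since a VideoSize value is a {width}x{height} string, it splits cleanly into numeric dimensions. The helper below is a minimal illustrative sketch.

```python
def parse_video_size(size: str) -> tuple[int, int]:
    """Split a VideoSize value formatted as {width}x{height} into integers."""
    width, height = size.split("x")
    return int(width), int(height)
```

This is handy, for example, for deciding whether an output is portrait or landscape (width less than height vs. greater).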
VideoStatus
Lifecycle state of a generated video.
| Property | Value |
|---|---|
| Description | Lifecycle state of a generated video. |
| Type | string |
| Values | queued, in_progress, completed, failed |